# Quantum Physics ## The Particle-Wave Duality ### Learning Objectives By the end of this section, you will be able to: 1. Explain what the term particle-wave duality means, and why it is applied to EM radiation. We have long known that EM radiation is a wave, capable of interference and diffraction. We now see that light can be modeled as photons, which are massless particles. This may seem contradictory, since we ordinarily deal with large objects that never act like both wave and particle. An ocean wave, for example, looks nothing like a rock. To understand small-scale phenomena, we make analogies with the large-scale phenomena we observe directly. When we say something behaves like a wave, we mean it shows interference effects analogous to those seen in overlapping water waves. (See .) Two examples of waves are sound and EM radiation. When we say something behaves like a particle, we mean that it interacts as a discrete unit with no interference effects. Examples of particles include electrons, atoms, and photons of EM radiation. How do we talk about a phenomenon that acts like both a particle and a wave? There is no doubt that EM radiation interferes and has the properties of wavelength and frequency. There is also no doubt that it behaves as particles—photons with discrete energy. We call this twofold nature the particle-wave duality, meaning that EM radiation has both particle and wave properties. This so-called duality is simply a term for properties of the photon analogous to phenomena we can observe directly, on a macroscopic scale. If this term seems strange, it is because we do not ordinarily observe details on the quantum level directly, and our observations yield either particle or wavelike properties, but never both simultaneously. Since we have a particle-wave duality for photons, and since we have seen connections between photons and matter in that both have momentum, it is reasonable to ask whether there is a particle-wave duality for matter as well. If the EM radiation we once thought to be a pure wave has particle properties, is it possible that matter has wave properties? The answer is yes. The consequences are tremendous, as we will begin to see in the next section. ### Test Prep for AP Courses ### Section Summary 1. EM radiation can behave like either a particle or a wave. 2. This is termed particle-wave duality.
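The particle side of the duality becomes concrete once you attach numbers to a single photon. The following is a minimal numerical sketch (the 550 nm wavelength is just a representative value for visible light, not a figure from this section), showing that each photon carries a discrete, very small energy and momentum.

```python
# Energy and momentum of one photon of visible light (E = hf, p = h/lambda).
# Standard SI constants; 550 nm is an assumed, representative green wavelength.
h = 6.626e-34        # Planck's constant, J*s
c = 2.998e8          # speed of light, m/s
eV = 1.602e-19       # joules per electron volt
wavelength = 550e-9  # m

f = c / wavelength   # frequency, Hz
E = h * f            # photon energy, J
p = h / wavelength   # photon momentum, kg*m/s

print(f"f = {f:.2e} Hz, E = {E:.2e} J = {E/eV:.2f} eV, p = {p:.2e} kg m/s")
```

Detectors register these quanta one at a time, yet many of them together build up the interference patterns described above.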
# Quantum Physics ## The Wave Nature of Matter ### Learning Objectives By the end of this section, you will be able to: 1. Describe the Davisson-Germer experiment, and explain how it provides evidence for the wave nature of electrons. ### De Broglie Wavelength In 1923 a French physics graduate student named Prince Louis-Victor de Broglie (1892–1987) made a radical proposal based on the hope that nature is symmetric. If EM radiation has both particle and wave properties, then nature would be symmetric if matter also had both particle and wave properties. If what we once thought of as an unequivocal wave (EM radiation) is also a particle, then what we think of as an unequivocal particle (matter) may also be a wave. De Broglie’s suggestion, made as part of his doctoral thesis, was so radical that it was greeted with some skepticism. A copy of his thesis was sent to Einstein, who said it was not only probably correct, but that it might be of fundamental importance. With the support of Einstein and a few other prominent physicists, de Broglie was awarded his doctorate. De Broglie took both relativity and quantum mechanics into account to develop the proposal that all particles have a wavelength, given by $\lambda = h/p$, where $h$ is Planck’s constant and $p$ is momentum. This is defined to be the de Broglie wavelength. (Note that we already have this for photons, from the equation $p = h/\lambda$.) The hallmark of a wave is interference. If matter is a wave, then it must exhibit constructive and destructive interference. Why isn’t this ordinarily observed? The answer is that in order to see significant interference effects, a wave must interact with an object about the same size as its wavelength. Since $h$ is very small, $\lambda$ is also small, especially for macroscopic objects. A 3-kg bowling ball moving at 10 m/s, for example, has $\lambda = h/p = (6.63\times10^{-34}\ \text{J·s})/[(3\ \text{kg})(10\ \text{m/s})] = 2.21\times10^{-35}\ \text{m}$. This means that to see its wave characteristics, the bowling ball would have to interact with something about $10^{-35}\ \text{m}$ in size—far smaller than anything known. When waves interact with objects much larger than their wavelength, they show negligible interference effects and move in straight lines (such as light rays in geometric optics). To get easily observed interference effects from particles of matter, the longest wavelength and hence smallest mass possible would be useful. Therefore, this effect was first observed with electrons. American physicists Clinton J. Davisson and Lester H. Germer in 1925 and, independently, British physicist G. P. Thomson (son of J. J. Thomson, discoverer of the electron) in 1926 scattered electrons from crystals and found diffraction patterns. These patterns are exactly consistent with interference of electrons having the de Broglie wavelength and are somewhat analogous to light interacting with a diffraction grating. (See .) De Broglie’s proposal of a wave nature for all particles initiated a remarkably productive era in which the foundations for quantum mechanics were laid. In 1926, the Austrian physicist Erwin Schrödinger (1887–1961) published four papers in which the wave nature of particles was treated explicitly with wave equations. At the same time, many others began important work. Among them was German physicist Werner Heisenberg (1901–1976) who, among many other contributions to quantum mechanics, formulated a mathematical treatment of the wave nature of matter that used matrices rather than wave equations. We will deal with some specifics in later sections, but it is worth noting that de Broglie’s work was a watershed for the development of quantum mechanics.
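To see why the wave nature of matter is invisible for everyday objects but prominent for electrons, here is a short numerical comparison of de Broglie wavelengths; the electron speed of 1.0 × 10⁶ m/s is an assumed illustrative value, not taken from the text.

```python
# De Broglie wavelength lambda = h / p for a macroscopic object and an electron.
# The bowling ball matches the example above; the electron speed is an
# assumed, non-relativistic illustrative value (so p = m*v is adequate).
h = 6.626e-34                  # Planck's constant, J*s
m_ball, v_ball = 3.0, 10.0     # kg, m/s
m_e, v_e = 9.109e-31, 1.0e6    # kg, m/s

lam_ball = h / (m_ball * v_ball)   # ~2e-35 m: far too small to ever observe
lam_e = h / (m_e * v_e)            # ~7e-10 m: comparable to atomic spacings

print(f"bowling ball: {lam_ball:.2e} m")
print(f"electron:     {lam_e:.2e} m")
```

An electron wavelength on the order of 10⁻¹⁰ m is comparable to the spacing between atoms in a crystal, which is why crystal lattices acted as natural diffraction gratings in the Davisson-Germer and Thomson experiments.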
De Broglie was awarded the Nobel Prize in 1929 for his vision, as were Davisson and G. P. Thomson in 1937 for their experimental verification of de Broglie’s hypothesis. ### Electron Microscopes One consequence or use of the wave nature of matter is found in the electron microscope. As we have discussed, there is a limit to the detail observed with any probe having a wavelength. Resolution, or observable detail, is limited to about one wavelength. Since a potential of only 54 V can produce electrons with sub-nanometer wavelengths, it is easy to get electrons with much smaller wavelengths than those of visible light (hundreds of nanometers). Electron microscopes can, thus, be constructed to detect much smaller details than optical microscopes. (See .) There are basically two types of electron microscopes. The transmission electron microscope (TEM) accelerates electrons that are emitted from a hot filament (the cathode). The beam is broadened and then passes through the sample. A magnetic lens focuses the beam image onto a fluorescent screen, a photographic plate, or (most probably) a CCD (light sensitive camera), from which it is transferred to a computer. The TEM is similar to the optical microscope, but it requires a thin sample examined in a vacuum. However it can resolve details as small as 0.1 nm (), providing magnifications of 100 million times the size of the original object. The TEM has allowed us to see individual atoms and structure of cell nuclei. The scanning electron microscope (SEM) provides images by using secondary electrons produced by the primary beam interacting with the surface of the sample (see ). The SEM also uses magnetic lenses to focus the beam onto the sample. However, it moves the beam around electrically to “scan” the sample in the x and y directions. A CCD detector is used to process the data for each electron position, producing images like the one at the beginning of this chapter. The SEM has the advantage of not requiring a thin sample and of providing a 3-D view. However, its resolution is about ten times less than a TEM. Electrons were the first particles with mass to be directly confirmed to have the wavelength proposed by de Broglie. Subsequently, protons, helium nuclei, neutrons, and many others have been observed to exhibit interference when they interact with objects having sizes similar to their de Broglie wavelength. The de Broglie wavelength for massless particles was well established in the 1920s for photons, and it has since been observed that all massless particles have a de Broglie wavelength The wave nature of all particles is a universal characteristic of nature. We shall see in following sections that implications of the de Broglie wavelength include the quantization of energy in atoms and molecules, and an alteration of our basic view of nature on the microscopic scale. The next section, for example, shows that there are limits to the precision with which we may make predictions, regardless of how hard we try. There are even limits to the precision with which we may measure an object’s location or energy. ### Test Prep for AP Courses ### Section Summary 1. Particles of matter also have a wavelength, called the de Broglie wavelength, given by , where is momentum. 2. Matter is found to have the same interference characteristics as any other wave. ### Conceptual Questions ### Problems & Exercises
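The claim that a 54 V accelerating potential already gives sub-nanometer electron wavelengths is easy to check. A minimal sketch, assuming non-relativistic electrons so that the kinetic energy $eV$ equals $p^2/2m$:

```python
import math

# De Broglie wavelength of an electron accelerated through V volts
# (non-relativistic: e*V = p**2 / (2*m_e), so p = sqrt(2*m_e*e*V), lambda = h/p).
h = 6.626e-34     # Planck's constant, J*s
m_e = 9.109e-31   # electron mass, kg
e = 1.602e-19     # elementary charge, C

def electron_wavelength_m(volts):
    p = math.sqrt(2 * m_e * e * volts)
    return h / p

# 54 V is the potential mentioned above; the higher voltages are illustrative.
for V in (54, 100, 10_000):
    print(f"{V:>6} V -> {electron_wavelength_m(V)*1e9:.4f} nm")
```

Even modest voltages give wavelengths hundreds to thousands of times shorter than visible light, which is the basis of the electron microscope’s resolution advantage; at tens of kilovolts and above, a relativistic correction to the momentum becomes noticeable.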
# Quantum Physics ## Probability: The Heisenberg Uncertainty Principle ### Learning Objectives By the end of this section, you will be able to: 1. Use both versions of Heisenberg’s uncertainty principle in calculations. 2. Explain the implications of Heisenberg’s uncertainty principle for measurements. ### Probability Distribution Matter and photons are waves, implying they are spread out over some distance. What is the position of a particle, such as an electron? Is it at the center of the wave? The answer lies in how you measure the position of an electron. Experiments show that you will find the electron at some definite location, unlike a wave. But if you set up exactly the same situation and measure it again, you will find the electron in a different location, often far outside any experimental uncertainty in your measurement. Repeated measurements will display a statistical distribution of locations that appears wavelike. (See .) After de Broglie proposed the wave nature of matter, many physicists, including Schrödinger and Heisenberg, explored the consequences. The idea quickly emerged that, because of its wave character, a particle’s trajectory and destination cannot be precisely predicted for each particle individually. However, each particle goes to a definite place (as illustrated in ). After compiling enough data, you get a distribution related to the particle’s wavelength and diffraction pattern. There is a certain probability of finding the particle at a given location, and the overall pattern is called a probability distribution. Those who developed quantum mechanics devised equations that predicted the probability distribution in various circumstances. It is somewhat disquieting to think that you cannot predict exactly where an individual particle will go, or even follow it to its destination. Let us explore what happens if we try to follow a particle. Consider the double-slit patterns obtained for electrons and photons in . First, we note that these patterns are identical, following , the equation for double-slit constructive interference developed in Photon Energies and the Electromagnetic Spectrum, where is the slit separation and is the electron or photon wavelength. Both patterns build up statistically as individual particles fall on the detector. This can be observed for photons or electrons—for now, let us concentrate on electrons. You might imagine that the electrons are interfering with one another as any waves do. To test this, you can lower the intensity until there is never more than one electron between the slits and the screen. The same interference pattern builds up! This implies that a particle’s probability distribution spans both slits, and the particles actually interfere with themselves. Does this also mean that the electron goes through both slits? An electron is a basic unit of matter that is not divisible. But it is a fair question, and so we should look to see if the electron traverses one slit or the other, or both. One possibility is to have coils around the slits that detect charges moving through them. What is observed is that an electron always goes through one slit or the other; it does not split to go through both. But there is a catch. If you determine that the electron went through one of the slits, you no longer get a double slit pattern—instead, you get single slit interference. There is no escape by using another method of determining which slit the electron went through. 
Knowing the particle went through one slit forces a single-slit pattern. If you do not observe which slit the electron goes through, you obtain a double-slit pattern. ### Heisenberg Uncertainty How does knowing which slit the electron passed through change the pattern? The answer is fundamentally important—measurement affects the system being observed. Information can be lost, and in some cases it is impossible to measure two physical quantities simultaneously to exact precision. For example, you can measure the position of a moving electron by scattering light or other electrons from it. Those probes have momentum themselves, and by scattering from the electron, they change its momentum in a manner that loses information. There is a limit to absolute knowledge, even in principle. It was Werner Heisenberg who first stated this limit to knowledge in 1927 as a result of his work on quantum mechanics and the wave characteristics of all particles. (See ). Specifically, consider simultaneously measuring the position and momentum of an electron (it could be any particle). There is an uncertainty in position that is approximately equal to the wavelength of the particle. That is, $\Delta x \approx \lambda$. As discussed above, a wave is not located at one point in space. If the electron’s position is measured repeatedly, a spread in locations will be observed, implying an uncertainty in position $\Delta x$. To detect the position of the particle, we must interact with it, such as having it collide with a detector. In the collision, the particle will lose momentum. This change in momentum could be anywhere from close to zero to the total momentum of the particle, $p = h/\lambda$. It is not possible to tell how much momentum will be transferred to a detector, and so there is an uncertainty in momentum $\Delta p$, too. In fact, the uncertainty in momentum may be as large as the momentum itself, which in equation form means that $\Delta p \approx h/\lambda$. The uncertainty in position can be reduced by using a shorter-wavelength electron, since $\Delta x \approx \lambda$. But shortening the wavelength increases the uncertainty in momentum, since $\Delta p \approx h/\lambda$. Conversely, the uncertainty in momentum can be reduced by using a longer-wavelength electron, but this increases the uncertainty in position. Mathematically, you can express this trade-off by multiplying the uncertainties. The wavelength cancels, leaving $\Delta x\,\Delta p \approx h$. So if one uncertainty is reduced, the other must increase so that their product is $\approx h$. With the use of advanced mathematics, Heisenberg showed that the best that can be done in a simultaneous measurement of position and momentum is $\Delta x\,\Delta p \ge \frac{h}{4\pi}$. This is known as the Heisenberg uncertainty principle. It is impossible to measure position and momentum simultaneously with uncertainties $\Delta x$ and $\Delta p$ that multiply to be less than $h/4\pi$. Neither uncertainty can be zero. Neither uncertainty can become small without the other becoming large. A small wavelength allows accurate position measurement, but it increases the momentum of the probe to the point that it further disturbs the momentum of a system being measured. For example, if an electron is scattered from an atom and has a wavelength small enough to detect the position of electrons in the atom, its momentum can knock the electrons from their orbits in a manner that loses information about their original motion. It is therefore impossible to follow an electron in its orbit around an atom. If you measure the electron’s position, you will find it in a definite location, but the atom will be disrupted.
Repeated measurements on identical atoms will produce interesting probability distributions for electrons around the atom, but they will not produce motion information. The probability distributions are referred to as electron clouds or orbitals. The shapes of these orbitals are often shown in general chemistry texts and are discussed in The Wave Nature of Matter Causes Quantization. Why don’t we notice Heisenberg’s uncertainty principle in everyday life? The answer is that Planck’s constant is very small. Thus the lower limit in the uncertainty of measuring the position and momentum of large objects is negligible. We can detect sunlight reflected from Jupiter and follow the planet in its orbit around the Sun. The reflected sunlight alters the momentum of Jupiter and creates an uncertainty in its momentum, but this is totally negligible compared with Jupiter’s huge momentum. The correspondence principle tells us that the predictions of quantum mechanics become indistinguishable from classical physics for large objects, which is the case here. ### Heisenberg Uncertainty for Energy and Time There is another form of Heisenberg’s uncertainty principle for simultaneous measurements of energy and time. In equation form, where is the uncertainty in energy and is the uncertainty in time. This means that within a time interval , it is not possible to measure energy precisely—there will be an uncertainty in the measurement. In order to measure energy more precisely (to make smaller), we must increase . This time interval may be the amount of time we take to make the measurement, or it could be the amount of time a particular state exists, as in the next . The uncertainty principle for energy and time can be of great significance if the lifetime of a system is very short. Then is very small, and is consequently very large. Some nuclei and exotic particles have extremely short lifetimes (as small as ), causing uncertainties in energy as great as many GeV (). Stored energy appears as increased rest mass, and so this means that there is significant uncertainty in the rest mass of short-lived particles. When measured repeatedly, a spread of masses or decay energies are obtained. The spread is . You might ask whether this uncertainty in energy could be avoided by not measuring the lifetime. The answer is no. Nature knows the lifetime, and so its brevity affects the energy of the particle. This is so well established experimentally that the uncertainty in decay energy is used to calculate the lifetime of short-lived states. Some nuclei and particles are so short-lived that it is difficult to measure their lifetime. But if their decay energy can be measured, its spread is , and this is used in the uncertainty principle () to calculate the lifetime . There is another consequence of the uncertainty principle for energy and time. If energy is uncertain by , then conservation of energy can be violated by for a time . Neither the physicist nor nature can tell that conservation of energy has been violated, if the violation is temporary and smaller than the uncertainty in energy. While this sounds innocuous enough, we shall see in later chapters that it allows the temporary creation of matter from nothing and has implications for how nature transmits forces over very small distances. Finally, note that in the discussion of particles and waves, we have stated that individual measurements produce precise or particle-like results. A definite position is determined each time we observe an electron, for example. 
But repeated measurements produce a spread in values consistent with wave characteristics. The great theoretical physicist Richard Feynman (1918–1988) commented, “What there are, are particles.” When you observe enough of them, they distribute themselves as you would expect for a wave phenomenon. However, what there are as they travel we cannot tell because, when we do try to measure, we affect the traveling. ### Section Summary 1. Matter is found to have the same interference characteristics as any other wave. 2. There is now a probability distribution for the location of a particle rather than a definite position. 3. Another consequence of the wave character of all particles is the Heisenberg uncertainty principle, which limits the precision with which certain physical quantities can be known simultaneously. For position and momentum, the uncertainty principle is $\Delta x\,\Delta p \ge \frac{h}{4\pi}$, where $\Delta x$ is the uncertainty in position and $\Delta p$ is the uncertainty in momentum. 4. For energy and time, the uncertainty principle is $\Delta E\,\Delta t \ge \frac{h}{4\pi}$, where $\Delta E$ is the uncertainty in energy and $\Delta t$ is the uncertainty in time. 5. These small limits are fundamentally important on the quantum-mechanical scale. ### Conceptual Questions ### Problems & Exercises
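As a rough numerical illustration of both forms of the uncertainty principle, the sketch below uses generic textbook-scale values (an atom-sized confinement length and a 10⁻²⁵ s lifetime) that are assumptions for illustration, not data from this section.

```python
import math

# Heisenberg uncertainty principle in the forms summarized above:
#   dx * dp >= h / (4*pi)     and     dE * dt >= h / (4*pi)
h = 6.626e-34     # Planck's constant, J*s
m_e = 9.109e-31   # electron mass, kg
eV = 1.602e-19    # joules per electron volt

# 1) Electron confined to an atom-sized region, dx ~ 0.1 nm (assumed value):
dx = 1.0e-10
dp_min = h / (4 * math.pi * dx)   # smallest allowed momentum uncertainty
dv_min = dp_min / m_e             # corresponding speed uncertainty
print(f"dp >= {dp_min:.2e} kg m/s  ->  dv >= {dv_min:.1e} m/s")

# 2) Particle state with an assumed lifetime dt ~ 1e-25 s:
dt = 1.0e-25
dE_min = h / (4 * math.pi * dt)   # smallest allowed energy uncertainty
print(f"dE >= {dE_min:.2e} J = {dE_min/eV/1e9:.1f} GeV")
```

A speed uncertainty of several hundred kilometers per second for a bound electron, and an energy spread of a few GeV for an extremely short-lived state, show why these limits dominate on the quantum scale yet are negligible for planets and bowling balls.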
# Quantum Physics ## The Particle-Wave Duality Reviewed ### Learning Objectives By the end of this section, you will be able to: 1. Explain the concept of particle-wave duality, and its scope. Particle-wave duality—the fact that all particles have wave properties—is one of the cornerstones of quantum mechanics. We first came across it in the treatment of photons, those particles of EM radiation that exhibit both particle and wave properties, but not at the same time. Later it was noted that particles of matter have wave properties as well. The dual properties of particles and waves are found for all particles, whether massless like photons, or having a mass like electrons. (See .) There are many submicroscopic particles in nature. Most have mass and are expected to act as particles, or the smallest units of matter. All these masses have wave properties, with wavelengths given by the de Broglie relationship . So, too, do combinations of these particles, such as nuclei, atoms, and molecules. As a combination of masses becomes large, particularly if it is large enough to be called macroscopic, its wave nature becomes difficult to observe. This is consistent with our common experience with matter. Some particles in nature are massless. We have only treated the photon so far, but all massless entities travel at the speed of light, have a wavelength, and exhibit particle and wave behaviors. They have momentum given by a rearrangement of the de Broglie relationship, . In large combinations of these massless particles (such large combinations are common only for photons or EM waves), there is mostly wave behavior upon detection, and the particle nature becomes difficult to observe. This is also consistent with experience. (See .) The particle-wave duality is a universal attribute. It is another connection between matter and energy. Not only has modern physics been able to describe nature for high speeds and small sizes, it has also discovered new connections and symmetries. There is greater unity and symmetry in nature than was known in the classical era—but they were dreamt of. A beautiful poem written by the English poet William Blake some two centuries ago contains the following four lines: To see the World in a Grain of Sand And a Heaven in a Wild Flower Hold Infinity in the palm of your hand And Eternity in an hour ### Integrated Concepts The problem set for this section involves concepts from this chapter and several others. Physics is most interesting when applied to general situations involving more than a narrow set of physical principles. For example, photons have momentum, hence the relevance of Linear Momentum and Collisions. The following topics are involved in some or all of the problems in this section: 1. Dynamics: Newton’s Laws of Motion 2. Work, Energy, and Energy Resources 3. Linear Momentum and Collisions 4. Heat and Heat Transfer Methods 5. Electric Potential and Electric Field 6. Electric Current, Resistance, and Ohm’s Law 7. Wave Optics 8. Special Relativity illustrates how these strategies are applied to an integrated-concept problem. ### Test Prep for AP Courses ### Section Summary 1. The particle-wave duality refers to the fact that all particles—those with mass and those without mass—have wave characteristics. 2. This is a further connection between mass and energy. ### Conceptual Questions ### Problems & Exercises
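One way to see the universality claimed above is that the same relation $p = h/\lambda$ fixes the momentum whether the particle is massless or has mass; only the relation between momentum and energy differs. A minimal sketch, using an assumed 500 nm wavelength:

```python
# For a given wavelength, p = h / lambda is the same for a photon and an electron;
# the energies differ because E = p*c for a massless particle while a slow
# massive particle has E = p**2 / (2*m). The 500 nm wavelength is illustrative.
h = 6.626e-34     # J*s
c = 2.998e8       # m/s
m_e = 9.109e-31   # kg
eV = 1.602e-19    # J per eV
lam = 500e-9      # m

p = h / lam
E_photon = p * c                  # massless: E = pc
E_electron = p**2 / (2 * m_e)     # massive, non-relativistic kinetic energy

print(f"p = {p:.2e} kg m/s")
print(f"photon energy:   {E_photon/eV:.2f} eV")
print(f"electron energy: {E_electron/eV:.1e} eV")
```

The photon carries about 2.5 eV at this wavelength, while an electron with the same wavelength has only a few microelectronvolts of kinetic energy.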
# Atomic Physics ## Introduction to Atomic Physics From childhood on, we learn that atoms are a substructure of all things around us, from the air we breathe to the autumn leaves that blanket a forest trail. Invisible to the eye, the existence and properties of atoms are used to explain many phenomena—a theme found throughout this text. In this chapter, we discuss the discovery of atoms and their own substructures; we then apply quantum mechanics to the description of atoms, and their properties and interactions. Along the way, we will find, much like the scientists who made the original discoveries, that new concepts emerge with applications far beyond the boundaries of atomic physics.
# Atomic Physics ## Discovery of the Atom ### Learning Objectives By the end of this section, you will be able to: 1. Describe the basic structure of the atom, the substructure of all matter. How do we know that atoms are really there if we cannot see them with our eyes? A brief account of the progression from the proposal of atoms by the Greeks to the first direct evidence of their existence follows. People have long speculated about the structure of matter and the existence of atoms. The earliest significant ideas to survive are due to the ancient Greeks in the fifth century BCE, especially those of the philosophers Leucippus and Democritus. (There is some evidence that philosophers in both India and China made similar speculations, at about the same time.) They considered the question of whether a substance can be divided without limit into ever smaller pieces. There are only a few possible answers to this question. One is that infinitesimally small subdivision is possible. Another is what Democritus in particular believed—that there is a smallest unit that cannot be further subdivided. Democritus called this the atom. We now know that atoms themselves can be subdivided, but their identity is destroyed in the process, so the Greeks were correct in a respect. The Greeks also felt that atoms were in constant motion, another correct notion. The Greeks and others speculated about the properties of atoms, proposing that only a few types existed and that all matter was formed as various combinations of these types. The famous proposal that the basic elements were earth, air, fire, and water was brilliant, but incorrect. The Greeks had identified the most common examples of the four states of matter (solid, gas, plasma, and liquid), rather than the basic elements. More than 2000 years passed before observations could be made with equipment capable of revealing the true nature of atoms. Over the centuries, discoveries were made regarding the properties of substances and their chemical reactions. Certain systematic features were recognized, but similarities between common and rare elements resulted in efforts to transmute them (lead into gold, in particular) for financial gain. Secrecy was endemic. Alchemists discovered and rediscovered many facts but did not make them broadly available. As the Middle Ages ended, alchemy gradually faded, and the science of chemistry arose. It was no longer possible, nor considered desirable, to keep discoveries secret. Collective knowledge grew, and by the beginning of the 19th century, an important fact was well established—the masses of reactants in specific chemical reactions always have a particular mass ratio. This is very strong indirect evidence that there are basic units (atoms and molecules) that have these same mass ratios. The English chemist John Dalton (1766–1844) did much of this work, with significant contributions by the Italian physicist Amedeo Avogadro (1776–1856). It was Avogadro who developed the idea of a fixed number of atoms and molecules in a mole, and this special number is called Avogadro’s number in his honor. The Austrian physicist Johann Josef Loschmidt was the first to measure the value of the constant in 1865 using the kinetic theory of gases. Knowledge of the properties of elements and compounds grew, culminating in the mid-19th-century development of the periodic table of the elements by Dmitri Mendeleev (1834–1907), the great Russian chemist. 
Mendeleev proposed an ingenious array that highlighted the periodic nature of the properties of elements. Believing in the systematics of the periodic table, he also predicted the existence of then-unknown elements to complete it. Once these elements were discovered and determined to have properties predicted by Mendeleev, his periodic table became universally accepted. Also during the 19th century, the kinetic theory of gases was developed. Kinetic theory is based on the existence of atoms and molecules in random thermal motion and provides a microscopic explanation of the gas laws, heat transfer, and thermodynamics (see Introduction to Temperature, Kinetic Theory, and the Gas Laws and Introduction to Laws of Thermodynamics). Kinetic theory works so well that it is another strong indication of the existence of atoms. But it is still indirect evidence—individual atoms and molecules had not been observed. There were heated debates about the validity of kinetic theory until direct evidence of atoms was obtained. The first truly direct evidence of atoms is credited to Robert Brown, a Scottish botanist. In 1827, he noticed that tiny pollen grains suspended in still water moved about in complex paths. This can be observed with a microscope for any small particles in a fluid. The motion is caused by the random thermal motions of fluid molecules colliding with particles in the fluid, and it is now called Brownian motion. (See .) Statistical fluctuations in the numbers of molecules striking the sides of a visible particle cause it to move first this way, then that. Although the molecules cannot be directly observed, their effects on the particle can be. By examining Brownian motion, the size of molecules can be calculated. The smaller and more numerous they are, the smaller the fluctuations in the numbers striking different sides. It was Albert Einstein who, starting in his epochal year of 1905, published several papers that explained precisely how Brownian motion could be used to measure the size of atoms and molecules. (In 1905 Einstein created special relativity, proposed photons as quanta of EM radiation, and produced a theory of Brownian motion that allowed the size of atoms to be determined. All of this was done in his spare time, since he worked days as a patent examiner. Any one of these very basic works could have been the crowning achievement of an entire career—yet Einstein did even more in later years.) Their sizes were only approximately known to be , based on a comparison of latent heat of vaporization and surface tension made in about 1805 by Thomas Young of double-slit fame and the famous astronomer and mathematician Simon Laplace. Using Einstein’s ideas, the French physicist Jean-Baptiste Perrin (1870–1942) carefully observed Brownian motion; not only did he confirm Einstein’s theory, he also produced accurate sizes for atoms and molecules. Since molecular weights and densities of materials were well established, knowing atomic and molecular sizes allowed a precise value for Avogadro’s number to be obtained. (If we know how big an atom is, we know how many fit into a certain volume.) Perrin also used these ideas to explain atomic and molecular agitation effects in sedimentation, and he received the 1926 Nobel Prize for his achievements. Most scientists were already convinced of the existence of atoms, but the accurate observation and analysis of Brownian motion was conclusive—it was the first truly direct evidence. 
A huge array of direct and indirect evidence for the existence of atoms now exists. For example, it has become possible to accelerate ions (much as electrons are accelerated in cathode-ray tubes) and to detect them individually as well as measure their masses (see More Applications of Magnetism for a discussion of mass spectrometers). Other devices that observe individual atoms, such as the scanning tunneling electron microscope, will be discussed elsewhere. (See .) All of our understanding of the properties of matter is based on and consistent with the atom. The atom’s substructures, such as electron shells and the nucleus, are both interesting and important. The nucleus in turn has a substructure, as do the particles of which it is composed. These topics, and the question of whether there is a smallest basic structure to matter, will be explored in later parts of the text. ### Section Summary 1. Atoms are the smallest unit of elements; atoms combine to form molecules, the smallest unit of compounds. 2. The first direct observation of atoms was in Brownian motion. 3. Analysis of Brownian motion gave accurate sizes for atoms (about $10^{-10}\ \text{m}$ on average) and a precise value for Avogadro’s number. ### Conceptual Questions ### Problems & Exercises
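The remark that knowing how big an atom or molecule is tells you how many fit in a given volume can be turned into an order-of-magnitude estimate of Avogadro’s number. The sketch below uses liquid water; the ~0.3 nm effective molecular diameter is an assumed round number, not a value quoted in the text.

```python
# Order-of-magnitude estimate of Avogadro's number from molecular size:
# one mole occupies a volume M/rho, and each molecule effectively fills
# a cube of side d, so N_A ~ (M / rho) / d**3.
M = 0.018     # kg/mol, molar mass of water
rho = 1000.0  # kg/m^3, density of liquid water
d = 3.0e-10   # m, assumed effective molecular diameter (~0.3 nm)

molar_volume = M / rho              # m^3 per mole
N_A_estimate = molar_volume / d**3  # molecules per mole

print(f"N_A ~ {N_A_estimate:.1e} per mole (accepted value is about 6.02e23)")
```

Even this crude packing model lands near the accepted value, which illustrates why measured atomic and molecular sizes were enough to pin down Avogadro’s number.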
# Atomic Physics ## Discovery of the Parts of the Atom: Electrons and Nuclei ### Learning Objectives By the end of this section, you will be able to: 1. Describe how electrons were discovered. 2. Explain the Millikan oil drop experiment. 3. Describe Rutherford’s gold foil experiment. 4. Describe Rutherford’s planetary model of the atom. Just as atoms are a substructure of matter, electrons and nuclei are substructures of the atom. The experiments that were used to discover electrons and nuclei reveal some of the basic properties of atoms and can be readily understood using ideas such as electrostatic and magnetic force, already covered in previous chapters. ### The Electron Gas discharge tubes, such as that shown in , consist of an evacuated glass tube containing two metal electrodes and a rarefied gas. When a high voltage is applied to the electrodes, the gas glows. These tubes were the precursors to today’s neon lights. They were first studied seriously by Heinrich Geissler, a German inventor and glassblower, starting in the 1860s. The English scientist William Crookes, among others, continued to study what for some time were called Crookes tubes, wherein electrons are freed from atoms and molecules in the rarefied gas inside the tube and are accelerated from the cathode (negative) to the anode (positive) by the high potential. These “cathode rays” collide with the gas atoms and molecules and excite them, resulting in the emission of electromagnetic (EM) radiation that makes the electrons’ path visible as a ray that spreads and fades as it moves away from the cathode. Gas discharge tubes today are most commonly called cathode-ray tubes, because the rays originate at the cathode. Crookes showed that the electrons carry momentum (they can make a small paddle wheel rotate). He also found that their normally straight path is bent by a magnet in the direction expected for a negative charge moving away from the cathode. These were the first direct indications of electrons and their charge. The English physicist J. J. Thomson (1856–1940) improved and expanded the scope of experiments with gas discharge tubes. (See and .) He verified the negative charge of the cathode rays with both magnetic and electric fields. Additionally, he collected the rays in a metal cup and found an excess of negative charge. Thomson was also able to measure the ratio of the charge of the electron to its mass, —an important step to finding the actual values of both and . shows a cathode-ray tube, which produces a narrow beam of electrons that passes through charging plates connected to a high-voltage power supply. An electric field is produced between the charging plates, and the cathode-ray tube is placed between the poles of a magnet so that the electric field is perpendicular to the magnetic field of the magnet. These fields, being perpendicular to each other, produce opposing forces on the electrons. As discussed for mass spectrometers in More Applications of Magnetism, if the net force due to the fields vanishes, then the velocity of the charged particle is . In this manner, Thomson determined the velocity of the electrons and then moved the beam up and down by adjusting the electric field. To see how the amount of deflection is used to calculate , note that the deflection is proportional to the electric force on the electron: But the vertical deflection is also related to the electron’s mass, since the electron’s acceleration is The value of is not known, since was not yet known. 
Substituting the expression for electric force into the expression for acceleration yields Gathering terms, we have The deflection is analyzed to get , and is determined from the applied voltage and distance between the plates; thus, can be determined. With the velocity known, another measurement of can be obtained by bending the beam of electrons with the magnetic field. Since , we have . Consistent results are obtained using magnetic deflection. What is so important about , the ratio of the electron’s charge to its mass? The value obtained is This is a huge number, as Thomson realized, and it implies that the electron has a very small mass. It was known from electroplating that about is needed to plate a material, a factor of about 1000 less than the charge per kilogram of electrons. Thomson went on to do the same experiment for positively charged hydrogen ions (now known to be bare protons) and found a charge per kilogram about 1000 times smaller than that for the electron, implying that the proton is about 1000 times more massive than the electron. Today, we know more precisely that where is the charge of the proton and is its mass. This ratio (to four significant figures) is 1836 times less charge per kilogram than for the electron. Since the charges of electrons and protons are equal in magnitude, this implies . Thomson performed a variety of experiments using differing gases in discharge tubes and employing other methods, such as the photoelectric effect, for freeing electrons from atoms. He always found the same properties for the electron, proving it to be an independent particle. For his work, the important pieces of which he began to publish in 1897, Thomson was awarded the 1906 Nobel Prize in Physics. In retrospect, it is difficult to appreciate how astonishing it was to find that the atom has a substructure. Thomson himself said, “It was only when I was convinced that the experiment left no escape from it that I published my belief in the existence of bodies smaller than atoms.” Thomson attempted to measure the charge of individual electrons, but his method could determine its charge only to the order of magnitude expected. Since Faraday’s experiments with electroplating in the 1830s, it had been known that about 100,000 C per mole was needed to plate singly ionized ions. Dividing this by the number of ions per mole (that is, by Avogadro’s number), which was approximately known, the charge per ion was calculated to be about , close to the actual value. An American physicist, Robert Millikan (1868–1953) (see ), decided to improve upon Thomson’s experiment for measuring and was eventually forced to try another approach, which is now a classic experiment performed by students. The Millikan oil drop experiment is shown in . In the Millikan oil drop experiment, fine drops of oil are sprayed from an atomizer. Some of these are charged by the process and can then be suspended between metal plates by a voltage between the plates. In this situation, the weight of the drop is balanced by the electric force: The electric field is produced by the applied voltage, hence, , and is adjusted to just balance the drop’s weight. The drops can be seen as points of reflected light using a microscope, but they are too small to directly measure their size and mass. The mass of the drop is determined by observing how fast it falls when the voltage is turned off. 
Since air resistance is very significant for these submicroscopic drops, the more massive drops fall faster than the less massive, and sophisticated sedimentation calculations can reveal their mass. Oil is used rather than water, because it does not readily evaporate, and so mass is nearly constant. Once the mass of the drop is known, the charge of the electron is given by rearranging the previous equation: where is the separation of the plates and is the voltage that holds the drop motionless. (The same drop can be observed for several hours to see that it really is motionless.) By 1913 Millikan had measured the charge of the electron to an accuracy of 1%, and he improved this by a factor of 10 within a few years to a value of . He also observed that all charges were multiples of the basic electron charge and that sudden changes could occur in which electrons were added or removed from the drops. For this very fundamental direct measurement of and for his studies of the photoelectric effect, Millikan was awarded the 1923 Nobel Prize in Physics. With the charge of the electron known and the charge-to-mass ratio known, the electron’s mass can be calculated. It is Substituting known values yields or where the round-off errors have been corrected. The mass of the electron has been verified in many subsequent experiments and is now known to an accuracy of better than one part in one million. It is an incredibly small mass and remains the smallest known mass of any particle that has mass. (Some particles, such as photons, are massless and cannot be brought to rest, but travel at the speed of light.) A similar calculation gives the masses of other particles, including the proton. To three digits, the mass of the proton is now known to be which is nearly identical to the mass of a hydrogen atom. What Thomson and Millikan had done was to prove the existence of one substructure of atoms, the electron, and further to show that it had only a tiny fraction of the mass of an atom. The nucleus of an atom contains most of its mass, and the nature of the nucleus was completely unanticipated. Another important characteristic of quantum mechanics was also beginning to emerge. All electrons are identical to one another. The charge and mass of electrons are not average values; rather, they are unique values that all electrons have. This is true of other fundamental entities at the submicroscopic level. All protons are identical to one another, and so on. ### The Nucleus Here, we examine the first direct evidence of the size and mass of the nucleus. In later chapters, we will examine many other aspects of nuclear physics, but the basic information on nuclear size and mass is so important to understanding the atom that we consider it here. Nuclear radioactivity was discovered in 1896, and it was soon the subject of intense study by a number of the best scientists in the world. Among them was New Zealander Lord Ernest Rutherford, who made numerous fundamental discoveries and earned the title of “father of nuclear physics.” Born in Nelson, Rutherford did his postgraduate studies at the Cavendish Laboratories in England before taking up a position at McGill University in Canada where he did the work that earned him a Nobel Prize in Chemistry in 1908. In the area of atomic and nuclear physics, there is much overlap between chemistry and physics, with physics providing the fundamental enabling theories. He returned to England in later years and had six future Nobel Prize winners as students. 
Rutherford used nuclear radiation to directly examine the size and mass of the atomic nucleus. The experiment he devised is shown in . A radioactive source that emits alpha radiation was placed in a lead container with a hole in one side to produce a beam of alpha particles, which are a type of ionizing radiation ejected by the nuclei of a radioactive source. A thin gold foil was placed in the beam, and the scattering of the alpha particles was observed by the glow they caused when they struck a phosphor screen. Alpha particles were known to be the doubly charged positive nuclei of helium atoms that had kinetic energies on the order of when emitted in nuclear decay, which is the disintegration of the nucleus of an unstable nuclide by the spontaneous emission of charged particles. These particles interact with matter mostly via the Coulomb force, and the manner in which they scatter from nuclei can reveal nuclear size and mass. This is analogous to observing how a bowling ball is scattered by an object you cannot see directly. Because the alpha particle’s energy is so large compared with the typical energies associated with atoms ( versus ), you would expect the alpha particles to simply crash through a thin foil much like a supersonic bowling ball would crash through a few dozen rows of bowling pins. Thomson had envisioned the atom to be a small sphere in which equal amounts of positive and negative charge were distributed evenly. The incident massive alpha particles would suffer only small deflections in such a model. Instead, Rutherford and his collaborators found that alpha particles occasionally were scattered to large angles, some even back in the direction from which they came! Detailed analysis using conservation of momentum and energy—particularly of the small number that came straight back—implied that gold nuclei are very small compared with the size of a gold atom, contain almost all of the atom’s mass, and are tightly bound. Since the gold nucleus is several times more massive than the alpha particle, a head-on collision would scatter the alpha particle straight back toward the source. In addition, the smaller the nucleus, the fewer alpha particles that would hit one head on. Although the results of the experiment were published by his colleagues in 1909, it took Rutherford two years to convince himself of their meaning. Like Thomson before him, Rutherford was reluctant to accept such radical results. Nature on a small scale is so unlike our classical world that even those at the forefront of discovery are sometimes surprised. Rutherford later wrote: “It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you. On consideration, I realized that this scattering backwards ... [meant] ... the greatest part of the mass of the atom was concentrated in a tiny nucleus.” In 1911, Rutherford published his analysis together with a proposed model of the atom. The size of the nucleus was determined to be about , or 100,000 times smaller than the atom. This implies a huge density, on the order of , vastly unlike any macroscopic matter. Also implied is the existence of previously unknown nuclear forces to counteract the huge repulsive Coulomb forces among the positive charges in the nucleus. Huge forces would also be consistent with the large energies emitted in nuclear radiation. The small size of the nucleus also implies that the atom is mostly empty inside. 
In fact, in Rutherford’s experiment, most alphas went straight through the gold foil with very little scattering, since electrons have such small masses and since the atom was mostly empty with nothing for the alpha to hit. There were already hints of this at the time Rutherford performed his experiments, since energetic electrons had been observed to penetrate thin foils more easily than expected. shows a schematic of the atoms in a thin foil with circles representing the size of the atoms (about ) and dots representing the nuclei. (The dots are not to scale—if they were, you would need a microscope to see them.) Most alpha particles miss the small nuclei and are only slightly scattered by electrons. Occasionally, (about once in 8000 times in Rutherford’s experiment), an alpha hits a nucleus head-on and is scattered straight backward. Based on the size and mass of the nucleus revealed by his experiment, as well as the mass of electrons, Rutherford proposed the planetary model of the atom. The planetary model of the atom pictures low-mass electrons orbiting a large-mass nucleus. The sizes of the electron orbits are large compared with the size of the nucleus, with mostly vacuum inside the atom. This picture is analogous to how low-mass planets in our solar system orbit the large-mass Sun at distances large compared with the size of the sun. In the atom, the attractive Coulomb force is analogous to gravitation in the planetary system. (See .) Note that a model or mental picture is needed to explain experimental results, since the atom is too small to be directly observed with visible light. Rutherford’s planetary model of the atom was crucial to understanding the characteristics of atoms, and their interactions and energies, as we shall see in the next few sections. Also, it was an indication of how different nature is from the familiar classical world on the small, quantum mechanical scale. The discovery of a substructure to all matter in the form of atoms and molecules was now being taken a step further to reveal a substructure of atoms that was simpler than the 92 elements then known. We have continued to search for deeper substructures, such as those inside the nucleus, with some success. In later chapters, we will follow this quest in the discussion of quarks and other elementary particles, and we will look at the direction the search seems now to be heading. ### Test Prep for AP Courses ### Section Summary 1. Atoms are composed of negatively charged electrons, first proved to exist in cathode-ray-tube experiments, and a positively charged nucleus. 2. All electrons are identical and have a charge-to-mass ratio of 3. The positive charge in the nuclei is carried by particles called protons, which have a charge-to-mass ratio of 4. Mass of electron, 5. Mass of proton, 6. The planetary model of the atom pictures electrons orbiting the nucleus in the same way that planets orbit the sun. ### Conceptual Questions ### Problem Exercises
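Two quick numerical follow-ups to this section, kept deliberately rough: the electron mass obtained by combining Millikan’s charge with Thomson’s charge-to-mass ratio, and the density implied by packing nearly all of a gold atom’s mass into its nucleus. The 7 fm nuclear radius and the rounded constants are assumed illustrative values, not figures quoted in the text.

```python
import math

# 1) Electron mass from the measured charge e and the charge-to-mass ratio e/m:
e = 1.602e-19        # C, electron charge (Millikan)
e_over_m = 1.76e11   # C/kg, charge-to-mass ratio (Thomson)
m_e = e / e_over_m
print(f"m_e ~ {m_e:.2e} kg")   # about 9.1e-31 kg

# 2) Density of a gold nucleus, taking an assumed radius of ~7e-15 m and
#    essentially the whole atomic mass (~197 u) concentrated there:
u = 1.66e-27         # kg per atomic mass unit
m_nucleus = 197 * u
r = 7.0e-15          # m, assumed nuclear radius
rho = m_nucleus / ((4 / 3) * math.pi * r**3)
print(f"nuclear density ~ {rho:.1e} kg/m^3")   # about 2e17 kg/m^3
```

A density of order 10¹⁷ kg/m³ is roughly fourteen orders of magnitude beyond any macroscopic material, which is the quantitative content of Rutherford’s conclusion that the atom is mostly empty space.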
# Atomic Physics ## Bohr’s Theory of the Hydrogen Atom ### Learning Objectives By the end of this section, you will be able to: 1. Describe the mysteries of atomic spectra. 2. Explain Bohr’s theory of the hydrogen atom. 3. Explain Bohr’s planetary model of the atom. 4. Illustrate energy state using the energy-level diagram. 5. Describe the triumphs and limits of Bohr’s theory. The great Danish physicist Niels Bohr (1885–1962) made immediate use of Rutherford’s planetary model of the atom. (). Bohr became convinced of its validity and spent part of 1912 at Rutherford’s laboratory. In 1913, after returning to Copenhagen, he began publishing his theory of the simplest atom, hydrogen, based on the planetary model of the atom. For decades, many questions had been asked about atomic characteristics. From their sizes to their spectra, much was known about atoms, but little had been explained in terms of the laws of physics. Bohr’s theory explained the atomic spectrum of hydrogen and established new and broadly applicable principles in quantum mechanics. ### Mysteries of Atomic Spectra As noted in Quantization of Energy , the energies of some small systems are quantized. Atomic and molecular emission and absorption spectra have been known for over a century to be discrete (or quantized). (See .) Maxwell and others had realized that there must be a connection between the spectrum of an atom and its structure, something like the resonant frequencies of musical instruments. But, in spite of years of efforts by many great minds, no one had a workable theory. (It was a running joke that any theory of atomic and molecular spectra could be destroyed by throwing a book of data at it, so complex were the spectra.) Following Einstein’s proposal of photons with quantized energies directly proportional to their wavelengths, it became even more evident that electrons in atoms can exist only in discrete orbits. In some cases, it had been possible to devise formulas that described the emission spectra. As you might expect, the simplest atom—hydrogen, with its single electron—has a relatively simple spectrum. The hydrogen spectrum had been observed in the infrared (IR), visible, and ultraviolet (UV), and several series of spectral lines had been observed. (See .) These series are named after early researchers who studied them in particular depth. The observed hydrogen-spectrum wavelengths can be calculated using the following formula: where is the wavelength of the emitted EM radiation and is the Rydberg constant, determined by the experiment to be The constant is a positive integer associated with a specific series. For the Lyman series, ; for the Balmer series, ; for the Paschen series, ; and so on. The Lyman series is entirely in the UV, while part of the Balmer series is visible with the remainder UV. The Paschen series and all the rest are entirely IR. There are apparently an unlimited number of series, although they lie progressively farther into the infrared and become difficult to observe as increases. The constant is a positive integer, but it must be greater than . Thus, for the Balmer series, and . Note that can approach infinity. While the formula in the wavelengths equation was just a recipe designed to fit data and was not based on physical principles, it did imply a deeper meaning. Balmer first devised the formula for his series alone, and it was later found to describe all the other series by using different values of . Bohr was the first to comprehend the deeper meaning. 
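As a check on the series formula just described, the short sketch below evaluates $1/\lambda = R\,(1/n_f^2 - 1/n_i^2)$ for the first few Balmer lines ($n_f = 2$), using the standard value of the Rydberg constant.

```python
# Hydrogen spectrum wavelengths from the Rydberg formula:
#   1/lambda = R * (1/n_f**2 - 1/n_i**2),  with n_i > n_f.
R = 1.097e7   # Rydberg constant, 1/m

def wavelength_nm(n_f, n_i):
    inverse_lam = R * (1 / n_f**2 - 1 / n_i**2)
    return 1e9 / inverse_lam   # convert metres to nanometres

# Balmer series: transitions ending on n_f = 2 (the first excited state).
for n_i in (3, 4, 5, 6):
    print(f"n = {n_i} -> 2 : {wavelength_nm(2, n_i):.1f} nm")
```

The result is the familiar set of visible Balmer lines near 656, 486, 434, and 410 nm; choosing $n_f = 1$ or $n_f = 3$ instead reproduces the Lyman (UV) and Paschen (IR) series.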
Again, we see the interplay between experiment and theory in physics. Experimentally, the spectra were well established, an equation was found to fit the experimental data, but the theoretical foundation was missing. ### Bohr’s Solution for Hydrogen Bohr was able to derive the formula for the hydrogen spectrum using basic physics, the planetary model of the atom, and some very important new proposals. His first proposal is that only certain orbits are allowed: we say that the orbits of electrons in atoms are quantized. Each orbit has a different energy, and electrons can move to a higher orbit by absorbing energy and drop to a lower orbit by emitting energy. If the orbits are quantized, the amount of energy absorbed or emitted is also quantized, producing discrete spectra. Photon absorption and emission are among the primary methods of transferring energy into and out of atoms. The energies of the photons are quantized, and their energy is explained as being equal to the change in energy of the electron when it moves from one orbit to another. In equation form, this is Here, is the change in energy between the initial and final orbits, and is the energy of the absorbed or emitted photon. It is quite logical (that is, expected from our everyday experience) that energy is involved in changing orbits. A blast of energy is required for the space shuttle, for example, to climb to a higher orbit. What is not expected is that atomic orbits should be quantized. This is not observed for satellites or planets, which can have any orbit given the proper energy. (See .) shows an energy-level diagram, a convenient way to display energy states. In the present discussion, we take these to be the allowed energy levels of the electron. Energy is plotted vertically with the lowest or ground state at the bottom and with excited states above. Given the energies of the lines in an atomic spectrum, it is possible (although sometimes very difficult) to determine the energy levels of an atom. Energy-level diagrams are used for many systems, including molecules and nuclei. A theory of the atom or any other system must predict its energies based on the physics of the system. Bohr was clever enough to find a way to calculate the electron orbital energies in hydrogen. This was an important first step that has been improved upon, but it is well worth repeating here, because it does correctly describe many characteristics of hydrogen. Assuming circular orbits, Bohr proposed that the angular momentum , that is, it has only specific, discrete values. The value for is given by the formula where is the angular momentum, is the electron’s mass, is the radius of the th orbit, and is Planck’s constant. Note that angular momentum is . For a small object at a radius and , so that . Quantization says that this value of can only be equal to , etc. At the time, Bohr himself did not know why angular momentum should be quantized, but using this assumption he was able to calculate the energies in the hydrogen spectrum, something no one else had done at the time. From Bohr’s assumptions, we will now derive a number of important properties of the hydrogen atom from the classical physics we have covered in the text. We start by noting the centripetal force causing the electron to follow a circular path is supplied by the Coulomb force. To be more general, we note that this analysis is valid for any single-electron atom. So, if a nucleus has protons ( for hydrogen, 2 for helium, etc.) 
and only one electron, that atom is called a hydrogen-like atom. The spectra of hydrogen-like ions are similar to hydrogen, but shifted to higher energy by the greater attractive force between the electron and nucleus. The magnitude of the centripetal force is , while the Coulomb force is . The tacit assumption here is that the nucleus is more massive than the stationary electron, and the electron orbits about it. This is consistent with the planetary model of the atom. Equating these, Angular momentum quantization is stated in an earlier equation. We solve that equation for , substitute it into the above, and rearrange the expression to obtain the radius of the orbit. This yields: where is defined to be the Bohr radius, since for the lowest orbit and for hydrogen , . It is left for this chapter’s Problems and Exercises to show that the Bohr radius is These last two equations can be used to calculate the radii of the allowed (quantized) electron orbits in any hydrogen-like atom. It is impressive that the formula gives the correct size of hydrogen, which is measured experimentally to be very close to the Bohr radius. The earlier equation also tells us that the orbital radius is proportional to , as illustrated in . To get the electron orbital energies, we start by noting that the electron energy is the sum of its kinetic and potential energy: Kinetic energy is the familiar , assuming the electron is not moving at relativistic speeds. Potential energy for the electron is electrical, or , where is the potential due to the nucleus, which looks like a point charge. The nucleus has a positive charge ; thus, , recalling an earlier equation for the potential due to a point charge. Since the electron’s charge is negative, we see that . Entering the expressions for and , we find Now we substitute and from earlier equations into the above expression for energy. Algebraic manipulation yields for the orbital energies of hydrogen-like atoms. Here, is the ground-state energy for hydrogen and is given by Thus, for hydrogen, shows an energy-level diagram for hydrogen that also illustrates how the various spectral series for hydrogen are related to transitions between energy levels. Electron total energies are negative, since the electron is bound to the nucleus, analogous to being in a hole without enough kinetic energy to escape. As approaches infinity, the total energy becomes zero. This corresponds to a free electron with no kinetic energy, since gets very large for large , and the electric potential energy thus becomes zero. Thus, 13.6 eV is needed to ionize hydrogen (to go from –13.6 eV to 0, or unbound), an experimentally verified number. Given more energy, the electron becomes unbound with some kinetic energy. For example, giving 15.0 eV to an electron in the ground state of hydrogen strips it from the atom and leaves it with 1.4 eV of kinetic energy. Finally, let us consider the energy of a photon emitted in a downward transition, given by the equation to be Substituting , we see that Dividing both sides of this equation by gives an expression for : It can be shown that is the Rydberg constant. Thus, we have used Bohr’s assumptions to derive the formula first proposed by Balmer years earlier as a recipe to fit experimental data. We see that Bohr’s theory of the hydrogen atom answers the question as to why this previously known formula describes the hydrogen spectrum. It is because the energy levels are proportional to , where is a non-negative integer. 
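The orbital radii and energies just derived are easy to evaluate numerically. The short sketch below assumes the standard values $a_{\text{B}} = 0.529 \times 10^{-10}\ \text{m}$ for the Bohr radius and $E_{0} = 13.6\ \text{eV}$ for the magnitude of the hydrogen ground-state energy, and it reproduces the ionization example quoted above.

```python
# Bohr-model radii and energies for hydrogen-like atoms (a numerical sketch;
# a_B = 0.529e-10 m and E_0 = 13.6 eV are the standard values).
a_B = 0.529e-10   # Bohr radius, m
E_0 = 13.6        # magnitude of the hydrogen ground-state energy, eV

def orbit_radius_m(n, Z=1):
    return n**2 * a_B / Z          # r_n = n^2 * a_B / Z

def orbit_energy_eV(n, Z=1):
    return -Z**2 * E_0 / n**2      # E_n = -Z^2 * E_0 / n^2

print(orbit_radius_m(1))      # 5.29e-11 m: the measured size of hydrogen
print(orbit_energy_eV(1))     # -13.6 eV: ground state, so 13.6 eV ionizes hydrogen
print(orbit_energy_eV(2))     # -3.4 eV: first excited state

# The example above: 15.0 eV given to a ground-state electron frees it
# with 15.0 - 13.6 = 1.4 eV of kinetic energy left over.
print(15.0 + orbit_energy_eV(1))   # 1.4 eV
```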
A downward transition releases energy, and so $n_{\text{i}}$ must be greater than $n_{\text{f}}$. The various series are those where the transitions end on a certain level. For the Lyman series, $n_{\text{f}} = 1$ — that is, all the transitions end in the ground state (see also ). For the Balmer series, $n_{\text{f}} = 2$, or all the transitions end in the first excited state; and so on. What was once a recipe is now based in physics, and something new is emerging—angular momentum is quantized.

### Triumphs and Limits of the Bohr Theory

Bohr did what no one had been able to do before. Not only did he explain the spectrum of hydrogen, he correctly calculated the size of the atom from basic physics. Some of his ideas are broadly applicable. Electron orbital energies are quantized in all atoms and molecules. Angular momentum is quantized. The electrons do not spiral into the nucleus, as expected classically (accelerated charges radiate, so that the electron orbits classically would decay quickly, and the electrons would sit on the nucleus—matter would collapse). These are major triumphs.

But there are limits to Bohr’s theory. It cannot be applied to multielectron atoms, even one as simple as a two-electron helium atom. Bohr’s model is what we call semiclassical. The orbits are quantized (nonclassical) but are assumed to be simple circular paths (classical). As quantum mechanics was developed, it became clear that there are no well-defined orbits; rather, there are clouds of probability. Bohr’s theory also did not explain that some spectral lines are doublets (split into two) when examined closely. We shall examine many of these aspects of quantum mechanics in more detail, but it should be kept in mind that Bohr did not fail. Rather, he made very important steps along the path to greater knowledge and laid the foundation for all of atomic physics that has since evolved.

### Test Prep for AP Courses

### Section Summary

1. The planetary model of the atom pictures electrons orbiting the nucleus in the way that planets orbit the sun. Bohr used the planetary model to develop the first reasonable theory of hydrogen, the simplest atom. Atomic and molecular spectra are quantized, with hydrogen spectrum wavelengths given by the formula $\frac{1}{\lambda} = R\left(\frac{1}{n_{\text{f}}^{2}} - \frac{1}{n_{\text{i}}^{2}}\right)$, where $\lambda$ is the wavelength of the emitted EM radiation and $R$ is the Rydberg constant, which has the value $R = 1.097 \times 10^{7}\ \text{m}^{-1}$.
2. The constants $n_{\text{i}}$ and $n_{\text{f}}$ are positive integers, and $n_{\text{i}}$ must be greater than $n_{\text{f}}$.
3. Bohr correctly proposed that the energy and radii of the orbits of electrons in atoms are quantized, with energy for transitions between orbits given by $\Delta E = hf = E_{\text{i}} - E_{\text{f}}$, where $\Delta E$ is the change in energy between the initial and final orbits and $hf$ is the energy of an absorbed or emitted photon. It is useful to plot orbital energies on a vertical graph called an energy-level diagram.
4. Bohr proposed that the allowed orbits are circular and must have quantized orbital angular momentum given by $L = m_{e}vr_{n} = n\frac{h}{2\pi}$ $(n = 1, 2, 3, \ldots)$, where $L$ is the angular momentum, $r_{n}$ is the radius of the $n$th orbit, and $h$ is Planck’s constant. For all one-electron (hydrogen-like) atoms, the radius of an orbit is given by $r_{n} = \frac{n^{2}}{Z}a_{\text{B}}$, where $Z$ is the atomic number of an element (the number of electrons it has when neutral) and $a_{\text{B}}$ is defined to be the Bohr radius, which is $a_{\text{B}} = 0.529 \times 10^{-10}\ \text{m}$.
5. Furthermore, the energies of hydrogen-like atoms are given by $E_{n} = -\frac{Z^{2}}{n^{2}}E_{0}$ $(n = 1, 2, 3, \ldots)$, where $E_{0}$ is the ground-state energy, $E_{0} = 13.6\ \text{eV}$. Thus, for hydrogen, $E_{n} = -\frac{13.6\ \text{eV}}{n^{2}}$ $(n = 1, 2, 3, \ldots)$.
6. The Bohr Theory gives accurate values for the energy levels in hydrogen-like atoms, but it has been improved upon in several respects.

### Conceptual Questions

### Problems & Exercises
# Atomic Physics

## X Rays: Atomic Origins and Applications

### Learning Objectives

By the end of this section, you will be able to:
1. Define x-ray tube and its spectrum.
2. Show the x-ray characteristic energy.
3. Specify the use of x rays in medical observations.
4. Explain the use of x rays in CT scanners in diagnostics.

Each type of atom (or element) has its own characteristic electromagnetic spectrum. X rays lie at the high-frequency end of an atom’s spectrum and are characteristic of the atom as well. In this section, we explore characteristic x rays and some of their important applications.

We have previously discussed x rays as a part of the electromagnetic spectrum in Photon Energies and the Electromagnetic Spectrum. That module illustrated how an x-ray tube (a specialized CRT) produces x rays. Electrons emitted from a hot filament are accelerated with a high voltage, gaining significant kinetic energy and striking the anode.

There are two processes by which x rays are produced in the anode of an x-ray tube. In one process, the deceleration of electrons produces x rays, and these x rays are called bremsstrahlung, or braking radiation. The second process is atomic in nature and produces characteristic x rays, so called because they are characteristic of the anode material. The x-ray spectrum in is typical of what is produced by an x-ray tube, showing a broad curve of bremsstrahlung radiation with characteristic x-ray peaks on it.

The spectrum in is collected over a period of time in which many electrons strike the anode, with a variety of possible outcomes for each hit. The broad range of x-ray energies in the bremsstrahlung radiation indicates that an incident electron’s energy is not usually converted entirely into photon energy. The highest-energy x ray produced is one for which all of the electron’s energy was converted to photon energy. Thus the accelerating voltage and the maximum x-ray energy are related by conservation of energy. Electric potential energy is converted to kinetic energy and then to photon energy, so that the maximum photon energy is $E_{\max} = hf_{\max} = qV$, where $q$ is the magnitude of the electron’s charge and $V$ is the accelerating voltage. Units of electron volts are convenient. For example, a 100-kV accelerating voltage produces x-ray photons with a maximum energy of 100 keV.

Some electrons excite atoms in the anode. Part of the energy that they deposit by collision with an atom results in one or more of the atom’s inner electrons being knocked into a higher orbit or the atom being ionized. When the anode’s atoms de-excite, they emit characteristic electromagnetic radiation. The most energetic of these are produced when an inner-shell vacancy is filled—that is, when a K- or L-shell electron has been excited to a higher level, and another electron falls into the vacant spot. A characteristic x ray (see Photon Energies and the Electromagnetic Spectrum) is electromagnetic (EM) radiation emitted by an atom when an inner-shell vacancy is filled. shows a representative energy-level diagram that illustrates the labeling of characteristic x rays. X rays created when an electron falls into a K-shell vacancy are called $K_{\alpha}$ x rays when they come from the next higher level; that is, an L-to-K transition. The labels come from the older alphabetical labeling of shells starting with K rather than using the principal quantum numbers 1, 2, 3, …. A more energetic $K_{\beta}$ x ray is produced when an electron falls into a K-shell vacancy from the M shell; that is, an M-to-K transition. Similarly, when an electron falls into the L shell from the M shell, an $L_{\alpha}$ x ray is created.
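As a check on the energy bookkeeping above, the following sketch converts an accelerating voltage into the maximum photon energy and the corresponding shortest (cutoff) wavelength; it assumes the common approximation $hc \approx 1240\ \text{eV·nm}$.

```python
# Maximum x-ray photon energy from a tube: all of the electron's kinetic
# energy qV goes into a single photon. The cutoff wavelength follows from
# lambda_min = h*c / E_max, using hc ~ 1240 eV·nm.
hc_eV_nm = 1240.0

def max_photon_energy_keV(accelerating_kV):
    return accelerating_kV            # qV in keV for V in kilovolts

def cutoff_wavelength_nm(accelerating_kV):
    return hc_eV_nm / (accelerating_kV * 1e3)

print(max_photon_energy_keV(100.0))   # 100 keV, matching the example above
print(cutoff_wavelength_nm(100.0))    # ~0.0124 nm
print(cutoff_wavelength_nm(50.0))     # ~0.0248 nm for a 50.0-kV chest-x-ray tube
```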
The energies of these x rays depend on the energies of electron states in the particular atom and, thus, are characteristic of that element: every element has it own set of x-ray energies. This property can be used to identify elements, for example, to find trace (small) amounts of an element in an environmental or biological sample. ### Medical and Other Diagnostic Uses of X-rays All of us can identify diagnostic uses of x-ray photons. Among these are the universal dental and medical x rays that have become an essential part of medical diagnostics. (See and .) X rays are also used to inspect our luggage at airports, as shown in , and for early detection of cracks in crucial aircraft components. An x ray is not only a noun meaning high-energy photon, it also is an image produced by x rays, and it has been made into a familiar verb—to be x-rayed. The most common x-ray images are simple shadows. Since x-ray photons have high energies, they penetrate materials that are opaque to visible light. The more energy an x-ray photon has, the more material it will penetrate. So an x-ray tube may be operated at 50.0 kV for a chest x ray, whereas it may need to be operated at 100 kV to examine a broken leg in a cast. The depth of penetration is related to the density of the material as well as to the energy of the photon. The denser the material, the fewer x-ray photons get through and the darker the shadow. Thus x rays excel at detecting breaks in bones and in imaging other physiological structures, such as some tumors, that differ in density from surrounding material. Because of their high photon energy, x rays produce significant ionization in materials and damage cells in biological organisms. Modern uses minimize exposure to the patient and eliminate exposure to others. Biological effects of x rays will be explored in the next chapter along with other types of ionizing radiation such as those produced by nuclei. As the x-ray energy increases, the Compton effect (see Photon Momentum) becomes more important in the attenuation of the x rays. Here, the x ray scatters from an outer electron shell of the atom, giving the ejected electron some kinetic energy while losing energy itself. The probability for attenuation of the x rays depends upon the number of electrons present (the material’s density) as well as the thickness of the material. Chemical composition of the medium, as characterized by its atomic number , is not important here. Low-energy x rays provide better contrast (sharper images). However, due to greater attenuation and less scattering, they are more absorbed by thicker materials. Greater contrast can be achieved by injecting a substance with a large atomic number, such as barium or iodine. The structure of the part of the body that contains the substance (e.g., the gastro-intestinal tract or the abdomen) can easily be seen this way. Breast cancer is the second-leading cause of death among women worldwide. Early detection can be very effective, hence the importance of x-ray diagnostics. A mammogram cannot diagnose a malignant tumor, only give evidence of a lump or region of increased density within the breast. X-ray absorption by different types of soft tissue is very similar, so contrast is difficult; this is especially true for younger women, who typically have denser breasts. For older women who are at greater risk of developing breast cancer, the presence of more fat in the breast gives the lump or tumor more contrast. 
MRI (Magnetic resonance imaging) has recently been used as a supplement to conventional x rays to improve detection and eliminate false positives. The subject’s radiation dose from x rays will be treated in a later chapter. A standard x ray gives only a two-dimensional view of the object. Dense bones might hide images of soft tissue or organs. If you took another x ray from the side of the person (the first one being from the front), you would gain additional information. While shadow images are sufficient in many applications, far more sophisticated images can be produced with modern technology. shows the use of a computed tomography (CT) scanner, also called computed axial tomography (CAT) scanner. X rays are passed through a narrow section (called a slice) of the patient’s body (or body part) over a range of directions. An array of many detectors on the other side of the patient registers the x rays. The system is then rotated around the patient and another image is taken, and so on. The x-ray tube and detector array are mechanically attached and so rotate together. Complex computer image processing of the relative absorption of the x rays along different directions produces a highly-detailed image. Different slices are taken as the patient moves through the scanner on a table. Multiple images of different slices can also be computer analyzed to produce three-dimensional information, sometimes enhancing specific types of tissue, as shown in . G. Hounsfield (UK) and A. Cormack (US) won the Nobel Prize in Medicine in 1979 for their development of computed tomography. ### X-Ray Diffraction and Crystallography Since x-ray photons are very energetic, they have relatively short wavelengths. For example, the 54.4-keV x ray of has a wavelength . Thus, typical x-ray photons act like rays when they encounter macroscopic objects, like teeth, and produce sharp shadows; however, since atoms are on the order of 0.1 nm in size, x rays can be used to detect the location, shape, and size of atoms and molecules. The process is called x-ray diffraction, because it involves the diffraction and interference of x rays to produce patterns that can be analyzed for information about the structures that scattered the x rays. Perhaps the most famous example of x-ray diffraction is the discovery of the double-helix structure of DNA in 1953 by an international team of scientists working at the Cavendish Laboratory—American James Watson, Englishman Francis Crick, and New Zealand–born Maurice Wilkins. Using x-ray diffraction data produced by Rosalind Franklin, they were the first to discern the structure of DNA that is so crucial to life. For this, Watson, Crick, and Wilkins were awarded the 1962 Nobel Prize in Physiology or Medicine. There is much debate and controversy over the issue that Rosalind Franklin was not included in the prize. shows a diffraction pattern produced by the scattering of x rays from a crystal. This process is known as x-ray crystallography because of the information it can yield about crystal structure, and it was the type of data Rosalind Franklin supplied to Watson and Crick for DNA. Not only do x rays confirm the size and shape of atoms, they give information on the atomic arrangements in materials. For example, current research in high-temperature superconductors involves complex materials whose lattice arrangements are crucial to obtaining a superconducting material. These can be studied using x-ray crystallography. 
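Since the discussion above quotes a 54.4-keV x ray and its wavelength, it is worth showing the conversion explicitly. The sketch below uses $\lambda = hc/E$ with the approximation $hc \approx 1240\ \text{eV·nm}$.

```python
# Converting an x-ray photon energy to a wavelength: lambda = h*c / E,
# with hc ~ 1240 eV·nm.
hc_eV_nm = 1240.0

def photon_wavelength_nm(energy_eV):
    return hc_eV_nm / energy_eV

print(photon_wavelength_nm(54.4e3))   # ~0.0228 nm for the 54.4-keV x ray above
# This is comparable to atomic spacings (~0.1 nm), which is why x rays
# diffract from crystals while casting sharp shadows of macroscopic objects.
```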
Historically, the scattering of x rays from crystals was used to prove that x rays are energetic EM waves. This was suspected from the time of the discovery of x rays in 1895, but it was not until 1912 that the German Max von Laue (1879–1960) convinced two of his colleagues to scatter x rays from crystals. If a diffraction pattern is obtained, he reasoned, then the x rays must be waves, and their wavelength could be determined. (The spacing of atoms in various crystals was reasonably well known at the time, based on good values for Avogadro’s number.) The experiments were convincing, and the 1914 Nobel Prize in Physics was given to von Laue for his suggestion leading to the proof that x rays are EM waves. In 1915, the unique father-and-son team of Sir William Henry Bragg and his son Sir William Lawrence Bragg were awarded a joint Nobel Prize for inventing the x-ray spectrometer and the then-new science of x-ray analysis. The elder Bragg had migrated to Australia from England just after graduating in mathematics. He learned physics and chemistry during his career at the University of Adelaide. The younger Bragg was born in Adelaide but went back to the Cavendish Laboratories in England to a career in x-ray and neutron crystallography; he provided support for Watson, Crick, and Wilkins for their work on unraveling the mysteries of DNA and to Max Perutz for his 1962 Nobel Prize-winning work on the structure of hemoglobin. Here again, we witness the enabling nature of physics—establishing instruments and designing experiments as well as solving mysteries in the biomedical sciences. Certain other uses for x rays will be studied in later chapters. X rays are useful in the treatment of cancer because of the inhibiting effect they have on cell reproduction. X rays observed coming from outer space are useful in determining the nature of their sources, such as neutron stars and possibly black holes. Created in nuclear bomb explosions, x rays can also be used to detect clandestine atmospheric tests of these weapons. X rays can cause excitations of atoms, which then fluoresce (emitting characteristic EM radiation), making x-ray-induced fluorescence a valuable analytical tool in a range of fields from art to archaeology. ### Section Summary 1. X rays are relatively high-frequency EM radiation. They are produced by transitions between inner-shell electron levels, which produce x rays characteristic of the atomic element, or by decelerating electrons. 2. X rays have many uses, including medical diagnostics and x-ray diffraction. ### Conceptual Questions ### Problem Exercises
# Atomic Physics ## Applications of Atomic Excitations and De-Excitations ### Learning Objectives By the end of this section, you will be able to: 1. Define and discuss fluorescence. 2. Define metastable. 3. Describe how laser emission is produced. 4. Explain population inversion. 5. Define and discuss holography. Many properties of matter and phenomena in nature are directly related to atomic energy levels and their associated excitations and de-excitations. The color of a rose, the output of a laser, and the transparency of air are but a few examples. (See .) While it may not appear that glow-in-the-dark pajamas and lasers have much in common, they are in fact different applications of similar atomic de-excitations. The color of a material is due to the ability of its atoms to absorb certain wavelengths while reflecting or reemitting others. A simple red material, for example a tomato, absorbs all visible wavelengths except red. This is because the atoms of its hydrocarbon pigment (lycopene) have levels separated by a variety of energies corresponding to all visible photon energies except red. Air is another interesting example. It is transparent to visible light, because there are few energy levels that visible photons can excite in air molecules and atoms. Visible light, thus, cannot be absorbed. Furthermore, visible light is only weakly scattered by air, because visible wavelengths are so much greater than the sizes of the air molecules and atoms. Light must pass through kilometers of air to scatter enough to cause red sunsets and blue skies. ### Fluorescence and Phosphorescence The ability of a material to emit various wavelengths of light is similarly related to its atomic energy levels. shows a scorpion illuminated by a UV lamp, sometimes called a black light. Some rocks also glow in black light, the particular colors being a function of the rock’s mineral composition. Black lights are also used to make certain posters glow. In the fluorescence process, an atom is excited to a level several steps above its ground state by the absorption of a relatively high-energy UV photon. This is called atomic excitation. Once it is excited, the atom can de-excite in several ways, one of which is to re-emit a photon of the same energy as excited it, a single step back to the ground state. This is called atomic de-excitation. All other paths of de-excitation involve smaller steps, in which lower-energy (longer wavelength) photons are emitted. Some of these may be in the visible range, such as for the scorpion in . Fluorescence is defined to be any process in which an atom or molecule, excited by a photon of a given energy, and de-excites by emission of a lower-energy photon. Fluorescence can be induced by many types of energy input. Fluorescent paint, dyes, and even soap residues in clothes make colors seem brighter in sunlight by converting some UV into visible light. X rays can induce fluorescence, as is done in x-ray fluoroscopy to make brighter visible images. Electric discharges can induce fluorescence, as in so-called neon lights and in gas-discharge tubes that produce atomic and molecular spectra. Common fluorescent lights use an electric discharge in mercury vapor to cause atomic emissions from mercury atoms. The inside of a fluorescent light is coated with a fluorescent material that emits visible light over a broad spectrum of wavelengths. By choosing an appropriate coating, fluorescent lights can be made more like sunlight or like the reddish glow of candlelight, depending on needs. 
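The fluorescence process described above is just conservation of energy applied to photons. The sketch below uses hypothetical de-excitation steps (the 4.9-eV input is roughly the strong UV line of a mercury discharge) to show that the emitted photons have longer wavelengths than the absorbed one; $hc \approx 1240\ \text{eV·nm}$ is assumed.

```python
# Fluorescence energy bookkeeping with hypothetical level spacings:
# one UV photon absorbed, two lower-energy photons emitted in smaller steps.
hc_eV_nm = 1240.0  # h*c in eV·nm (approximate)

def photon_wavelength_nm(energy_eV):
    return hc_eV_nm / energy_eV

uv_absorbed_eV = 4.9                               # ~253 nm UV photon excites the atom
first_step_eV = 2.6                                # first downward step (assumed)
second_step_eV = uv_absorbed_eV - first_step_eV    # remaining 2.3 eV

print(photon_wavelength_nm(uv_absorbed_eV))   # ~253 nm, ultraviolet (absorbed)
print(photon_wavelength_nm(first_step_eV))    # ~477 nm, visible (emitted)
print(photon_wavelength_nm(second_step_eV))   # ~539 nm, visible (emitted)
```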
Fluorescent lights are more efficient in converting electrical energy into visible light than incandescent filaments (about four times as efficient), the blackbody radiation of which is primarily in the infrared due to temperature limitations. This atom is excited to one of its higher levels by absorbing a UV photon. It can de-excite in a single step, re-emitting a photon of the same energy, or in several steps. The process is called fluorescence if the atom de-excites in smaller steps, emitting energy different from that which excited it. Fluorescence can be induced by a variety of energy inputs, such as UV, x-rays, and electrical discharge. The spectacular Waitomo caves on North Island in New Zealand provide a natural habitat for glow-worms. The glow-worms hang up to 70 silk threads of about 30 or 40 cm each to trap prey that fly towards them in the dark. The fluorescence process is very efficient, with nearly 100% of the energy input turning into light. (In comparison, fluorescent lights are about 20% efficient.) Fluorescence has many uses in biology and medicine. It is commonly used to label and follow a molecule within a cell. Such tagging allows one to study the structure of DNA and proteins. Fluorescent dyes and antibodies are usually used to tag the molecules, which are then illuminated with UV light and their emission of visible light is observed. Since the fluorescence of each element is characteristic, identification of elements within a sample can be done this way. shows a commonly used fluorescent dye called fluorescein. Below that, reveals the diffusion of a fluorescent dye in water by observing it under UV light. Once excited, an atom or molecule will usually spontaneously de-excite quickly. (The electrons raised to higher levels are attracted to lower ones by the positive charge of the nucleus.) Spontaneous de-excitation has a very short mean lifetime of typically about . However, some levels have significantly longer lifetimes, ranging up to milliseconds to minutes or even hours. These energy levels are inhibited and are slow in de-exciting because their quantum numbers differ greatly from those of available lower levels. Although these level lifetimes are short in human terms, they are many orders of magnitude longer than is typical and, thus, are said to be metastable, meaning relatively stable. Phosphorescence is the de-excitation of a metastable state. Glow-in-the-dark materials, such as luminous dials on some watches and clocks and on children’s toys and pajamas, are made of phosphorescent substances. Visible light excites the atoms or molecules to metastable states that decay slowly, releasing the stored excitation energy partially as visible light. In some ceramics, atomic excitation energy can be frozen in after the ceramic has cooled from its firing. It is very slowly released, but the ceramic can be induced to phosphoresce by heating—a process called “thermoluminescence.” Since the release is slow, thermoluminescence can be used to date antiquities. The less light emitted, the older the ceramic. (See .) ### Lasers Lasers today are commonplace. Lasers are used to read bar codes at stores and in libraries, laser shows are staged for entertainment, laser printers produce high-quality images at relatively low cost, and lasers send prodigious numbers of telephone messages through optical fibers. 
Among other things, lasers are also employed in surveying, weapons guidance, tumor eradication, retinal welding, and for reading DVDs, Blu-rays, and computer or game console CD-ROMs. Why do lasers have so many varied applications? The answer is that lasers produce single-wavelength EM radiation that is also very coherent—that is, the emitted photons are in phase. Laser output can, thus, be more precisely manipulated than incoherent mixed-wavelength EM radiation from other sources. The reason laser output is so pure and coherent is based on how it is produced, which in turn depends on a metastable state in the lasing material. Suppose a material had the energy levels shown in . When energy is put into a large collection of these atoms, electrons are raised to all possible levels. Most return to the ground state in less than about , but those in the metastable state linger. This includes those electrons originally excited to the metastable state and those that fell into it from above. It is possible to get a majority of the atoms into the metastable state, a condition called a population inversion. Once a population inversion is achieved, a very interesting thing can happen, as shown in . An electron spontaneously falls from the metastable state, emitting a photon. This photon finds another atom in the metastable state and stimulates it to decay, emitting a second photon of the same wavelength and in phase with the first, and so on. Stimulated emission is the emission of electromagnetic radiation in the form of photons of a given frequency, triggered by photons of the same frequency. For example, an excited atom, with an electron in an energy orbit higher than normal, releases a photon of a specific frequency when the electron drops back to a lower energy orbit. If this photon then strikes another electron in the same high-energy orbit in another atom, another photon of the same frequency is released. The emitted photons and the triggering photons are always in phase, have the same polarization, and travel in the same direction. The probability of absorption of a photon is the same as the probability of stimulated emission, and so a majority of atoms must be in the metastable state to produce energy. Einstein (again Einstein, and back in 1917!) was one of the important contributors to the understanding of stimulated emission of radiation. Decades before the technology was invented to even experiment with laser generation, Einstein was the first to realize that stimulated emission and absorption are equally probable. The laser acts as a temporary energy storage device that subsequently produces a massive energy output of single-wavelength, in-phase photons. The name laser is an acronym for light amplification by stimulated emission of radiation, the process just described. The process was proposed and developed following the advances in quantum physics. A joint Nobel Prize was awarded in 1964 to American Charles Townes (1915–), and Nikolay Basov (1922–2001) and Aleksandr Prokhorov (1916–2002), from the Soviet Union, for the development of lasers. The Nobel Prize in 1981 went to Arthur Schawlow (1921-1999) for pioneering laser applications. The original devices were called masers, because they produced microwaves. The first working laser was created in 1960 at Hughes Research labs (CA) by T. Maiman. It used a pulsed high-powered flash lamp and a ruby rod to produce red light. 
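The cascade of stimulated emission described above grows geometrically as long as most atoms remain in the metastable state. The toy model below is an idealization (every photon is assumed to trigger exactly one stimulated emission per step through a fully inverted medium) meant only to show how quickly in-phase photons multiply.

```python
# Idealized stimulated-emission cascade: one spontaneous photon doubles on
# each pass through a fully inverted medium (a toy model, not a real laser).
photons = 1
for generation in range(1, 11):
    photons *= 2          # each photon stimulates one more identical photon
    print(generation, photons)
# After 10 doublings there are 1024 coherent, in-phase photons; in a real
# laser the growth stops as the population inversion is depleted.
```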
Today the name laser is used for all such devices developed to produce a variety of wavelengths, including microwave, infrared, visible, and ultraviolet radiation. shows how a laser can be constructed to enhance the stimulated emission of radiation. Energy input can be from a flash tube, electrical discharge, or other sources, in a process sometimes called optical pumping. A large percentage of the original pumping energy is dissipated in other forms, but a population inversion must be achieved. Mirrors can be used to enhance stimulated emission by multiple passes of the radiation back and forth through the lasing material. One of the mirrors is semitransparent to allow some of the light to pass through. The output of a laser is a mere 1% of the light passing back and forth inside it.

As described earlier in the section on laser vision correction, Donna Strickland and Gérard Mourou, working at the University of Rochester, developed a method to greatly increase the power of lasers, while also enabling them to be miniaturized. By passing the light over a specific type of grating, their method segments (or chirps) the delivery of the beam components in a manner that generates little heat at the source. Chirped pulse amplification is now used in some of the world’s most powerful lasers as well as those commonly used to make precise microcuts or burns in medical applications. Strickland and Mourou were awarded the Nobel Prize in Physics in 2018.

Lasers are constructed from many types of lasing materials, including gases, liquids, solids, and semiconductors. But all lasers are based on the existence of a metastable state or a phosphorescent material. Some lasers produce continuous output; others are pulsed in bursts as brief as . Some laser outputs are fantastically powerful—some greater than —but the more common, everyday lasers produce something on the order of .

The helium-neon laser that produces a familiar red light is very common. shows the energy levels of helium and neon, a pair of noble gases that work well together. An electrical discharge is passed through a helium-neon gas mixture in which the number of atoms of helium is ten times that of neon. The first excited state of helium is metastable and, thus, stores energy. This energy is easily transferred by collision to neon atoms, because they have an excited state at nearly the same energy as that in helium. That state in neon is also metastable, and this is the one that produces the laser output. (The most likely transition is to the nearby state, producing 1.96 eV photons, which have a wavelength of 633 nm and appear red.) A population inversion can be produced in neon, because there are so many more helium atoms and these put energy into the neon. Helium-neon lasers often have continuous output, because the population inversion can be maintained even while lasing occurs.

Probably the most common lasers in use today, including the common laser pointer, are semiconductor or diode lasers, made of semiconducting materials such as gallium arsenide. Here, energy is pumped into the material by passing a current in the device to excite the electrons. Special coatings on the ends and fine cleavings of the semiconductor material allow light to bounce back and forth and a tiny fraction to emerge as laser light. Diode lasers can usually run continually and produce outputs in the milliwatt range.

There are many medical applications of lasers. Lasers have the advantage that they can be focused to a small spot. They also have a well-defined wavelength.
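The helium-neon numbers quoted above are easy to check: a 1.96-eV transition corresponds to the familiar red line near 633 nm. The one-line sketch below uses $\lambda = hc/E$ with $hc \approx 1240\ \text{eV·nm}$.

```python
# Wavelength of the helium-neon laser transition quoted above.
hc_eV_nm = 1240.0          # approximate h*c in eV·nm
print(hc_eV_nm / 1.96)     # ~633 nm, the familiar red He-Ne line
```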
Many types of lasers are available today that provide wavelengths from the ultraviolet to the infrared. This is important, as one needs to be able to select a wavelength that will be preferentially absorbed by the material of interest. Objects appear a certain color because they absorb all other visible colors incident upon them. What wavelengths are absorbed depends upon the energy spacing between electron orbitals in that molecule. Unlike the hydrogen atom, biological molecules are complex and have a variety of absorption wavelengths or lines. But these can be determined and used in the selection of a laser with the appropriate wavelength. Water is transparent to the visible spectrum but will absorb light in the UV and IR regions. Blood (hemoglobin) strongly reflects red but absorbs most strongly in the UV. Laser surgery uses a wavelength that is strongly absorbed by the tissue it is focused upon. One example of a medical application of lasers is shown in . A detached retina can result in total loss of vision. Burns made by a laser focused to a small spot on the retina form scar tissue that can hold the retina in place, salvaging the patient’s vision. Other light sources cannot be focused as precisely as a laser due to refractive dispersion of different wavelengths. Similarly, laser surgery in the form of cutting or burning away tissue is made more accurate because laser output can be very precisely focused and is preferentially absorbed because of its single wavelength. Depending upon what part or layer of the retina needs repairing, the appropriate type of laser can be selected. For the repair of tears in the retina, a green argon laser is generally used. This light is absorbed well by tissues containing blood, so coagulation or “welding” of the tear can be done. In dentistry, the use of lasers is rising. Lasers are most commonly used for surgery on the soft tissue of the mouth. They can be used to remove ulcers, stop bleeding, and reshape gum tissue. Their use in cutting into bones and teeth is not quite so common; here the erbium YAG (yttrium aluminum garnet) laser is used. The massive combination of lasers shown in can be used to induce nuclear fusion, the energy source of the sun and hydrogen bombs. Since lasers can produce very high power in very brief pulses, they can be used to focus an enormous amount of energy on a small glass sphere containing fusion fuel. Not only does the incident energy increase the fuel temperature significantly so that fusion can occur, it also compresses the fuel to great density, enhancing the probability of fusion. The compression or implosion is caused by the momentum of the impinging laser photons. Before being largely replaced by streaming services and other storage methods, music CDs and DVDs were extremely common. They store information digitally and have a much larger information-storage capacity than their predecessors, audio and video cassette tapes. An entire encyclopedia can be stored on a single CD. illustrates how the information is stored and read from the CD. Pits made in the CD by a laser can be tiny and very accurately spaced to record digital information. These are read by having an inexpensive solid-state infrared laser beam scatter from pits as the CD spins, revealing their digital pattern and the information encoded upon them. Holograms, such as those in , are true three-dimensional images recorded on film by lasers. 
Holograms are used for amusement, decoration on novelty items and magazine covers, security on credit cards and driver’s licenses (a laser and other equipment is needed to reproduce them), and for serious three-dimensional information storage. You can see that a hologram is a true three-dimensional image, because objects change relative position in the image when viewed from different angles. The name hologram means “entire picture” (from the Greek holo, as in holistic), because the image is three-dimensional. Holography is the process of producing holograms and, although they are recorded on photographic film, the process is quite different from normal photography. Holography uses light interference or wave optics, whereas normal photography uses geometric optics. shows one method of producing a hologram. Coherent light from a laser is split by a mirror, with part of the light illuminating the object. The remainder, called the reference beam, shines directly on a piece of film. Light scattered from the object interferes with the reference beam, producing constructive and destructive interference. As a result, the exposed film looks foggy, but close examination reveals a complicated interference pattern stored on it. Where the interference was constructive, the film (a negative actually) is darkened. Holography is sometimes called lensless photography, because it uses the wave characteristics of light as contrasted to normal photography, which uses geometric optics and so requires lenses. Light falling on a hologram can form a three-dimensional image. The process is complicated in detail, but the basics can be understood as shown in , in which a laser of the same type that exposed the film is now used to illuminate it. The myriad tiny exposed regions of the film are dark and block the light, while less exposed regions allow light to pass. The film thus acts much like a collection of diffraction gratings with various spacings. Light passing through the hologram is diffracted in various directions, producing both real and virtual images of the object used to expose the film. The interference pattern is the same as that produced by the object. Moving your eye to various places in the interference pattern gives you different perspectives, just as looking directly at the object would. The image thus looks like the object and is three-dimensional like the object. The hologram illustrated in is a transmission hologram. Holograms that are viewed with reflected light, such as the white light holograms on credit cards, are reflection holograms and are more common. White light holograms often appear a little blurry with rainbow edges, because the diffraction patterns of various colors of light are at slightly different locations due to their different wavelengths. Further uses of holography include all types of 3-D information storage, such as of statues in museums and engineering studies of structures and 3-D images of human organs. Invented in the late 1940s by Dennis Gabor (1900–1979), who won the 1971 Nobel Prize in Physics for his work, holography became far more practical with the development of the laser. Since lasers produce coherent single-wavelength light, their interference patterns are more pronounced. The precision is so great that it is even possible to record numerous holograms on a single piece of film by just changing the angle of the film for each successive image. This is how the holograms that move as you walk by them are produced—a kind of lensless movie. 
In a similar way, in the medical field, holograms have allowed complete 3-D holographic displays of objects from a stack of images. Storing these images for future use is relatively easy. With the use of an endoscope, high-resolution 3-D holographic images of internal organs and tissues can be made. ### Test Prep for AP Courses ### Section Summary 1. An important atomic process is fluorescence, defined to be any process in which an atom or molecule is excited by absorbing a photon of a given energy and de-excited by emitting a photon of a lower energy. 2. Some states live much longer than others and are termed metastable. 3. Phosphorescence is the de-excitation of a metastable state. 4. Lasers produce coherent single-wavelength EM radiation by stimulated emission, in which a metastable state is stimulated to decay. 5. Lasing requires a population inversion, in which a majority of the atoms or molecules are in their metastable state. ### Conceptual Questions ### Problem Exercises
# Atomic Physics ## The Wave Nature of Matter Causes Quantization ### Learning Objectives By the end of this section, you will be able to: 1. Explain Bohr’s model of atom. 2. Define and describe quantization of angular momentum. 3. Calculate the angular momentum for an orbit of atom. 4. Define and describe the wave-like properties of matter. After visiting some of the applications of different aspects of atomic physics, we now return to the basic theory that was built upon Bohr’s atom. Einstein once said it was important to keep asking the questions we eventually teach children not to ask. Why is angular momentum quantized? You already know the answer. Electrons have wave-like properties, as de Broglie later proposed. They can exist only where they interfere constructively, and only certain orbits meet proper conditions, as we shall see in the next module. Following Bohr’s initial work on the hydrogen atom, a decade was to pass before de Broglie proposed that matter has wave properties. The wave-like properties of matter were subsequently confirmed by observations of electron interference when scattered from crystals. Electrons can exist only in locations where they interfere constructively. How does this affect electrons in atomic orbits? When an electron is bound to an atom, its wavelength must fit into a small space, something like a standing wave on a string. (See .) Allowed orbits are those orbits in which an electron constructively interferes with itself. Not all orbits produce constructive interference. Thus only certain orbits are allowed—the orbits are quantized. For a circular orbit, constructive interference occurs when the electron’s wavelength fits neatly into the circumference, so that wave crests always align with crests and wave troughs align with troughs, as shown in (b). More precisely, when an integral multiple of the electron’s wavelength equals the circumference of the orbit, constructive interference is obtained. In equation form, the condition for constructive interference and an allowed electron orbit is where is the electron’s wavelength and is the radius of that circular orbit. The de Broglie wavelength is , and so here . Substituting this into the previous condition for constructive interference produces an interesting result: Rearranging terms, and noting that for a circular orbit, we obtain the quantization of angular momentum as the condition for allowed orbits: This is what Bohr was forced to hypothesize as the rule for allowed orbits, as stated earlier. We now realize that it is the condition for constructive interference of an electron in a circular orbit. illustrates this for and Because of the wave character of matter, the idea of well-defined orbits gives way to a model in which there is a cloud of probability, consistent with Heisenberg’s uncertainty principle. shows how this applies to the ground state of hydrogen. If you try to follow the electron in some well-defined orbit using a probe that has a small enough wavelength to get some details, you will instead knock the electron out of its orbit. Each measurement of the electron’s position will find it to be in a definite location somewhere near the nucleus. Repeated measurements reveal a cloud of probability like that in the figure, with each speck the location determined by a single measurement. There is not a well-defined, circular-orbit type of distribution. Nature again proves to be different on a small scale than on a macroscopic scale. 
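The standing-wave condition above and Bohr’s angular momentum rule are two statements of the same requirement, and a quick numerical check makes this explicit. The sketch assumes the standard constants and the Bohr-orbit radii $r_n = n^2 a_{\text{B}}$.

```python
import math

# Check that n * lambda_n = 2 * pi * r_n for the first few Bohr orbits,
# using lambda = h / (m_e * v) and m_e * v * r_n = n * h / (2*pi).
h   = 6.626e-34     # Planck's constant, J·s
m_e = 9.109e-31     # electron mass, kg
a_B = 0.529e-10     # Bohr radius, m

for n in (1, 2, 3):
    r_n = n**2 * a_B
    v_n = n * h / (2 * math.pi * m_e * r_n)   # speed from the quantization rule
    lam = h / (m_e * v_n)                      # de Broglie wavelength
    print(n, n * lam, 2 * math.pi * r_n)       # the last two columns agree
```

For n = 1 the electron speed comes out near 2.2 × 10⁶ m/s and the wavelength near 0.33 nm, exactly the circumference of the smallest orbit.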
There are many examples in which the wave nature of matter causes quantization in bound systems such as the atom. Whenever a particle is confined or bound to a small space, its allowed wavelengths are those which fit into that space. For example, the particle in a box model describes a particle free to move in a small space surrounded by impenetrable barriers. This is true in blackbody radiators (atoms and molecules) as well as in atomic and molecular spectra. Various atoms and molecules will have different sets of electron orbits, depending on the size and complexity of the system. When a system is large, such as a grain of sand, the tiny particle waves in it can fit in so many ways that it becomes impossible to see that the allowed states are discrete. Thus the correspondence principle is satisfied. As systems become large, they gradually look less grainy, and quantization becomes less evident. Unbound systems (small or not), such as an electron freed from an atom, do not have quantized energies, since their wavelengths are not constrained to fit in a certain volume. ### Test Prep for AP Courses ### Section Summary 1. Quantization of orbital energy is caused by the wave nature of matter. Allowed orbits in atoms occur for constructive interference of electrons in the orbit, requiring an integral number of wavelengths to fit in an orbit’s circumference; that is, where is the electron’s de Broglie wavelength. 2. Owing to the wave nature of electrons and the Heisenberg uncertainty principle, there are no well-defined orbits; rather, there are clouds of probability. 3. Bohr correctly proposed that the energy and radii of the orbits of electrons in atoms are quantized, with energy for transitions between orbits given by where is the change in energy between the initial and final orbits and is the energy of an absorbed or emitted photon. 4. It is useful to plot orbit energies on a vertical graph called an energy-level diagram. 5. The allowed orbits are circular, Bohr proposed, and must have quantized orbital angular momentum given by where is the angular momentum, is the radius of orbit , and is Planck’s constant. ### Conceptual Questions
# Atomic Physics ## Patterns in Spectra Reveal More Quantization ### Learning Objectives By the end of this section, you will be able to: 1. State and discuss the Zeeman effect. 2. Define orbital magnetic field. 3. Define orbital angular momentum. 4. Define space quantization. High-resolution measurements of atomic and molecular spectra show that the spectral lines are even more complex than they first appear. In this section, we will see that this complexity has yielded important new information about electrons and their orbits in atoms. In order to explore the substructure of atoms (and knowing that magnetic fields affect moving charges), the Dutch physicist Hendrik Lorentz (1853–1930) suggested that his student Pieter Zeeman (1865–1943) study how spectra might be affected by magnetic fields. What they found became known as the Zeeman effect, which involved spectral lines being split into two or more separate emission lines by an external magnetic field, as shown in . For their discoveries, Zeeman and Lorentz shared the 1902 Nobel Prize in Physics. Zeeman splitting is complex. Some lines split into three lines, some into five, and so on. But one general feature is that the amount the split lines are separated is proportional to the applied field strength, indicating an interaction with a moving charge. The splitting means that the quantized energy of an orbit is affected by an external magnetic field, causing the orbit to have several discrete energies instead of one. Even without an external magnetic field, very precise measurements showed that spectral lines are doublets (split into two), apparently by magnetic fields within the atom itself. Bohr’s theory of circular orbits is useful for visualizing how an electron’s orbit is affected by a magnetic field. The circular orbit forms a current loop, which creates a magnetic field of its own, as seen in . Note that the orbital magnetic field and the orbital angular momentum are along the same line. The external magnetic field and the orbital magnetic field interact; a torque is exerted to align them. A torque rotating a system through some angle does work so that there is energy associated with this interaction. Thus, orbits at different angles to the external magnetic field have different energies. What is remarkable is that the energies are quantized—the magnetic field splits the spectral lines into several discrete lines that have different energies. This means that only certain angles are allowed between the orbital angular momentum and the external field, as seen in . We already know that the magnitude of angular momentum is quantized for electron orbits in atoms. The new insight is that the direction of the orbital angular momentum is also quantized. The fact that the orbital angular momentum can have only certain directions is called space quantization. Like many aspects of quantum mechanics, this quantization of direction is totally unexpected. On the macroscopic scale, orbital angular momentum, such as that of the moon around the earth, can have any magnitude and be in any direction. Detailed treatment of space quantization began to explain some complexities of atomic spectra, but certain patterns seemed to be caused by something else. As mentioned, spectral lines are actually closely spaced doublets, a characteristic called fine structure, as shown in . The doublet changes when a magnetic field is applied, implying that whatever causes the doublet interacts with a magnetic field. 
In 1925, Sam Goudsmit and George Uhlenbeck, two Dutch physicists, successfully argued that electrons have properties analogous to a macroscopic charge spinning on its axis. Electrons, in fact, have an internal or intrinsic angular momentum called intrinsic spin. Since electrons are charged, their intrinsic spin creates an intrinsic magnetic field, which interacts with their orbital magnetic field. Furthermore, both the magnitude and the direction of this intrinsic spin are quantized, analogous to the situation for orbital angular momentum. The spin of the electron can have only one magnitude, and its direction can be at only one of two angles relative to a magnetic field, as seen in . We refer to this as spin up or spin down for the electron. Each spin direction has a different energy; hence, spectroscopic lines are split into two. Spectral doublets are now understood as being due to electron spin.

These two new insights—that the direction of angular momentum, whether orbital or spin, is quantized, and that electrons have intrinsic spin—help to explain many of the complexities of atomic and molecular spectra. In magnetic resonance imaging, it is the way that the intrinsic magnetic field of hydrogen and biological atoms interacts with an external field that underlies the diagnostic fundamentals.

### Section Summary

1. The Zeeman effect—the splitting of lines when a magnetic field is applied—is caused by other quantized entities in atoms.
2. Both the magnitude and direction of orbital angular momentum are quantized.
3. The same is true for the magnitude and direction of the intrinsic spin of electrons.

### Conceptual Questions
# Atomic Physics ## Quantum Numbers and Rules ### Learning Objectives By the end of this section, you will be able to: 1. Define quantum number. 2. Calculate angle of angular momentum vector with an axis. 3. Define spin quantum number. Physical characteristics that are quantized—such as energy, charge, and angular momentum—are of such importance that names and symbols are given to them. The values of quantized entities are expressed in terms of quantum numbers, and the rules governing them are of the utmost importance in determining what nature is and does. This section covers some of the more important quantum numbers and rules—all of which apply in chemistry, material science, and far beyond the realm of atomic physics, where they were first discovered. Once again, we see how physics makes discoveries which enable other fields to grow. The energy states of bound systems are quantized, because the particle wavelength can fit into the bounds of the system in only certain ways. This was elaborated for the hydrogen atom, for which the allowed energies are expressed as , where . We define to be the principal quantum number that labels the basic states of a system. The lowest-energy state has , the first excited state has , and so on. Thus the allowed values for the principal quantum number are This is more than just a numbering scheme, since the energy of the system, such as the hydrogen atom, can be expressed as some function of , as can other characteristics (such as the orbital radii of the hydrogen atom). The fact that the magnitude of angular momentum is quantized was first recognized by Bohr in relation to the hydrogen atom; it is now known to be true in general. With the development of quantum mechanics, it was found that the magnitude of angular momentum can have only the values where is defined to be the angular momentum quantum number. The rule for in atoms is given in the parentheses. Given , the value of can be any integer from zero up to . For example, if , then can be 0, 1, 2, or 3. Note that for , can only be zero. This means that the ground-state angular momentum for hydrogen is actually zero, not as Bohr proposed. The picture of circular orbits is not valid, because there would be angular momentum for any circular orbit. A more valid picture is the cloud of probability shown for the ground state of hydrogen in . The electron actually spends time in and near the nucleus. The reason the electron does not remain in the nucleus is related to Heisenberg’s uncertainty principle—the electron’s energy would have to be much too large to be confined to the small space of the nucleus. Now the first excited state of hydrogen has , so that can be either 0 or 1, according to the rule in . Similarly, for , can be 0, 1, or 2. It is often most convenient to state the value of , a simple integer, rather than calculating the value of from . For example, for , we see that It is much simpler to state . As recognized in the Zeeman effect, the direction of angular momentum is quantized. We now know this is true in all circumstances. It is found that the component of angular momentum along one direction in space, usually called the -axis, can have only certain values of . The direction in space must be related to something physical, such as the direction of the magnetic field at that location. This is an aspect of relativity. Direction has no meaning if there is nothing that varies with direction, as does magnetic force. 
The allowed values of $L_z$ are

$$L_z = m_l \frac{h}{2\pi} \quad \left(m_l = -l, -l+1, \ldots, -1, 0, 1, \ldots, l-1, l\right),$$

where $L_z$ is the $z$-component of the angular momentum and $m_l$ is the angular momentum projection quantum number. The rule in parentheses for the values of $m_l$ is that it can range from $-l$ to $l$ in steps of one. For example, if $l = 2$, then $m_l$ can have the five values –2, –1, 0, 1, and 2. Each $m_l$ corresponds to a different energy in the presence of a magnetic field, so that they are related to the splitting of spectral lines into discrete parts, as discussed in the preceding section. If the $z$-component of angular momentum can have only certain values, then the angular momentum can have only certain directions, as illustrated in .

### Intrinsic Spin Angular Momentum Is Quantized in Magnitude and Direction

There are two more quantum numbers of immediate concern. Both were first discovered for electrons in conjunction with fine structure in atomic spectra. It is now well established that electrons and other fundamental particles have intrinsic spin, roughly analogous to a planet spinning on its axis. This spin is a fundamental characteristic of particles, and only one magnitude of intrinsic spin is allowed for a given type of particle. Intrinsic angular momentum is quantized independently of orbital angular momentum. Additionally, the direction of the spin is also quantized. It has been found that the magnitude of the intrinsic (internal) spin angular momentum, $S$, of an electron is given by

$$S = \sqrt{s(s+1)}\,\frac{h}{2\pi} \quad (s = 1/2 \text{ for electrons}),$$

where $s$ is defined to be the spin quantum number. This is very similar to the quantization of $L$ given in , except that the only value allowed for $s$ for electrons is 1/2. The direction of intrinsic spin is quantized, just as is the direction of orbital angular momentum. The direction of spin angular momentum along one direction in space, again called the $z$-axis, can have only the values

$$S_z = m_s \frac{h}{2\pi} \quad \left(m_s = -\frac{1}{2}, +\frac{1}{2}\right)$$

for electrons. $S_z$ is the $z$-component of spin angular momentum and $m_s$ is the spin projection quantum number. For electrons, $s$ can only be 1/2, and $m_s$ can be either +1/2 or –1/2. Spin projection $m_s = +1/2$ is referred to as spin up, whereas $m_s = -1/2$ is called spin down. These are illustrated in .

To summarize, the state of a system, such as the precise nature of an electron in an atom, is determined by its particular quantum numbers. These are expressed in the form $(n,\ l,\ m_l,\ s,\ m_s)$—see . For electrons in atoms, the principal quantum number can have the values $n = 1, 2, 3, \ldots$. Once $n$ is known, the values of the angular momentum quantum number are limited to $l = 0, 1, 2, \ldots, n-1$. For a given value of $l$, the angular momentum projection quantum number can have only the values $m_l = -l, -l+1, \ldots, -1, 0, 1, \ldots, l-1, l$. Electron spin is independent of $n$, $l$, and $m_l$, always having $s = 1/2$. The spin projection quantum number can have two values, $m_s = +1/2$ or $m_s = -1/2$.

shows several hydrogen states corresponding to different sets of quantum numbers. Note that these clouds of probability are the locations of electrons as determined by making repeated measurements—each measurement finds the electron in a definite location, with a greater chance of finding the electron in some places rather than others. With repeated measurements, the pattern of probability shown in the figure emerges. The clouds of probability do not look like, nor do they correspond to, classical orbits. The uncertainty principle actually prevents us and nature from knowing how the electron gets from one place to another, and so an orbit really does not exist as such. Nature on a small scale is again much different from that on the large scale. We will see that the quantum numbers discussed in this section are valid for a broad range of particles and other systems, such as nuclei.
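Returning to the direction quantization described above: since only certain values of $m_l$ are allowed, the angular momentum vector can make only certain angles with the $z$-axis, with $\cos\theta = L_z/L = m_l/\sqrt{l(l+1)}$. The sketch below lists the allowed angles for small $l$; the relation used is simply the ratio of the two formulas above.

```python
import math

# Allowed orientations of the orbital angular momentum for a given l:
#   cos(theta) = L_z / L = m_l / sqrt(l * (l + 1)), one angle per m_l.
def allowed_angles_deg(l):
    L = math.sqrt(l * (l + 1))
    return [round(math.degrees(math.acos(m_l / L)), 1)
            for m_l in range(l, -l - 1, -1)]

print(allowed_angles_deg(1))   # [45.0, 90.0, 135.0]
print(allowed_angles_deg(2))   # [35.3, 65.9, 90.0, 114.1, 144.7]
# Note that the vector can never point exactly along the z-axis, since
# |m_l| < sqrt(l*(l+1)) for every allowed m_l.
```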
Some quantum numbers, such as intrinsic spin, are related to fundamental classifications of subatomic particles, and they obey laws that will give us further insight into the substructure of matter and its interactions. ### Section Summary 1. Quantum numbers are used to express the allowed values of quantized entities. The principal quantum number labels the basic states of a system and is given by $n = 1, 2, 3, \dots$ 2. The magnitude of angular momentum is given by $L = \sqrt{l(l+1)}\,\dfrac{h}{2\pi}$, where $l = 0, 1, 2, \dots, n-1$ is the angular momentum quantum number. The direction of angular momentum is quantized, in that its component along an axis defined by a magnetic field, called the $z$-axis, is given by $L_z = m_l\,\dfrac{h}{2\pi}$, where $L_z$ is the $z$-component of the angular momentum and $m_l = -l, -l+1, \dots, l-1, l$ is the angular momentum projection quantum number. Similarly, the electron’s intrinsic spin angular momentum is given by $S = \sqrt{s(s+1)}\,\dfrac{h}{2\pi}$, where $s = 1/2$ is defined to be the spin quantum number. Finally, the direction of the electron’s spin along the $z$-axis is given by $S_z = m_s\,\dfrac{h}{2\pi}$, where $S_z$ is the $z$-component of spin angular momentum and $m_s = -\tfrac{1}{2}, +\tfrac{1}{2}$ is the spin projection quantum number. Spin projection $m_s = +\tfrac{1}{2}$ is referred to as spin up, whereas $m_s = -\tfrac{1}{2}$ is called spin down. summarizes the atomic quantum numbers and their allowed values. ### Conceptual Questions ### Problem Exercises
# Atomic Physics ## The Pauli Exclusion Principle ### Learning Objectives By the end of this section, you will be able to: 1. Define the composition of an atom along with its electrons, neutrons, and protons. 2. Explain the Pauli exclusion principle and its application to the atom. 3. Specify the shell and subshell symbols and their positions. 4. Define the position of electrons in different shells of an atom. 5. State the position of each element in the periodic table according to shell filling. ### Multiple-Electron Atoms All atoms except hydrogen are multiple-electron atoms. The physical and chemical properties of elements are directly related to the number of electrons a neutral atom has. The periodic table of the elements groups elements with similar properties into columns. This systematic organization is related to the number of electrons in a neutral atom, called the atomic number, . We shall see in this section that the exclusion principle is key to the underlying explanations, and that it applies far beyond the realm of atomic physics. In 1925, the Austrian physicist Wolfgang Pauli (see ) proposed the following rule: No two electrons can have the same set of quantum numbers. That is, no two electrons can be in the same state. This statement is known as the Pauli exclusion principle, because it excludes electrons from being in the same state. The Pauli exclusion principle is extremely powerful and very broadly applicable. It applies to any identical particles with half-integral intrinsic spin—that is, having Thus no two electrons can have the same set of quantum numbers. Let us examine how the exclusion principle applies to electrons in atoms. The quantum numbers involved were defined in Quantum Numbers and Rules as , and . Since is always for electrons, it is redundant to list , and so we omit it and specify the state of an electron by a set of four numbers . For example, the quantum numbers completely specify the state of an electron in an atom. Since no two electrons can have the same set of quantum numbers, there are limits to how many of them can be in the same energy state. Note that determines the energy state in the absence of a magnetic field. So we first choose , and then we see how many electrons can be in this energy state or energy level. Consider the level, for example. The only value can have is 0 (see for a list of possible values once is known), and thus can only be 0. The spin projection can be either or , and so there can be two electrons in the state. One has quantum numbers , and the other has . illustrates that there can be one or two electrons having , but not three. ### Shells and Subshells Because of the Pauli exclusion principle, only hydrogen and helium can have all of their electrons in the state. Lithium (see the periodic table) has three electrons, and so one must be in the level. This leads to the concept of shells and shell filling. As we progress up in the number of electrons, we go from hydrogen to helium, lithium, beryllium, boron, and so on, and we see that there are limits to the number of electrons for each value of . Higher values of the shell correspond to higher energies, and they can allow more electrons because of the various combinations of , and that are possible. Each value of the principal quantum number thus corresponds to an atomic shell into which a limited number of electrons can go. 
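A small sketch that simply enumerates the allowed sets of quantum numbers, assuming only the rules stated above; it confirms that the $n = 1$ level holds two electrons and no more.

```python
def allowed_states(n):
    """All distinct sets (n, l, m_l, m_s) permitted by the rules of the previous
    section; the exclusion principle lets at most one electron occupy each set."""
    states = []
    for l in range(n):                      # l = 0, 1, ..., n - 1
        for m_l in range(-l, l + 1):        # m_l = -l, ..., +l
            for m_s in (-0.5, +0.5):        # spin down, spin up
                states.append((n, l, m_l, m_s))
    return states

for n in (1, 2):
    print(f"n = {n}: {len(allowed_states(n))} allowed states")
# n = 1 yields exactly two states, so a third electron cannot share that level.
```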
Shells and the number of electrons in them determine the physical and chemical properties of atoms, since it is the outermost electrons that interact most with anything outside the atom. The probability clouds of electrons with the lowest value of are closest to the nucleus and, thus, more tightly bound. Thus when shells fill, they start with , progress to , and so on. Each value of thus corresponds to a subshell. The table given below lists symbols traditionally used to denote shells and subshells. To denote shells and subshells, we write with a number for and a letter for . For example, an electron in the state must have , and it is denoted as a electron. Two electrons in the state is denoted as . Another example is an electron in the state with , written as . The case of three electrons with these quantum numbers is written . This notation, called spectroscopic notation, is generalized as shown in . Counting the number of possible combinations of quantum numbers allowed by the exclusion principle, we can determine how many electrons it takes to fill each subshell and shell. The number of electrons that can be in a subshell depends entirely on the value of . Once is known, there are a fixed number of values of , each of which can have two values for First, since goes from to l in steps of 1, there are possibilities. This number is multiplied by 2, since each electron can be spin up or spin down. Thus the maximum number of electrons that can be in a subshell is . For example, the subshell in has a maximum of 2 electrons in it, since for this subshell. Similarly, the subshell has a maximum of 6 electrons, since . For a shell, the maximum number is the sum of what can fit in the subshells. Some algebra shows that the maximum number of electrons that can be in a shell is . For example, for the first shell , and so . We have already seen that only two electrons can be in the shell. Similarly, for the second shell, , and so . As found in , the total number of electrons in the shell is 8. ### Shell Filling and the Periodic Table shows electron configurations for the first 20 elements in the periodic table, starting with hydrogen and its single electron and ending with calcium. The Pauli exclusion principle determines the maximum number of electrons allowed in each shell and subshell. But the order in which the shells and subshells are filled is complicated because of the large numbers of interactions between electrons. Examining the above table, you can see that as the number of electrons in an atom increases from 1 in hydrogen to 2 in helium and so on, the lowest-energy shell gets filled first—that is, the shell fills first, and then the shell begins to fill. Within a shell, the subshells fill starting with the lowest , or with the subshell, then the , and so on, usually until all subshells are filled. The first exception to this occurs for potassium, where the subshell begins to fill before any electrons go into the subshell. The next exception is not shown in ; it occurs for rubidium, where the subshell starts to fill before the subshell. The reason for these exceptions is that electrons have probability clouds that penetrate closer to the nucleus and, thus, are more tightly bound (lower in energy). shows the periodic table of the elements, through element 118. Of special interest are elements in the main groups, namely, those in the columns numbered 1, 2, 13, 14, 15, 16, 17, and 18. 
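The counting argument above can be checked directly. The sketch below assumes the conventional spectroscopic letters for subshells and computes $2(2l+1)$ electrons per subshell and the $2n^2$ shell total.

```python
SUBSHELL_LETTERS = "spdfgh"        # conventional spectroscopic letters for l = 0, 1, 2, ...

def subshell_capacity(l):
    """Electrons per subshell: (2l + 1) values of m_l, times 2 spin projections."""
    return 2 * (2 * l + 1)

def shell_capacity(n):
    """Electrons per shell: the sum of 2(2l + 1) over l = 0 .. n - 1, which equals 2 n^2."""
    return sum(subshell_capacity(l) for l in range(n))

for n in (1, 2, 3):
    parts = ", ".join(f"{n}{SUBSHELL_LETTERS[l]}: {subshell_capacity(l)}" for l in range(n))
    print(f"n = {n} -> {parts}  | shell total {shell_capacity(n)} (= 2n^2 = {2 * n * n})")
```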
The number of electrons in the outermost subshell determines the atom’s chemical properties, since it is these electrons that are farthest from the nucleus and thus interact most with other atoms. If the outermost subshell can accept or give up an electron easily, then the atom will be highly reactive chemically. Each group in the periodic table is characterized by its outermost electron configuration. Perhaps the most familiar is Group 18 (Group VIII), the noble gases (helium, neon, argon, etc.). These gases are all characterized by a filled outer subshell that is particularly stable. This means that they have large ionization energies and do not readily give up an electron. Furthermore, if they were to accept an extra electron, it would be in a significantly higher level and thus loosely bound. Chemical reactions often involve sharing electrons. Noble gases can be forced into unstable chemical compounds only under high pressure and temperature. Group 17 (Group VII) contains the halogens, such as fluorine, chlorine, iodine and bromine, each of which has one less electron than a neighboring noble gas. Each halogen has 5 $p$ electrons (a $p^5$ configuration), while the $p$ subshell can hold 6 electrons. This means the halogens have one vacancy in their outermost subshell. They thus readily accept an extra electron (it becomes tightly bound, closing the shell as in noble gases) and are highly reactive chemically. The halogens are also likely to form singly negative ions, such as $\mathrm{Cl}^-$, fitting an extra electron into the vacancy in the outer subshell. In contrast, alkali metals, such as sodium and potassium, all have a single electron in their outermost subshell (an $s^1$ configuration) and are members of Group 1 (Group I). These elements easily give up their extra electron and are thus highly reactive chemically. As you might expect, they also tend to form singly positive ions, such as $\mathrm{Na}^+$, by losing their loosely bound outermost electron. They are metals (conductors), because the loosely bound outer electron can move freely. Of course, other groups are also of interest. Carbon, silicon, and germanium, for example, have similar chemistries and are in Group 14 (Group IV). Carbon, in particular, is extraordinary in its ability to form many types of bonds and to be part of long chains, such as organic molecules. The large group of what are called transitional elements is characterized by the filling of the $d$ subshells and crossing of energy levels. Heavier groups, such as the lanthanide series, are more complex—their shells do not fill in simple order. But the groups recognized by chemists such as Mendeleev have an explanation in the substructure of atoms. ### Section Summary 1. The state of a system is completely described by a complete set of quantum numbers. This set is written as $(n,\ l,\ m_l,\ m_s)$. 2. The Pauli exclusion principle says that no two electrons can have the same set of quantum numbers; that is, no two electrons can be in the same state. 3. This exclusion limits the number of electrons in atomic shells and subshells. Each value of $n$ corresponds to a shell, and each value of $l$ corresponds to a subshell. 4. The maximum number of electrons that can be in a subshell is $2(2l+1)$. 5. The maximum number of electrons that can be in a shell is $2n^2$. ### Conceptual Questions ### Problem Exercises
# Radioactivity and Nuclear Physics ## Introduction to Radioactivity and Nuclear Physics There is an ongoing quest to find substructures of matter. At one time, it was thought that atoms would be the ultimate substructure, but just when the first direct evidence of atoms was obtained, it became clear that they have a substructure and a tiny nucleus. The nucleus itself has spectacular characteristics. For example, certain nuclei are unstable, and their decay emits radiations with energies millions of times greater than atomic energies. Some of the mysteries of nature, such as why the core of the earth remains molten and how the sun produces its energy, are explained by nuclear phenomena. The exploration of radioactivity and the nucleus revealed fundamental and previously unknown particles, forces, and conservation laws. That exploration has evolved into a search for further underlying structures, such as quarks. In this chapter, the fundamentals of nuclear radioactivity and the nucleus are explored. The following two chapters explore the more important applications of nuclear physics in the field of medicine. We will also explore the basics of what we know about quarks and other substructures smaller than nuclei.
# Radioactivity and Nuclear Physics ## Nuclear Radioactivity ### Learning Objectives By the end of this section, you will be able to: 1. Explain nuclear radiation. 2. Explain the types of radiation—alpha emission, beta emission, and gamma emission. 3. Explain the ionization of radiation in an atom. 4. Define the range of radiation. The discovery and study of nuclear radioactivity quickly revealed evidence of revolutionary new physics. In addition, uses for nuclear radiation also emerged quickly—for example, people such as Ernest Rutherford used it to determine the size of the nucleus and devices were painted with radium-doped paint to make them glow in the dark (see ). We therefore begin our study of nuclear physics with the discovery and basic features of nuclear radioactivity. ### Discovery of Nuclear Radioactivity In 1896, the French physicist Antoine Henri Becquerel (1852–1908) accidentally found that a uranium-rich mineral called pitchblende emits invisible, penetrating rays that can darken a photographic plate enclosed in an opaque envelope. The rays therefore carry energy; but amazingly, the pitchblende emits them continuously without any energy input. This is an apparent violation of the law of conservation of energy, one that we now understand is due to the conversion of a small amount of mass into energy, as related in Einstein’s famous equation $E = mc^2$. It was soon evident that Becquerel’s rays originate in the nuclei of the atoms and have other unique characteristics. The emission of these rays is called nuclear radioactivity or simply radioactivity. The rays themselves are called nuclear radiation. A nucleus that spontaneously destroys part of its mass to emit radiation is said to decay (a term also used to describe the emission of radiation by atoms in excited states). A substance or object that emits nuclear radiation is said to be radioactive. Two types of experimental evidence imply that Becquerel’s rays originate deep in the heart (or nucleus) of an atom. First, the radiation is found to be associated with certain elements, such as uranium. Radiation does not vary with chemical state—that is, uranium is radioactive whether it is in the form of an element or compound. In addition, radiation does not vary with temperature, pressure, or ionization state of the uranium atom. Since all of these factors affect electrons in an atom, the radiation cannot come from electron transitions, as atomic spectra do. The huge energy emitted during each event is the second piece of evidence that the radiation cannot be atomic. Nuclear radiation has energies on the order of $10^6\ \mathrm{eV}$ per event, which is much greater than typical atomic energies (a few eV), such as those observed in spectra and chemical reactions, and more than ten times as high as the most energetic characteristic x rays. Becquerel did not vigorously pursue his discovery for very long. In 1898, Marie Curie (1867–1934), then a graduate student married to the already well-known French physicist Pierre Curie (1859–1906), began her doctoral study of Becquerel’s rays. She and her husband soon discovered two new radioactive elements, which she named polonium (after her native land) and radium (because it radiates). These two new elements filled holes in the periodic table and, further, displayed much higher levels of radioactivity per gram of material than uranium. Over a period of four years, working under poor conditions and spending their own funds, the Curies processed more than a ton of uranium ore to isolate a gram of radium salt. 
Radium became highly sought after, because it was about two million times as radioactive as uranium. Curie’s radium salt glowed visibly from the radiation that took its toll on them and other unaware researchers. Shortly after completing her Ph.D., both Curies and Becquerel shared the 1903 Nobel Prize in physics for their work on radioactivity. Pierre was killed in a horse cart accident in 1906, but Marie continued her study of radioactivity for nearly 30 more years. Awarded the 1911 Nobel Prize in chemistry for her discovery of two new elements, she remains the only person to win Nobel Prizes in physics and chemistry. Marie’s radioactive fingerprints on some pages of her notebooks can still expose film, and she suffered from radiation-induced lesions. She died of leukemia likely caused by radiation, but she was active in research almost until her death in 1934. The following year, her daughter and son-in-law, Irene and Frederic Joliot-Curie, were awarded the Nobel Prize in chemistry for their discovery of artificially induced radiation, adding to a remarkable family legacy. ### Alpha, Beta, and Gamma Research begun by people such as New Zealander Ernest Rutherford soon after the discovery of nuclear radiation indicated that different types of rays are emitted. Eventually, three types were distinguished and named alpha (α), beta (β), and gamma (γ), because, like x rays, their identities were initially unknown. shows what happens if the rays are passed through a magnetic field. The γs are unaffected, while the αs and βs are deflected in opposite directions, indicating the αs are positive, the βs negative, and the γs uncharged. Rutherford used both magnetic and electric fields to show that αs have a positive charge twice the magnitude of the electron’s charge, or $+2e$. In the process, he found the α’s charge-to-mass ratio to be several thousand times smaller than the electron’s. Later on, Rutherford collected αs from a radioactive source and passed an electric discharge through them, obtaining the spectrum of recently discovered helium gas. Among many important discoveries made by Rutherford and his collaborators was the proof that α radiation is the emission of a helium nucleus. Rutherford won the Nobel Prize in chemistry in 1908 for his early work. He continued to make important contributions until his death in 1937. Other researchers had already proved that βs are negative and have the same mass and same charge-to-mass ratio as the recently discovered electron. By 1902, it was recognized that β radiation is the emission of an electron. Although βs are electrons, they do not exist in the nucleus before it decays and are not ejected atomic electrons—the electron is created in the nucleus at the instant of decay. Since γs remain unaffected by electric and magnetic fields, it is natural to think they might be photons. Evidence for this grew, but it was not until 1914 that this was proved by Rutherford and collaborators. By scattering γ radiation from a crystal and observing interference, they demonstrated that γ radiation is the emission of a high-energy photon by a nucleus. In fact, γ radiation comes from the de-excitation of a nucleus, just as an x ray comes from the de-excitation of an atom. The names “γ ray” and “x ray” identify the source of the radiation. At the same energy, γ rays and x rays are otherwise identical. ### Ionization and Range Two of the most important characteristics of α, β, and γ rays were recognized very early. 
All three types of nuclear radiation produce ionization in materials, but they penetrate different distances in materials—that is, they have different ranges. Let us examine why they have these characteristics and what are some of the consequences. Like x rays, nuclear radiation in the form of s, s, and s has enough energy per event to ionize atoms and molecules in any material. The energy emitted in various nuclear decays ranges from a few to more than , while only a few are needed to produce ionization. The effects of x rays and nuclear radiation on biological tissues and other materials, such as solid state electronics, are directly related to the ionization they produce. All of them, for example, can damage electronics or kill cancer cells. In addition, methods for detecting x rays and nuclear radiation are based on ionization, directly or indirectly. All of them can ionize the air between the plates of a capacitor, for example, causing it to discharge. This is the basis of inexpensive personal radiation monitors, such as pictured in . Apart from , , and , there are other forms of nuclear radiation as well, and these also produce ionization with similar effects. We define ionizing radiation as any form of radiation that produces ionization whether nuclear in origin or not, since the effects and detection of the radiation are related to ionization. The range of radiation is defined to be the distance it can travel through a material. Range is related to several factors, including the energy of the radiation, the material encountered, and the type of radiation (see ). The higher the energy, the greater the range, all other factors being the same. This makes good sense, since radiation loses its energy in materials primarily by producing ionization in them, and each ionization of an atom or a molecule requires energy that is removed from the radiation. The amount of ionization is, thus, directly proportional to the energy of the particle of radiation, as is its range. Radiation can be absorbed or shielded by materials, such as the lead aprons dentists drape on us when taking x rays. Lead is a particularly effective shield compared with other materials, such as plastic or air. How does the range of radiation depend on material? Ionizing radiation interacts best with charged particles in a material. Since electrons have small masses, they most readily absorb the energy of the radiation in collisions. The greater the density of a material and, in particular, the greater the density of electrons within a material, the smaller the range of radiation. Different types of radiation have different ranges when compared at the same energy and in the same material. Alphas have the shortest range, betas penetrate farther, and gammas have the greatest range. This is directly related to charge and speed of the particle or type of radiation. At a given energy, each , , or will produce the same number of ionizations in a material (each ionization requires a certain amount of energy on average). The more readily the particle produces ionization, the more quickly it will lose its energy. The effect of charge is as follows: The has a charge of , the has a charge of , and the is uncharged. The electromagnetic force exerted by the is thus twice as strong as that exerted by the and it is more likely to produce ionization. Although chargeless, the does interact weakly because it is an electromagnetic wave, but it is less likely to produce ionization in any encounter. 
More quantitatively, the change in momentum given to a particle in the material is , where is the force the , , or exerts over a time . The smaller the charge, the smaller is and the smaller is the momentum (and energy) lost. Since the speed of alphas is about 5% to 10% of the speed of light, classical (non-relativistic) formulas apply. The speed at which they travel is the other major factor affecting the range of s, s, and s. The faster they move, the less time they spend in the vicinity of an atom or a molecule, and the less likely they are to interact. Since s and s are particles with mass (helium nuclei and electrons, respectively), their energy is kinetic, given classically by . The mass of the particle is thousands of times less than that of the s, so that s must travel much faster than s to have the same energy. Since s move faster (most at relativistic speeds), they have less time to interact than s. Gamma rays are photons, which must travel at the speed of light. They are even less likely to interact than a , since they spend even less time near a given atom (and they have no charge). The range of s is thus greater than the range of s. Alpha radiation from radioactive sources has a range much less than a millimeter of biological tissues, usually not enough to even penetrate the dead layers of our skin. On the other hand, the same radiation can penetrate a few centimeters of air, so mere distance from a source prevents radiation from reaching us. This makes radiation relatively safe for our body compared to and radiation. Typical radiation can penetrate a few millimeters of tissue or about a meter of air. Beta radiation is thus hazardous even when not ingested. The range of s in lead is about a millimeter, and so it is easy to store sources in lead radiation-proof containers. Gamma rays have a much greater range than either s or s. In fact, if a given thickness of material, like a lead brick, absorbs 90% of the s, then a second lead brick will only absorb 90% of what got through the first. Thus, s do not have a well-defined range; we can only cut down the amount that gets through. Typically, s can penetrate many meters of air, go right through our bodies, and are effectively shielded (that is, reduced in intensity to acceptable levels) by many centimeters of lead. One benefit of s is that they can be used as radioactive tracers (see ). ### Test Prep for AP Courses ### Section Summary 1. Some nuclei are radioactive—they spontaneously decay destroying some part of their mass and emitting energetic rays, a process called nuclear radioactivity. 2. Nuclear radiation, like x rays, is ionizing radiation, because energy sufficient to ionize matter is emitted in each decay. 3. The range (or distance traveled in a material) of ionizing radiation is directly related to the charge of the emitted particle and its energy, with greater-charge and lower-energy particles having the shortest ranges. 4. Radiation detectors are based directly or indirectly upon the ionization created by radiation, as are the effects of radiation on living and inert materials. ### Conceptual Questions
# Radioactivity and Nuclear Physics ## Radiation Detection and Detectors ### Learning Objectives By the end of this section, you will be able to: 1. Explain the working principle of a Geiger tube. 2. Define and discuss radiation detectors. It is well known that ionizing radiation affects us but does not trigger nerve impulses. Newspapers carry stories about unsuspecting victims of radiation poisoning who fall ill with radiation sickness, such as burns and blood count changes, but who never felt the radiation directly. This makes the detection of radiation by instruments more than an important research tool. This section is a brief overview of radiation detection and some of its applications. ### Human Application The first direct detection of radiation was Becquerel’s fogged photographic plate. Photographic film is still the most common detector of ionizing radiation, being used routinely in medical and dental x rays. Nuclear radiation is also captured on film, such as seen in . The mechanism for film exposure by ionizing radiation is similar to that by photons. A quantum of energy interacts with the emulsion and alters it chemically, thus exposing the film. The quantum come from an -particle, -particle, or photon, provided it has more than the few eV of energy needed to induce the chemical change (as does all ionizing radiation). The process is not 100% efficient, since not all incident radiation interacts and not all interactions produce the chemical change. The amount of film darkening is related to exposure, but the darkening also depends on the type of radiation, so that absorbers and other devices must be used to obtain energy, charge, and particle-identification information. Another very common radiation detector is the Geiger tube. The clicking and buzzing sound we hear in dramatizations and documentaries, as well as in our own physics labs, is usually an audio output of events detected by a Geiger counter. These relatively inexpensive radiation detectors are based on the simple and sturdy Geiger tube, shown schematically in (b). A conducting cylinder with a wire along its axis is filled with an insulating gas so that a voltage applied between the cylinder and wire produces almost no current. Ionizing radiation passing through the tube produces free ion pairs (each pair consisting of one positively charged particle and one negatively charged particle) that are attracted to the wire and cylinder, forming a current that is detected as a count. The word count implies that there is no information on energy, charge, or type of radiation with a simple Geiger counter. They do not detect every particle, since some radiation can pass through without producing enough ionization to be detected. However, Geiger counters are very useful in producing a prompt output that reveals the existence and relative intensity of ionizing radiation. Another radiation detection method records light produced when radiation interacts with materials. The energy of the radiation is sufficient to excite atoms in a material that may fluoresce, such as the phosphor used by Rutherford’s group. Materials called scintillators use a more complex collaborative process to convert radiation energy into light. Scintillators may be liquid or solid, and they can be very efficient. Their light output can provide information about the energy, charge, and type of radiation. Scintillator light flashes are very brief in duration, enabling the detection of a huge number of particles in short periods of time. 
Scintillator detectors are used in a variety of research and diagnostic applications. Among these are the detection by satellite-mounted equipment of the radiation from distant galaxies, the analysis of radiation from a person indicating body burdens, and the detection of exotic particles in accelerator laboratories. Light from a scintillator is converted into electrical signals by devices such as the photomultiplier tube shown schematically in . These tubes are based on the photoelectric effect, which is multiplied in stages into a cascade of electrons, hence the name photomultiplier. Light entering the photomultiplier strikes a metal plate, ejecting an electron that is attracted by a positive potential difference to the next plate, giving it enough energy to eject two or more electrons, and so on. The final output current can be made proportional to the energy of the light entering the tube, which is in turn proportional to the energy deposited in the scintillator. Very sophisticated information can be obtained with scintillators, including energy, charge, particle identification, direction of motion, and so on. Solid-state radiation detectors convert ionization produced in a semiconductor (like those found in computer chips) directly into an electrical signal. Semiconductors can be constructed that do not conduct current in one particular direction. When a voltage is applied in that direction, current flows only when ionization is produced by radiation, similar to what happens in a Geiger tube. Further, the amount of current in a solid-state detector is closely related to the energy deposited and, since the detector is solid, it can have a high efficiency (since ionizing radiation is stopped in a shorter distance in solids fewer particles escape detection). As with scintillators, very sophisticated information can be obtained from solid-state detectors. ### Section Summary 1. Radiation detectors are based directly or indirectly upon the ionization created by radiation, as are the effects of radiation on living and inert materials. ### Conceptual Questions ### Problems & Exercises
# Radioactivity and Nuclear Physics ## Substructure of the Nucleus ### Learning Objectives By the end of this section, you will be able to: 1. Define and discuss the nucleus in an atom. 2. Define atomic number. 3. Define and discuss isotopes. 4. Calculate the density of the nucleus. 5. Explain nuclear force. What is inside the nucleus? Why are some nuclei stable while others decay? (See .) Why are there different types of decay (, and )? Why are nuclear decay energies so large? Pursuing natural questions like these has led to far more fundamental discoveries than you might imagine. We have already identified protons as the particles that carry positive charge in the nuclei. However, there are actually two types of particles in the nuclei—the proton and the neutron, referred to collectively as nucleons, the constituents of nuclei. As its name implies, the neutron is a neutral particle () that has nearly the same mass and intrinsic spin as the proton. compares the masses of protons, neutrons, and electrons. Note how close the proton and neutron masses are, but the neutron is slightly more massive once you look past the third digit. Both nucleons are much more massive than an electron. In fact, (as noted in Medical Applications of Nuclear Physics and . also gives masses in terms of mass units that are more convenient than kilograms on the atomic and nuclear scale. The first of these is the unified (u), defined as This unit is defined so that a neutral carbon atom has a mass of exactly 12 u. Masses are also expressed in units of . These units are very convenient when considering the conversion of mass into energy (and vice versa), as is so prominent in nuclear processes. Using and units of in , we find that cancels and comes out conveniently in MeV. For example, if the rest mass of a proton is converted entirely into energy, then It is useful to note that 1 u of mass converted to energy produces 931.5 MeV, or All properties of a nucleus are determined by the number of protons and neutrons it has. A specific combination of protons and neutrons is called a nuclide and is a unique nucleus. The following notation is used to represent a particular nuclide: where the symbols , , , and are defined as follows: The number of protons in a nucleus is the atomic number , as defined in Medical Applications of Nuclear Physics. X is the symbol for the element, such as Ca for calcium. However, once is known, the element is known; hence, and are redundant. For example, is always calcium, and calcium always has . is the number of neutrons in a nucleus. In the notation for a nuclide, the subscript is usually omitted. The symbol is defined as the number of nucleons or the total number of protons and neutrons, where is also called the mass number. This name for is logical; the mass of an atom is nearly equal to the mass of its nucleus, since electrons have so little mass. The mass of the nucleus turns out to be nearly equal to the sum of the masses of the protons and neutrons in it, which is proportional to . In this context, it is particularly convenient to express masses in units of u. Both protons and neutrons have masses close to 1 u, and so the mass of an atom is close to u. For example, in an oxygen nucleus with eight protons and eight neutrons, , and its mass is 16 u. As noticed, the unified atomic mass unit is defined so that a neutral carbon atom (actually a atom) has a mass of exactly 12 . Carbon was chosen as the standard, partly because of its importance in organic chemistry (see Appendix A). 
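As a rough check of the conversion factors quoted above, here is a short sketch using commonly tabulated values for the atomic mass unit and the proton mass (assumed here, not drawn from the text itself).

```python
c = 2.998e8                # speed of light, m/s
MeV = 1.602e-13            # joules per MeV
u = 1.6605e-27             # unified atomic mass unit, kg (tabulated value, assumed)
m_proton = 1.6726e-27      # proton mass, kg (tabulated value, assumed)

print(f"1 u    -> E = mc^2 = {u * c**2 / MeV:6.1f} MeV")         # about 931.5 MeV
print(f"proton -> E = mc^2 = {m_proton * c**2 / MeV:6.1f} MeV")  # about 938 MeV
```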
Let us look at a few examples of nuclides expressed in the notation. The nucleus of the simplest atom, hydrogen, is a single proton, or (the zero for no neutrons is often omitted). To check this symbol, refer to the periodic table—you see that the atomic number of hydrogen is 1. Since you are given that there are no neutrons, the mass number is also 1. Suppose you are told that the helium nucleus or particle has two protons and two neutrons. You can then see that it is written . There is a scarce form of hydrogen found in nature called deuterium; its nucleus has one proton and one neutron and, hence, twice the mass of common hydrogen. The symbol for deuterium is, thus, (sometimes is used, as for deuterated water ). An even rarer—and radioactive—form of hydrogen is called tritium, since it has a single proton and two neutrons, and it is written . These three varieties of hydrogen have nearly identical chemistries, but the nuclei differ greatly in mass, stability, and other characteristics. Nuclei (such as those of hydrogen) having the same and different s are defined to be isotopes of the same element. There is some redundancy in the symbols , , , and . If the element is known, then can be found in a periodic table and is always the same for a given element. If both and are known, then can also be determined (first find ; then, ). Thus the simpler notation for nuclides is which is sufficient and is most commonly used. For example, in this simpler notation, the three isotopes of hydrogen are and while the particle is . We read this backward, saying helium-4 for , or uranium-238 for . So for , should we need to know, we can determine that for uranium from the periodic table, and, thus, . A variety of experiments indicate that a nucleus behaves something like a tightly packed ball of nucleons, as illustrated in . These nucleons have large kinetic energies and, thus, move rapidly in very close contact. Nucleons can be separated by a large force, such as in a collision with another nucleus, but resist strongly being pushed closer together. The most compelling evidence that nucleons are closely packed in a nucleus is that the radius of a nucleus, , is found to be given approximately by where and is the mass number of the nucleus. Note that . Since many nuclei are spherical, and the volume of a sphere is , we see that —that is, the volume of a nucleus is proportional to the number of nucleons in it. This is what would happen if you pack nucleons so closely that there is no empty space between them. Nucleons are held together by nuclear forces and resist both being pulled apart and pushed inside one another. The volume of the nucleus is the sum of the volumes of the nucleons in it, here shown in different colors to represent protons and neutrons. ### Nuclear Forces and Stability What forces hold a nucleus together? The nucleus is very small and its protons, being positive, exert tremendous repulsive forces on one another. (The Coulomb force increases as charges get closer, since it is proportional to , even at the tiny distances found in nuclei.) The answer is that two previously unknown forces hold the nucleus together and make it into a tightly packed ball of nucleons. These forces are called the weak and strong nuclear forces. Nuclear forces are so short ranged that they fall to zero strength when nucleons are separated by only a few fm. However, like glue, they are strongly attracted when the nucleons get close to one another. 
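Returning to the radius rule quoted above, the sketch below assumes the commonly used value $r_0 \approx 1.2\ \mathrm{fm}$ for the constant in $R = r_0 A^{1/3}$ and estimates the resulting nuclear density; the fact that the density comes out the same for every $A$ is the numerical face of the closely packed picture.

```python
import math

r0 = 1.2e-15               # m; commonly used value of the constant in R = r0 * A^(1/3)
u = 1.6605e-27             # kg, unified atomic mass unit

def radius(A):
    return r0 * A ** (1 / 3)

def density(A):
    """Nuclear mass (about A u) divided by the volume of a sphere of radius R."""
    return A * u / ((4 / 3) * math.pi * radius(A) ** 3)

for A in (4, 56, 238):
    print(f"A = {A:3d}:  R = {radius(A) * 1e15:5.2f} fm,  density = {density(A):.2e} kg/m^3")
# The density is the same for every A, since the volume is proportional to A.
```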
The strong nuclear force is about 100 times more attractive than the repulsive EM force, easily holding the nucleons together. Nuclear forces become extremely repulsive if the nucleons get too close, making nucleons strongly resist being pushed inside one another, something like ball bearings. The fact that nuclear forces are very strong is responsible for the very large energies emitted in nuclear decay. During decay, the forces do work, and since work is force times the distance (), a large force can result in a large emitted energy. In fact, we know that there are two distinct nuclear forces because of the different types of nuclear decay—the strong nuclear force is responsible for decay, while the weak nuclear force is responsible for decay. The many stable and unstable nuclei we have explored, and the hundreds we have not discussed, can be arranged in a table called the chart of the nuclides, a simplified version of which is shown in . Nuclides are located on a plot of versus . Examination of a detailed chart of the nuclides reveals patterns in the characteristics of nuclei, such as stability, abundance, and types of decay, analogous to but more complex than the systematics in the periodic table of the elements. In principle, a nucleus can have any combination of protons and neutrons, but shows a definite pattern for those that are stable. For low-mass nuclei, there is a strong tendency for and to be nearly equal. This means that the nuclear force is more attractive when . More detailed examination reveals greater stability when and are even numbers—nuclear forces are more attractive when neutrons and protons are in pairs. For increasingly higher masses, there are progressively more neutrons than protons in stable nuclei. This is due to the ever-growing repulsion between protons. Since nuclear forces are short ranged, and the Coulomb force is long ranged, an excess of neutrons keeps the protons a little farther apart, reducing Coulomb repulsion. Decay modes of nuclides out of the region of stability consistently produce nuclides closer to the region of stability. There are more stable nuclei having certain numbers of protons and neutrons, called magic numbers. Magic numbers indicate a shell structure for the nucleus in which closed shells are more stable. Nuclear shell theory has been very successful in explaining nuclear energy levels, nuclear decay, and the greater stability of nuclei with closed shells. We have been producing ever-heavier transuranic elements since the early 1940s, and we have now produced the element with . There are theoretical predictions of an island of relative stability for nuclei with such high s. ### Test Prep for AP Courses ### Section Summary 1. Two particles, both called nucleons, are found inside nuclei. The two types of nucleons are protons and neutrons; they are very similar, except that the proton is positively charged while the neutron is neutral. Some of their characteristics are given in and compared with those of the electron. A mass unit convenient to atomic and nuclear processes is the unified atomic mass unit (u), defined to be 2. A nuclide is a specific combination of protons and neutrons, denoted by is the number of protons or atomic number, X is the symbol for the element, is the number of neutrons, and is the mass number or the total number of protons and neutrons, 3. Nuclides having the same but different are isotopes of the same element. 4. The radius of a nucleus, , is approximately where . Nuclear volumes are proportional to . 
There are two nuclear forces, the weak and the strong. Systematics in nuclear stability seen on the chart of the nuclides indicate that there are shell closures in nuclei for values of $N$ and $Z$ equal to the magic numbers, which correspond to highly stable nuclei. ### Conceptual Questions ### Problems & Exercises
# Radioactivity and Nuclear Physics ## Nuclear Decay and Conservation Laws ### Learning Objectives By the end of this section, you will be able to: 1. Define and discuss nuclear decay. 2. State the conservation laws. 3. Explain parent and daughter nucleus. 4. Calculate the energy emitted during nuclear decay. Nuclear decay has provided an amazing window into the realm of the very small. Nuclear decay gave the first indication of the connection between mass and energy, and it revealed the existence of two of the four basic forces in nature. In this section, we explore the major modes of nuclear decay; and, like those who first explored them, we will discover evidence of previously unknown particles and conservation laws. Some nuclides are stable, apparently living forever. Unstable nuclides decay (that is, they are radioactive), eventually producing a stable nuclide after many decays. We call the original nuclide the parent and its decay products the daughters. Some radioactive nuclides decay in a single step to a stable nucleus. For example, is unstable and decays directly to , which is stable. Others, such as , decay to another unstable nuclide, resulting in a decay series in which each subsequent nuclide decays until a stable nuclide is finally produced. The decay series that starts from is of particular interest, since it produces the radioactive isotopes and , which the Curies first discovered (see ). Radon gas is also produced ( in the series), an increasingly recognized naturally occurring hazard. Since radon is a noble gas, it emanates from materials, such as soil, containing even trace amounts of and can be inhaled. The decay of radon and its daughters produces internal damage. The decay series ends with , a stable isotope of lead. Note that the daughters of decay shown in always have two fewer protons and two fewer neutrons than the parent. This seems reasonable, since we know that decay is the emission of a nucleus, which has two protons and two neutrons. The daughters of decay have one less neutron and one more proton than their parent. Beta decay is a little more subtle, as we shall see. No decays are shown in the figure, because they do not produce a daughter that differs from the parent. ### Alpha Decay In alpha decay, a nucleus simply breaks away from the parent nucleus, leaving a daughter with two fewer protons and two fewer neutrons than the parent (see ). One example of decay is shown in for . Another nuclide that undergoes decay is The decay equations for these two nuclides are and If you examine the periodic table of the elements, you will find that Th has , two fewer than U, which has . Similarly, in the second decay equation, we see that U has two fewer protons than Pu, which has . The general rule for decay is best written in the format . If a certain nuclide is known to decay (generally this information must be looked up in a table of isotopes, such as in Appendix B), its decay equation is where Y is the nuclide that has two fewer protons than X, such as Th having two fewer than U. So if you were told that decays and were asked to write the complete decay equation, you would first look up which element has two fewer protons (an atomic number two lower) and find that this is uranium. Then since four nucleons have broken away from the original 239, its atomic mass would be 235. It is instructive to examine conservation laws related to decay. You can see from the equation that total charge is conserved. Linear and angular momentum are conserved, too. 
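The bookkeeping rule for α decay can be captured in a few lines; this is only a sketch, and the element symbols listed are just the ones needed for the two examples above.

```python
SYMBOLS = {90: "Th", 92: "U", 94: "Pu"}     # only the elements needed for these examples

def alpha_daughter(Z, A):
    """Alpha decay removes a He-4 nucleus: two protons and four nucleons."""
    return Z - 2, A - 4

for parent, Z, A in (("U-238", 92, 238), ("Pu-239", 94, 239)):
    Zd, Ad = alpha_daughter(Z, A)
    print(f"{parent} -> {SYMBOLS[Zd]}-{Ad} + He-4  "
          f"(charge: {Z} = {Zd} + 2, nucleons: {A} = {Ad} + 4)")
```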
Although conserved angular momentum is not of great consequence in this type of decay, conservation of linear momentum has interesting consequences. If the nucleus is at rest when it decays, its momentum is zero. In that case, the fragments must fly in opposite directions with equal-magnitude momenta so that total momentum remains zero. This results in the particle carrying away most of the energy, as a bullet from a heavy rifle carries away most of the energy of the powder burned to shoot it. Total mass–energy is also conserved: the energy produced in the decay comes from conversion of a fraction of the original mass. As discussed in Atomic Physics, the general relationship is Here, is the nuclear reaction energy (the reaction can be nuclear decay or any other reaction), and is the difference in mass between initial and final products. When the final products have less total mass, is positive, and the reaction releases energy (is exothermic). When the products have greater total mass, the reaction is endothermic ( is negative) and must be induced with an energy input. For decay to be spontaneous, the decay products must have smaller mass than the parent. ### Beta Decay There are actually three types of beta decay. The first discovered was “ordinary” beta decay and is called decay or electron emission. The symbol represents an electron emitted in nuclear beta decay. Cobalt-60 is a nuclide that decays in the following manner: The neutrino is a particle emitted in beta decay that was unanticipated and is of fundamental importance. The neutrino was not even proposed in theory until more than 20 years after beta decay was known to involve electron emissions. Neutrinos are so difficult to detect that the first direct evidence of them was not obtained until 1953. Neutrinos are nearly massless, have no charge, and do not interact with nucleons via the strong nuclear force. Traveling approximately at the speed of light, they have little time to affect any nucleus they encounter. This is, owing to the fact that they have no charge (and they are not EM waves), they do not interact through the EM force. They do interact via the relatively weak and very short range weak nuclear force. Consequently, neutrinos escape almost any detector and penetrate almost any shielding. However, neutrinos do carry energy, angular momentum (they are fermions with half-integral spin), and linear momentum away from a beta decay. When accurate measurements of beta decay were made, it became apparent that energy, angular momentum, and linear momentum were not accounted for by the daughter nucleus and electron alone. Either a previously unsuspected particle was carrying them away, or three conservation laws were being violated. Wolfgang Pauli made a formal proposal for the existence of neutrinos in 1930. The Italian-born American physicist Enrico Fermi (1901–1954) gave neutrinos their name, meaning little neutral ones, when he developed a sophisticated theory of beta decay (see ). Part of Fermi’s theory was the identification of the weak nuclear force as being distinct from the strong nuclear force and in fact responsible for beta decay. Chinese-born physicist Chien-Shiung Wu, who had developed a number of processes critical to the Manhattan Project and related research, set out to investigate Fermi’s theory and some experiments whose failures had cast the theory in doubt. 
She first identified a number of flaws in her contemporaries’ methods and materials, and then designed an experimental method that would avoid the same errors. Wu verified Fermi’s theory and went on to establish the core principles of beta decay, which would become critical to further work in nuclear physics. The neutrino also reveals a new conservation law. There are various families of particles, one of which is the electron family. We propose that the number of members of the electron family is constant in any process or any closed system. In our example of beta decay, there are no members of the electron family present before the decay, but after, there is an electron and a neutrino. So electrons are given an electron family number of . The neutrino in decay is an electron’s antineutrino, given the symbol , where is the Greek letter nu, and the subscript e means this neutrino is related to the electron. The bar indicates this is a particle of antimatter. (All particles have antimatter counterparts that are nearly identical except that they have the opposite charge. Antimatter is almost entirely absent on Earth, but it is found in nuclear decay and other nuclear and particle reactions as well as in outer space.) The electron’s antineutrino , being antimatter, has an electron family number of . The total is zero, before and after the decay. The new conservation law, obeyed in all circumstances, states that the total electron family number is constant. An electron cannot be created without also creating an antimatter family member. This law is analogous to the conservation of charge in a situation where total charge is originally zero, and equal amounts of positive and negative charge must be created in a reaction to keep the total zero. If a nuclide is known to decay, then its decay equation is where Y is the nuclide having one more proton than X (see ). So if you know that a certain nuclide decays, you can find the daughter nucleus by first looking up for the parent and then determining which element has atomic number . In the example of the decay of given earlier, we see that for Co and is Ni. It is as if one of the neutrons in the parent nucleus decays into a proton, electron, and neutrino. In fact, neutrons outside of nuclei do just that—they live only an average of a few minutes and decay in the following manner: We see that charge is conserved in decay, since the total charge is before and after the decay. For example, in decay, total charge is 27 before decay, since cobalt has . After decay, the daughter nucleus is Ni, which has , and there is an electron, so that the total charge is also or 27. Angular momentum is conserved, but not obviously (you have to examine the spins and angular momenta of the final products in detail to verify this). Linear momentum is also conserved, again imparting most of the decay energy to the electron and the antineutrino, since they are of low and zero mass, respectively. Another new conservation law is obeyed here and elsewhere in nature. The total number of nucleons . In decay, for example, there are 60 nucleons before and after the decay. Note that total is also conserved in decay. Also note that the total number of protons changes, as does the total number of neutrons, so that total and total are not conserved in decay, as they are in decay. Energy released in decay can be calculated given the masses of the parent and products. The second type of beta decay is less common than the first. It is decay. 
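Before turning to the second type of beta decay, here is a small bookkeeping check of the conservation laws just described, applied to the β⁻ decay of cobalt-60; each participant is tagged with its charge, nucleon number, and electron family number, with values taken from the discussion above.

```python
# Each participant is listed as (charge, nucleon number A, electron family number).
PARTICLES = {
    "Co-60":        (27, 60, 0),
    "Ni-60":        (28, 60, 0),
    "electron":     (-1, 0, 1),
    "antineutrino": (0, 0, -1),
}

def totals(names):
    return tuple(sum(PARTICLES[n][i] for n in names) for i in range(3))

before = totals(["Co-60"])
after = totals(["Ni-60", "electron", "antineutrino"])
print("before:", before)   # (27, 60, 0)
print("after: ", after)    # (27, 60, 0)
assert before == after     # charge, nucleon number, and electron family number all balance
```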
Certain nuclides decay by the emission of a positive electron. This is antielectron or positron decay (see ). The antielectron is often represented by the symbol , but in beta decay it is written as to indicate the antielectron was emitted in a nuclear decay. Antielectrons are the antimatter counterpart to electrons, being nearly identical, having the same mass, spin, and so on, but having a positive charge and an electron family number of . When a positron encounters an electron, there is a mutual annihilation in which all the mass of the antielectron-electron pair is converted into pure photon energy. (The reaction, , conserves electron family number as well as all other conserved quantities.) If a nuclide is known to decay, then its decay equation is where Y is the nuclide having one less proton than X (to conserve charge) and is the symbol for the electron’s neutrino, which has an electron family number of . Since an antimatter member of the electron family (the ) is created in the decay, a matter member of the family (here the ) must also be created. Given, for example, that decays, you can write its full decay equation by first finding that for , so that the daughter nuclide will have , the atomic number for neon. Thus the decay equation for is In decay, it is as if one of the protons in the parent nucleus decays into a neutron, a positron, and a neutrino. Protons do not do this outside of the nucleus, and so the decay is due to the complexities of the nuclear force. Note again that the total number of nucleons is constant in this and any other reaction. To find the energy emitted in decay, you must again count the number of electrons in the neutral atoms, since atomic masses are used. The daughter has one less electron than the parent, and one electron mass is created in the decay. Thus, in decay, since we use the masses of neutral atoms. Electron capture is the third type of beta decay. Here, a nucleus captures an inner-shell electron and undergoes a nuclear reaction that has the same effect as decay. Electron capture is sometimes denoted by the letters EC. We know that electrons cannot reside in the nucleus, but this is a nuclear reaction that consumes the electron and occurs spontaneously only when the products have less mass than the parent plus the electron. If a nuclide is known to undergo electron capture, then its electron capture equation is Any nuclide that can decay can also undergo electron capture (and often does both). The same conservation laws are obeyed for EC as for decay. It is good practice to confirm these for yourself. All forms of beta decay occur because the parent nuclide is unstable and lies outside the region of stability in the chart of nuclides. Those nuclides that have relatively more neutrons than those in the region of stability will decay to produce a daughter with fewer neutrons, producing a daughter nearer the region of stability. Similarly, those nuclides having relatively more protons than those in the region of stability will decay or undergo electron capture to produce a daughter with fewer protons, nearer the region of stability. ### Gamma Decay Gamma decay is the simplest form of nuclear decay—it is the emission of energetic photons by nuclei left in an excited state by some earlier process. Protons and neutrons in an excited nucleus are in higher orbitals, and they fall to lower levels by photon emission (analogous to electrons in excited atoms). 
Nuclear excited states have lifetimes typically of only about s, an indication of the great strength of the forces pulling the nucleons to lower states. The decay equation is simply where the asterisk indicates the nucleus is in an excited state. There may be one or more s emitted, depending on how the nuclide de-excites. In radioactive decay, emission is common and is preceded by or decay. For example, when decays, it most often leaves the daughter nucleus in an excited state, written . Then the nickel nucleus quickly decays by the emission of two penetrating s: These are called cobalt rays, although they come from nickel—they are used for cancer therapy, for example. It is again constructive to verify the conservation laws for gamma decay. Finally, since decay does not change the nuclide to another species, it is not prominently featured in charts of decay series, such as that in . There are other types of nuclear decay, but they occur less commonly than , , and decay. Spontaneous fission is the most important of the other forms of nuclear decay because of its applications in nuclear power and weapons. It is covered in the next chapter. ### Test Prep for AP Courses ### Section Summary 1. When a parent nucleus decays, it produces a daughter nucleus following rules and conservation laws. There are three major types of nuclear decay, called alpha beta and gamma . The decay equation is 2. Nuclear decay releases an amount of energy related to the mass destroyed by 3. There are three forms of beta decay. The decay equation is 4. The decay equation is 5. The electron capture equation is 6. is an electron, is an antielectron or positron, represents an electron’s neutrino, and is an electron’s antineutrino. In addition to all previously known conservation laws, two new ones arise— conservation of electron family number and conservation of the total number of nucleons. The decay equation is is a high-energy photon originating in a nucleus. ### Conceptual Questions ### Problems & Exercises In the following eight problems, write the complete decay equation for the given nuclide in the complete notation. Refer to the periodic table for values of . In the following four problems, identify the parent nuclide and write the complete decay equation in the notation. Refer to the periodic table for values of .
# Radioactivity and Nuclear Physics ## Half-Life and Activity ### Learning Objectives By the end of this section, you will be able to: 1. Define half-life. 2. Define radioactive dating. 3. Calculate the age of old objects by radioactive dating. Unstable nuclei decay. However, some nuclides decay faster than others. For example, radium and polonium, discovered by the Curies, decay faster than uranium. This means they have shorter lifetimes, producing a greater rate of decay. In this section we explore half-life and activity, the quantitative terms for lifetime and rate of decay. ### Half-Life Why use a term like half-life rather than lifetime? The answer can be found by examining , which shows how the number of radioactive nuclei in a sample decreases with time. The time in which half of the original number of nuclei decay is defined as the half-life, $t_{1/2}$. Half of the remaining nuclei decay in the next half-life. Further, half of that amount decays in the following half-life. Therefore, the number of radioactive nuclei decreases from $N$ to $N/2$ in one half-life, then to $N/4$ in the next, and to $N/8$ in the next, and so on. If $N$ is a large number, then many half-lives (not just two) pass before all of the nuclei decay. Nuclear decay is an example of a purely statistical process. A more precise definition of half-life is that each nucleus has a 50% chance of living for a time equal to one half-life $t_{1/2}$. Thus, if $N$ is reasonably large, half of the original nuclei decay in a time of one half-life. If an individual nucleus makes it through that time, it still has a 50% chance of surviving through another half-life. Even if it happens to make it through hundreds of half-lives, it still has a 50% chance of surviving through one more. The probability of decay is the same no matter when you start counting. This is like random coin flipping. The chance of heads is 50%, no matter what has happened before. There is a tremendous range in the half-lives of various nuclides, from as short as $10^{-23}$ s for the most unstable, to more than $10^{16}$ y for the least unstable, or about 46 orders of magnitude. Nuclides with the shortest half-lives are those for which the nuclear forces are least attractive, an indication of the extent to which the nuclear force can depend on the particular combination of neutrons and protons. The concept of half-life is applicable to other subatomic particles, as will be discussed in Particle Physics. It is also applicable to the decay of excited states in atoms and nuclei. The following equation gives the quantitative relationship between the original number of nuclei present at time zero ($N_0$) and the number ($N$) at a later time $t$: $N = N_0 e^{-\lambda t}$, where $e = 2.71828\ldots$ is the base of the natural logarithm, and $\lambda$ is the decay constant for the nuclide. The shorter the half-life, the larger is the value of $\lambda$, and the faster the exponential $e^{-\lambda t}$ decreases with time. The relationship between the decay constant $\lambda$ and the half-life $t_{1/2}$ is $\lambda = \ln 2 / t_{1/2} \approx 0.693 / t_{1/2}$. To see how the number of nuclei declines to half its original value in one half-life, let $t = t_{1/2}$ in the exponential in the equation $N = N_0 e^{-\lambda t}$. This gives $N = N_0 e^{-\lambda t_{1/2}} = N_0 e^{-0.693} = 0.500\,N_0$. For integral numbers of half-lives, you can just divide the original number by 2 over and over again, rather than using the exponential relationship. For example, if ten half-lives have passed, we divide $N_0$ by 2 ten times. This reduces it to $N_0 / 1024$. For an arbitrary time, not just a multiple of the half-life, the exponential relationship must be used. Radioactive dating is a clever use of naturally occurring radioactivity. Its most famous application is carbon-14 dating. 
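A minimal numerical sketch can make the decay law concrete before turning to dating; the half-life value used below is arbitrary and purely illustrative.

```python
import math

# Sketch of N = N0 * exp(-lambda*t) with lambda = ln(2)/t_half.
def remaining_fraction(t, t_half):
    lam = math.log(2) / t_half        # decay constant
    return math.exp(-lam * t)

t_half = 100.0                         # arbitrary half-life, any time unit

# Whole numbers of half-lives: the exponential agrees with repeated halving.
for n in (1, 2, 10):
    print(n, remaining_fraction(n * t_half, t_half), 1 / 2**n)

# An arbitrary time (here 2.5 half-lives) requires the exponential form.
print(remaining_fraction(250.0, t_half))   # about 0.1768
```

The last line shows why, for times that are not whole multiples of the half-life, repeatedly dividing by 2 is not enough and the exponential relationship must be used.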
Carbon-14 has a half-life of 5730 years and is produced in a nuclear reaction induced when cosmic-ray neutrons strike $^{14}\text{N}$ in the atmosphere. Radioactive carbon has the same chemistry as stable carbon, and so it mixes into the ecosphere, where it is consumed and becomes part of every living organism. Carbon-14 has an abundance of 1.3 parts per trillion of normal carbon. Thus, if you know the number of carbon nuclei in an object (perhaps determined by mass and Avogadro’s number), you multiply that number by $1.3\times10^{-12}$ to find the number of $^{14}\text{C}$ nuclei in the object. When an organism dies, carbon exchange with the environment ceases, and $^{14}\text{C}$ is not replenished as it decays. By comparing the abundance of $^{14}\text{C}$ in an artifact, such as mummy wrappings, with the normal abundance in living tissue, it is possible to determine the artifact’s age (or time since death). Carbon-14 dating can be used for biological tissues as old as 50 or 60 thousand years, but is most accurate for younger samples, since the abundance of $^{14}\text{C}$ nuclei in them is greater. Very old biological materials contain no $^{14}\text{C}$ at all. There are instances in which the date of an artifact can be determined by other means, such as historical knowledge or tree-ring counting. These cross-references have confirmed the validity of carbon-14 dating and permitted us to calibrate the technique as well. Carbon-14 dating revolutionized parts of archaeology and is of such importance that it earned the 1960 Nobel Prize in chemistry for its developer, the American chemist Willard Libby (1908–1980). One of the most famous cases of carbon-14 dating involves the Shroud of Turin, a long piece of fabric purported to be the burial shroud of Jesus (see ). This relic was first displayed in Turin in 1354 and was denounced as a fraud at that time by a French bishop. Its remarkable negative imprint of an apparently crucified body resembles the then-accepted image of Jesus, and so the shroud was never disregarded completely and remained controversial over the centuries. Carbon-14 dating was not performed on the shroud until 1988, when the process had been refined to the point where only a small amount of material needed to be destroyed. Samples were tested at three independent laboratories, each being given four pieces of cloth, with only one unidentified piece from the shroud, to avoid prejudice. All three laboratories found samples of the shroud to contain 92% of the $^{14}\text{C}$ found in living tissues, allowing the shroud to be dated (see ). There are other forms of radioactive dating. Rocks, for example, can sometimes be dated based on the decay of $^{238}\text{U}$. The decay series for $^{238}\text{U}$ ends with $^{206}\text{Pb}$, so that the ratio of these nuclides in a rock is an indication of how long it has been since the rock solidified. The original composition of the rock, such as the absence of lead, must be known with some confidence. However, as with carbon-14 dating, the technique can be verified by a consistent body of knowledge. Since $^{238}\text{U}$ has a half-life of $4.5\times10^{9}$ y, it is useful for dating only very old materials, showing, for example, that the oldest rocks on Earth solidified about $3.5\times10^{9}$ years ago. ### Activity, the Rate of Decay What do we mean when we say a source is highly radioactive? Generally, this means the number of decays per unit time is very high. We define activity $R$ to be the rate of decay expressed in decays per unit time. In equation form, this is $R = \Delta N / \Delta t$, where $\Delta N$ is the number of decays that occur in time $\Delta t$. The SI unit for activity is one decay per second and is given the name becquerel (Bq) in honor of the discoverer of radioactivity. 
That is, $1\ \text{Bq} = 1\ \text{decay/s}$. Activity is often expressed in other units, such as decays per minute or decays per year. One of the most common units for activity is the curie (Ci), defined to be the activity of 1 g of $^{226}\text{Ra}$, in honor of Marie Curie’s work with radium. The definition of the curie is $1\ \text{Ci} = 3.70\times10^{10}\ \text{Bq}$, or $3.70\times10^{10}$ decays per second. A curie is a large unit of activity, while a becquerel is a relatively small unit. In countries like Australia and New Zealand that adhere more to SI units, most radioactive sources, such as those used in medical diagnostics or in physics laboratories, are labeled in Bq or megabecquerel (MBq). Intuitively, you would expect the activity of a source to depend on two things: the amount of the radioactive substance present, and its half-life. The greater the number of radioactive nuclei present in the sample, the more will decay per unit of time. The shorter the half-life, the more decays per unit time, for a given number of nuclei. So activity should be proportional to the number of radioactive nuclei, $N$, and inversely proportional to their half-life, $t_{1/2}$. In fact, your intuition is correct. It can be shown that the activity of a source is $R = 0.693\,N / t_{1/2}$, where $N$ is the number of radioactive nuclei present, having half-life $t_{1/2}$. This relationship is useful in a variety of calculations, as the next two examples illustrate. Human-made (or artificial) radioactivity has been produced for decades and has many uses. Some of these include medical therapy for cancer, medical imaging and diagnostics, and food preservation by irradiation. Many applications as well as the biological effects of radiation are explored in Medical Applications of Nuclear Physics, but it is clear that radiation is hazardous. A number of tragic examples of this exist, one of the most disastrous being the meltdown and fire at the Chernobyl reactor complex in Ukraine (see ). Several radioactive isotopes were released in huge quantities, contaminating many thousands of square kilometers and directly affecting hundreds of thousands of people. The most significant releases were of , , , , , and . Estimates are that the total amount of radiation released was about 100 million curies. ### Human and Medical Applications Activity decreases in time, going to half its original value in one half-life, then to one-fourth its original value in the next half-life, and so on. Since $R = 0.693\,N / t_{1/2}$, the activity decreases as the number of radioactive nuclei decreases. The equation for $R$ as a function of time is found by combining the equations $N = N_0 e^{-\lambda t}$ and $R = 0.693\,N / t_{1/2}$, yielding $R = R_0 e^{-\lambda t}$, where $R_0$ is the activity at $t = 0$. This equation shows exponential decay of radioactive nuclei. For example, if a source originally has a 1.00-mCi activity, it declines to 0.500 mCi in one half-life, to 0.250 mCi in two half-lives, to 0.125 mCi in three half-lives, and so on. For times other than whole half-lives, the equation $R = R_0 e^{-\lambda t}$ must be used to find $R$. ### Test Prep for AP Courses ### Section Summary 1. Half-life $t_{1/2}$ is the time in which there is a 50% chance that a nucleus will decay. The number of nuclei $N$ as a function of time is $N = N_0 e^{-\lambda t}$, where $N_0$ is the number present at $t = 0$, and $\lambda$ is the decay constant, related to the half-life by $\lambda = 0.693 / t_{1/2}$. 2. One of the applications of radioactive decay is radioactive dating, in which the age of a material is determined by the amount of radioactive decay that occurs. The rate of decay is called the activity $R$: $R = \Delta N / \Delta t$. 3. The SI unit for $R$ is the becquerel (Bq), defined by $1\ \text{Bq} = 1\ \text{decay/s}$. 4. $R$ is also expressed in terms of curies (Ci), where $1\ \text{Ci} = 3.70\times10^{10}\ \text{Bq}$. 5. The activity $R$ of a source is related to $N$ and $t_{1/2}$ by $R = 0.693\,N / t_{1/2}$. 6. 
Since $N$ has an exponential behavior as in the equation $N = N_0 e^{-\lambda t}$, the activity also has an exponential behavior, given by $R = R_0 e^{-\lambda t}$, where $R_0$ is the activity at $t = 0$. ### Conceptual Questions ### Problems & Exercises Data from the appendices and the periodic table may be needed for these problems.
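A short sketch shows how the decay law can be inverted to estimate an age from a measured $^{14}\text{C}$ fraction, and how the activity relationship $R = 0.693\,N / t_{1/2}$ is evaluated for a given number of nuclei. The 5730-year half-life is the value quoted above; the sample numbers are illustrative only.

```python
import math

# Sketch: carbon-14 dating and activity, using the 5730-year half-life quoted above.
T_HALF_Y = 5730.0
LAMBDA_PER_Y = math.log(2) / T_HALF_Y
SECONDS_PER_YEAR = 3.156e7

def age_years(fraction_remaining):
    """Invert N/N0 = exp(-lambda*t):  t = -ln(N/N0)/lambda."""
    return -math.log(fraction_remaining) / LAMBDA_PER_Y

def activity_bq(n_nuclei):
    """R = lambda*N, with lambda converted to per-second so the result is in Bq."""
    return (LAMBDA_PER_Y / SECONDS_PER_YEAR) * n_nuclei

print(f"92% of the living-tissue C-14 level -> age ~ {age_years(0.92):.0f} years")
print(f"1.0e15 C-14 nuclei -> activity ~ {activity_bq(1.0e15):.0f} Bq")
```

The 92% figure is the one reported for the Shroud of Turin samples; it corresponds to an age of roughly 690 years, placing the cloth's origin near the time of its first display in 1354.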
# Radioactivity and Nuclear Physics ## Binding Energy ### Learning Objectives By the end of this section, you will be able to: 1. Define and discuss binding energy. 2. Calculate the binding energy per nucleon of a particle. The more tightly bound a system is, the stronger the forces that hold it together and the greater the energy required to pull it apart. We can therefore learn about nuclear forces by examining how tightly bound the nuclei are. We define the binding energy (BE) of a nucleus to be the energy required to completely disassemble it into separate protons and neutrons. We can determine the BE of a nucleus from its rest mass. The two are connected through Einstein’s famous relationship $E = (\Delta m)c^2$. A bound system has a smaller mass than its separate constituents; the more tightly the nucleons are bound together, the smaller the mass of the nucleus. Imagine pulling a nuclide apart as illustrated in . Work done to overcome the nuclear forces holding the nucleus together puts energy into the system. By definition, the energy input equals the binding energy BE. The pieces are at rest when separated, and so the energy put into them increases their total rest mass compared with what it was when they were glued together as a nucleus. That mass increase is thus $\Delta m = \text{BE}/c^2$. This difference in mass is known as mass defect. It implies that the mass of the nucleus is less than the sum of the masses of its constituent protons and neutrons. A nuclide $^{A}\text{X}$ has $Z$ protons and $N$ neutrons, so that the difference in mass is $\Delta m = (Zm_p + Nm_n) - m_{\text{tot}}$. Thus, $\text{BE} = (\Delta m)c^2 = [(Zm_p + Nm_n) - m_{\text{tot}}]c^2$, where $m_{\text{tot}}$ is the mass of the nuclide $^{A}\text{X}$, $m_p$ is the mass of a proton, and $m_n$ is the mass of a neutron. Traditionally, we deal with the masses of neutral atoms. To get atomic masses into the last equation, we first add $Z$ electrons to $m_{\text{tot}}$, which gives $m(^{A}\text{X})$, the atomic mass of the nuclide. We then add $Z$ electrons to the $Z$ protons, which gives $Zm(^{1}\text{H})$, or $Z$ times the mass of a hydrogen atom. Thus the binding energy of a nuclide $^{A}\text{X}$ is $\text{BE} = \{[Zm(^{1}\text{H}) + Nm_n] - m(^{A}\text{X})\}c^2$. The atomic masses can be found in Appendix A, most conveniently expressed in unified atomic mass units u ($1\ \text{u} = 931.5\ \text{MeV}/c^2$). BE is thus calculated from known atomic masses. What patterns and insights are gained from an examination of the binding energy of various nuclides? First, we find that BE is approximately proportional to the number of nucleons $A$ in any nucleus. About twice as much energy is needed to pull apart a nucleus like $^{24}\text{Mg}$ compared with pulling apart $^{12}\text{C}$, for example. To help us look at other effects, we divide BE by $A$ and consider the binding energy per nucleon, $\text{BE}/A$. The graph of $\text{BE}/A$ in reveals some very interesting aspects of nuclei. We see that the binding energy per nucleon averages about 8 MeV, but is lower for both the lightest and heaviest nuclei. This overall trend, in which nuclei with $A$ equal to about 60 have the greatest $\text{BE}/A$ and are thus the most tightly bound, is due to the combined characteristics of the attractive nuclear forces and the repulsive Coulomb force. It is especially important to note two things—the strong nuclear force is about 100 times stronger than the Coulomb force, and the nuclear forces are shorter in range compared to the Coulomb force. So, for low-mass nuclei, the nuclear attraction dominates and each added nucleon forms bonds with all others, causing progressively heavier nuclei to have progressively greater values of $\text{BE}/A$. This continues up to $A \approx 56$, roughly corresponding to the mass number of iron. Beyond that, new nucleons added to a nucleus will be too far from some others to feel their nuclear attraction. 
Added protons, however, feel the repulsion of all other protons, since the Coulomb force is longer in range. Coulomb repulsion grows for progressively heavier nuclei, but nuclear attraction remains about the same, and so $\text{BE}/A$ becomes smaller. This is why stable nuclei heavier than $A \approx 40$ have more neutrons than protons. Coulomb repulsion is reduced by having more neutrons to keep the protons farther apart (see ). There are some noticeable spikes on the graph, which represent particularly tightly bound nuclei. These spikes reveal further details of nuclear forces, such as confirming that closed-shell nuclei (those with magic numbers of protons or neutrons or both) are more tightly bound. The spikes also indicate that some nuclei with even numbers for $Z$ and $N$, and with $Z = N$, are exceptionally tightly bound. This finding can be correlated with some of the cosmic abundances of the elements. The most common elements in the universe, as determined by observations of atomic spectra from outer space, are hydrogen, followed by $^{4}\text{He}$, with much smaller amounts of other elements. It should be noted that the heavier elements are created in supernova explosions, while the lighter ones are produced by nuclear fusion during the normal life cycles of stars, as will be discussed in subsequent chapters. The most common elements have the most tightly bound nuclei. It is also no accident that one of the most tightly bound light nuclei is $^{4}\text{He}$, emitted in $\alpha$ decay. There is more to be learned from nuclear binding energies. The general trend in $\text{BE}/A$ is fundamental to energy production in stars, and to fusion and fission energy sources on Earth, for example. This is one of the applications of nuclear physics covered in Medical Applications of Nuclear Physics. The abundance of elements on Earth, in stars, and in the universe as a whole is related to the binding energy of nuclei and has implications for the continued expansion of the universe. ### Problem-Solving Strategies ### For Reaction and Binding Energies and Activity Calculations in Nuclear Physics 1. Identify exactly what needs to be determined in the problem (identify the unknowns). This will allow you to decide whether the energy of a decay or nuclear reaction is involved, for example, or whether the problem is primarily concerned with activity (rate of decay). 2. Make a list of what is given or can be inferred from the problem as stated (identify the knowns). 3. For reaction and binding-energy problems, we use atomic rather than nuclear masses. Since the masses of neutral atoms are used, you must count the number of electrons involved. If these do not balance (such as in $\beta^+$ decay), then an energy adjustment of 0.511 MeV per electron must be made. Also note that atomic masses may not be given in a problem; they can be found in tables. 4. For problems involving activity, use the relationship of activity to half-life and the number of nuclei given in the equation $R = 0.693\,N / t_{1/2}$. Because the number of nuclei is involved, you will also need to be familiar with moles and Avogadro’s number. 5. Perform the desired calculation; keep careful track of plus and minus signs as well as powers of 10. 6. Check the answer to see if it is reasonable: Does it make sense? Compare your results with worked examples and other information in the text. (Heeding the advice in Step 5 will also help you to be certain of your result.) You must understand the problem conceptually to be able to determine whether the numerical result is reasonable. ### Test Prep for AP Courses ### Section Summary 1. 
The binding energy (BE) of a nucleus is the energy needed to separate it into individual protons and neutrons. In terms of atomic masses, $\text{BE} = \{[Zm(^{1}\text{H}) + Nm_n] - m(^{A}\text{X})\}c^2$, where $m(^{1}\text{H})$ is the mass of a hydrogen atom, $m(^{A}\text{X})$ is the atomic mass of the nuclide, and $m_n$ is the mass of a neutron. Patterns in the binding energy per nucleon, $\text{BE}/A$, reveal details of the nuclear force. The larger the $\text{BE}/A$, the more stable the nucleus. ### Conceptual Questions ### Problems & Exercises
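The binding-energy formula above is straightforward to evaluate directly. The sketch below uses approximate mass values (about 1.007825 u for $^{1}$H, 1.008665 u for a neutron, and 4.002602 u for $^{4}$He); a table such as Appendix A should be consulted for authoritative values.

```python
# Sketch: BE = {[Z*m(1H) + N*m_n] - m(X)} c^2, evaluated for helium-4.
U_TO_MEV = 931.5       # MeV per unified atomic mass unit (the c^2 factor is folded in)
M_H1 = 1.007825        # atomic mass of hydrogen-1, u (approximate)
M_NEUTRON = 1.008665   # neutron mass, u (approximate)

def binding_energy_mev(z, n, atomic_mass_u):
    return (z * M_H1 + n * M_NEUTRON - atomic_mass_u) * U_TO_MEV

be_he4 = binding_energy_mev(z=2, n=2, atomic_mass_u=4.002602)
print(f"BE(4He) ~ {be_he4:.1f} MeV, BE/A ~ {be_he4 / 4:.2f} MeV per nucleon")
```

The result, about 28 MeV in total or roughly 7.1 MeV per nucleon, is consistent with the remark above that $^{4}$He is one of the most tightly bound light nuclei.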
# Radioactivity and Nuclear Physics ## Tunneling ### Learning Objectives By the end of this section, you will be able to: 1. Define and discuss tunneling. 2. Define potential barrier. 3. Explain quantum tunneling. Protons and neutrons are bound inside nuclei, which means energy must be supplied to break them away. The situation is analogous to a marble in a bowl that can roll around but lacks the energy to get over the rim. It is bound inside the bowl (see ). If the marble could get over the rim, it would gain kinetic energy by rolling down outside. However, classically, if the marble does not have enough kinetic energy to get over the rim, it remains forever trapped in its well. In a nucleus, the attractive nuclear potential is analogous to the bowl at the top of a volcano (where the “volcano” refers only to the shape). Protons and neutrons have kinetic energy, but it is about 8 MeV less than that needed to get out (see ). That is, they are bound by an average of 8 MeV per nucleon. The slope of the hill outside the bowl is analogous to the repulsive Coulomb potential for a nucleus, such as for an $\alpha$ particle outside a positive nucleus. In $\alpha$ decay, two protons and two neutrons spontaneously break away as a unit. Yet the protons and neutrons do not have enough kinetic energy to get over the rim. So how does the $\alpha$ particle get out? The answer was supplied in 1928 by the Russian physicist George Gamow (1904–1968). The $\alpha$ particle tunnels through a region of space it is forbidden to be in, and it comes out of the side of the nucleus. Like an electron making a transition between orbits around an atom, it travels from one point to another without ever having been in between. indicates how this works. The wave function of a quantum mechanical particle varies smoothly, going from within an atomic nucleus (on one side of a potential energy barrier) to outside the nucleus (on the other side of the potential energy barrier). Inside the barrier, the wave function does not become zero but decreases exponentially, and we do not observe the particle inside the barrier. The probability of finding a particle is related to the square of its wave function, and so there is a small probability of finding the particle outside the barrier, which implies that the particle can tunnel through the barrier. This process is called barrier penetration or quantum mechanical tunneling. This concept was developed in theory by J. Robert Oppenheimer (who led the development of the first nuclear bombs during World War II) and was used by Gamow and others to describe $\alpha$ decay. Good ideas explain more than one thing. In addition to qualitatively explaining how the four nucleons in an $\alpha$ particle can get out of the nucleus, the detailed theory also explains quantitatively the half-life of various nuclei that undergo $\alpha$ decay. This description is what Gamow and others devised, and it works for $\alpha$ decay half-lives that vary by 17 orders of magnitude. Experiments have shown that the more energetic the $\alpha$ decay of a particular nuclide is, the shorter is its half-life. Tunneling explains this in the following manner: For the decay to be more energetic, the nucleons must have more energy in the nucleus and should be able to ascend a little closer to the rim. The barrier is therefore not as thick for more energetic decay, and the exponential decrease of the wave function inside the barrier is not as great. Thus the probability of finding the $\alpha$ particle outside the barrier is greater, and the half-life is shorter. 
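The exponential sensitivity just described can be illustrated with the standard rectangular-barrier estimate $T \approx e^{-2\kappa L}$, with $\kappa = \sqrt{2m(V-E)}/\hbar$. This simple one-dimensional model is not the nuclear $\alpha$-decay calculation of Gamow and others; it is only a sketch of how strongly the tunneling probability depends on barrier thickness, using an electron and an assumed barrier 1 eV above the particle's energy.

```python
import math

# Sketch: rectangular-barrier tunneling estimate T ~ exp(-2*kappa*L).
HBAR = 1.055e-34     # J*s
M_E = 9.109e-31      # electron mass, kg
EV = 1.602e-19       # joules per electron volt

def transmission(barrier_minus_energy_ev, width_m):
    kappa = math.sqrt(2 * M_E * barrier_minus_energy_ev * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

for width_nm in (0.5, 1.0):
    t = transmission(1.0, width_nm * 1e-9)   # barrier assumed 1 eV above the energy
    print(f"barrier width {width_nm} nm: T ~ {t:.1e}")
```

Doubling the thickness drops the probability by more than a factor of 100 in this example, which is why the tunneling current in the scanning tunneling microscope discussed next is such a sensitive probe of distance.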
Tunneling as an effect also occurs in quantum mechanical systems other than nuclei. Electrons trapped in solids can tunnel from one object to another if the barrier between the objects is thin enough. The process is the same in principle as described for $\alpha$ decay. It is far more likely for a thin barrier than a thick one. Scanning tunneling microscopes (STMs) function on this principle. The current of electrons that travels between a probe and a sample tunnels through a barrier and is very sensitive to its thickness, allowing detection of individual atoms as shown in . ### Section Summary 1. Tunneling is a quantum mechanical process of potential energy barrier penetration. The concept was first applied to explain $\alpha$ decay, but tunneling is found to occur in other quantum mechanical systems. ### Conceptual Questions ### Problems & Exercises
# Medical Applications of Nuclear Physics ## Introduction to Applications of Nuclear Physics Applications of nuclear physics have become an integral part of modern life. From the bone scan that detects a cancer to the radioiodine treatment that cures another, nuclear radiation has diagnostic and therapeutic effects on medicine. From the fission power reactor to the hope of controlled fusion, nuclear energy is now commonplace and is a part of our plans for the future. Yet, the destructive potential of nuclear weapons haunts us, as does the possibility of nuclear reactor accidents. Certainly, several applications of nuclear physics escape our view, as seen in . Not only has nuclear physics revealed secrets of nature, it has an inevitable impact based on its applications, as they are intertwined with human values. Because of its potential for alleviation of suffering, and its power as an ultimate destructor of life, nuclear physics is often viewed with ambivalence. But it provides perhaps the best example that applications can be good or evil, while knowledge itself is neither.
# Medical Applications of Nuclear Physics ## Diagnostics and Medical Imaging ### Learning Objectives By the end of this section, you will be able to: 1. Explain the working principle behind an anger camera. 2. Describe the SPECT and PET imaging techniques. Most medical and related applications of nuclear physics are driven, at their core, by the difference between a radioactive substance and a non-radioactive substance. One of the first such methods is the precision measurement and detection method known as radioimmunoassay (RIA). Developed by Rosalyn Sussman Yalow and Solomon Berson in the late 1950s, RIA relies on the principle of competitive binding. For the particular substance being measured, a sample containing a radioactive isotope is prepared. A known quantity of antibodies is then introduced. By measuring the amount of "unbound" antibodies after the reaction, technicians can detect and measure the precise amount of the target substance. Radioimmunoassay is essential in cancer screening, hepatitis diagnosis, narcotics investigation, and other analyses. A host of medical imaging techniques employ nuclear radiation. What makes nuclear radiation so useful? First, radiation can easily penetrate tissue; hence, it is a useful probe to monitor conditions inside the body. Second, nuclear radiation depends on the nuclide and not on the chemical compound it is in, so that a radioactive nuclide can be put into a compound designed for specific purposes. The compound is said to be tagged. A tagged compound used for medical purposes is called a radiopharmaceutical. Radiation detectors external to the body can determine the location and concentration of a radiopharmaceutical to yield medically useful information. For example, certain drugs are concentrated in inflamed regions of the body, and this information can aid diagnosis and treatment as seen in . Another application utilizes a radiopharmaceutical which the body sends to bone cells, particularly those that are most active, to detect cancerous tumors or healing points. Images can then be produced of such bone scans. Radioisotopes are also used to determine the functioning of body organs, such as blood flow, heart muscle activity, and iodine uptake in the thyroid gland. ### Medical Application lists certain medical diagnostic uses of radiopharmaceuticals, including isotopes and activities that are typically administered. Many organs can be imaged with a variety of nuclear isotopes replacing a stable element by a radioactive isotope. One common diagnostic employs iodine to image the thyroid, since iodine is concentrated in that organ. The most active thyroid cells, including cancerous cells, concentrate the most iodine and, therefore, emit the most radiation. Conversely, hypothyroidism is indicated by lack of iodine uptake. Note that there is more than one isotope that can be used for several types of scans. Another common nuclear diagnostic is the thallium scan for the cardiovascular system, particularly used to evaluate blockages in the coronary arteries and examine heart activity. The salt TlCl can be used, because it acts like NaCl and follows the blood. Gallium-67 accumulates where there is rapid cell growth, such as in tumors and sites of infection. Hence, it is useful in cancer imaging. Usually, the patient receives the injection one day and has a whole body scan 3 or 4 days later because it can take several days for the gallium to build up. 
Note that lists many diagnostic uses for $^{99\text{m}}\text{Tc}$, where “m” stands for a metastable state of the technetium nucleus. Perhaps 80 percent of all radiopharmaceutical procedures employ $^{99\text{m}}\text{Tc}$ because of its many advantages. One is that the decay of its metastable state produces a single, easily identified 0.142-MeV $\gamma$ ray. Additionally, the radiation dose to the patient is limited by the short 6.0-h half-life of $^{99\text{m}}\text{Tc}$. And, although its half-life is short, it is easily and continuously produced on site. The basic process for production is neutron activation of molybdenum, which quickly decays into $^{99\text{m}}\text{Tc}$. Technetium-99m can be attached to many compounds to allow the imaging of the skeleton, heart, lungs, kidneys, etc. shows one of the simpler methods of imaging the concentration of nuclear activity, employing a device called an Anger camera or gamma camera. A piece of lead with holes bored through it collimates $\gamma$ rays emerging from the patient, allowing detectors to receive $\gamma$ rays from specific directions only. The computer analysis of detector signals produces an image. One of the disadvantages of this detection method is that there is no depth information (i.e., it provides a two-dimensional view of the tumor as opposed to a three-dimensional view), because radiation from any location under that detector produces a signal. Imaging techniques much like those in x-ray computed tomography (CT) scans use nuclear activity in patients to form three-dimensional images. shows a patient in a circular array of detectors that may be stationary or rotated, with detector output used by a computer to construct a detailed image. This technique is called single-photon-emission computed tomography (SPECT) or sometimes simply SPET. The spatial resolution of this technique is poor, about 1 cm, but the contrast (i.e., the difference in visual properties that makes an object distinguishable from other objects and the background) is good. Images produced by $\beta^+$ emitters have become important in recent years. When the emitted positron ($\beta^+$) encounters an electron, mutual annihilation occurs, producing two $\gamma$ rays. These $\gamma$ rays have identical 0.511-MeV energies (the energy comes from the destruction of an electron or positron mass) and they move directly away from one another, allowing detectors to determine their point of origin accurately, as shown in . The system is called positron emission tomography (PET). It requires detectors on opposite sides to simultaneously (i.e., at the same time) detect photons of 0.511-MeV energy and utilizes computer imaging techniques similar to those in SPECT and CT scans. Examples of $\beta^+$-emitting isotopes used in PET are $^{11}\text{C}$, $^{13}\text{N}$, $^{15}\text{O}$, and $^{18}\text{F}$, as seen in . This list includes C, N, and O, and so they have the advantage of being able to function as tags for natural body compounds. Its resolution of 0.5 cm is better than that of SPECT; the accuracy and sensitivity of PET scans make them useful for examining the brain’s anatomy and function. The brain’s use of oxygen and water can be monitored with $^{15}\text{O}$. PET is used extensively for diagnosing brain disorders. It can note decreased metabolism in certain regions prior to a confirmation of Alzheimer’s disease. PET can locate regions in the brain that become active when a person carries out specific activities, such as speaking, closing their eyes, and so on. ### Section Summary 1. Radiopharmaceuticals are compounds that are used for medical imaging and therapeutics. 2. The process of attaching a radioactive substance is called tagging. 3. 
lists certain diagnostic uses of radiopharmaceuticals including the isotope and activity typically used in diagnostics. 4. One common imaging device is the Anger camera, which consists of a lead collimator, radiation detectors, and an analysis computer. 5. Tomography performed with $\gamma$-emitting radiopharmaceuticals is called SPECT and has the advantages of x-ray CT scans coupled with organ- and function-specific drugs. 6. PET is a similar technique that uses $\beta^+$ emitters and detects the two annihilation $\gamma$ rays, which help localize the source. ### Conceptual Questions ### Problems & Exercises
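Two small calculations behind PET can be sketched: the 0.511-MeV photon energy is just the electron rest energy $m_ec^2$, and a coincident pair of detections constrains the annihilation point to the line joining the two detectors. The time-of-flight localization shown below is an assumption used by some scanners, not something stated in this text.

```python
# Sketch: PET numbers. Photon energy from the electron rest mass, plus an idealized
# time-of-flight estimate of where the annihilation occurred along the line of
# response (an assumption beyond the text; real scanners include many corrections).
M_E = 9.109e-31       # kg
C = 2.998e8           # m/s
J_PER_MEV = 1.602e-13

print(f"Annihilation photon energy ~ {M_E * C**2 / J_PER_MEV:.3f} MeV")

def offset_from_midpoint_m(delta_t_s):
    """If one photon arrives delta_t earlier, the source lies c*delta_t/2 closer
    to that detector along the line joining the two detectors."""
    return C * delta_t_s / 2.0

print(f"0.5-ns timing difference -> {offset_from_midpoint_m(0.5e-9) * 100:.1f} cm from the midpoint")
```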
# Medical Applications of Nuclear Physics ## Biological Effects of Ionizing Radiation ### Learning Objectives By the end of this section, you will be able to: 1. Define various units of radiation. 2. Describe RBE. We hear many seemingly contradictory things about the biological effects of ionizing radiation. It can cause cancer, burns, and hair loss, yet it is used to treat and even cure cancer. How do we understand these effects? Once again, there is an underlying simplicity in nature, even in complicated biological organisms. All the effects of ionizing radiation on biological tissue can be understood by knowing that ionizing radiation affects molecules within cells, particularly DNA molecules. Let us take a brief look at molecules within cells and how cells operate. Cells have long, double-helical DNA molecules containing chemical codes called genetic codes that govern the function and processes undertaken by the cell. It is for unraveling the double-helical structure of DNA that James Watson, Francis Crick, and Maurice Wilkins received the Nobel Prize. Damage to DNA consists of breaks in chemical bonds or other changes in the structural features of the DNA chain, leading to changes in the genetic code. In human cells, we can have as many as a million individual instances of damage to DNA per cell per day. It is remarkable that DNA contains codes that check whether the DNA is damaged or can repair itself. It is like an auto check and repair mechanism. This repair ability of DNA is vital for maintaining the integrity of the genetic code and for the normal functioning of the entire organism. It should be constantly active and needs to respond rapidly. The rate of DNA repair depends on various factors such as the cell type and age of the cell. A cell with a damaged ability to repair DNA, which could have been induced by ionizing radiation, can do one of the following: 1. The cell can go into an irreversible state of dormancy, known as senescence. 2. The cell can initiate programmed cell death. 3. The cell can go into unregulated cell division leading to tumors and cancers. Since ionizing radiation damages the DNA, which is critical in cell reproduction, it has its greatest effect on cells that rapidly reproduce, including most types of cancer. Thus, cancer cells are more sensitive to radiation than normal cells and can be killed by it easily. Cancer is characterized by a malfunction of cell reproduction, and can also be caused by ionizing radiation. Without contradiction, ionizing radiation can be both a cure and a cause. To discuss quantitatively the biological effects of ionizing radiation, we need a radiation dose unit that is directly related to those effects. All effects of radiation are assumed to be directly proportional to the amount of ionization produced in the biological organism. The amount of ionization is in turn proportional to the amount of deposited energy. Therefore, we define a radiation dose unit called the rad, as 1/100 of a joule of ionizing energy deposited per kilogram of tissue, which is $1\ \text{rad} = 0.01\ \text{J/kg}$. For example, if a 50.0-kg person is exposed to ionizing radiation over her entire body and she absorbs 1.00 J, then her whole-body radiation dose is $(1.00\ \text{J})/(50.0\ \text{kg}) = 0.0200\ \text{J/kg} = 2.00\ \text{rad}$. If the same 1.00 J of ionizing energy were absorbed in her 2.00-kg forearm alone, then the dose to the forearm would be $(1.00\ \text{J})/(2.00\ \text{kg}) = 0.500\ \text{J/kg} = 50.0\ \text{rad}$, and the unaffected tissue would have a zero rad dose. When calculating radiation doses, you divide the energy absorbed by the mass of affected tissue. 
You must specify the affected region, such as the whole body or forearm, in addition to giving the numerical dose in rads. The SI unit for radiation dose is the gray (Gy), which is defined to be $1\ \text{Gy} = 1\ \text{J/kg} = 100\ \text{rad}$. However, the rad is still commonly used. Although the energy per kilogram in 1 rad is small, it has significant effects since the energy causes ionization. The energy needed for a single ionization is a few eV, or less than $10^{-18}\ \text{J}$. Thus, 0.01 J of ionizing energy can create a huge number of ion pairs and have an effect at the cellular level. The effects of ionizing radiation may be directly proportional to the dose in rads, but they also depend on the type of radiation and the type of tissue. That is, for a given dose in rads, the effects depend on whether the radiation is $\alpha$, $\beta$, $\gamma$, x-ray, or some other type of ionizing radiation. In the earlier discussion of the range of ionizing radiation, it was noted that energy is deposited in a series of ionizations and not in a single interaction. Each ion pair or ionization requires a certain amount of energy, so that the number of ion pairs is directly proportional to the amount of the deposited ionizing energy. But, if the range of the radiation is small, as it is for $\alpha$s, then the ionization and the damage created are more concentrated and harder for the organism to repair, as seen in . Concentrated damage is more difficult for biological organisms to repair than damage that is spread out, so short-range particles have greater biological effects. The relative biological effectiveness (RBE) or quality factor (QF) is given in for several types of ionizing radiation—the effect of the radiation is directly proportional to the RBE. A dose unit more closely related to effects in biological tissue is called the roentgen equivalent man or rem and is defined to be the dose in rads multiplied by the relative biological effectiveness, so that the dose in rem equals the dose in rad times the RBE. So, if a person had a whole-body dose of 2.00 rad of $\gamma$ radiation, the dose in rem would be $(2.00\ \text{rad})(1) = 2.00\ \text{rem}$. If the person had a whole-body dose of 2.00 rad of $\alpha$ radiation, then the dose in rem would be $(2.00\ \text{rad})(20) = 40.0\ \text{rem}$. The $\alpha$s would have 20 times the effect on the person as the $\gamma$s for the same deposited energy. The SI equivalent of the rem is the sievert (Sv), defined to be the dose in grays multiplied by the RBE, so that $1\ \text{Sv} = 100\ \text{rem}$. The RBEs given in are approximate, but they yield certain insights. For example, the eyes are more sensitive to radiation, because the cells of the lens do not repair themselves. Neutrons cause more damage than $\gamma$ rays, although both are neutral and have large ranges, because neutrons often cause secondary radiation when they are captured. Note that the RBEs are 1 for higher-energy $\beta$s, $\gamma$s, and x-rays, three of the most common types of radiation. For those types of radiation, the numerical values of the dose in rem and rad are identical. For example, 1 rad of $\gamma$ radiation is also 1 rem. For that reason, rads are still widely quoted rather than rem. summarizes the units that are used for radiation. A high level of activity doesn’t mean much if a person is far away from the source. The activity of a source depends upon the quantity of material (kg) as well as the half-life. A short half-life will produce many more disintegrations per second. Recall that $R = 0.693\,N / t_{1/2}$. Also, the activity decreases exponentially, which is seen in the equation $R = R_0 e^{-\lambda t}$. The large-scale effects of radiation on humans can be divided into two categories: immediate effects and long-term effects. gives the immediate effects of whole-body exposures received in less than one day. 
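A few lines of arithmetic reproduce the unit bookkeeping above. The RBE values used (about 1 for $\gamma$s and 20 for $\alpha$s) are the representative figures quoted in this section, and the 1.00 J and 50.0 kg inputs repeat the whole-body example worked earlier.

```python
# Sketch: dose in gray (1 Gy = 1 J/kg = 100 rad), then dose equivalent via RBE
# (dose in Sv = dose in Gy * RBE, and 1 Sv = 100 rem).
def dose_gray(energy_j, mass_kg):
    return energy_j / mass_kg

def dose_sievert(dose_gy, rbe):
    return dose_gy * rbe

whole_body_gy = dose_gray(1.00, 50.0)               # 0.020 Gy = 2.0 rad
print(f"Absorbed dose: {whole_body_gy:.3f} Gy = {whole_body_gy * 100:.1f} rad")
for particle, rbe in (("gamma", 1), ("alpha", 20)):
    sv = dose_sievert(whole_body_gy, rbe)
    print(f"{particle}: {sv:.2f} Sv = {sv * 100:.1f} rem")
```

This reproduces the 2.00-rem and 40.0-rem figures worked out above for the same 2.00-rad whole-body dose.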
If the radiation exposure is spread out over more time, greater doses are needed to cause the effects listed. This is due to the body’s ability to partially repair the damage. Any dose less than 100 mSv (10 rem) is called a low dose, 0.1 Sv to 1 Sv (10 to 100 rem) is called a moderate dose, and anything greater than 1 Sv (100 rem) is called a high dose. There is no known way to determine after the fact if a person has been exposed to less than 10 mSv. Immediate effects are explained by the effects of radiation on cells and the sensitivity of rapidly reproducing cells to radiation. The first clue that a person has been exposed to radiation is a change in blood count, which is not surprising since blood cells are the most rapidly reproducing cells in the body. At higher doses, nausea and hair loss are observed, which may be due to interference with cell reproduction. Cells in the lining of the digestive system also rapidly reproduce, and their destruction causes nausea. When the growth of hair cells slows, the hair follicles become thin and break off. High doses cause significant cell death in all systems, but the lowest doses that cause fatalities do so by weakening the immune system through the loss of white blood cells. The two known long-term effects of radiation are cancer and genetic defects. Both are directly attributable to the interference of radiation with cell reproduction. For high doses of radiation, the risk of cancer is reasonably well known from studies of exposed groups. Hiroshima and Nagasaki survivors and a smaller number of people exposed by their occupation, such as radium dial painters, have been fully documented. Chernobyl victims will be studied for many decades, with some data already available. For example, a significant increase in childhood thyroid cancer has been observed. The risk of a radiation-induced cancer for low and moderate doses is generally assumed to be proportional to the risk known for high doses. Under this assumption, any dose of radiation, no matter how small, involves a risk to human health. This is called the linear hypothesis and it may be prudent, but it is controversial. There is some evidence that, unlike the immediate effects of radiation, the long-term effects are cumulative and there is little self-repair. This is analogous to the risk of skin cancer from UV exposure, which is known to be cumulative. There is a latency period for the onset of radiation-induced cancer of about 2 years for leukemia and 15 years for most other forms. The person is at risk for at least 30 years after the latency period. Omitting many details, the overall risk of a radiation-induced cancer death per year per rem of exposure is about 10 in a million, which can be written as . If a person receives a dose of 1 rem, his risk each year of dying from radiation-induced cancer is 10 in a million and that risk continues for about 30 years. The lifetime risk is thus 300 in a million, or 0.03 percent. Since about 20 percent of all worldwide deaths are from cancer, the increase due to a 1 rem exposure is impossible to detect demographically. But 100 rem (1 Sv), which was the dose received by the average Hiroshima and Nagasaki survivor, causes a 3 percent risk, which can be observed in the presence of a 20 percent normal or natural incidence rate. The incidence of genetic defects induced by radiation is about one-third that of cancer deaths, but is much more poorly known. 
The lifetime risk of a genetic defect due to a 1 rem exposure is about 100 in a million or , but the normal incidence is 60,000 in a million. Evidence of such a small increase, tragic as it is, is nearly impossible to obtain. For example, there is no evidence of increased genetic defects among the offspring of Hiroshima and Nagasaki survivors. Animal studies do not seem to correlate well with effects on humans and are not very helpful. For both cancer and genetic defects, the approach to safety has been to use the linear hypothesis, which is likely to be an overestimate of the risks of low doses. Certain researchers even claim that low doses are beneficial. Hormesis is a term used to describe generally favorable biological responses to low exposures of toxins or radiation. Such low levels may help certain repair mechanisms to develop or enable cells to adapt to the effects of the low exposures. Positive effects may occur at low doses that could be a problem at high doses. Even the linear hypothesis estimates of the risks are relatively small, and the average person is not exposed to large amounts of radiation. lists average annual background radiation doses from natural and artificial sources for Australia, the United States, Germany, and world-wide averages. Cosmic rays are partially shielded by the atmosphere, and the dose depends upon altitude and latitude, but the average is about 0.40 mSv/y. A good example of the variation of cosmic radiation dose with altitude comes from the airline industry. Monitored personnel show an average of 2 mSv/y. A 12-hour flight might give you an exposure of 0.02 to 0.03 mSv. Doses from the Earth itself are mainly due to the isotopes of uranium, thorium, and potassium, and vary greatly by location. Some places have great natural concentrations of uranium and thorium, yielding doses ten times as high as the average value. Internal doses come from foods and liquids that we ingest. Fertilizers containing phosphates have potassium and uranium. So we are all a little radioactive. Carbon-14 has about 66 Bq/kg radioactivity whereas fertilizers may have more than 3000 Bq/kg radioactivity. Medical and dental diagnostic exposures are mostly from x-rays. It should be noted that x-ray doses tend to be localized and are becoming much smaller with improved techniques. shows typical doses received during various diagnostic x-ray examinations. Note the large dose from a CT scan. While CT scans only account for less than 20 percent of the x-ray procedures done today, they account for about 50 percent of the annual dose received. Radon is usually more pronounced underground and in buildings with low air exchange with the outside world. Almost all soil contains some and , but radon is lower in mainly sedimentary soils and higher in granite soils. Thus, the exposure to the public can vary greatly, even within short distances. Radon can diffuse from the soil into homes, especially basements. The estimated exposure for is controversial. Recent studies indicate there is more radon in homes than had been realized, and it is speculated that radon may be responsible for 20 percent of lung cancers, being particularly hazardous to those who also smoke. Many countries have introduced limits on allowable radon concentrations in indoor air, often requiring the measurement of radon concentrations in a house prior to its sale. 
Ironically, it could be argued that the higher levels of radon exposure and their geographic variability, taken with the lack of demographic evidence of any effects, mean that low-level radiation is less dangerous than previously thought. ### Radiation Protection Laws regulate radiation doses to which people can be exposed. The greatest occupational whole-body dose that is allowed depends upon the country and is about 20 to 50 mSv/y; it is rarely reached by medical and nuclear power workers. Higher doses are allowed for the hands. Much lower doses are permitted for the reproductive organs and the fetuses of pregnant women. Inadvertent doses to the public are limited to 1/10 of occupational doses, except for those caused by nuclear power, which cannot legally expose the public to more than 1/1000 of the occupational limit or 0.05 mSv/y (5 mrem/y). This has been exceeded in the United States only at the time of the Three Mile Island (TMI) accident in 1979. Chernobyl is another story. Extensive monitoring with a variety of radiation detectors is performed to assure radiation safety. Increased ventilation in uranium mines has lowered the dose there to about 1 mSv/y. To physically limit radiation doses, we use shielding, increase the distance from a source, and limit the time of exposure. illustrates how these are used to protect both the patient and the dental technician when an x-ray is taken. Shielding absorbs radiation and can be provided by any material, including sufficient air. The greater the distance from the source, the more the radiation spreads out. The less time a person is exposed to a given source, the smaller is the dose received by the person. Doses from most medical diagnostics have decreased in recent years due to faster films that require less exposure time. ### Problem-Solving Strategy You need to follow certain steps for dose calculations: 1. Examine the situation to determine that a person is exposed to ionizing radiation. 2. Identify exactly what needs to be determined in the problem (identify the unknowns). The most straightforward problems ask for a dose calculation. 3. Make a list of what is given or can be inferred from the problem as stated (identify the knowns). Look for information on the type of radiation, the energy per event, the activity, and the mass of tissue affected. 4. For dose calculations, you need to determine the energy deposited. This may take one or more steps, depending on the given information. 5. Divide the deposited energy by the mass of the affected tissue. Use units of joules for energy and kilograms for mass. To calculate the dose in Gy, use the definition that $1\ \text{Gy} = 1\ \text{J/kg}$. To calculate the dose in mSv, determine the RBE (QF) of the radiation. Recall that the dose in sieverts equals the dose in grays multiplied by the RBE. 6. Check the answer to see if it is reasonable: Does it make sense? The dose should be consistent with the numbers given in the text for diagnostic, occupational, and therapeutic exposures. ### Risk versus Benefit Medical doses of radiation are also limited. Diagnostic doses are generally low and have been further lowered with improved techniques and faster films. With the possible exception of routine dental x-rays, radiation is used diagnostically only when needed so that the low risk is justified by the benefit of the diagnosis. Chest x-rays give the lowest doses—about 0.1 mSv to the tissue affected, with less than 5 percent scattering into tissues that are not directly imaged. 
Other x-ray procedures range upward to about 10 mSv in a CT scan, and about 5 mSv (0.5 rem) per dental x-ray, again both only affecting the tissue imaged. Medical images with radiopharmaceuticals give doses ranging from 1 to 5 mSv, usually localized. One exception is the thyroid scan using $^{131}\text{I}$. Because of its relatively long half-life, it exposes the thyroid to about 0.75 Sv. The isotope $^{123}\text{I}$ is more difficult to produce, but its short half-life limits thyroid exposure to about 15 mSv. ### Test Prep for AP Courses ### Section Summary 1. The biological effects of ionizing radiation are due to two effects it has on cells: interference with cell reproduction, and destruction of cell function. 2. A radiation dose unit called the rad is defined in terms of the ionizing energy deposited per kilogram of tissue: $1\ \text{rad} = 0.01\ \text{J/kg}$. 3. The SI unit for radiation dose is the gray (Gy), which is defined to be $1\ \text{Gy} = 1\ \text{J/kg} = 100\ \text{rad}$. 4. To account for the effect of the type of particle creating the ionization, we use the relative biological effectiveness (RBE) or quality factor (QF) given in and define a unit called the roentgen equivalent man (rem) as $\text{dose in rem} = \text{dose in rad} \times \text{RBE}$. 5. Particles that have short ranges or create large ionization densities have RBEs greater than unity. The SI equivalent of the rem is the sievert (Sv), defined to be $\text{dose in Sv} = \text{dose in Gy} \times \text{RBE}$, so that $1\ \text{Sv} = 100\ \text{rem}$. 6. Whole-body, single-exposure doses of 0.1 Sv or less are low doses while those of 0.1 to 1 Sv are moderate, and those over 1 Sv are high doses. Some immediate radiation effects are given in . Effects due to low doses are not observed, but their risk is assumed to be directly proportional to those of high doses, an assumption known as the linear hypothesis. Long-term effects are cancer deaths at the rate of about $10/10^{6}$ per rem per year and genetic defects at roughly one-third this rate. Background radiation doses and sources are given in . World-wide average radiation exposure from natural sources, including radon, is about 3 mSv, or 300 mrem. Radiation protection utilizes shielding, distance, and time to limit exposure. ### Conceptual Questions ### Problems & Exercises
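The lifetime-risk arithmetic quoted in this section amounts to a single multiplication; the sketch below uses the rate of about 10 radiation-induced cancer deaths per year per rem per million people and the roughly 30-year at-risk period stated in the discussion of long-term effects.

```python
# Sketch: lifetime cancer risk from the linear-hypothesis rate quoted in the text.
RISK_PER_REM_PER_YEAR = 10e-6   # per person, per rem, per year
AT_RISK_YEARS = 30

def lifetime_risk(dose_rem):
    return dose_rem * RISK_PER_REM_PER_YEAR * AT_RISK_YEARS

for dose_rem in (1, 100):   # 1 rem, and the ~100 rem Hiroshima/Nagasaki average
    print(f"{dose_rem:>3} rem -> lifetime risk ~ {lifetime_risk(dose_rem) * 100:.2f}%")
```

The outputs, 0.03% and 3%, match the figures given above for a 1-rem exposure and for the average atomic-bomb survivor dose.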
# Medical Applications of Nuclear Physics ## Therapeutic Uses of Ionizing Radiation ### Learning Objectives By the end of this section, you will be able to: 1. Explain the concept of radiotherapy and list typical doses for cancer therapy. Therapeutic applications of ionizing radiation, called radiation therapy or radiotherapy, have existed since the discovery of x-rays and nuclear radioactivity. Today, radiotherapy is used almost exclusively for cancer therapy, where it saves thousands of lives and improves the quality of life and longevity of many it cannot save. Radiotherapy may be used alone or in combination with surgery and chemotherapy (drug treatment) depending on the type of cancer and the response of the patient. A careful examination of all available data has established that radiotherapy’s beneficial effects far outweigh its long-term risks. ### Medical Application The earliest uses of ionizing radiation on humans were mostly harmful, with many at the level of snake oil as seen in . Radium-doped cosmetics that glowed in the dark were used around the time of World War I. As recently as the 1950s, radon mine tours were promoted as healthful and rejuvenating—those who toured were exposed but gained no benefits. Radium salts were sold as health elixirs for many years. The gruesome death of a wealthy industrialist, who became psychologically addicted to the brew, alerted the unsuspecting to the dangers of radium salt elixirs. Most abuses finally ended after the legislation in the 1950s. Radiotherapy is effective against cancer because cancer cells reproduce rapidly and, consequently, are more sensitive to radiation. The central problem in radiotherapy is to make the dose for cancer cells as high as possible while limiting the dose for normal cells. The ratio of abnormal cells killed to normal cells killed is called the therapeutic ratio, and all radiotherapy techniques are designed to enhance this ratio. Radiation can be concentrated in cancerous tissue by a number of techniques. One of the most prevalent techniques for well-defined tumors is a geometric technique shown in . A narrow beam of radiation is passed through the patient from a variety of directions with a common crossing point in the tumor. This concentrates the dose in the tumor while spreading it out over a large volume of normal tissue. The external radiation can be x-rays, rays, or ionizing-particle beams produced by accelerators. Accelerator-produced beams of neutrons, , and heavy ions such as nitrogen nuclei have been employed, and these can be quite effective. These particles have larger QFs or RBEs and sometimes can be better localized, producing a greater therapeutic ratio. But accelerator radiotherapy is much more expensive and less frequently employed than other forms. Another form of radiotherapy uses chemically inert radioactive implants. One use is for prostate cancer. Radioactive seeds (about 40 to 100 and the size of a grain of rice) are placed in the prostate region. The isotopes used are usually (6-month half life) or (3-month half life). Alpha emitters have the dual advantages of a large QF and a small range for better localization. Radiopharmaceuticals are used for cancer therapy when they can be localized well enough to produce a favorable therapeutic ratio. Thyroid cancer is commonly treated utilizing radioactive iodine. Thyroid cells concentrate iodine, and cancerous thyroid cells are more aggressive in doing this. 
An ingenious use of radiopharmaceuticals in cancer therapy tags antibodies with radioisotopes. Antibodies produced by a patient to combat his cancer are extracted, cultured, loaded with a radioisotope, and then returned to the patient. The antibodies are concentrated almost entirely in the tissue they developed to fight, thus localizing the radiation in abnormal tissue. The therapeutic ratio can be quite high for short-range radiation. There is, however, a significant dose for organs that eliminate radiopharmaceuticals from the body, such as the liver, kidneys, and bladder. As with most radiotherapy, the technique is limited by the tolerable amount of damage to the normal tissue. lists typical therapeutic doses of radiation used against certain cancers. The doses are large, but not fatal because they are localized and spread out in time. Protocols for treatment vary with the type of cancer and the condition and response of the patient. Three to five 200-rem treatments per week for a period of several weeks is typical. Time between treatments allows the body to repair normal tissue. This effect occurs because damage is concentrated in the abnormal tissue, and the abnormal tissue is more sensitive to radiation. Damage to normal tissue limits the doses. You will note that the greatest doses are given to any tissue that is not rapidly reproducing, such as in the adult brain. Lung cancer, on the other end of the scale, cannot ordinarily be cured with radiation because of the sensitivity of lung tissue and blood to radiation. But radiotherapy for lung cancer does alleviate symptoms and prolong life and is therefore justified in some cases. Finally, it is interesting to note that chemotherapy employs drugs that interfere with cell division and is, thus, also effective against cancer. It also has almost the same side effects, such as nausea and hair loss, and risks, such as the inducement of another cancer. ### Section Summary 1. Radiotherapy is the use of ionizing radiation to treat ailments, now limited to cancer therapy. 2. The sensitivity of cancer cells to radiation enhances the ratio of cancer cells killed to normal cells killed, which is called the therapeutic ratio. 3. Doses for various organs are limited by the tolerance of normal tissue for radiation. Treatment is localized in one region of the body and spread out in time. ### Conceptual Questions ### Problems & Exercises
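The crossed-beam geometric technique described earlier in this section can be reduced to a toy calculation: every beam passes through the tumor, but any given stretch of normal tissue lies along only one beam path. The beam count and per-beam dose below are hypothetical numbers chosen only for illustration; real treatment planning is far more involved.

```python
# Sketch: idealized dose bookkeeping for crossed external beams.
def crossed_beam_doses(n_beams, dose_per_beam_rem):
    tumor_dose = n_beams * dose_per_beam_rem   # all beams overlap in the tumor
    normal_dose = dose_per_beam_rem            # one tissue path sees only one beam
    return tumor_dose, normal_dose

tumor, normal = crossed_beam_doses(n_beams=8, dose_per_beam_rem=25.0)  # hypothetical
print(f"tumor: {tumor:.0f} rem; normal tissue along any single path: {normal:.0f} rem")
```

In this idealization the tumor-to-normal dose ratio grows directly with the number of beam directions, which is the sense in which the technique enhances the therapeutic ratio.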
# Medical Applications of Nuclear Physics ## Food Irradiation ### Learning Objectives By the end of this section, you will be able to: 1. Define food irradiation, low dose, and free radicals. Ionizing radiation is widely used to sterilize medical supplies, such as bandages, and consumer products, such as tampons. Worldwide, it is also used to irradiate food, an application that promises to grow in the future. Food irradiation is the treatment of food with ionizing radiation. It is used to reduce pest infestation and to delay spoilage and prevent illness caused by microorganisms. Food irradiation is controversial. Proponents see it as superior to pasteurization, preservatives, and insecticides, supplanting dangerous chemicals with a more effective process. Opponents see its safety as unproven, perhaps leaving worse toxic residues as well as presenting an environmental hazard at treatment sites. In developing countries, food irradiation might increase crop production by 25.0% or more, and reduce food spoilage by a similar amount. It is used chiefly to treat spices and some fruits, and in some countries, red meat, poultry, and vegetables. Over 40 countries have approved food irradiation at some level. Food irradiation exposes food to large doses of $\gamma$ rays, x-rays, or electrons. These photons and electrons induce no nuclear reactions and thus create no residual radioactivity. (Some forms of ionizing radiation, such as neutron irradiation, cause residual radioactivity. These are not used for food irradiation.) The $\gamma$ source is usually $^{60}\text{Co}$ or $^{137}\text{Cs}$, the latter isotope being a major by-product of nuclear power. Cobalt-60 $\gamma$ rays average 1.25 MeV, while those of $^{137}\text{Cs}$ are 0.67 MeV and are less penetrating. X-rays used for food irradiation are created with voltages of up to 5 million volts and, thus, have photon energies up to 5 MeV. Electrons used for food irradiation are accelerated to energies up to 10 MeV. The higher the energy per particle, the more penetrating the radiation is and the more ionization it can create. shows a typical $\gamma$-irradiation plant. Because food irradiation seeks to destroy organisms such as insects and bacteria, much larger doses than those fatal to humans must be applied. Generally, the simpler the organism, the more radiation it can tolerate. (Cancer cells are a partial exception, because they are rapidly reproducing and, thus, more sensitive.) Current licensing allows up to 1000 Gy to be applied to fresh fruits and vegetables, called a low dose in food irradiation. Such a dose is enough to prevent or reduce the growth of many microorganisms, but about 10,000 Gy is needed to kill salmonella, and even more is needed to kill fungi. Doses greater than 10,000 Gy are considered to be high doses in food irradiation and product sterilization. The effectiveness of food irradiation varies with the type of food. Spices and many fruits and vegetables have dramatically longer shelf lives. These also show no degradation in taste and no loss of food value or vitamins. If not for the mandatory labeling, such foods subjected to low-level irradiation (up to 1000 Gy) could not be distinguished from untreated foods in quality. However, some foods actually spoil faster after irradiation, particularly those with high water content like lettuce and peaches. Others, such as milk, are given a noticeably unpleasant taste. High-level irradiation produces significant and chemically measurable changes in foods. 
It produces about a 15% loss of nutrients and a 25% loss of vitamins, as well as some change in taste. Such losses are similar to those that occur in ordinary freezing and cooking. How does food irradiation work? Ionization produces a random assortment of broken molecules and ions, some of them unstable oxygen- or hydrogen-containing molecules known as free radicals. These undergo rapid chemical reactions, producing perhaps four or five thousand different compounds called radiolytic products, some of which make cell function impossible by breaking cell membranes, fracturing DNA, and so on. How safe is the food afterward? Critics argue that the radiolytic products present a lasting hazard, perhaps being carcinogenic. However, the safety of irradiated food is not known precisely. We do know that low-level food irradiation produces no compounds in amounts that can be measured chemically. This is not surprising, since trace amounts of several thousand compounds may be created. We also know that there have been no observable negative short-term effects on consumers. Long-term effects may show up if large numbers of people consume large quantities of irradiated food, but no effects have appeared due to the small amounts of irradiated food that are consumed regularly. The case for safety is supported by testing of animal diets that were irradiated; no transmitted genetic effects have been observed. Food irradiation (at least up to a million rad) has been endorsed by the World Health Organization and the UN Food and Agriculture Organization. Finally, the hazard to consumers, if it exists, must be weighed against the benefits in food production and preservation. It must also be weighed against the very real hazards of existing insecticides and food preservatives. ### Section Summary 1. Food irradiation is the treatment of food with ionizing radiation. 2. Irradiating food can destroy insects and bacteria by creating free radicals and radiolytic products that can break apart cell membranes. 3. Food irradiation has produced no observable negative short-term effects for humans, but its long-term effects are unknown. ### Conceptual Questions
# Medical Applications of Nuclear Physics ## Fusion ### Learning Objectives By the end of this section, you will be able to: 1. Define nuclear fusion. 2. Discuss processes to achieve practical fusion energy generation. While basking in the warmth of the summer sun, a student reads of the latest breakthrough in achieving sustained thermonuclear power and vaguely recalls hearing about the cold fusion controversy. The three are connected. The Sun’s energy is produced by nuclear fusion (see ). Thermonuclear power is the name given to the use of controlled nuclear fusion as an energy source. While research in the area of thermonuclear power is progressing, high temperatures and containment difficulties remain. The cold fusion controversy centered around unsubstantiated claims of practical fusion power at room temperatures. Nuclear fusion is a reaction in which two nuclei are combined, or fused, to form a larger nucleus. We know that all nuclei have less mass than the sum of the masses of the protons and neutrons that form them. The missing mass times equals the binding energy of the nucleus—the greater the binding energy, the greater the missing mass. We also know that , the binding energy per nucleon, is greater for medium-mass nuclei and has a maximum at Fe (iron). This means that if two low-mass nuclei can be fused together to form a larger nucleus, energy can be released. The larger nucleus has a greater binding energy and less mass per nucleon than the two that combined. Thus mass is destroyed in the fusion reaction, and energy is released (see ). On average, fusion of low-mass nuclei releases energy, but the details depend on the actual nuclides involved. The major obstruction to fusion is the Coulomb repulsion between nuclei. Since the attractive nuclear force that can fuse nuclei together is short ranged, the repulsion of like positive charges must be overcome to get nuclei close enough to induce fusion. shows an approximate graph of the potential energy between two nuclei as a function of the distance between their centers. The graph is analogous to a hill with a well in its center. A ball rolled from the right must have enough kinetic energy to get over the hump before it falls into the deeper well with a net gain in energy. So it is with fusion. If the nuclei are given enough kinetic energy to overcome the electric potential energy due to repulsion, then they can combine, release energy, and fall into a deep well. One way to accomplish this is to heat fusion fuel to high temperatures so that the kinetic energy of thermal motion is sufficient to get the nuclei together. You might think that, in the core of our Sun, nuclei are coming into contact and fusing. However, in fact, temperatures on the order of are needed to actually get the nuclei in contact, exceeding the core temperature of the Sun. Quantum mechanical tunneling is what makes fusion in the Sun possible, and tunneling is an important process in most other practical applications of fusion, too. Since the probability of tunneling is extremely sensitive to barrier height and width, increasing the temperature greatly increases the rate of fusion. The closer reactants get to one another, the more likely they are to fuse (see ). Thus most fusion in the Sun and other stars takes place at their centers, where temperatures are highest. Moreover, high temperature is needed for thermonuclear power to be a practical source of energy. 
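To see why such extreme temperatures are required, it helps to put numbers to the Coulomb barrier described above. The sketch below is a rough order-of-magnitude estimate rather than a calculation taken from this text: it assumes two protons must approach to a center-to-center distance of about 1 fm (an assumed value) and asks what temperature gives them that much average thermal kinetic energy.

```python
# Rough estimate of the temperature needed to bring two protons into contact.
# Assumptions (not from the text): approach distance r ~ 1 fm, and we equate
# the Coulomb barrier to the average thermal kinetic energy (3/2) k T.

k_e = 8.99e9        # Coulomb constant, N·m²/C²
q = 1.602e-19       # proton charge, C
k_B = 1.381e-23     # Boltzmann constant, J/K
r = 1.0e-15         # assumed center-to-center distance, m (about 1 fm)

barrier = k_e * q**2 / r                 # Coulomb potential energy at contact, J
barrier_MeV = barrier / 1.602e-13        # convert to MeV
T = 2 * barrier / (3 * k_B)              # temperature for which (3/2) k T equals the barrier

print(f"Coulomb barrier ~ {barrier_MeV:.1f} MeV")   # roughly 1.4 MeV
print(f"Required temperature ~ {T:.1e} K")          # roughly 1e10 K
```

The result, on the order of 10¹⁰ K, is far above the commonly quoted solar core temperature of roughly 1.6 × 10⁷ K, which is why tunneling rather than direct contact dominates fusion in the Sun.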
The Sun produces energy by fusing protons or hydrogen nuclei (by far the Sun’s most abundant nuclide) into helium nuclei . The principal sequence of fusion reactions forms what is called the proton-proton cycle: where stands for a positron and is an electron neutrino. (The energy in parentheses is released by the reaction.) Note that the first two reactions must occur twice for the third to be possible, so that the cycle consumes six protons () but gives back two. Furthermore, the two positrons produced will find two electrons and annihilate to form four more rays, for a total of six. The overall effect of the cycle is thus where the 26.7 MeV includes the annihilation energy of the positrons and electrons and is distributed among all the reaction products. The solar interior is dense, and the reactions occur deep in the Sun where temperatures are highest. It takes about 32,000 years for the energy to diffuse to the surface and radiate away. However, the neutrinos escape the Sun in less than two seconds, carrying their energy with them, because they interact so weakly that the Sun is transparent to them. Negative feedback in the Sun acts as a thermostat to regulate the overall energy output. For instance, if the interior of the Sun becomes hotter than normal, the reaction rate increases, producing energy that expands the interior. This cools it and lowers the reaction rate. Conversely, if the interior becomes too cool, it contracts, increasing the temperature and reaction rate (see ). Stars like the Sun are stable for billions of years, until a significant fraction of their hydrogen has been depleted. What happens then is discussed in Frontiers of Physics. Theories of the proton-proton cycle (and other energy-producing cycles in stars) were pioneered by the German-born, American physicist Hans Bethe (1906–2005), starting in 1938. He was awarded the 1967 Nobel Prize in physics for this work, and he has made many other contributions to physics and society. Neutrinos produced in these cycles escape so readily that they provide us an excellent means to test these theories and study stellar interiors. Detectors have been constructed and operated for more than four decades now to measure solar neutrinos (see ). Although solar neutrinos are detected and neutrinos were observed from Supernova 1987A (), too few solar neutrinos were observed to be consistent with predictions of solar energy production. After many years, this solar neutrino problem was resolved with a blend of theory and experiment that showed that the neutrino does indeed have mass. It was also found that there are three types of neutrinos, each associated with a different type of nuclear decay. The proton-proton cycle is not a practical source of energy on Earth, in spite of the great abundance of hydrogen (). The reaction has a very low probability of occurring. (This is why our Sun will last for about ten billion years.) However, a number of other fusion reactions are easier to induce. Among them are: Deuterium () is about 0.015% of natural hydrogen, so there is an immense amount of it in sea water alone. In addition to an abundance of deuterium fuel, these fusion reactions produce large energies per reaction (in parentheses), but they do not produce much radioactive waste. Tritium () is radioactive, but it is consumed as a fuel (the reaction ), and the neutrons and s can be shielded. 
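As a concrete illustration of how the fusion energies quoted in parentheses arise, the sketch below estimates the energy released by the deuterium-tritium reaction ²H + ³H → ⁴He + n, one of the reactions referred to above, from the mass destroyed. The atomic masses used are standard tabulated values and are not quoted in this section.

```python
# Energy released in D-T fusion, estimated from the mass defect (E = Δm c²).
# Masses are standard atomic mass values in unified atomic mass units (u);
# they are illustrative inputs, not numbers taken from this section.

u_to_MeV = 931.494          # energy equivalent of 1 u, in MeV

m_deuterium = 2.014102      # ²H
m_tritium   = 3.016049      # ³H
m_helium4   = 4.002603      # ⁴He
m_neutron   = 1.008665      # n

delta_m = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
energy_MeV = delta_m * u_to_MeV

print(f"Mass destroyed: {delta_m:.6f} u")
print(f"Energy released: {energy_MeV:.1f} MeV")   # about 17.6 MeV per reaction
```

Because atomic masses are used on both sides, the electron masses cancel, and the roughly 17.6 MeV obtained is the energy carried off by the helium nucleus and the neutron.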
The neutrons produced can also be used to create more energy and fuel in reactions like and Note that these last two reactions, and , put most of their energy output into the ray, and such energy is difficult to utilize. The three keys to practical fusion energy generation are to achieve the temperatures necessary to make the reactions likely, to raise the density of the fuel, and to confine it long enough to produce large amounts of energy. These three factors—temperature, density, and time—complement one another, and so a deficiency in one can be compensated for by the others. Ignition is defined to occur when the reactions produce enough energy to be self-sustaining after external energy input is cut off. This goal, which must be reached before commercial plants can be a reality, has not been achieved. Another milestone, called break-even, occurs when the fusion power produced equals the heating power input. Break-even has nearly been reached and gives hope that ignition and commercial plants may become a reality in a few decades. Two techniques have shown considerable promise. The first of these is called magnetic confinement and uses the property that charged particles have difficulty crossing magnetic field lines. The tokamak, shown in , has shown particular promise. The tokamak’s toroidal coil confines charged particles into a circular path with a helical twist due to the circulating ions themselves. In 1995, the Tokamak Fusion Test Reactor at Princeton in the US achieved world-record plasma temperatures as high as 500 million degrees Celsius. This facility operated between 1982 and 1997. A joint international effort is underway in France to build a tokamak-type reactor that will be the stepping stone to commercial power. ITER, as it is called, will be a full-scale device that aims to demonstrate the feasibility of fusion energy. It will generate 500 MW of power for extended periods of time and will achieve break-even conditions. It will study plasmas in conditions similar to those expected in a fusion power plant. Completion is scheduled for 2018. The second promising technique aims multiple lasers at tiny fuel pellets filled with a mixture of deuterium and tritium. Huge power input heats the fuel, evaporating the confining pellet and crushing the fuel to high density with the expanding hot plasma produced. This technique is called inertial confinement, because the fuel’s inertia prevents it from escaping before significant fusion can take place. Higher densities have been reached than with tokamaks, but with smaller confinement times. In 2009, the Lawrence Livermore Laboratory (CA) completed a laser fusion device with 192 ultraviolet laser beams that are focused upon a D-T pellet (see ). ### Test Prep for AP Courses ### Section Summary 1. Nuclear fusion is a reaction in which two nuclei are combined to form a larger nucleus. It releases energy when light nuclei are fused to form medium-mass nuclei. 2. Fusion is the source of energy in stars, with the proton-proton cycle, being the principal sequence of energy-producing reactions in our Sun. 3. The overall effect of the proton-proton cycle is where the 26.7 MeV includes the energy of the positrons emitted and annihilated. 4. Attempts to utilize controlled fusion as an energy source on Earth are related to deuterium and tritium, and the reactions play important roles. 5. Ignition is the condition under which controlled fusion is self-sustaining; it has not yet been achieved. 
Break-even, in which the fusion energy output is as great as the external energy input, has nearly been achieved. 6. Magnetic confinement and inertial confinement are the two methods being developed for heating fuel to sufficiently high temperatures, at sufficient density, and for sufficiently long times to achieve ignition. The first method uses magnetic fields and the second method uses the momentum of impinging laser beams for confinement. ### Conceptual Questions ### Problems & Exercises
# Medical Applications of Nuclear Physics ## Fission ### Learning Objectives By the end of this section, you will be able to: 1. Define nuclear fission. 2. Discuss how fission fuel reacts and describe what it produces. 3. Describe controlled and uncontrolled chain reactions. Nuclear fission is a reaction in which a nucleus is split (or fissured). Controlled fission is a reality, whereas controlled fusion is a hope for the future. Hundreds of nuclear fission power plants around the world attest to the fact that controlled fission is practical and, at least in the short term, economical, as seen in . Whereas nuclear power was of little interest for decades following TMI and Chernobyl (and now Fukushima Daiichi), growing concerns over global warming have brought nuclear power back on the table as a viable energy alternative. By the end of 2009, there were 442 reactors operating in 30 countries, providing 15% of the world’s electricity. France provides over 75% of its electricity with nuclear power, while the US has 104 operating reactors providing 20% of its electricity. Australia and New Zealand have none. China is building nuclear power plants at the rate of one start every month. Fission is the opposite of fusion and releases energy only when heavy nuclei are split. As noted in Fusion, energy is released if the products of a nuclear reaction have a greater binding energy per nucleon () than the parent nuclei. shows that is greater for medium-mass nuclei than heavy nuclei, implying that when a heavy nucleus is split, the products have less mass per nucleon, so that mass is destroyed and energy is released in the reaction. The amount of energy per fission reaction can be large, even by nuclear standards. The graph in shows to be about 7.6 MeV/nucleon for the heaviest nuclei ( about 240), while is about 8.6 MeV/nucleon for nuclei having about 120. Thus, if a heavy nucleus splits in half, then about 1 MeV per nucleon, or approximately 240 MeV per fission, is released. This is about 10 times the energy per fusion reaction, and about 100 times the energy of the average α, β, or γ decay. Spontaneous fission can occur, but this is usually not the most common decay mode for a given nuclide. For example, can spontaneously fission, but it decays mostly by α emission. Neutron-induced fission is crucial, as seen in . Being chargeless, even low-energy neutrons can strike a nucleus and be absorbed once they feel the attractive nuclear force. Large nuclei are described by a liquid drop model with surface tension and oscillation modes, because the large number of nucleons act like atoms in a drop. The neutron is attracted and thus deposits energy, causing the nucleus to deform as a liquid drop. If stretched enough, the nucleus narrows in the middle. The number of nucleons in contact and the strength of the nuclear force binding the nucleus together are reduced. Coulomb repulsion between the two ends then succeeds in fissioning the nucleus, which pops like a water drop into two large pieces and a few neutrons. Neutron-induced fission can be written as n + ᴬX → FF₁ + FF₂ + xn, where FF₁ and FF₂ are the two daughter nuclei, called fission fragments, and x is the number of neutrons produced. Most often, the masses of the fission fragments are not the same. Most of the released energy goes into the kinetic energy of the fission fragments, with the remainder going into the neutrons and excited states of the fragments. 
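The roughly 240 MeV per fission quoted above follows directly from the binding-energy-per-nucleon figures given in this paragraph. A minimal sketch of that arithmetic:

```python
# Energy released when a heavy nucleus (A ~ 240) splits roughly in half,
# using the binding-energy-per-nucleon values quoted in the text.

A = 240                    # nucleons in the heavy parent nucleus
be_per_A_heavy = 7.6       # MeV/nucleon for the heaviest nuclei (from the text)
be_per_A_medium = 8.6      # MeV/nucleon for medium-mass nuclei (from the text)

energy_per_fission = A * (be_per_A_medium - be_per_A_heavy)
print(f"Energy released per fission ~ {energy_per_fission:.0f} MeV")   # about 240 MeV
```

Each nucleon ends up about 1 MeV more tightly bound after the split, so the total release scales with the number of nucleons in the parent nucleus.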
Since neutrons can induce fission, a self-sustaining chain reaction is possible, provided more than one neutron is produced on average — that is, if x > 1 in the fission equation above. This can also be seen in . An example of a typical neutron-induced fission reaction is Note that in this equation, the total charge remains the same (is conserved): . Also, as far as whole numbers are concerned, the mass is constant: . This is not true when we consider the masses out to 6 or 7 significant places, as in the previous example. Not every neutron produced by fission induces fission. Some neutrons escape the fissionable material, while others interact with a nucleus without making it fission. We can enhance the number of fissions produced by neutrons by having a large amount of fissionable material. The minimum amount necessary for self-sustained fission of a given nuclide is called its critical mass. Some nuclides, such as , produce more neutrons per fission than others, such as . Additionally, some nuclides are easier to make fission than others. In particular, and are easier to fission than the much more abundant . Both factors affect critical mass, which is smallest for . The reason and are easier to fission than is that the nuclear force is more attractive for an even number of neutrons in a nucleus than for an odd number. Consider that ²³⁵U has 143 neutrons, and ²³⁹Pu has 145 neutrons, whereas ²³⁸U has 146. When a neutron encounters a nucleus with an odd number of neutrons, the nuclear force is more attractive, because the additional neutron will make the number even. About 2 MeV more energy is deposited in the resulting nucleus than would be the case if the number of neutrons was already even. This extra energy produces greater deformation, making fission more likely. Thus, and are superior fission fuels. The isotope is only 0.72% of natural uranium, while is 99.27%, and does not exist in nature. Australia has the largest deposits of uranium in the world, standing at 28% of the total. This is followed by Kazakhstan and Canada. The US has only 3% of global reserves. Most fission reactors utilize , which is separated from at some expense. This is called enrichment. The most common separation method is gaseous diffusion of uranium hexafluoride () through membranes. Since has less mass than , its molecules have higher average velocity at the same temperature and diffuse faster. Another interesting characteristic of is that it preferentially absorbs very slow-moving neutrons (with energies a fraction of an eV), whereas fission reactions produce fast neutrons with energies on the order of an MeV. To make a self-sustained fission reactor with , it is thus necessary to slow down (“thermalize”) the neutrons. Water is very effective, since neutrons collide with protons in water molecules and lose energy. shows a schematic of a reactor design, called the pressurized water reactor. Control rods containing nuclides that very strongly absorb neutrons are used to adjust neutron flux. To produce large power, reactors contain hundreds to thousands of critical masses, and the chain reaction easily becomes self-sustaining, a condition called criticality. Neutron flux should be carefully regulated to avoid an exponential increase in fissions, a condition called supercriticality. Control rods help prevent overheating, perhaps even a meltdown or explosive disassembly. The water that is used to thermalize neutrons, necessary to get them to induce fission in , and achieve criticality, provides a negative feedback for temperature increases. 
In case the reactor overheats and boils the water to steam or is breached, the absence of water kills the chain reaction. Considerable heat, however, can still be generated by the reactor’s radioactive fission products. Other safety features, thus, need to be incorporated in the event of a loss of coolant accident, including auxiliary cooling water and pumps. One nuclide already mentioned is ²³⁹Pu, which has a 24,120-y half-life and does not exist in nature. Plutonium-239 is manufactured from ²³⁸U in reactors, and it provides an opportunity to utilize the other 99% of natural uranium as an energy source. The following reaction sequence, called breeding, produces ²³⁹Pu. Breeding begins with neutron capture by ²³⁸U: n + ²³⁸U → ²³⁹U + γ. Uranium-239 then decays: ²³⁹U → ²³⁹Np + β⁻ + ν̄ₑ. Neptunium-239 also decays: ²³⁹Np → ²³⁹Pu + β⁻ + ν̄ₑ. Plutonium-239 builds up in reactor fuel at a rate that depends on the probability of neutron capture by ²³⁸U (all reactor fuel contains more ²³⁸U than ²³⁵U). Reactors designed specifically to make plutonium are called breeder reactors. They seem to be inherently more hazardous than conventional reactors, but it remains unknown whether their hazards can be made economically acceptable. The four reactors at Chernobyl, including the one that was destroyed, were built to breed plutonium and produce electricity. These reactors had a design that was significantly different from the pressurized water reactor illustrated above. Plutonium-239 has advantages over ²³⁵U as a reactor fuel — it produces more neutrons per fission on average, and it is easier for a thermal neutron to cause it to fission. These same properties give ²³⁹Pu a particularly small critical mass, an advantage for nuclear weapons. It is also chemically different from uranium, so it is inherently easier to separate from the uranium in spent reactor fuel than one uranium isotope is to separate from another. ### Test Prep for AP Courses ### Section Summary 1. Nuclear fission is a reaction in which a nucleus is split. 2. Fission releases energy when heavy nuclei are split into medium-mass nuclei. 3. Self-sustained fission is possible, because neutron-induced fission also produces neutrons that can induce other fissions, n + ᴬX → FF₁ + FF₂ + xn, where FF₁ and FF₂ are the two daughter nuclei, or fission fragments, and x is the number of neutrons produced. 4. A minimum mass, called the critical mass, should be present to achieve criticality. 5. More than a critical mass can produce supercriticality. 6. The production of new or different isotopes (especially ²³⁹Pu) by nuclear transformation is called breeding, and reactors designed for this purpose are called breeder reactors. ### Conceptual Questions ### Problem Exercises
# Medical Applications of Nuclear Physics ## Nuclear Weapons ### Learning Objectives By the end of this section, you will be able to: 1. Discuss different types of fission and thermonuclear bombs. 2. Explain the ill effects of a nuclear explosion. The world was in turmoil when fission was discovered in 1938. The discovery of fission, made by two German physicists, Otto Hahn and Fritz Strassmann, was quickly verified by two Jewish refugees from Nazi Germany, Lise Meitner and her nephew Otto Frisch. Fermi, among others, soon found that not only did neutrons induce fission, but more neutrons were produced during fission. The possibility of a self-sustained chain reaction was immediately recognized by leading scientists the world over. The enormous energy known to be in nuclei, but considered inaccessible, now seemed to be available on a large scale. Within months after the announcement of the discovery of fission, Adolf Hitler banned the export of uranium from newly occupied Czechoslovakia. It seemed that the military value of uranium had been recognized in Nazi Germany, and that a serious effort to build a nuclear bomb had begun. Alarmed scientists, many of whom had fled Nazi Germany, decided to take action. None was more famous or revered than Einstein. It was felt that his help was needed to get the American government to make a serious effort at nuclear weapons as a matter of survival. Leo Szilard, a Hungarian refugee physicist, took a draft of a letter to Einstein, who, although pacifistic, signed the final version. The letter was for President Franklin Roosevelt, warning of the German potential to build extremely powerful bombs of a new type. It was sent in August of 1939, just before the German invasion of Poland that marked the start of World War II. It was not until December 6, 1941, the day before the Japanese attack on Pearl Harbor, that the United States made a massive commitment to building a nuclear bomb. The top secret Manhattan Project was a crash program aimed at beating the Germans. It was carried out in remote locations, such as Los Alamos, New Mexico, whenever possible, and eventually came to cost billions of dollars and employ the efforts of more than 100,000 people. J. Robert Oppenheimer (1904–1967), whose talent and ambitions made him ideal, was chosen to head the project. The first major step was made by Enrico Fermi and his group in December 1942, when they achieved the first self-sustained nuclear chain reaction. This first “atomic pile”, built in a squash court at the University of Chicago, used carbon blocks to thermalize neutrons. It not only proved that the chain reaction was possible, it began the era of nuclear reactors. Glenn Seaborg, an American chemist and physicist, received the Nobel Prize in chemistry in 1951 for the discovery of several transuranic elements, including plutonium. Carbon-moderated reactors are relatively inexpensive and simple in design and have been used for breeding plutonium; the reactors at Chernobyl were of this type. Plutonium was recognized as easier to fission with neutrons and, hence, a superior fission material very early in the Manhattan Project. Plutonium availability was uncertain, and so a uranium bomb was developed simultaneously. shows a gun-type bomb, which takes two subcritical uranium masses and blows them together. To get an appreciable yield, the critical mass must be held together by the explosive charges inside the cannon barrel for a few microseconds. 
Since the buildup of the uranium chain reaction is relatively slow, the device to hold the critical mass together can be relatively simple. Owing to the fact that the rate of spontaneous fission is low, a neutron source is triggered at the same time the critical mass is assembled. Plutonium’s special properties necessitated a more sophisticated critical mass assembly, shown schematically in . A spherical mass of plutonium is surrounded by shape charges (high explosives that release most of their blast in one direction) that implode the plutonium, crushing it into a smaller volume to form a critical mass. The implosion technique is faster and more effective, because it compresses three-dimensionally rather than one-dimensionally as in the gun-type bomb. Again, a neutron source must be triggered at just the correct time to initiate the chain reaction. Owing to its complexity, the plutonium bomb needed to be tested before there could be any attempt to use it. On July 16, 1945, the test named Trinity was conducted in the isolated Alamogordo Desert about 200 miles south of Los Alamos (see ). A new age had begun. The yield of this device was about 10 kilotons (kT), the equivalent of 5000 of the largest conventional bombs. Although Germany surrendered on May 7, 1945, Japan had been steadfastly refusing to surrender for many months, forcing large casualties. Invasion plans by the Allies estimated a million casualties of their own and untold losses of Japanese lives. The bomb was viewed as a way to end the war. The first was a uranium bomb dropped on Hiroshima on August 6. Its yield of about 15 kT destroyed the city and killed an estimated 80,000 people, with 100,000 more being seriously injured (see ). The second was a plutonium bomb dropped on Nagasaki only three days later, on August 9. Its 20 kT yield killed at least 50,000 people, something less than Hiroshima because of the hilly terrain and the fact that it was a few kilometers off target. The Japanese were told that one bomb a week would be dropped until they surrendered unconditionally, which they did on August 14. In actuality, the United States had only enough plutonium for one more and as yet unassembled bomb. Knowing that fusion produces several times more energy per kilogram of fuel than fission, some scientists pushed the idea of a fusion bomb starting very early on. Calling this bomb the Super, they realized that it could have another advantage over fission—high-energy neutrons would aid fusion, while they are ineffective in fission. Thus the fusion bomb could be virtually unlimited in energy release. The first such bomb was detonated by the United States on October 31, 1952, at Eniwetok Atoll with a yield of 10 megatons (MT), about 670 times that of the fission bomb that destroyed Hiroshima. The USSR followed with a fusion device of their own in August 1953, and a weapons race, beyond the aim of this text to discuss, continued until the end of the Cold War. shows a simple diagram of how a thermonuclear bomb is constructed. A fission bomb is exploded next to fusion fuel in the solid form of lithium deuteride. Before the shock wave blows it apart, rays heat and compress the fuel, and neutrons create tritium through the reaction . Additional fusion and fission fuels are enclosed in a dense shell of . 
The shell reflects some of the neutrons back into the fuel to enhance its fusion, but at high internal temperatures fast neutrons are created that also cause the plentiful and inexpensive to fission, part of what allows thermonuclear bombs to be so large. The energy yield and the types of energy produced by nuclear bombs can be varied. Energy yields in current arsenals range from about 0.1 kT to 20 MT, although the USSR once detonated a 67 MT device. Nuclear bombs differ from conventional explosives in more than size. shows the approximate fraction of energy output in various forms for conventional explosives and for two types of nuclear bombs. Nuclear bombs put a much larger fraction of their output into thermal energy than do conventional bombs, which tend to concentrate the energy in blast. Another difference is the immediate and residual radiation energy from nuclear weapons. This can be adjusted to put more energy into radiation (the so-called neutron bomb) so that the bomb can be used to irradiate advancing troops without killing friendly troops with blast and heat. At its peak in 1986, the combined arsenals of the United States and the Soviet Union totaled about 60,000 nuclear warheads. In addition, the British, French, and Chinese each have several hundred bombs of various sizes, and a few other countries have a small number. Nuclear weapons are generally divided into two categories. Strategic nuclear weapons are those intended for military targets, such as bases and missile complexes, and moderate to large cities. There were about 20,000 strategic weapons in 1988. Tactical weapons are intended for use in smaller battles. Since the collapse of the Soviet Union and the end of the Cold War in 1989, most of the 32,000 tactical weapons (including Cruise missiles, artillery shells, land mines, torpedoes, depth charges, and backpacks) have been demobilized, and parts of the strategic weapon systems are being dismantled with warheads and missiles being disassembled. According to the Treaty of Moscow of 2002, Russia and the United States have been required to reduce their strategic nuclear arsenal down to about 2000 warheads each. A few small countries have built or are capable of building nuclear bombs, as are some terrorist groups. Two things are needed—a minimum level of technical expertise and sufficient fissionable material. The first is easy. Fissionable material is controlled but is also available. There are international agreements and organizations that attempt to control nuclear proliferation, but it is increasingly difficult given the availability of fissionable material and the small amount needed for a crude bomb. The production of fissionable fuel itself is technologically difficult. However, the presence of large amounts of such material worldwide, though in the hands of a few, makes control and accountability crucial. ### Section Summary 1. There are two types of nuclear weapons—fission bombs use fission alone, whereas thermonuclear bombs use fission to ignite fusion. 2. Both types of weapons produce huge numbers of nuclear reactions in a very short time. 3. Energy yields are measured in kilotons or megatons of equivalent conventional explosives and range from 0.1 kT to more than 20 MT. 4. Nuclear bombs are characterized by far more thermal output and nuclear radiation output than conventional explosives. ### Conceptual Questions ### Problems & Exercises
# Particle Physics ## Introduction to Particle Physics ### Learning Objectives By the end of this section, you will be able to: Following ideas remarkably similar to those of the ancient Greeks, we continue to look for smaller and smaller structures in nature, hoping ultimately to find and understand the most fundamental building blocks that exist. Atomic physics deals with the smallest units of elements and compounds. In its study, we have found a relatively small number of atoms with systematic properties that explained a tremendous range of phenomena. Nuclear physics is concerned with the nuclei of atoms and their substructures. Here, a smaller number of components—the proton and neutron—make up all nuclei. Exploring the systematic behavior of their interactions has revealed even more about matter, forces, and energy. Particle physics deals with the substructures of atoms and nuclei and is particularly aimed at finding those truly fundamental particles that have no further substructure. Just as in atomic and nuclear physics, we have found a complex array of particles and properties with systematic characteristics analogous to the periodic table and the chart of nuclides. An underlying structure is apparent, and there is some reason to think that we are finding particles that have no substructure. Of course, we have been in similar situations before. For example, atoms were once thought to be the ultimate substructure. Perhaps we will find deeper and deeper structures and never come to an ultimate substructure. We may never really know, as indicated in . This chapter covers the basics of particle physics as we know it today. An amazing convergence of topics is evolving in particle physics. We find that some particles are intimately related to forces, and that nature on the smallest scale may have its greatest influence on the large-scale character of the universe. It is an adventure exceeding the best science fiction because it is not only fantastic, it is real.
# Particle Physics ## The Yukawa Particle and the Heisenberg Uncertainty Principle Revisited ### Learning Objectives By the end of this section, you will be able to: 1. Define Yukawa particle. 2. State the Heisenberg uncertainty principle. 3. Describe pion. 4. Estimate the mass of a pion. 5. Explain meson. Particle physics as we know it today began with the ideas of Hideki Yukawa in 1935. Physicists had long been concerned with how forces are transmitted, finding the concept of fields, such as electric and magnetic fields, to be very useful. A field surrounds an object and carries the force exerted by the object through space. Yukawa was interested in the strong nuclear force in particular and found an ingenious way to explain its short range. His idea is a blend of particles, forces, relativity, and quantum mechanics that is applicable to all forces. Yukawa proposed that force is transmitted by the exchange of particles (called carrier particles). The field consists of these carrier particles. Specifically for the strong nuclear force, Yukawa proposed that a previously unknown particle, now called a pion, is exchanged between nucleons, transmitting the force between them. illustrates how a pion would carry a force between a proton and a neutron. The pion has mass and can only be created by violating the conservation of mass-energy. This is allowed by the Heisenberg uncertainty principle if it occurs for a sufficiently short period of time. As discussed in Probability: The Heisenberg Uncertainty Principle, the Heisenberg uncertainty principle relates the uncertainties ΔE in energy and Δt in time by ΔEΔt ≥ h/4π, where h is Planck’s constant. Therefore, conservation of mass-energy can be violated by an amount ΔE for a time Δt ≈ h/(4πΔE), in which time no process can detect the violation. This allows the temporary creation of a particle of mass m, where mc² = ΔE. The larger the mass and the greater the ΔE, the shorter is the time it can exist. This means the range of the force is limited, because the particle can only travel a limited distance in a finite amount of time. In fact, the maximum distance is approximately cΔt, where c is the speed of light. The pion must then be captured and, thus, cannot be directly observed because that would amount to a permanent violation of mass-energy conservation. Such particles (like the pion above) are called virtual particles, because they cannot be directly observed but their effects can be directly observed. Realizing all this, Yukawa used the information on the range of the strong nuclear force to estimate the mass of the pion, the particle that carries it. The steps of his reasoning are approximately retraced in the estimate at the end of this section. Yukawa’s proposal of particle exchange as the method of force transfer is intriguing. But how can we verify his proposal if we cannot observe the virtual pion directly? If sufficient energy is in a nucleus, it would be possible to free the pion—that is, to create its mass from external energy input. This can be accomplished by collisions of energetic particles with nuclei, but energies greater than 100 MeV are required to conserve both energy and momentum. In 1947, pions were observed in cosmic-ray experiments, which supplied a small flux of high-energy protons that could collide with nuclei. Soon afterward, accelerators of sufficient energy were creating pions in the laboratory under controlled conditions. Three pions were discovered, two with charge and one neutral, and given the symbols π⁺, π⁻, and π⁰, respectively. The masses of π⁺ and π⁻ are identical at 139.6 MeV/c², whereas the π⁰ has a mass of 135.0 MeV/c². 
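Yukawa's estimate can be retraced numerically. The sketch below assumes, as is conventional, a strong-force range of about 1 fm (an assumed value, not quoted in the passage) and uses the uncertainty principle ΔEΔt ≥ h/4π together with d ≈ cΔt to bound the mass of the carrier particle.

```python
# Estimate of the pion mass from the range of the strong nuclear force.
# Assumption (not from the passage): range d ~ 1 fm. The virtual particle can
# exist for at most Δt ≈ d/c, and the uncertainty principle then limits the
# borrowed energy to ΔE ≈ h/(4π Δt) = mc².
import math

h = 6.626e-34        # Planck's constant, J·s
c = 2.998e8          # speed of light, m/s
d = 1.0e-15          # assumed range of the strong force, m

dt = d / c                        # maximum lifetime of the virtual particle, s
dE = h / (4 * math.pi * dt)       # borrowed energy, J
mass_MeV = dE / 1.602e-13         # rest energy mc² in MeV

print(f"Estimated carrier mass ~ {mass_MeV:.0f} MeV/c^2")   # roughly 100 MeV/c²
```

The observed charged-pion mass of about 140 MeV/c² is close to this rough estimate, which is the comparison made in the next paragraph.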
These masses are close to the predicted value of and, since they are intermediate between electron and nucleon masses, the particles are given the name meson (now an entire class of particles, as we shall see in Particles, Patterns, and Conservation Laws). The pions, or -mesons as they are also called, have masses close to those predicted and feel the strong nuclear force. Another previously unknown particle, now called the muon, was discovered during cosmic-ray experiments in 1936 (one of its discoverers, Seth Neddermeyer, also originated the idea of implosion for plutonium bombs). Since the mass of a muon is around , at first it was thought to be the particle predicted by Yukawa. But it was soon realized that muons do not feel the strong nuclear force and could not be Yukawa’s particle. Their role was unknown, causing the respected physicist I. I. Rabi to comment, “Who ordered that?” This remains a valid question today. We have discovered hundreds of subatomic particles; the roles of some are only partially understood. But there are various patterns and relations to forces that have led to profound insights into nature’s secrets. ### Summary 1. Yukawa’s idea of virtual particle exchange as the carrier of forces is crucial, with virtual particles being formed in temporary violation of the conservation of mass-energy as allowed by the Heisenberg uncertainty principle. ### Problems & Exercises
# Particle Physics ## The Four Basic Forces ### Learning Objectives By the end of this section, you will be able to: 1. State the four basic forces. 2. Explain the Feynman diagram for the exchange of a virtual photon between two positive charges. 3. Define QED. 4. Describe the Feynman diagram for the exchange of a between a proton and a neutron. As first discussed in Problem-Solving Strategies and mentioned at various points in the text since then, there are only four distinct basic forces in all of nature. This is a remarkably small number considering the myriad phenomena they explain. Particle physics is intimately tied to these four forces. Certain fundamental particles, called carrier particles, carry these forces, and all particles can be classified according to which of the four forces they feel. The table given below summarizes important characteristics of the four basic forces. Although these four forces are distinct and differ greatly from one another under all but the most extreme circumstances, we can see similarities among them. (In GUTs: the Unification of Forces, we will discuss how the four forces may be different manifestations of a single unified force.) Perhaps the most important characteristic among the forces is that they are all transmitted by the exchange of a carrier particle, exactly like what Yukawa had in mind for the strong nuclear force. Each carrier particle is a virtual particle—it cannot be directly observed while transmitting the force. shows the exchange of a virtual photon between two positive charges. The photon cannot be directly observed in its passage, because this would disrupt it and alter the force. shows a way of graphing the exchange of a virtual photon between two positive charges. This graph of time versus position is called a Feynman diagram, after the brilliant American physicist Richard Feynman (1918–1988) who developed it. is a Feynman diagram for the exchange of a virtual pion between a proton and a neutron representing the same interaction as in . Feynman diagrams are not only a useful tool for visualizing interactions at the quantum mechanical level, they are also used to calculate details of interactions, such as their strengths and probability of occurring. Feynman was one of the theorists who developed the field of quantum electrodynamics (QED), which is the quantum mechanics of electromagnetism. QED has been spectacularly successful in describing electromagnetic interactions on the submicroscopic scale. Feynman was an inspiring teacher, had a colorful personality, and made a profound impact on generations of physicists. He shared the 1965 Nobel Prize with Julian Schwinger and S. I. Tomonaga for work in QED with its deep implications for particle physics. Why is it that particles called gluons are listed as the carrier particles for the strong nuclear force when, in The Yukawa Particle and the Heisenberg Uncertainty Principle Revisited, we saw that pions apparently carry that force? The answer is that pions are exchanged but they have a substructure and, as we explore it, we find that the strong force is actually related to the indirectly observed but more fundamental gluons. In fact, all the carrier particles are thought to be fundamental in the sense that they have no substructure. Another similarity among carrier particles is that they are all bosons (first mentioned in Patterns in Spectra Reveal More Quantization), having integral intrinsic spins. 
There is a relationship between the mass of the carrier particle and the range of the force. The photon is massless but carries energy. Virtual photons are possible only by virtue of the Heisenberg uncertainty principle, and because no minimum energy is needed to create a massless particle, a virtual photon can travel an unlimited distance. Thus, the range of the electromagnetic force is infinite. This is also true for gravity. It is infinite in range because its carrier particle, the graviton, has zero rest mass. (Gravity is the most difficult of the four forces to understand on a quantum scale because it affects the space and time in which the others act. But gravity is so weak that its effects are extremely difficult to observe quantum mechanically. We shall explore it further in General Relativity and Quantum Gravity.) The W⁺, W⁻, and Z⁰ particles that carry the weak nuclear force have mass, accounting for the very short range of this force. In fact, the W⁺, W⁻, and Z⁰ are about 1000 times more massive than pions, consistent with the fact that the range of the weak nuclear force is about 1/1000 that of the strong nuclear force. Gluons are actually massless, but since they act inside massive carrier particles like pions, the strong nuclear force is also short ranged. The relative strengths of the forces given in the table are those for the most common situations. When particles are brought very close together, the relative strengths change, and they may become identical at extremely close range. As we shall see in GUTs: the Unification of Forces, carrier particles may be altered by the energy required to bring particles very close together—in such a manner that they become identical. ### Test Prep for AP Courses ### Summary 1. The four basic forces and their carrier particles are summarized in the table. 2. Feynman diagrams are graphs of time versus position and are highly useful pictorial representations of particle processes. 3. The theory of electromagnetism on the particle scale is called quantum electrodynamics (QED). ### Problems & Exercises
# Particle Physics ## Accelerators Create Matter from Energy ### Learning Objectives By the end of this section, you will be able to: 1. State the principle of a cyclotron. 2. Explain the principle of a synchrotron. 3. Describe the voltage needed by an accelerator between accelerating tubes. 4. State Fermilab’s accelerator principle. Before looking at all the particles we now know about, let us examine some of the machines that created them. The fundamental process in creating previously unknown particles is to accelerate known particles, such as protons or electrons, and direct a beam of them toward a target. Collisions with target nuclei provide a wealth of information, such as information obtained by Rutherford using energetic helium nuclei from natural radiation. But if the energy of the incoming particles is large enough, new matter is sometimes created in the collision. The more energy input, ΔE, the more matter can be created, since m = ΔE/c². Limitations are placed on what can occur by known conservation laws, such as conservation of mass-energy, momentum, and charge. Even more interesting are the unknown limitations provided by nature. Some expected reactions do occur, while others do not, and still other unexpected reactions may appear. New laws are revealed, and the vast majority of what we know about particle physics has come from accelerator laboratories. It is the particle physicist’s favorite indoor sport, which is partly inspired by theory. ### Early Accelerators An early accelerator is a relatively simple, large-scale version of the electron gun. The Van de Graaff (named after the American physicist Robert Van de Graaff), which you have likely seen in physics demonstrations, is a small version of the ones used for nuclear research since their invention for that purpose in 1932. For more, see . These machines are electrostatic, creating potentials as great as 50 MV, and are used to accelerate a variety of nuclei for a range of experiments. Energies produced by Van de Graaffs are insufficient to produce new particles, but they have been instrumental in exploring several aspects of the nucleus. Another, equally famous, early accelerator is the cyclotron, invented in 1930 by the American physicist E. O. Lawrence (1901–1958). For a visual representation with more detail, see . Cyclotrons use fixed-frequency alternating electric fields to accelerate particles. The particles spiral outward in a magnetic field, making increasingly larger radius orbits during acceleration. This clever arrangement allows the successive addition of electric potential energy, and so greater particle energies are possible than in a Van de Graaff. Lawrence was involved in many early discoveries and in the promotion of physics programs in American universities. He was awarded the 1939 Nobel Prize in Physics for the cyclotron and nuclear activations, and he has an element and two major laboratories named for him. A synchrotron is a version of a cyclotron in which the frequency of the alternating voltage and the magnetic field strength are increased as the beam particles are accelerated. Particles are made to travel the same distance in a shorter time with each cycle in fixed-radius orbits. A ring of magnets and accelerating tubes, as shown in , makes up the major components of synchrotrons. Accelerating voltages are synchronized (i.e., timed to coincide) with the passing particles to accelerate them, hence the name. Magnetic field strength is increased to keep the orbital radius constant as energy increases. 
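The "fixed-frequency alternating electric field" of a cyclotron works because a nonrelativistic particle's orbital frequency in a magnetic field is independent of its speed. A minimal sketch, assuming a proton and a 1.5 T field (both illustrative values, not taken from the text):

```python
# Cyclotron (orbital) frequency of a proton in a uniform magnetic field.
# f = qB / (2π m) is independent of the particle's speed, which is why a
# fixed-frequency accelerating voltage stays in step as the proton spirals outward.
import math

q = 1.602e-19     # proton charge, C
m = 1.673e-27     # proton mass, kg
B = 1.5           # assumed magnetic field, T (illustrative)

f = q * B / (2 * math.pi * m)
print(f"Cyclotron frequency ~ {f / 1e6:.0f} MHz")   # roughly 23 MHz for these values
```

At relativistic energies this simple relation breaks down, which is one reason synchrotrons vary the accelerating frequency and the magnetic field as described above.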
High-energy particles require strong magnetic fields to steer them, so superconducting magnets are commonly employed. Still limited by achievable magnetic field strengths, synchrotrons need to be very large at very high energies, since the radius of a high-energy particle’s orbit is very large. Radiation caused by a magnetic field accelerating a charged particle perpendicular to its velocity is called synchrotron radiation in honor of its importance in these machines. Synchrotron radiation has a characteristic spectrum and polarization, and can be recognized in cosmic rays, implying large-scale magnetic fields acting on energetic and charged particles in deep space. Synchrotron radiation produced by accelerators is sometimes used as a source of intense energetic electromagnetic radiation for research purposes. ### Modern Behemoths and Colliding Beams Physicists have built ever-larger machines, first to reduce the wavelength of the probe and obtain greater detail, then to put greater energy into collisions to create new particles. Each major energy increase brought new information, sometimes producing spectacular progress, motivating the next step. One major innovation was driven by the desire to create more massive particles. Since momentum needs to be conserved in a collision, the particles created by a beam hitting a stationary target should recoil. This means that part of the energy input goes into recoil kinetic energy, significantly limiting the fraction of the beam energy that can be converted into new particles. One solution to this problem is to have head-on collisions between particles moving in opposite directions. Colliding beams are made to meet head-on at points where massive detectors are located. Since the total incoming momentum is zero, it is possible to create particles with momenta and kinetic energies near zero. Particles with masses equivalent to twice the beam energy can thus be created. Another innovation is to create the antimatter counterpart of the beam particle, which thus has the opposite charge and circulates in the opposite direction in the same beam pipe. For a schematic representation, see . Detectors capable of finding the new particles in the spray of material that emerges from colliding beams are as impressive as the accelerators. While the Fermilab Tevatron had proton and antiproton beam energies of about 1 TeV, so that it can create particles up to , the Large Hadron Collider (LHC) at the European Center for Nuclear Research (CERN) has achieved beam energies of 3.5 TeV, so that it has a 7-TeV collision energy; CERN hopes to double the beam energy in 2014. The now-canceled Superconducting Super Collider was being constructed in Texas with a design energy of 20 TeV to give a 40-TeV collision energy. It was to be an oval 30 km in diameter. Its cost as well as the politics of international research funding led to its demise. In addition to the large synchrotrons that produce colliding beams of protons and antiprotons, there are other large electron-positron accelerators. The oldest of these was a straight-line or linear accelerator, called the Stanford Linear Accelerator (SLAC), which accelerated particles up to 50 GeV as seen in . Positrons created by the accelerator were brought to the same energy and collided with electrons in specially designed detectors. Linear accelerators use accelerating tubes similar to those in synchrotrons, but aligned in a straight line. 
This helps eliminate synchrotron radiation losses, which are particularly severe for electrons made to follow curved paths. CERN had an electron-positron collider appropriately called the Large Electron-Positron Collider (LEP), which accelerated particles to 100 GeV and created a collision energy of 200 GeV. It was 8.5 km in diameter, while the SLAC machine was 3.2 km long. ### Test Prep for AP Courses ### Summary 1. A variety of particle accelerators have been used to explore the nature of subatomic particles and to test predictions of particle theories. 2. Modern accelerators used in particle physics are either large synchrotrons or linear accelerators. 3. The use of colliding beams makes much greater energy available for the creation of particles, and collisions between matter and antimatter allow a greater range of final products. ### Conceptual Questions ### Problems & Exercises
# Particle Physics ## Particles, Patterns, and Conservation Laws ### Learning Objectives By the end of this section, you will be able to: 1. Define matter and antimatter. 2. Outline the differences between hadrons and leptons. 3. State the differences between mesons and baryons. In the early 1930s only a small number of subatomic particles were known to exist—the proton, neutron, electron, photon and, indirectly, the neutrino. Nature seemed relatively simple in some ways, but mysterious in others. Why, for example, should the particle that carries positive charge be almost 2000 times as massive as the one carrying negative charge? Why does a neutral particle like the neutron have a magnetic moment? Does this imply an internal structure with a distribution of moving charges? Why is it that the electron seems to have no size other than its wavelength, while the proton and neutron are about 1 fermi in size? So, while the number of known particles was small and they explained a great deal of atomic and nuclear phenomena, there were many unexplained phenomena and hints of further substructures. Things soon became more complicated, both in theory and in the prediction and discovery of new particles. In 1928, the British physicist P.A.M. Dirac (see ) developed a highly successful relativistic quantum theory that laid the foundations of quantum electrodynamics (QED). His theory, for example, explained electron spin and magnetic moment in a natural way. But Dirac’s theory also predicted negative energy states for free electrons. By 1931, Dirac, along with Oppenheimer, realized this was a prediction of positively charged electrons (or positrons). In 1932, American physicist Carl Anderson discovered the positron in cosmic ray studies. The positron, or , is the same particle as emitted in decay and was the first antimatter that was discovered. In 1935, Yukawa predicted pions as the carriers of the strong nuclear force, and they were eventually discovered. Muons were discovered in cosmic ray experiments in 1937, and they seemed to be heavy, unstable versions of electrons and positrons. After World War II, accelerators energetic enough to create these particles were built. Not only were predicted and known particles created, but many unexpected particles were observed. Initially called elementary particles, their numbers proliferated to dozens and then hundreds, and the term “particle zoo” became the physicist’s lament at the lack of simplicity. But patterns were observed in the particle zoo that led to simplifying ideas such as quarks, as we shall soon see. ### Matter and Antimatter The positron was only the first example of antimatter. Every particle in nature has an antimatter counterpart, although some particles, like the photon, are their own antiparticles. Antimatter has charge opposite to that of matter (for example, the positron is positive while the electron is negative) but is nearly identical otherwise, having the same mass, intrinsic spin, half-life, and so on. When a particle and its antimatter counterpart interact, they annihilate one another, usually totally converting their masses to pure energy in the form of photons as seen in . Neutral particles, such as neutrons, have neutral antimatter counterparts, which also annihilate when they interact. Certain neutral particles are their own antiparticle and live correspondingly short lives. For example, the neutral pion is its own antiparticle and has a half-life about shorter than and , which are each other’s antiparticles. 
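The statement that a particle and its antiparticle annihilate by "totally converting their masses to pure energy" can be made quantitative for the simplest case. The sketch below treats an electron and positron annihilating essentially at rest into two photons; the electron mass used is the standard value, not a number quoted in this section.

```python
# Photon energies from electron-positron annihilation at rest: e+ + e- -> 2 photons.
# Each photon carries half of the total rest energy 2 m_e c².

m_e = 9.109e-31        # electron (and positron) mass, kg
c = 2.998e8            # speed of light, m/s

total_energy_J = 2 * m_e * c**2
photon_energy_MeV = (total_energy_J / 2) / 1.602e-13

print(f"Each photon: {photon_energy_MeV:.3f} MeV")   # 0.511 MeV
```

Pairs of back-to-back 0.511 MeV photons are the experimental signature of positron annihilation.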
Without exception, nature is symmetric—all particles have antimatter counterparts. For example, antiprotons and antineutrons were first created in accelerator experiments in 1956 and the antiproton is negative. Antihydrogen atoms, consisting of an antiproton and antielectron, were observed in 1995 at CERN, too. It is possible to contain large-scale antimatter particles such as antiprotons by using electromagnetic traps that confine the particles within a magnetic field so that they don't annihilate with other particles. However, particles of the same charge repel each other, so the more particles that are contained in a trap, the more energy is needed to power the magnetic field that contains them. It is not currently possible to store a significant quantity of antiprotons. At any rate, we now see that negative charge is associated with both low-mass (electrons) and high-mass particles (antiprotons) and the apparent asymmetry is not there. But this knowledge does raise another question—why is there such a predominance of matter and so little antimatter? Possible explanations emerge later in this and the next chapter. ### Hadrons and Leptons Particles can also be revealingly grouped according to what forces they feel between them. All particles (even those that are massless) are affected by gravity, since gravity affects the space and time in which particles exist. All charged particles are affected by the electromagnetic force, as are neutral particles that have an internal distribution of charge (such as the neutron with its magnetic moment). Special names are given to particles that feel the strong and weak nuclear forces. Hadrons are particles that feel the strong nuclear force, whereas leptons are particles that do not. The proton, neutron, and the pions are examples of hadrons. The electron, positron, muons, and neutrinos are examples of leptons, the name meaning low mass. Leptons feel the weak nuclear force. In fact, all particles feel the weak nuclear force. This means that hadrons are distinguished by being able to feel both the strong and weak nuclear forces. lists the characteristics of some of the most important subatomic particles, including the directly observed carrier particles for the electromagnetic and weak nuclear forces, all leptons, and some hadrons. Several hints related to an underlying substructure emerge from an examination of these particle characteristics. Note that the carrier particles are called gauge bosons. First mentioned in Patterns in Spectra Reveal More Quantization, a boson is a particle with zero or an integer value of intrinsic spin (such as ), whereas a fermion is a particle with a half-integer value of intrinsic spin (). Fermions obey the Pauli exclusion principle whereas bosons do not. All the known and conjectured carrier particles are bosons. All known leptons are listed in the table given above. There are only six leptons (and their antiparticles), and they seem to be fundamental in that they have no apparent underlying structure. Leptons have no discernible size other than their wavelength, so that we know they are pointlike down to about . The leptons fall into three families, implying three conservation laws for three quantum numbers. One of these was known from decay, where the existence of the electron’s neutrino implied that a new quantum number, called the electron family number is conserved. 
Thus, in decay, an antielectron’s neutrino must be created with when an electron with is created, so that the total remains 0 as it was before decay. Once the muon was discovered in cosmic rays, its decay mode was found to be which implied another “family” and associated conservation principle. The particle is a muon’s neutrino, and it is created to conserve muon family number. So muons are leptons with a family of their own, and conservation of total also seems to be obeyed in many experiments. More recently, a third lepton family was discovered when particles were created and observed to decay in a manner similar to muons. One principal decay mode is Conservation of total seems to be another law obeyed in many experiments. In fact, particle experiments have found that lepton family number is not universally conserved, due to neutrino “oscillations,” or transformations of neutrinos from one family type to another. ### Mesons and Baryons Now, note that the hadrons in the table given above are divided into two subgroups, called mesons (originally for medium mass) and baryons (the name originally meaning large mass). The division between mesons and baryons is actually based on their observed decay modes and is not strictly associated with their masses. Mesons are hadrons that can decay to leptons and leave no hadrons, which implies that mesons are not conserved in number. Baryons are hadrons that always decay to another baryon. A new physical quantity called baryon number seems to always be conserved in nature and is listed for the various particles in the table given above. Mesons and leptons have so that they can decay to other particles with . But baryons have if they are matter, and if they are antimatter. The conservation of total baryon number is a more general rule than first noted in nuclear physics, where it was observed that the total number of nucleons was always conserved in nuclear reactions and decays. That rule in nuclear physics is just one consequence of the conservation of the total baryon number. ### Forces, Reactions, and Reaction Rates The forces that act between particles regulate how they interact with other particles. For example, pions feel the strong force and do not penetrate as far in matter as do muons, which do not feel the strong force. (This was the way those who discovered the muon knew it could not be the particle that carries the strong force—its penetration or range was too great for it to be feeling the strong force.) Similarly, reactions that create other particles, like cosmic rays interacting with nuclei in the atmosphere, have greater probability if they are caused by the strong force than if they are caused by the weak force. Such knowledge has been useful to physicists while analyzing the particles produced by various accelerators. The forces experienced by particles also govern how particles interact with themselves if they are unstable and decay. For example, the stronger the force, the faster they decay and the shorter is their lifetime. An example of a nuclear decay via the strong force is with a lifetime of about . The neutron is a good example of decay via the weak force. The process has a longer lifetime of 882 s. The weak force causes this decay, as it does all decay. An important clue that the weak force is responsible for decay is the creation of leptons, such as and . None would be created if the strong force was responsible, just as no leptons are created in the decay of . 
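The family-number bookkeeping described earlier in this section lends itself to a quick check. The sketch below is illustrative only; the quantum-number assignments are the standard ones rather than values quoted from a table in this text. It verifies that charge, electron family number, muon family number, and baryon number all balance in muon decay and in neutron β decay.

```python
# Each particle carries (charge in units of e, L_e, L_mu, baryon number B).
# Antiparticles carry the opposite value of each quantum number.
NUMBERS = {
    "e-":        (-1, +1,  0,  0),
    "nu_e":      ( 0, +1,  0,  0),
    "anti_nu_e": ( 0, -1,  0,  0),
    "mu-":       (-1,  0, +1,  0),
    "nu_mu":     ( 0,  0, +1,  0),
    "p":         (+1,  0,  0, +1),
    "n":         ( 0,  0,  0, +1),
}

def totals(particles):
    """Sum each conserved quantity over a list of particle names."""
    return tuple(sum(NUMBERS[p][i] for p in particles) for i in range(4))

# Muon decay: mu- -> e- + anti_nu_e + nu_mu
print(totals(["mu-"]) == totals(["e-", "anti_nu_e", "nu_mu"]))   # True

# Neutron beta decay: n -> p + e- + anti_nu_e
print(totals(["n"]) == totals(["p", "e-", "anti_nu_e"]))         # True
```

A proposed decay whose totals do not match is forbidden by these conservation laws and, apart from the neutrino oscillations mentioned above, no such violating decay has been observed.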
The systematics of particle lifetimes is a little simpler than nuclear lifetimes when hundreds of particles are examined (not just the ones in the table given above). Particles that decay via the weak force have lifetimes mostly in the range of to s, whereas those that decay via the strong force have lifetimes mostly in the range of to s. Turning this around, if we measure the lifetime of a particle, we can tell if it decays via the weak or strong force. Yet another quantum number emerges from decay lifetimes and patterns. Note that the particles , and decay with lifetimes on the order of s (the exception is , whose short lifetime is explained by its particular quark substructure.), implying that their decay is caused by the weak force alone, although they are hadrons and feel the strong force. The decay modes of these particles also show patterns—in particular, certain decays that should be possible within all the known conservation laws do not occur. Whenever something is possible in physics, it will happen. If something does not happen, it is forbidden by a rule. All this seemed strange to those studying these particles when they were first discovered, so they named a new quantum number strangeness, given the symbol in the table given above. The values of strangeness assigned to various particles are based on the decay systematics. It is found that strangeness is conserved by the strong force, which governs the production of most of these particles in accelerator experiments. However, strangeness is . This conclusion is reached from the fact that particles that have long lifetimes decay via the weak force and do not conserve strangeness. All of this also has implications for the carrier particles, since they transmit forces and are thus involved in these decays. There are hundreds of particles, all hadrons, not listed in , most of which have shorter lifetimes. The systematics of those particle lifetimes, their production probabilities, and decay products are completely consistent with the conservation laws noted for lepton families, baryon number, and strangeness, but they also imply other quantum numbers and conservation laws. There are a finite, and in fact relatively small, number of these conserved quantities, however, implying a finite set of substructures. Additionally, some of these short-lived particles resemble the excited states of other particles, implying an internal structure. All of this jigsaw puzzle can be tied together and explained relatively simply by the existence of fundamental substructures. Leptons seem to be fundamental structures. Hadrons seem to have a substructure called quarks. Quarks: Is That All There Is? explores the basics of the underlying quark building blocks. ### Test Prep for AP Courses ### Summary 1. All particles of matter have an antimatter counterpart that has the opposite charge and certain other quantum numbers as seen in . These matter-antimatter pairs are otherwise very similar but will annihilate when brought together. Known particles can be divided into three major groups—leptons, hadrons, and carrier particles (gauge bosons). 2. Leptons do not feel the strong nuclear force and are further divided into three groups—electron family designated by electron family number ; muon family designated by muon family number ; and tau family designated by tau family number . The family numbers are not universally conserved due to neutrino oscillations. 3. 
Hadrons are particles that feel the strong nuclear force and are divided into baryons, whose total baryon number is always conserved, and mesons, which can decay to leptons and leave no hadrons.

### Conceptual Questions

### Problems & Exercises
# Particle Physics

## Quarks: Is That All There Is?

### Learning Objectives

By the end of this section, you will be able to:
1. Define fundamental particle.
2. Describe quark and antiquark.
3. List the flavors of quark.
4. Outline the quark composition of hadrons.
5. Determine quantum numbers from quark composition.

Quarks have been mentioned at various points in this text as fundamental building blocks and members of the exclusive club of truly elementary particles. Note that an elementary or fundamental particle has no substructure (it is not made of other particles) and has no finite size other than its wavelength. This does not mean that fundamental particles are stable—some decay, while others do not. Keep in mind that all leptons seem to be fundamental, whereas no hadrons are fundamental. There is strong evidence that quarks are the fundamental building blocks of hadrons as seen in . Quarks are the second group of fundamental particles (leptons are the first). The third and perhaps final group of fundamental particles is the carrier particles for the four basic forces. Leptons, quarks, and carrier particles may be all there is. In this module we will discuss the quark substructure of hadrons and its relationship to forces as well as indicate some remaining questions and problems.

### Conception of Quarks

Quarks were first proposed independently by American physicists Murray Gell-Mann and George Zweig in 1963. Their quaint name was taken by Gell-Mann from a James Joyce novel—Gell-Mann was also largely responsible for the concept and name of strangeness. (Whimsical names are common in particle physics, reflecting the personalities of modern physicists.) Originally, three quark types—or flavors—were proposed to account for the then-known mesons and baryons. These quark flavors are named up (u), down (d), and strange (s). All quarks have half-integral spin and are thus fermions. All mesons have integral spin while all baryons have half-integral spin. Therefore, mesons should be made up of an even number of quarks while baryons need to be made up of an odd number of quarks. shows the quark substructure of the proton, neutron, and two pions. The most radical proposal by Gell-Mann and Zweig is the fractional charges of quarks, which are +(2/3)e and −(1/3)e (antiquarks carry the opposite charges), whereas all directly observed particles have charges that are integral multiples of e. Note that the fractional value of the quark does not violate the fact that the e is the smallest unit of charge that is observed, because a free quark cannot exist. lists characteristics of the six quark flavors that are now thought to exist. Discoveries made since 1963 have required extra quark flavors, which are divided into three families quite analogous to leptons.

### How Does it Work?

To understand how these quark substructures work, let us specifically examine the proton, neutron, and the two pions pictured in before moving on to more general considerations. First, the proton p is composed of the three quarks uud, so that its total charge is (2/3 + 2/3 − 1/3)e = +1e, as expected. With the spins aligned as in the figure, the proton’s intrinsic spin is 1/2, also as expected. Note that the spins of the up quarks are aligned, so that they would be in the same state except that they have different colors (another quantum number to be elaborated upon a little later). Quarks obey the Pauli exclusion principle. Similar comments apply to the neutron n, which is composed of the three quarks udd.
Note also that the neutron is made of charges that add to zero but move internally, producing its well-known magnetic moment. When the neutron decays, it does so by changing the flavor of one of its quarks. Writing neutron decay in terms of quarks,

n(udd) → p(uud) + β⁻ + ν̄ₑ.

We see that this is equivalent to a down quark changing flavor to become an up quark:

d → u + β⁻ + ν̄ₑ.

This is an example of the general fact that the weak nuclear force can change the flavor of a quark. By general, we mean that any quark can be converted to any other (change flavor) by the weak nuclear force. Not only can we get d → u, we can also get u → d. Furthermore, the strange quark can be changed by the weak force, too, making s → u and u → s possible. This explains the violation of the conservation of strangeness by the weak force noted in the preceding section. Another general fact is that the strong nuclear force cannot change the flavor of a quark. Again, from , we see that the π⁺ meson (one of the three pions) is composed of an up quark plus an antidown quark, or ud̄. Its total charge is thus (2/3 + 1/3)e = +1e, as expected. Its baryon number is 0, since it has a quark and an antiquark with baryon numbers +1/3 and −1/3, respectively. The π⁺ half-life is relatively long since, although it is composed of matter and antimatter, the quarks are different flavors and the weak force should cause the decay by changing the flavor of one into that of the other. The spins of the u and d̄ quarks are antiparallel, enabling the pion to have spin zero, as observed experimentally. Finally, the π⁻ meson shown in is the antiparticle of the π⁺ meson, and it is composed of the corresponding quark antiparticles. That is, the π⁺ meson is ud̄, while the π⁻ meson is ūd. These two pions annihilate each other quickly, because their constituent quarks are each other’s antiparticles. Two general rules for combining quarks to form hadrons are: 1. Baryons are composed of three quarks, and antibaryons are composed of three antiquarks. 2. Mesons are combinations of a quark and an antiquark. One of the clever things about this scheme is that only integral charges result, even though the quarks have fractional charge.

### All Combinations are Possible

All quark combinations are possible. lists some of these combinations. When Gell-Mann and Zweig proposed the original three quark flavors, particles corresponding to all combinations of those three had not been observed. The pattern was there, but it was incomplete—much as had been the case in the periodic table of the elements and the chart of nuclides. The Ω⁻ particle, in particular, had not been discovered but was predicted by quark theory. Its combination of three strange quarks, sss, gives it a strangeness of −3 (see ) and other predictable characteristics, such as spin, charge, approximate mass, and lifetime. If the quark picture is complete, the Ω⁻ should exist. It was first observed in 1964 at Brookhaven National Laboratory and had the predicted characteristics as seen in . The discovery of the Ω⁻ was convincing indirect evidence for the existence of the three original quark flavors and boosted theoretical and experimental efforts to further explore particle physics in terms of quarks.

### Now, Let Us Talk About Direct Evidence

At first, physicists expected that, with sufficient energy, we should be able to free quarks and observe them directly. This has not proved possible. There is still no direct observation of a fractional charge or any isolated quark. When large energies are put into collisions, other particles are created—but no quarks emerge. There is nearly direct evidence for quarks that is quite compelling.
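Before turning to that evidence, the charge, baryon-number, and strangeness bookkeeping just described can be gathered into a short sketch. The quark assignments are the standard ones; the code itself is only an illustration and is not part of the original text.

```python
from fractions import Fraction as F

# (charge in units of e, baryon number, strangeness) for each quark flavor;
# an antiquark (marked with a trailing "~") carries the opposite values.
QUARKS = {
    "u": (F(2, 3),  F(1, 3),  0),
    "d": (F(-1, 3), F(1, 3),  0),
    "s": (F(-1, 3), F(1, 3), -1),
}

def properties(composition):
    """composition is a list such as ['u', 'u', 'd'] or ['u', 'd~']."""
    charge = baryon = strangeness = F(0)
    for q in composition:
        sign = -1 if q.endswith("~") else +1
        c, b, s = QUARKS[q.rstrip("~")]
        charge += sign * c
        baryon += sign * b
        strangeness += sign * s
    return charge, baryon, strangeness

print(properties(["u", "u", "d"]))   # proton:  charge +1, B = 1, S = 0
print(properties(["u", "d", "d"]))   # neutron: charge  0, B = 1, S = 0
print(properties(["u", "d~"]))       # pi+:     charge +1, B = 0, S = 0
print(properties(["s", "s", "s"]))   # Omega-:  charge -1, B = 1, S = -3
```

Only integral charges emerge from the allowed three-quark and quark-antiquark combinations, which is the point made by the two rules above.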
By 1967, experiments at SLAC scattering 20-GeV electrons from protons had produced results like Rutherford had obtained for the nucleus nearly 60 years earlier. The SLAC scattering experiments showed unambiguously that there were three pointlike (meaning they had sizes considerably smaller than the probe’s wavelength) charges inside the proton as seen in . This evidence made all but the most skeptical admit that there was validity to the quark substructure of hadrons. More recent and higher-energy experiments have produced jets of particles in collisions, highly suggestive of three quarks in a nucleon. Since the quarks are very tightly bound, energy put into separating them pulls them only so far apart before it starts being converted into other particles. More energy produces more particles, not a separation of quarks. Conservation of momentum requires that the particles come out in jets along the three paths in which the quarks were being pulled. Note that there are only three jets, and that other characteristics of the particles are consistent with the three-quark substructure. ### Quarks Have Their Ups and Downs The quark model actually lost some of its early popularity because the original model with three quarks had to be modified. The up and down quarks seemed to compose normal matter as seen in , while the single strange quark explained strangeness. Why didn’t it have a counterpart? A fourth quark flavor called charm (c) was proposed as the counterpart of the strange quark to make things symmetric—there would be two normal quarks (u and d) and two exotic quarks (s and c). Furthermore, at that time only four leptons were known, two normal and two exotic. It was attractive that there would be four quarks and four leptons. The problem was that no known particles contained a charmed quark. Suddenly, in November of 1974, two groups (one headed by C. C. Ting at Brookhaven National Laboratory and the other by Burton Richter at SLAC) independently and nearly simultaneously discovered a new meson with characteristics that made it clear that its substructure is . It was called J by one group and psi () by the other and now is known as the meson. Since then, numerous particles have been discovered containing the charmed quark, consistent in every way with the quark model. The discovery of the meson had such a rejuvenating effect on quark theory that it is now called the November Revolution. Ting and Richter shared the 1976 Nobel Prize. History quickly repeated itself. In 1975, the tau () was discovered, and a third family of leptons emerged as seen in ). Theorists quickly proposed two more quark flavors called top (t) or truth and bottom (b) or beauty to keep the number of quarks the same as the number of leptons. And in 1976, the upsilon ( ) meson was discovered and shown to be composed of a bottom and an antibottom quark or , quite analogous to the being as seen in . Being a single flavor, these mesons are sometimes called bare charm and bare bottom and reveal the characteristics of their quarks most clearly. Other mesons containing bottom quarks have since been observed. In 1995, two groups at Fermilab confirmed the top quark’s existence, completing the picture of six quarks listed in . Each successive quark discovery—first , then , and finally —has required higher energy because each has higher mass. Quark masses in are only approximately known, because they are not directly observed. They must be inferred from the masses of the particles they combine to form. 
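The remark that quark masses must be inferred from the particles they form can be made concrete with a deliberately rough estimate: if a meson made of a heavy quark and its antiquark is treated as just those two quarks, with binding effects ignored, each quark accounts for about half the meson's mass. The hadron masses below are standard measured values; the division by two or three is only an order-of-magnitude illustration, not a statement from this text.

```python
# Naive constituent-quark mass estimates (binding energy ignored), masses in MeV/c^2.
m_proton  = 938.3    # uud: each light quark roughly m_proton / 3
m_J_psi   = 3097.0   # c cbar: charm roughly m_J_psi / 2
m_upsilon = 9460.0   # b bbar: bottom roughly m_upsilon / 2

print(m_proton / 3)     # ~310 MeV/c^2 for u and d (constituent estimate)
print(m_J_psi / 2)      # ~1.5 GeV/c^2 for charm
print(m_upsilon / 2)    # ~4.7 GeV/c^2 for bottom
```

Estimates of this kind depend strongly on how the binding is treated, which is one reason quark masses are quoted only approximately.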
### What’s Color got to do with it?—A Whiter Shade of Pale As mentioned and shown in , quarks carry another quantum number, which we call color. Of course, it is not the color we sense with visible light, but its properties are analogous to those of three primary and three secondary colors. Specifically, a quark can have one of three color values we call red (), green (), and blue () in analogy to those primary visible colors. Antiquarks have three values we call antired or cyan, antigreen or magenta, and antiblue or yellow in analogy to those secondary visible colors. The reason for these names is that when certain visual colors are combined, the eye sees white. The analogy of the colors combining to white is used to explain why baryons are made of three quarks, why mesons are a quark and an antiquark, and why we cannot isolate a single quark. The force between the quarks is such that their combined colors produce white. This is illustrated in . A baryon must have one of each primary color or RGB, which produces white. A meson must have a primary color and its anticolor, also producing white. Why must hadrons be white? The color scheme is intentionally devised to explain why baryons have three quarks and mesons have a quark and an antiquark. Quark color is thought to be similar to charge, but with more values. An ion, by analogy, exerts much stronger forces than a neutral molecule. When the color of a combination of quarks is white, it is like a neutral atom. The forces a white particle exerts are like the polarization forces in molecules, but in hadrons these leftovers are the strong nuclear force. When a combination of quarks has color other than white, it exerts extremely large forces—even larger than the strong force—and perhaps cannot be stable or permanently separated. This is part of the theory of quark confinement, which explains how quarks can exist and yet never be isolated or directly observed. Finally, an extra quantum number with three values (like those we assign to color) is necessary for quarks to obey the Pauli exclusion principle. Particles such as the , which is composed of three strange quarks, , and the , which is three up quarks, uuu, can exist because the quarks have different colors and do not have the same quantum numbers. Color is consistent with all observations and is now widely accepted. Quark theory including color is called quantum chromodynamics (QCD), also named by Gell-Mann. ### The Three Families Fundamental particles are thought to be one of three types—leptons, quarks, or carrier particles. Each of those three types is further divided into three analogous families as illustrated in . We have examined leptons and quarks in some detail. Each has six members (and their six antiparticles) divided into three analogous families. The first family is normal matter, of which most things are composed. The second is exotic, and the third more exotic and more massive than the second. The only stable particles are in the first family, which also has unstable members. Always searching for symmetry and similarity, physicists have also divided the carrier particles into three families, omitting the graviton. Gravity is special among the four forces in that it affects the space and time in which the other forces exist and is proving most difficult to include in a Theory of Everything or TOE (to stub the pretension of such a theory). Gravity is thus often set apart. It is not certain that there is meaning in the groupings shown in , but the analogies are tempting. 
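Returning for a moment to the color rules described earlier in this section: the requirement that an observable hadron be white can be phrased as a simple counting exercise, with each anticolor canceling its color. The sketch below is a toy illustration of that counting, not the actual QCD formalism.

```python
# Toy color accounting: a color counts +1 in its slot, its anticolor counts -1.
COLOR_SLOT = {"R": 0, "G": 1, "B": 2}

def is_white(colors):
    """True if the combination could be an observable (color-neutral) hadron."""
    counts = [0, 0, 0]
    for c in colors:
        sign = -1 if c.startswith("anti") else +1
        counts[COLOR_SLOT[c[-1]]] += sign
    # mesons cancel exactly; baryons (antibaryons) carry one of each (anti)color
    return counts in ([0, 0, 0], [1, 1, 1], [-1, -1, -1])

print(is_white(["R", "G", "B"]))    # baryon: True
print(is_white(["R", "antiR"]))     # meson:  True
print(is_white(["R", "G"]))         # False, so such a pair cannot be isolated
```

A single quark carries unbalanced color and can never pass this test, which restates the quark-confinement idea above in miniature.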
In the past, we have been able to make significant advances by looking for analogies and patterns, and this grouping into three families is an example of one under current scrutiny. There are connections between the families of leptons, in that the τ decays into the μ and the μ into the e. Similarly for quarks, the higher families eventually decay into the lowest, leaving only u and d quarks. We have long sought connections between the forces in nature. Since these are carried by particles, we will explore connections between gluons, the W and Z bosons, and photons as part of the search for unification of forces discussed in GUTs: The Unification of Forces.

### Test Prep for AP Courses

### Summary

1. Hadrons are thought to be composed of quarks, with baryons having three quarks and mesons having a quark and an antiquark.
2. The characteristics of the six quarks and their antiquark counterparts are given in , and the quark compositions of certain hadrons are given in .
3. Indirect evidence for quarks is very strong, explaining all known hadrons and their quantum numbers, such as strangeness, charm, topness, and bottomness.
4. Quarks come in six flavors and three colors and occur only in combinations that produce white.
5. Fundamental particles have no further substructure, not even a size beyond their de Broglie wavelength.
6. There are three types of fundamental particles—leptons, quarks, and carrier particles. Each type is divided into three analogous families as indicated in .

### Conceptual Questions

### Problems & Exercises
# Particle Physics ## GUTs: The Unification of Forces ### Learning Objectives By the end of this section, you will be able to: 1. State the grand unified theory. 2. Explain the electroweak theory. 3. Define gluons. 4. Describe the principle of quantum chromodynamics. 5. Define the standard model. Present quests to show that the four basic forces are different manifestations of a single unified force follow a long tradition. In the 19th century, the distinct electric and magnetic forces were shown to be intimately connected and are now collectively called the electromagnetic force. More recently, the weak nuclear force has been shown to be connected to the electromagnetic force in a manner suggesting that a theory may be constructed in which all four forces are unified. Certainly, there are similarities in how forces are transmitted by the exchange of carrier particles, and the carrier particles themselves (the gauge bosons in ) are also similar in important ways. The analogy to the unification of electric and magnetic forces is quite good—the four forces are distinct under normal circumstances, but there are hints of connections even on the atomic scale, and there may be conditions under which the forces are intimately related and even indistinguishable. The search for a correct theory linking the forces, called the Grand Unified Theory (GUT), is explored in this section in the realm of particle physics. Frontiers of Physics expands the story in making a connection with cosmology, on the opposite end of the distance scale. is a Feynman diagram showing how the weak nuclear force is transmitted by the carrier particle , similar to the diagrams in and for the electromagnetic and strong nuclear forces. In the 1960s, a gauge theory, called electroweak theory, was developed by Steven Weinberg, Sheldon Glashow, and Abdus Salam and proposed that the electromagnetic and weak forces are identical at sufficiently high energies. One of its predictions, in addition to describing both electromagnetic and weak force phenomena, was the existence of the , and carrier particles. Not only were three particles having spin 1 predicted, the mass of the and was predicted to be , and that of the was predicted to be . (Their masses had to be about 1000 times that of the pion, or about , since the range of the weak force is about 1000 times less than the strong force carried by virtual pions.) In 1983, these carrier particles were observed at CERN with the predicted characteristics, including masses having the predicted values as seen in . This was another triumph of particle theory and experimental effort, resulting in the 1984 Nobel Prize to the experiment’s group leaders Carlo Rubbia and Simon van der Meer. Theorists Weinberg, Glashow, and Salam had already been honored with the 1979 Nobel Prize for other aspects of electroweak theory. Although the weak nuclear force is very short ranged ( , as indicated in ), its effects on atomic levels can be measured given the extreme precision of modern techniques. Since electrons spend some time in the nucleus, their energies are affected, and spectra can even indicate new aspects of the weak force, such as the possibility of other carrier particles. So systems many orders of magnitude larger than the range of the weak force supply evidence of electroweak unification in addition to evidence found at the particle scale. Gluons () are the proposed carrier particles for the strong nuclear force, although they are not directly observed. 
Like quarks, gluons may be confined to systems having a total color of white. Less is known about gluons than the fact that they are the carriers of the weak and certainly of the electromagnetic force. QCD theory calls for eight gluons, all massless and all spin 1. Six of the gluons carry a color and an anticolor, while two do not carry color, as illustrated in (a). There is indirect evidence of the existence of gluons in nucleons. When high-energy electrons are scattered from nucleons and evidence of quarks is seen, the momenta of the quarks are smaller than they would be if there were no gluons. That means that the gluons carrying force between quarks also carry some momentum, inferred by the already indirect quark momentum measurements. At any rate, the gluons carry color charge and can change the colors of quarks when exchanged, as seen in (b). In the figure, a red down quark interacts with a green strange quark by sending it a gluon. That gluon carries red away from the down quark and leaves it green, because it is an (red-antigreen) gluon. (Taking antigreen away leaves you green.) Its antigreenness kills the green in the strange quark, and its redness turns the quark red. The strong force is complicated, since observable particles that feel the strong force (hadrons) contain multiple quarks. shows the quark and gluon details of pion exchange between a proton and a neutron as illustrated earlier in and . The quarks within the proton and neutron move along together exchanging gluons, until the proton and neutron get close together. As the quark leaves the proton, a gluon creates a pair of virtual particles, a quark and a antiquark. The quark stays behind and the proton turns into a neutron, while the and move together as a ( confirms the composition for the .) The annihilates a quark in the neutron, the joins the neutron, and the neutron becomes a proton. A pion is exchanged and a force is transmitted. It is beyond the scope of this text to go into more detail on the types of quark and gluon interactions that underlie the observable particles, but the theory (quantum chromodynamics or QCD) is very self-consistent. So successful have QCD and the electroweak theory been that, taken together, they are called the Standard Model. Advances in knowledge are expected to modify, but not overthrow, the Standard Model of particle physics and forces. How can forces be unified? They are definitely distinct under most circumstances, for example, being carried by different particles and having greatly different strengths. But experiments show that at extremely small distances, the strengths of the forces begin to become more similar. In fact, electroweak theory’s prediction of the , , and carrier particles was based on the strengths of the two forces being identical at extremely small distances as seen in . As discussed in case of the creation of virtual particles for extremely short times, the small distances or short ranges correspond to the large masses of the carrier particles and the correspondingly large energies needed to create them. Thus, the energy scale on the horizontal axis of corresponds to smaller and smaller distances, with 100 GeV corresponding to approximately, for example. At that distance, the strengths of the EM and weak forces are the same. To test physics at that distance, energies of about 100 GeV must be put into the system, and that is sufficient to create and release the , , and carrier particles. 
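The correspondence just described between energy scale and probed distance can be estimated with the uncertainty-principle relation d ≈ ħc/E. The sketch below uses the standard value ħc ≈ 197.3 MeV·fm; it reproduces the statement that energies near 100 GeV correspond to distances of order 10⁻¹⁸ m, and it puts the GUT-scale energy of about 10¹⁴ GeV (equivalent to the 16,000 J per particle quoted in the next paragraph) at roughly 10⁻³⁰ m.

```python
# Distance scale probed at a given energy, d ~ (hbar * c) / E.
HBAR_C_MEV_FM = 197.3    # standard value of hbar*c in MeV * femtometers
FM = 1e-15               # meters per femtometer

def probed_distance_m(energy_MeV):
    return HBAR_C_MEV_FM / energy_MeV * FM

print(probed_distance_m(100e3))    # 100 GeV  -> ~2e-18 m (electroweak scale)
print(probed_distance_m(1e17))     # 1e14 GeV -> ~2e-30 m (GUT scale)
```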
At those and higher energies, the masses of the carrier particles becomes less and less relevant, and the in particular resembles the massless, chargeless, spin 1 photon. In fact, there is enough energy when things are pushed to even smaller distances to transform the, and into massless carrier particles more similar to photons and gluons. These have not been observed experimentally, but there is a prediction of an associated particle called the Higgs boson. The mass of this particle is not predicted with nearly the certainty with which the mass of the and particles were predicted, but it was hoped that the Higgs boson could be observed at the now-canceled Superconducting Super Collider (SSC). Ongoing experiments at the Large Hadron Collider at CERN have presented some evidence for a Higgs boson with a mass of 125 GeV, and there is a possibility of a direct discovery during 2012. The existence of this more massive particle would give validity to the theory that the carrier particles are identical under certain circumstances. The small distances and high energies at which the electroweak force becomes identical with the strong nuclear force are not reachable with any conceivable human-built accelerator. At energies of about (16,000 J per particle), distances of about can be probed. Such energies are needed to test theory directly, but these are about higher than the proposed giant SSC would have had, and the distances are about smaller than any structure we have direct knowledge of. This would be the realm of various GUTs, of which there are many since there is no constraining evidence at these energies and distances. Past experience has shown that any time you probe so many orders of magnitude further (here, about ), you find the unexpected. Even more extreme are the energies and distances at which gravity is thought to unify with the other forces in a TOE. Most speculative and least constrained by experiment are TOEs, one of which is called Superstring theory. Superstrings are entities that are in scale and act like one-dimensional oscillating strings and are also proposed to underlie all particles, forces, and space itself. At the energy of GUTs, the carrier particles of the weak force would become massless and identical to gluons. If that happens, then both lepton and baryon conservation would be violated. We do not see such violations, because we do not encounter such energies. However, there is a tiny probability that, at ordinary energies, the virtual particles that violate the conservation of baryon number may exist for extremely small amounts of time (corresponding to very small ranges). All GUTs thus predict that the proton should be unstable, but would decay with an extremely long lifetime of about . The predicted decay mode is which violates both conservation of baryon number and electron family number. Although is an extremely long time (about times the age of the universe), there are a lot of protons, and detectors have been constructed to look for the proposed decay mode as seen in . It is somewhat comforting that proton decay has not been detected, and its experimental lifetime is now greater than . This does not prove GUTs wrong, but it does place greater constraints on the theories, benefiting theorists in many ways. From looking increasingly inward at smaller details for direct evidence of electroweak theory and GUTs, we turn around and look to the universe for evidence of the unification of forces. In the 1920s, the expansion of the universe was discovered. 
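As a brief aside on the proton-decay searches just mentioned: even with a lifetime of order 10³¹ years (the order of magnitude typical of the simplest GUT predictions), a large detector contains so many protons that tens of decays per year would be expected. The kiloton of water used below is an illustrative detector size, not a figure taken from this text.

```python
# Expected proton decays per year in one kiloton of water, assuming a 1e31-year lifetime.
MASS_KG = 1.0e6               # one kiloton of water (illustrative detector mass)
MOLAR_MASS_WATER = 0.018      # kg per mole of H2O
AVOGADRO = 6.022e23
PROTONS_PER_MOLECULE = 10     # 8 protons in the oxygen nucleus + 2 hydrogen nuclei

n_protons = MASS_KG / MOLAR_MASS_WATER * AVOGADRO * PROTONS_PER_MOLECULE
lifetime_years = 1e31
print(n_protons)                     # ~3e32 protons
print(n_protons / lifetime_years)    # ~30 expected decays per year
```

Watching such a detector for years and seeing no decays is what pushes the experimental lifetime limit upward, as described above. We now return to the expanding universe.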
Thinking backward in time, the universe must once have been very small, dense, and extremely hot. At a tiny fraction of a second after the fabled Big Bang, forces would have been unified and may have left their fingerprint on the existing universe. This, one of the most exciting forefronts of physics, is the subject of Frontiers of Physics. ### Summary 1. Attempts to show unification of the four forces are called Grand Unified Theories (GUTs) and have been partially successful, with connections proven between EM and weak forces in electroweak theory. 2. The strong force is carried by eight proposed particles called gluons, which are intimately connected to a quantum number called color—their governing theory is thus called quantum chromodynamics (QCD). Taken together, QCD and the electroweak theory are widely accepted as the Standard Model of particle physics. 3. Unification of the strong force is expected at such high energies that it cannot be directly tested, but it may have observable consequences in the as-yet unobserved decay of the proton and topics to be discussed in the next chapter. Although unification of forces is generally anticipated, much remains to be done to prove its validity. ### Conceptual Questions ### Problems & Exercises
# Frontiers of Physics ## Introduction to Frontiers of Physics Frontiers are exciting. There is mystery, surprise, adventure, and discovery. The satisfaction of finding the answer to a question is made keener by the fact that the answer always leads to a new question. The picture of nature becomes more complete, yet nature retains its sense of mystery and never loses its ability to awe us. The view of physics is beautiful looking both backward and forward in time. What marvelous patterns we have discovered. How clever nature seems in its rules and connections. How awesome. And we continue looking ever deeper and ever further, probing the basic structure of matter, energy, space, and time and wondering about the scope of the universe, its beginnings and future. You are now in a wonderful position to explore the forefronts of physics, both the new discoveries and the unanswered questions. With the concepts, qualitative and quantitative, the problem-solving skills, the feeling for connections among topics, and all the rest you have mastered, you can more deeply appreciate and enjoy the brief treatments that follow. Years from now you will still enjoy the quest with an insight all the greater for your efforts.
# Frontiers of Physics ## Cosmology and Particle Physics ### Learning Objectives By the end of this section, you will be able to: 1. Discuss the expansion of the universe. 2. Explain the Big Bang. Look at the sky on some clear night when you are away from city lights. There you will see thousands of individual stars and a faint glowing background of millions more. The Milky Way, as it has been called since ancient times, is an arm of our galaxy of stars—the word galaxy coming from the Greek word galaxias, meaning milky. We know a great deal about our Milky Way galaxy and of the billions of other galaxies beyond its fringes. But they still provoke wonder and awe (see ). And there are still many questions to be answered. Most remarkable when we view the universe on the large scale is that once again explanations of its character and evolution are tied to the very small scale. Particle physics and the questions being asked about the very small scales may also have their answers in the very large scales. As has been noted in numerous Things Great and Small vignettes, this is not the first time the large has been explained by the small and vice versa. Newton realized that the nature of gravity on Earth that pulls an apple to the ground could explain the motion of the moon and planets so much farther away. Minute atoms and molecules explain the chemistry of substances on a much larger scale. Decays of tiny nuclei explain the hot interior of the Earth. Fusion of nuclei likewise explains the energy of stars. Today, the patterns in particle physics seem to be explaining the evolution and character of the universe. And the nature of the universe has implications for unexplored regions of particle physics. Cosmology is the study of the character and evolution of the universe. What are the major characteristics of the universe as we know them today? First, there are approximately galaxies in the observable part of the universe. An average galaxy contains more than stars, with our Milky Way galaxy being larger than average, both in its number of stars and its dimensions. Ours is a spiral-shaped galaxy with a diameter of about 100,000 light years and a thickness of about 2000 light years in the arms with a central bulge about 10,000 light years across. The Sun lies about 30,000 light years from the center near the galactic plane. There are significant clouds of gas, and there is a halo of less-dense regions of stars surrounding the main body. (See .) Evidence strongly suggests the existence of a large amount of additional matter in galaxies that does not produce light—the mysterious dark matter we shall later discuss. Distances are great even within our galaxy and are measured in light years (the distance traveled by light in one year). The average distance between galaxies is on the order of a million light years, but it varies greatly with galaxies forming clusters such as shown in . The Magellanic Clouds, for example, are small galaxies close to our own, some 160,000 light years from Earth. The Andromeda galaxy is a large spiral galaxy like ours and lies 2 million light years away. It is just visible to the naked eye as an extended glow in the Andromeda constellation. Andromeda is the closest large galaxy in our local group, and we can see some individual stars in it with our larger telescopes. The most distant known galaxy is 14 billion light years from Earth—a truly incredible distance. (See .) 
Consider the fact that the light we receive from these vast distances has been on its way to us for a long time. In fact, the time in years is the same as the distance in light years. For example, the Andromeda galaxy is 2 million light years away, so that the light now reaching us left it 2 million years ago. If we could be there now, Andromeda would be different. Similarly, light from the most distant galaxy left it 14 billion years ago. We have an incredible view of the past when looking great distances. We can try to see if the universe was different then—if distant galaxies are more tightly packed or have younger-looking stars, for example, than closer galaxies, in which case there has been an evolution in time. But the problem is that the uncertainties in our data are great. Cosmology is almost typified by these large uncertainties, so that we must be especially cautious in drawing conclusions. One consequence is that there are more questions than answers, and so there are many competing theories. Another consequence is that any hard data produce a major result. Discoveries of some importance are being made on a regular basis, the hallmark of a field in its golden age. Perhaps the most important characteristic of the universe is that all galaxies except those in our local cluster seem to be moving away from us at speeds proportional to their distance from our galaxy. It looks as if a gigantic explosion, universally called the Big Bang, threw matter out some billions of years ago. This amazing conclusion is based on the pioneering work of Edwin Hubble (1889–1953), the American astronomer. In the 1920s, Hubble first demonstrated conclusively that other galaxies, many previously called nebulae or clouds of stars, were outside our own. He then found that all but the closest galaxies have a red shift in their hydrogen spectra that is proportional to their distance. The explanation is that there is a cosmological red shift due to the expansion of space itself. The photon wavelength is stretched in transit from the source to the observer. Double the distance, and the red shift is doubled. While this cosmological red shift is often called a Doppler shift, it is not—space itself is expanding. There is no center of expansion in the universe. All observers see themselves as stationary; the other objects in space appear to be moving away from them. Hubble was directly responsible for discovering that the universe was much larger than had previously been imagined and that it had this amazing characteristic of rapid expansion. Universal expansion on the scale of galactic clusters (that is, galaxies at smaller distances are not uniformly receding from one another) is an integral part of modern cosmology. For galaxies farther away than about 50 Mly (50 million light years), the expansion is uniform with variations due to local motions of galaxies within clusters. A representative recession velocity can be obtained from the simple formula where is the distance to the galaxy and is the Hubble constant. The Hubble constant is a central concept in cosmology. Its value is determined by taking the slope of a graph of velocity versus distance, obtained from red shift measurements, such as shown in . We shall use an approximate value of Thus, is an average behavior for all but the closest galaxies. 
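The average behavior just stated, v = H₀d, lends itself to quick estimates. The next paragraph works similar numbers by hand; the sketch below uses the approximate value H₀ ≈ 20 km/s per million light years, which is consistent with the 5-billion-year figure quoted there.

```python
# Hubble's law v = H0 * d, with H0 in km/s per million light years (Mly).
H0 = 20.0    # km/s per Mly (approximate value, consistent with the worked example below)

def recession_speed_km_s(distance_Mly):
    return H0 * distance_Mly

def distance_from_speed_Mly(speed_km_s):
    return speed_km_s / H0

print(recession_speed_km_s(100))       # ~2000 km/s for a galaxy 100 Mly away
print(distance_from_speed_Mly(1e5))    # ~5000 Mly = 5 Gly for v = 100,000 km/s

# A rough age-of-the-universe estimate is 1/H0 (uniform expansion assumed):
KM_PER_MLY = 9.46e18                   # kilometers in one million light years
age_seconds = KM_PER_MLY / H0
print(age_seconds / 3.156e7 / 1e9)     # ~15 billion years
```

The 1/H₀ estimate assumes the expansion rate has always been what it is now, which is the same caveat attached to the distance calculation that follows.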
For example, a galaxy 100 Mly away (as determined by its size and brightness) typically moves away from us at a speed of There can be variations in this speed due to so-called local motions or interactions with neighboring galaxies. Conversely, if a galaxy is found to be moving away from us at speed of 100,000 km/s based on its red shift, it is at a distance or . This last calculation is approximate, because it assumes the expansion rate was the same 5 billion years ago as now. A similar calculation in Hubble’s measurement changed the notion that the universe is in a steady state. One of the most intriguing developments recently has been the discovery that the expansion of the universe may be faster now than in the past, rather than slowing due to gravity as expected. Various groups have been looking, in particular, at supernovas in moderately distant galaxies (less than 1 Gly) to get improved distance measurements. Those distances are larger than expected for the observed galactic red shifts, implying the expansion was slower when that light was emitted. This has cosmological consequences that are discussed in Dark Matter and Closure. The first results, published in 1999, are only the beginning of emerging data, with astronomy now entering a data-rich era. shows how the recession of galaxies looks like the remnants of a gigantic explosion, the famous Big Bang. Extrapolating backward in time, the Big Bang would have occurred between 13 and 15 billion years ago when all matter would have been at a point. Questions instantly arise. What caused the explosion? What happened before the Big Bang? Was there a before, or did time start then? Will the universe expand forever, or will gravity reverse it into a Big Crunch? And is there other evidence of the Big Bang besides the well-documented red shifts? The Russian-born American physicist George Gamow (1904–1968) was among the first to note that, if there was a Big Bang, the remnants of the primordial fireball should still be evident and should be blackbody radiation. Since the radiation from this fireball has been traveling to us since shortly after the Big Bang, its wavelengths should be greatly stretched. It will look as if the fireball has cooled in the billions of years since the Big Bang. Gamow and collaborators predicted in the late 1940s that there should be blackbody radiation from the explosion filling space with a characteristic temperature of about 7 K. Such blackbody radiation would have its peak intensity in the microwave part of the spectrum. (See .) In 1964, Arno Penzias and Robert Wilson, two American scientists working with Bell Telephone Laboratories on a low-noise radio antenna, detected the radiation and eventually recognized it for what it is. (b) shows the spectrum of this microwave radiation that permeates space and is of cosmic origin. It is the most perfect blackbody spectrum known, and the temperature of the fireball remnant is determined from it to be . The detection of what is now called the cosmic microwave background (CMBR) was so important (generally considered as important as Hubble’s detection that the galactic red shift is proportional to distance) that virtually every scientist has accepted the expansion of the universe as fact. Penzias and Wilson shared the 1978 Nobel Prize in Physics for their discovery. We know from direct observation that antimatter is rare. The Earth and the solar system are nearly pure matter. 
Space probes and cosmic rays give direct evidence—the landing of the Viking probes on Mars would have been spectacular explosions of mutual annihilation energy if Mars were antimatter. We also know that most of the universe is dominated by matter. This is proven by the lack of annihilation radiation coming to us from space, particularly the relative absence of 0.511-MeV rays created by the mutual annihilation of electrons and positrons. It seemed possible that there could be entire solar systems or galaxies made of antimatter in perfect symmetry with our matter-dominated systems. But the interactions between stars and galaxies would sometimes bring matter and antimatter together in large amounts. The annihilation radiation they would produce is simply not observed. Antimatter in nature is created in particle collisions and in decays, but only in small amounts that quickly annihilate, leaving almost pure matter surviving. Particle physics seems symmetric in matter and antimatter. Why isn’t the cosmos? The answer is that particle physics is not quite perfectly symmetric in this regard. The decay of one of the neutral -mesons, for example, preferentially creates more matter than antimatter. This is caused by a fundamental small asymmetry in the basic forces. This small asymmetry produced slightly more matter than antimatter in the early universe. If there was only one part in more matter (a small asymmetry), the rest would annihilate pair for pair, leaving nearly pure matter to form the stars and galaxies we see today. So the vast number of stars we observe may be only a tiny remnant of the original matter created in the Big Bang. Here at last we see a very real and important asymmetry in nature. Rather than be disturbed by an asymmetry, most physicists are impressed by how small it is. Furthermore, if the universe were completely symmetric, the mutual annihilation would be more complete, leaving far less matter to form us and the universe we know. A troubling aspect of cosmic microwave background radiation (CMBR) was soon recognized. True, the CMBR verified the Big Bang, had the correct temperature, and had a blackbody spectrum as expected. But the CMBR was too smooth—it looked identical in every direction. Galaxies and other similar entities could not be formed without the existence of fluctuations in the primordial stages of the universe and so there should be hot and cool spots in the CMBR, nicknamed wrinkles, corresponding to dense and sparse regions of gas caused by turbulence or early fluctuations. Over time, dense regions would contract under gravity and form stars and galaxies. Why aren’t the fluctuations there? (This is a good example of an answer producing more questions.) Furthermore, galaxies are observed very far from us, so that they formed very long ago. The problem was to explain how galaxies could form so early and so quickly after the Big Bang if its remnant fingerprint is perfectly smooth. The answer is that if you look very closely, the CMBR is not perfectly smooth, only extremely smooth. A satellite called the Cosmic Background Explorer (COBE) carried an instrument that made very sensitive and accurate measurements of the CMBR. In April of 1992, there was extraordinary publicity of COBE’s first results—there were small fluctuations in the CMBR. Further measurements were carried out by experiments including NASA’s Wilkinson Microwave Anisotropy Probe (WMAP), which launched in 2001. Data from WMAP provided a much more detailed picture of the CMBR fluctuations. (See .) 
These amount to temperature fluctuations of only out of 2.7 K, better than one part in 1000. The WMAP experiment will be followed up by the European Space Agency’s Planck Surveyor, which launched in 2009. Let us now examine the various stages of the overall evolution of the universe from the Big Bang to the present, illustrated in . Note that scientific notation is used to encompass the many orders of magnitude in time, energy, temperature, and size of the universe. Going back in time, the two lines approach but do not cross (there is no zero on an exponential scale). Rather, they extend indefinitely in ever-smaller time intervals to some infinitesimal point. Going back in time is equivalent to what would happen if expansion stopped and gravity pulled all the galaxies together, compressing and heating all matter. At a time long ago, the temperature and density were too high for stars and galaxies to exist. Before then, there was a time when the temperature was too great for atoms to exist. And farther back yet, there was a time when the temperature and density were so great that nuclei could not exist. Even farther back in time, the temperature was so high that average kinetic energy was great enough to create short-lived particles, and the density was high enough to make this likely. When we extrapolate back to the point of and production (thermal energies reaching 1 TeV, or a temperature of about ), we reach the limits of what we know directly about particle physics. This is at a time about after the Big Bang. While may seem to be negligibly close to the instant of creation, it is not. There are important stages before this time that are tied to the unification of forces. At those stages, the universe was at extremely high energies and average particle separations were smaller than we can achieve with accelerators. What happened in the early stages before is crucial to all later stages and is possibly discerned by observing present conditions in the universe. One of these is the smoothness of the CMBR. Names are given to early stages representing key conditions. The stage before back to is called the electroweak epoch, because the electromagnetic and weak forces become identical for energies above about 100 GeV. As discussed earlier, theorists expect that the strong force becomes identical to and thus unified with the electroweak force at energies of about . The average particle energy would be this great at after the Big Bang, if there are no surprises in the unknown physics at energies above about 1 TeV. At the immense energy of (corresponding to a temperature of about ), the and carrier particles would be transformed into massless gauge bosons to accomplish the unification. Before back to about , we have Grand Unification in the GUT epoch, in which all forces except gravity are identical. At , the average energy reaches the immense needed to unify gravity with the other forces in TOE, the Theory of Everything. Before that time is the TOE epoch, but we have almost no idea as to the nature of the universe then, since we have no workable theory of quantum gravity. We call the hypothetical unified force superforce. Now let us imagine starting at TOE and moving forward in time to see what type of universe is created from various events along the way. As temperatures and average energies decrease with expansion, the universe reaches the stage where average particle separations are large enough to see differences between the strong and electroweak forces (at about ). 
After this time, the forces become distinct in almost all interactions—they are no longer unified or symmetric. This transition from GUT to electroweak is an example of spontaneous symmetry breaking, in which conditions spontaneously evolved to a point where the forces were no longer unified, breaking that symmetry. This is analogous to a phase transition in the universe, and a clever proposal by American physicist Alan Guth in the early 1980s ties it to the smoothness of the CMBR. Guth proposed that spontaneous symmetry breaking (like a phase transition during cooling of normal matter) released an immense amount of energy that caused the universe to expand extremely rapidly for the brief time from to about . This expansion may have been by an incredible factor of or more in the size of the universe and is thus called the inflationary scenario. One result of this inflation is that it would stretch the wrinkles in the universe nearly flat, leaving an extremely smooth CMBR. While speculative, there is as yet no other plausible explanation for the smoothness of the CMBR. Unless the CMBR is not really cosmic but local in origin, the distances between regions of similar temperatures are too great for any coordination to have caused them, since any coordination mechanism must travel at the speed of light. Again, particle physics and cosmology are intimately entwined. There is little hope that we may be able to test the inflationary scenario directly, since it occurs at energies near , vastly greater than the limits of modern accelerators. But the idea is so attractive that it is incorporated into most cosmological theories. Characteristics of the present universe may help us determine the validity of this intriguing idea. Additionally, the recent indications that the universe’s expansion rate may be increasing (see Dark Matter and Closure) could even imply that we are in another inflationary epoch. It is important to note that, if conditions such as those found in the early universe could be created in the laboratory, we would see the unification of forces directly today. The forces have not changed in time, but the average energy and separation of particles in the universe have. As discussed in The Four Basic Forces, the four basic forces in nature are distinct under most circumstances found today. The early universe and its remnants provide evidence from times when they were unified under most circumstances. ### Section Summary 1. Cosmology is the study of the character and evolution of the universe. 2. The two most important features of the universe are the cosmological red shifts of its galaxies being proportional to distance and its cosmic microwave background (CMBR). Both support the notion that there was a gigantic explosion, known as the Big Bang that created the universe. 3. Galaxies farther away than our local group have, on an average, a recessional velocity given by where 4. Explanations of the large-scale characteristics of the universe are intimately tied to particle physics. 5. The dominance of matter over antimatter and the smoothness of the CMBR are two characteristics that are tied to particle physics. 6. The epochs of the universe are known back to very shortly after the Big Bang, based on known laws of physics. 7. The earliest epochs are tied to the unification of forces, with the electroweak epoch being partially understood, the GUT epoch being speculative, and the TOE epoch being highly speculative since it involves an unknown single superforce. 8. 
The transition from GUT to electroweak is called spontaneous symmetry breaking. It is thought to have released energy that drove the inflationary scenario, which in turn would explain the smoothness of the CMBR.

### Conceptual Questions

### Problems & Exercises
# Frontiers of Physics ## General Relativity and Quantum Gravity ### Learning Objectives By the end of this section, you will be able to: 1. Explain the effect of gravity on light. 2. Discuss black hole. 3. Explain quantum gravity. When we talk of black holes or the unification of forces, we are actually discussing aspects of general relativity and quantum gravity. We know from Special Relativity that relativity is the study of how different observers measure the same event, particularly if they move relative to one another. Einstein’s theory of general relativity describes all types of relative motion including accelerated motion and the effects of gravity. General relativity encompasses special relativity and classical relativity in situations where acceleration is zero and relative velocity is small compared with the speed of light. Many aspects of general relativity have been verified experimentally, some of which are better than science fiction in that they are bizarre but true. Quantum gravity is the theory that deals with particle exchange of gravitons as the mechanism for the force, and with extreme conditions where quantum mechanics and general relativity must both be used. A good theory of quantum gravity does not yet exist, but one will be needed to understand how all four forces may be unified. If we are successful, the theory of quantum gravity will encompass all others, from classical physics to relativity to quantum mechanics—truly a Theory of Everything (TOE). ### General Relativity Einstein first considered the case of no observer acceleration when he developed the revolutionary special theory of relativity, publishing his first work on it in 1905. By 1916, he had laid the foundation of general relativity, again almost on his own. Much of what Einstein did to develop his ideas was to mentally analyze certain carefully and clearly defined situations—doing this is to perform a thought experiment. illustrates a thought experiment like the ones that convinced Einstein that light must fall in a gravitational field. Think about what a person feels in an elevator that is accelerated upward. It is identical to being in a stationary elevator in a gravitational field. The feet of a person are pressed against the floor, and objects released from hand fall with identical accelerations. In fact, it is not possible, without looking outside, to know what is happening—acceleration upward or gravity. This led Einstein to correctly postulate that acceleration and gravity will produce identical effects in all situations. So, if acceleration affects light, then gravity will, too. shows the effect of acceleration on a beam of light shone horizontally at one wall. Since the accelerated elevator moves up during the time light travels across the elevator, the beam of light strikes low, seeming to the person to bend down. (Normally a tiny effect, since the speed of light is so great.) The same effect must occur due to gravity, Einstein reasoned, since there is no way to tell the effects of gravity acting downward from acceleration of the elevator upward. Thus gravity affects the path of light, even though we think of gravity as acting between masses and photons are massless. Einstein’s theory of general relativity got its first verification in 1919 when starlight passing near the Sun was observed during a solar eclipse. (See .) During an eclipse, the sky is darkened and we can briefly see stars. Those in a line of sight nearest the Sun should have a shift in their apparent positions. 
Not only was this shift observed, but it agreed with Einstein's predictions well within experimental uncertainties. This discovery created a scientific and public sensation. Einstein was now a folk hero as well as a very great scientist. The bending of light by matter is equivalent to a bending of space itself, with light following the curve. This is another radical change in our concept of space and time. It is also another indication that any particle with mass or energy (including massless photons, which carry energy) is affected by gravity. There are several current forefront efforts related to general relativity. One is the observation and analysis of gravitational lensing of light. Another is analysis of the definitive proof of the existence of black holes. Direct observation of gravitational waves, or moving wrinkles in space, is also being sought. Theoretical efforts are also being aimed at the possibility of time travel and wormholes into other parts of space due to black holes. As you can see in , light is bent toward a mass, producing an effect much like a converging lens (large masses are needed to produce observable effects). On a galactic scale, the light from a distant galaxy could be "lensed" into several images when passing close by another galaxy on its way to Earth. Einstein predicted this effect, but he considered it unlikely that we would ever observe it. A number of cases of this effect have now been observed; one is shown in . This effect is a much larger scale verification of general relativity. But such gravitational lensing is also useful in verifying that the red shift is proportional to distance. The red shift of the intervening galaxy is always less than that of the one being lensed, and each image of the lensed galaxy has the same red shift. This verification supplies more evidence that red shift is proportional to distance. Confidence that the multiple images are not different objects is bolstered by the observations that if one image varies in brightness over time, the others also vary in the same manner. Black holes are objects having such large gravitational fields that things can fall in, but nothing, not even light, can escape. Bodies, like the Earth or the Sun, have what is called an escape velocity. If an object moves straight up from the body, starting at the escape velocity, it will just be able to escape the gravity of the body. The greater the acceleration of gravity on the body, the greater is the escape velocity. As long ago as the late 1700s, it was proposed that if the escape velocity is greater than the speed of light, then light cannot escape. Pierre-Simon Laplace (1749–1827), the French astronomer and mathematician, even incorporated this idea of a dark star into his writings. But the idea was dropped after Young's double slit experiment showed light to be a wave. For some time, light was thought not to have particle characteristics and, thus, could not be acted upon by gravity. The idea of a black hole was very quickly reincarnated in 1916 after Einstein's theory of general relativity was published. It is now thought that black holes can form in the supernova collapse of a massive star, forming an object perhaps 10 km across and having a mass greater than that of our Sun. It is interesting that several prominent physicists who worked on the concept, including Einstein, firmly believed that nature would find a way to prohibit such objects. Black holes are difficult to observe directly, because they are small and no light comes directly from them.
In fact, no light comes from inside the event horizon, which is defined to be at a distance from the object at which the escape velocity is exactly the speed of light. The radius of the event horizon is known as the Schwarzschild radius $R_S$ and is given by $R_S = \frac{2GM}{c^2}$, where $G$ is the universal gravitational constant, $M$ is the mass of the body, and $c$ is the speed of light. The event horizon is the edge of the black hole and $R_S$ is its radius (that is, the size of a black hole is twice $R_S$). Since $G$ is small and $c$ is large, you can see that black holes are extremely small, only a few kilometers for masses a little greater than the Sun's. The object itself is inside the event horizon. Physics near a black hole is fascinating. Gravity increases so rapidly that, as you approach a black hole, the tidal effects tear matter apart, with matter closer to the hole being pulled in with much more force than that only slightly farther away. This can pull a companion star apart and heat inflowing gases to the point of producing X rays. (See .) We have observed X rays from certain binary star systems that are consistent with such a picture. This is not quite proof of black holes, because the X rays could also be caused by matter falling onto a neutron star. These objects were first discovered in 1967 by the British astrophysicists Jocelyn Bell and Antony Hewish. Neutron stars are literally stars composed of neutrons. They are formed by the collapse of a star's core in a supernova, during which electrons and protons are forced together to form neutrons (the reverse of neutron decay). Neutron stars are slightly larger than a black hole of the same mass and will not collapse further because of resistance by the strong force. However, neutron stars cannot have a mass greater than about three solar masses or they must collapse to a black hole. With recent improvements in our ability to resolve small details, such as with the orbiting Chandra X-ray Observatory, it has become possible to measure the masses of X-ray-emitting objects by observing the motion of companion stars and other matter in their vicinity. What has emerged is a plethora of X-ray-emitting objects too massive to be neutron stars. This evidence is considered conclusive and the existence of black holes is widely accepted. These black holes are concentrated near galactic centers. We also have evidence that supermassive black holes may exist at the cores of many galaxies, including the Milky Way. Such a black hole might have a mass millions or even billions of times that of the Sun, and it would probably have formed when matter first coalesced into a galaxy billions of years ago. Supporting this is the fact that very distant galaxies are more likely to have abnormally energetic cores. Some of the more distant galaxies, and hence among the younger we observe, are known as quasars and emit as much or more energy than a normal galaxy but from a region less than a light year across. Quasar energy outputs may vary in times less than a year, so that the energy-emitting region must be less than a light year across. The best explanation of quasars is that they are young galaxies with a supermassive black hole forming at their core, and that they become less energetic over billions of years. In closer superactive galaxies, we observe tremendous amounts of energy being emitted from very small regions of space, consistent with stars falling into a black hole at the rate of one or more a month.
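Because the Schwarzschild radius $R_S = 2GM/c^2$ scales linearly with mass, a brief numerical sketch makes the sizes involved concrete. The constants below are standard values, and the million-solar-mass case is simply an illustrative choice for a galactic-center black hole rather than a figure taken from observation.

```python
# Schwarzschild radius R_S = 2GM/c^2 for a few representative masses.
G = 6.674e-11      # gravitational constant, SI units
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Radius of the event horizon for a (non-rotating) mass, in meters."""
    return 2 * G * mass_kg / c**2

for label, m in [("1 solar mass", M_sun),
                 ("10 solar masses", 10 * M_sun),
                 ("10^6 solar masses (galactic-center scale)", 1e6 * M_sun)]:
    print(f"{label}: R_S = {schwarzschild_radius(m) / 1e3:.3g} km")
# One solar mass gives about 3 km, consistent with the statement that black
# holes are only a few kilometers across for masses a little greater than the Sun's.
```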
The Hubble Space Telescope (1994) observed an accretion disk in the galaxy M87 rotating rapidly around a region of extreme energy emission. (See .) A jet of material being ejected perpendicular to the plane of rotation gives further evidence of a supermassive black hole as the engine. If a massive object distorts the space around it, like the foot of a water bug on the surface of a pond, then movement of the massive object should create waves in space like those on a pond. Gravitational waves are mass-created distortions in space that propagate at the speed of light and are predicted by general relativity. Since gravity is by far the weakest force, extreme conditions are needed to generate significant gravitational waves. Gravity near binary neutron star systems is so great that significant gravitational wave energy is radiated as the two neutron stars orbit one another. American astronomers Joseph Taylor and Russell Hulse measured changes in the orbit of such a binary neutron star system. They found its orbit to change precisely as predicted by general relativity, a strong indication of gravitational waves, and were awarded the 1993 Nobel Prize. But direct detection of gravitational waves on Earth would be conclusive. For many years, various attempts have been made to detect gravitational waves by observing vibrations induced in matter distorted by these waves. American physicist Joseph Weber pioneered this field in the 1960s, but no conclusive events have been observed. (No gravity wave detectors were in operation at the time of the 1987A supernova, unfortunately.) There are now several ambitious systems of gravitational wave detectors in use around the world. These include the LIGO (Laser Interferometer Gravitational Wave Observatory) system with two laser interferometer detectors, one in the state of Washington and another in Louisiana (See ), and the Virgo facility in Italy with a single detector. ### Quantum Gravity Quantum gravity is important in those situations where gravity is so extremely strong that it has effects on the quantum scale, where the other forces are ordinarily much stronger. The early universe was such a place, but black holes are another. The first significant connection between gravity and quantum effects was made by the Russian physicist Yakov Zel'dovich in 1971, and other significant advances followed from the British physicist Stephen Hawking. (See .) These two showed that black holes could radiate away energy by quantum effects just outside the event horizon (nothing can escape from inside the event horizon). Black holes are, thus, expected to radiate energy and shrink to nothing, although extremely slowly for most black holes. The mechanism is the creation of a particle-antiparticle pair from energy in the extremely strong gravitational field near the event horizon. One member of the pair falls into the hole and the other escapes, conserving momentum. (See .) When a black hole loses energy and, hence, rest mass, its event horizon shrinks, creating an even greater gravitational field. This increases the rate of pair production so that the process grows exponentially until the black hole is nuclear in size. A final burst of particles and γ rays ensues. This is an extremely slow process for black holes about the mass of the Sun (produced by supernovas) or larger ones (like those thought to be at galactic centers), taking on the order of $10^{67}$ years or longer!
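The enormous lifetime quoted above can be estimated from the standard order-of-magnitude expression for black hole evaporation, $t \approx 5120\pi G^2 M^3/(\hbar c^4)$, quoted here without derivation. The sketch below evaluates it for a solar-mass hole and for a far smaller one, making the strong $M^3$ dependence explicit.

```python
# Order-of-magnitude Hawking evaporation time, t ~ 5120*pi*G^2*M^3 / (hbar*c^4).
import math

G = 6.674e-11      # gravitational constant, SI units
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J*s
M_sun = 1.989e30   # solar mass, kg
year = 3.156e7     # seconds in a year

def evaporation_time_years(mass_kg):
    """Rough black-hole evaporation time in years; note the M**3 scaling."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4) / year

print(f"1 solar mass: {evaporation_time_years(M_sun):.1e} years")   # ~2e67 years
print(f"2e11 kg primordial hole: {evaporation_time_years(2e11):.1e} years")
# The cubic dependence on mass is why a solar-mass hole far outlives the
# present age of the universe, while a ~2e11 kg primordial hole would have
# an evaporation time comparable to the age of the universe itself.
```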
Smaller black holes would evaporate faster, but they are only speculated to exist as remnants of the Big Bang. Searches for characteristic γ-ray bursts have produced events attributable to more mundane objects like neutron stars accreting matter. The subject of time travel captures the imagination. Theoretical physicists, such as the American Kip Thorne, have treated the subject seriously, looking into the possibility that falling into a black hole could result in popping up in another time and place—a trip through a so-called wormhole. Time travel and wormholes appear in innumerable science fiction dramatizations, but the consensus is that time travel is not possible, even in theory. While still debated, it appears that quantum gravity effects inside a black hole prevent time travel due to the creation of particle pairs. Direct evidence is elusive. Theoretical studies indicate that, at extremely high energies and correspondingly early in the universe, quantum fluctuations may make time intervals meaningful only down to some finite time limit. Early work indicated that this might be the case for times as long as about $10^{-43}$ s, the time at which all forces were unified. If so, then it would be meaningless to consider the universe at times earlier than this. Subsequent studies indicate that the crucial time may be far shorter still. But the point remains—quantum gravity seems to imply that there is no such thing as a vanishingly short time. Time may, in fact, be grainy with no meaning to time intervals shorter than some tiny but finite size. Not only is quantum gravity in its infancy, but no one knows how to get started on a theory of gravitons and unification of forces. The energies at which a TOE should be valid may be so high (at least $10^{19}$ GeV) and the necessary particle separation so small (less than $10^{-35}$ m) that only indirect evidence can provide clues. For some time, the common lament of theoretical physicists was one so familiar to struggling students—how do you even get started? But Hawking and others have made a start, and the approach many theorists have taken is called Superstring theory, the topic of the next section, Superstrings. ### Section Summary 1. Einstein's theory of general relativity includes accelerated frames and, thus, encompasses special relativity and gravity. Created by use of careful thought experiments, it has been repeatedly verified by real experiments. 2. One direct result of this behavior of nature is the gravitational lensing of light by massive objects, such as galaxies, also seen in the microlensing of light by smaller bodies in our galaxy. 3. Another prediction is the existence of black holes, objects for which the escape velocity is greater than the speed of light and from which nothing can escape. 4. The event horizon is the distance $R_S$ from the object at which the escape velocity equals the speed of light $c$. It is called the Schwarzschild radius and is given by $R_S = \frac{2GM}{c^2}$, where $G$ is the universal gravitational constant and $M$ is the mass of the body. 5. Physics is unknown inside the event horizon, and the possibilities of wormholes and time travel are being studied. 6. Candidates for black holes may power the extremely energetic emissions of quasars, distant objects that seem to be early stages of galactic evolution. 7. Neutron stars are stellar remnants, having the density of a nucleus, that hint that black holes could form from supernovas, too. 8. Gravitational waves are wrinkles in space, predicted by general relativity but not yet observed, caused by changes in very massive objects. 9.
Quantum gravity is an incompletely developed theory that strives to include general relativity, quantum mechanics, and unification of forces (thus, a TOE). 10. One unconfirmed connection between general relativity and quantum mechanics is the prediction of characteristic radiation from just outside black holes. ### Conceptual Questions ### Problems & Exercises
# Frontiers of Physics ## Superstrings ### Learning Objectives By the end of this section, you will be able to: 1. Define Superstring theory. 2. Explain the relationship between Superstring theory and the Big Bang. Introduced earlier in GUTs: The Unification of Forces, Superstring theory is an attempt to unify gravity with the other three forces and, thus, must contain quantum gravity. The main tenet of Superstring theory is that fundamental particles, including the graviton that carries the gravitational force, act like one-dimensional vibrating strings. Since gravity affects the time and space in which all else exists, Superstring theory is an attempt at a Theory of Everything (TOE). Each independent quantum number is thought of as a separate dimension in some superspace (analogous to the fact that the familiar dimensions of space are independent of one another) and is represented by a different type of Superstring. As the universe evolved after the Big Bang and forces became distinct (spontaneous symmetry breaking), some of the dimensions of superspace are imagined to have curled up and become unnoticed. Forces are expected to be unified only at extremely high energies and at particle separations on the order of $10^{-35}$ m. This could mean that Superstrings must have dimensions or wavelengths of this size or smaller. Just as quantum gravity may imply that there are no time intervals shorter than some finite value, it also implies that there may be no sizes smaller than some tiny but finite value. That may be about $10^{-35}$ m. If so, and if Superstring theory can explain all it strives to, then the structures of Superstrings are at the lower limit of the smallest possible size and can have no further substructure. This would be the ultimate answer to the question the ancient Greeks considered: there is a finite lower limit to space. Not only is Superstring theory in its infancy, but it deals with dimensions about 17 orders of magnitude smaller than the details that we have been able to observe directly. It is thus relatively unconstrained by experiment, and there are a host of theoretical possibilities to choose from. This has led theorists to make choices subjectively (as always) on what is the most elegant theory, with less hope than usual that experiment will guide them. It has also led to speculation of alternate universes, with their Big Bangs creating each new universe with a random set of rules. These speculations may not be tested even in principle, since an alternate universe is by definition unattainable. It is something like exploring a self-consistent field of mathematics, with its axioms and rules of logic that are not consistent with nature. Such endeavors have often given insight to mathematicians and scientists alike and occasionally have been directly related to the description of new discoveries. ### Section Summary 1. Superstring theory holds that fundamental particles are one-dimensional vibrations analogous to those on strings and is an attempt at a theory of quantum gravity. ### Problems & Exercises
# Frontiers of Physics ## Dark Matter and Closure ### Learning Objectives By the end of this section, you will be able to: 1. Discuss the existence of dark matter. 2. Explain neutrino oscillations and their consequences. One of the most exciting problems in physics today is the fact that there is far more matter in the universe than we can see. The motion of stars in galaxies and the motion of galaxies in clusters imply that there is about 10 times as much mass as in the luminous objects we can see. The indirectly observed non-luminous matter is called dark matter. Why is dark matter a problem? For one thing, we do not know what it is. It may well be 90% of all matter in the universe, yet there is a possibility that it is of a completely unknown form—a stunning discovery if verified. Dark matter has implications for particle physics. It may be possible that neutrinos actually have small masses or that there are completely unknown types of particles. Dark matter also has implications for cosmology, since there may be enough dark matter to stop the expansion of the universe. That is another problem related to dark matter—we do not know how much there is. We keep finding evidence for more matter in the universe, and we have an idea of how much it would take to eventually stop the expansion of the universe, but whether there is enough is still unknown. ### Evidence The first clues that there is more matter than meets the eye came from the Swiss-born American astronomer Fritz Zwicky in the 1930s; major work was also done by the American astronomer Vera Rubin. Zwicky measured the velocities of stars orbiting the galaxy, using the relativistic Doppler shift of their spectra (see (a)). He found that velocity varied with distance from the center of the galaxy, as graphed in (b). If the mass of the galaxy were concentrated in its center, as are its luminous stars, the velocities should decrease as one over the square root of the distance from the center. Instead, the velocity curve is almost flat, implying that there is a tremendous amount of matter in the galactic halo. Using instruments and methods that offered a greater degree of precision, Rubin investigated the movement of spiral galaxies and observed that their outermost reaches were rotating as quickly as their centers. She also calculated that the rotational velocity of galaxies should have been enough to cause them to fly apart, unless there was a significant discrepancy between their observable matter and their actual matter. This became known as the galaxy rotation problem, which can be "solved" by the presence of unobserved or dark matter. Although not immediately recognized for its significance, such measurements have now been made for many galaxies, with similar results. Further, studies of galactic clusters have also indicated that galaxies have a mass distribution greater than that obtained from their brightness (proportional to the number of stars), which also extends into large halos surrounding the luminous parts of galaxies. Observations at other EM wavelengths, such as radio waves and X rays, have similarly confirmed the existence of dark matter. Take, for example, X rays in the relatively dark space between galaxies, which indicate the presence of previously unobserved hot, ionized gas (see (c)). ### Theoretical Yearnings for Closure Is the universe open or closed? That is, will the universe expand forever or will it stop, perhaps to contract?
This, until recently, was a question of whether there is enough gravitation to stop the expansion of the universe. In the past few years, it has become a question of the combination of gravitation and what is called the cosmological constant. The cosmological constant was invented by Einstein to prohibit the expansion or contraction of the universe. At the time he developed general relativity, Einstein considered an expanding or contracting universe an illogical possibility. The cosmological constant was discarded after Hubble discovered the expansion, but has been re-invoked in recent years. Gravitational attraction between galaxies is slowing the expansion of the universe, but the amount of slowing down is not known directly. In fact, the cosmological constant can counteract gravity's effect. As recent measurements indicate, the universe is expanding faster now than in the past—perhaps a "modern inflationary era" in which the dark energy is thought to be causing the expansion of the present-day universe to accelerate. If the expansion rate were affected by gravity alone, we should be able to see that the expansion rate between distant galaxies was once greater than it is now. However, measurements show it was less than now. We can, however, calculate the amount of slowing based on the average density of matter we observe directly. Here we have a definite answer—there is far less visible matter than needed to stop expansion. The critical density, $\rho_c$, is defined to be the density needed to just halt universal expansion in a universe with no cosmological constant. It is estimated to be about $10^{-26}\ \text{kg/m}^3$. However, this estimate of $\rho_c$ is only good to about a factor of two, due to uncertainties in the expansion rate of the universe. The critical density is equivalent to an average of only a few nucleons per cubic meter, remarkably small and indicative of how truly empty intergalactic space is. Luminous matter seems to account for only a small fraction of the critical density, far less than that needed for closure. Taking into account the amount of dark matter we detect indirectly and all other types of indirectly observed normal matter, there is still only a fraction of what is needed for closure. If we are able to refine the measurements of expansion rates now and in the past, we will have our answer regarding the curvature of space and we will determine a value for the cosmological constant to justify this observation. Finally, the most recent measurements of the CMBR have implications for the cosmological constant, so it is not simply a device concocted for a single purpose. After the recent experimental discovery of the cosmological constant, most researchers feel that the universe should be just barely open. Since matter can be thought to curve the space around it, we call an open universe negatively curved. This means that you can in principle travel an unlimited distance in any direction. A universe that is closed is called positively curved. This means that if you travel far enough in any direction, you will return to your starting point, analogous to circumnavigating the Earth. In between these two is a flat (zero curvature) universe. The recent discovery of the cosmological constant has shown the universe is very close to flat, and will expand forever. Why do theorists feel the universe is flat? Flatness is a part of the inflationary scenario that helps explain the flatness of the microwave background. In fact, since general relativity implies that matter creates the space in which it exists, there is a special symmetry to a flat universe.
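The critical density quoted above follows from the expansion rate itself through the standard relation $\rho_c = 3H_0^2/(8\pi G)$, stated here without derivation. The sketch below assumes a representative Hubble constant of about 70 km/s per megaparsec—an illustrative value, not one quoted in this text—and reproduces both the $10^{-26}\ \text{kg/m}^3$ estimate and the statement that this amounts to only a few nucleons per cubic meter.

```python
# Critical density of the universe, rho_c = 3 * H0^2 / (8 * pi * G).
import math

G = 6.674e-11          # gravitational constant, SI units
Mpc = 3.086e22         # meters in a megaparsec
H0 = 70e3 / Mpc        # assumed Hubble constant, ~70 km/s/Mpc, converted to 1/s
m_nucleon = 1.67e-27   # approximate nucleon mass, kg

rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density: {rho_c:.1e} kg/m^3")                       # ~1e-26 kg/m^3
print(f"about {rho_c / m_nucleon:.1f} nucleons per cubic meter")
# Because rho_c depends on the square of the expansion rate, a factor-of-two
# uncertainty in H0 changes this estimate by about a factor of four.
```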
### What Is the Dark Matter We See Indirectly? There is no doubt that dark matter exists, but its form and the amount in existence are two facts that are still being studied vigorously. As always, we seek to explain new observations in terms of known principles. However, as more discoveries are made, it is becoming more and more difficult to explain dark matter as a known type of matter. One of the possibilities for normal matter is being explored using the Hubble Space Telescope and employing the lensing effect of gravity on light (see ). Stars glow because of nuclear fusion in them, but planets are visible primarily by reflected light. Jupiter, for example, is too small to ignite fusion in its core and become a star, but we can see sunlight reflected from it, since we are relatively close. If Jupiter orbited another star, we would not be able to see it directly. The question is open as to how many planets or other bodies smaller than about 1/1000 the mass of the Sun there are. If such bodies pass between us and a star, they will not block the star's light, being too small, but they will form a gravitational lens, as discussed in General Relativity and Quantum Gravity. In a process called microlensing, light from the star is focused and the star appears to brighten in a characteristic manner. Searches for dark matter in this form focus particularly on galactic halos because of the huge amount of mass that seems to be there. Such microlensing objects are thus called massive compact halo objects, or MACHOs. To date, a few MACHOs have been observed, but not predominantly in galactic halos, nor in the numbers needed to explain dark matter. MACHOs are among the most conventional of unseen objects proposed to explain dark matter. Others being actively pursued are red dwarfs, which are small dim stars, but too few have been seen so far, even with the Hubble Telescope, to be of significance. Old remnants of stars called white dwarfs are also under consideration, since they contain about a solar mass, but are as small as the Earth and may dim to the point that we ordinarily do not observe them. While white dwarfs are known, old dim ones are not. Yet another possibility is the existence of large numbers of smaller-than-stellar-mass black holes left from the Big Bang—here evidence is entirely absent. There is a very real possibility that dark matter is composed of the known neutrinos, which may have small, but finite, masses. As discussed earlier, neutrinos were long thought to be massless, but we only have upper limits on their masses, rather than knowing they are exactly zero. So far, these upper limits come from difficult measurements of total energy emitted in the decays and reactions in which neutrinos are involved. There is an amusing possibility of proving that neutrinos have mass in a completely different way. We have noted in Particles, Patterns, and Conservation Laws that there are three flavors of neutrinos ($\nu_e$, $\nu_\mu$, and $\nu_\tau$) and that the weak interaction could change quark flavor. It should also change neutrino flavor—that is, any type of neutrino could change spontaneously into any other, a process called neutrino oscillations. However, this can occur only if neutrinos have a mass. Why? Crudely, because if neutrinos are massless, they must travel at the speed of light and time will not pass for them, so that they cannot change without an interaction. In 1999, results began to be published containing convincing evidence that neutrino oscillations do occur.
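A quantitative feel for what such oscillations mean comes from the standard two-flavor approximation, $P = \sin^2(2\theta)\,\sin^2(1.27\,\Delta m^2 L/E)$, with $\Delta m^2$ in eV², $L$ in km, and $E$ in GeV. The short sketch below uses illustrative parameter values—assumptions chosen for the example, not numbers taken from this text—simply to show how the probability of a flavor change varies with the distance traveled.

```python
# Two-flavor neutrino oscillation probability (illustrative parameters only).
import math

def oscillation_probability(L_km, E_GeV, delta_m2_eV2, sin2_2theta):
    """P(flavor a -> flavor b) = sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E)."""
    return sin2_2theta * math.sin(1.27 * delta_m2_eV2 * L_km / E_GeV) ** 2

# Assumed, roughly atmospheric-scale values for the example:
delta_m2 = 2.5e-3    # mass-squared difference, eV^2
sin2_2theta = 1.0    # near-maximal mixing
E = 1.0              # neutrino energy, GeV

for L in (10, 100, 1000, 10000):   # path lengths in km
    print(f"L = {L:5d} km:  P = {oscillation_probability(L, E, delta_m2, sin2_2theta):.2f}")
# If the mass-squared difference were zero, P would be zero at every distance—
# which is why observing oscillations implies that neutrinos have mass.
```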
Using the Super-Kamiokande detector in Japan, physicists have observed the oscillations, which are being verified and further explored at present at the same facility and others. Neutrino oscillations may also explain the low number of observed solar neutrinos. Detectors for observing solar neutrinos are specifically designed to detect electron neutrinos produced in huge numbers by fusion in the Sun. A large fraction of electron neutrinos may be changing flavor to muon neutrinos on their way out of the Sun, possibly enhanced by specific interactions, reducing the flux of electron neutrinos to observed levels. There is also a discrepancy in observations of neutrinos produced in cosmic ray showers. While these showers of radiation produced by extremely energetic cosmic rays should contain twice as many $\nu_\mu$s as $\nu_e$s, their numbers are nearly equal. This may be explained by neutrino oscillations from muon flavor to tau flavor. Massive neutrinos are a particularly appealing possibility for explaining dark matter, since their existence is consistent with a large body of known information and explains more than dark matter. The question is not settled at this writing. The most radical proposal to explain dark matter is that it consists of previously unknown leptons (sometimes obtusely referred to as non-baryonic matter). These are called weakly interacting massive particles, or WIMPs, and would also be chargeless, thus interacting negligibly with normal matter, except through gravitation. One proposed group of WIMPs would have masses several orders of magnitude greater than nucleons and are sometimes called neutralinos. Others are called axions and would have masses only a tiny fraction of an electron mass. Both neutralinos and axions would be gravitationally attached to galaxies, but because they are chargeless and only feel the weak force, they would be in a halo rather than interact and coalesce into spirals, and so on, like normal matter (see ). Some particle theorists have built WIMPs into their unified force theories and into the inflationary scenario of the evolution of the universe so popular today. These particles would have been produced in just the correct numbers to make the universe flat, shortly after the Big Bang. The proposal is radical in the sense that it invokes entirely new forms of matter, in fact two entirely new forms, in order to explain dark matter and other phenomena. WIMPs have the extra burden of automatically being very difficult to observe directly. This is somewhat analogous to quark confinement, which guarantees that quarks are there, but they can never be seen directly. One of the primary goals of the LHC at CERN, however, is to produce and detect WIMPs. At any rate, before WIMPs are accepted as the best explanation, all other possibilities utilizing known phenomena will have to be shown inferior. Should that occur, we will be in the unanticipated position of admitting that, to date, all we know is only 10% of what exists. A far cry from the days when people firmly believed themselves to be not only the center of the universe, but also the reason for its existence. ### Section Summary 1. Dark matter is non-luminous matter detected in and around galaxies and galactic clusters. 2. It may be 10 times the mass of the luminous matter in the universe, and its amount may determine whether the universe is open or closed (expands forever or eventually stops). 3.
The determining factor is the critical density of the universe and the cosmological constant, a theoretical construct intimately related to the expansion and closure of the universe. 4. The critical density $\rho_c$ is the density needed to just halt universal expansion. It is estimated to be approximately $10^{-26}\ \text{kg/m}^3$. 5. An open universe is negatively curved, a closed universe is positively curved, whereas a universe with exactly the critical density is flat. 6. Dark matter's composition is a major mystery, but it may be due to the suspected mass of neutrinos or a completely unknown type of leptonic matter. 7. If neutrinos have mass, they will change families, a process known as neutrino oscillations, for which there is growing evidence. ### Conceptual Questions ### Problems & Exercises
# Frontiers of Physics ## Complexity and Chaos ### Learning Objectives By the end of this section, you will be able to: 1. Explain complex systems. 2. Discuss chaotic behavior of different systems. Much of what impresses us about physics is related to the underlying connections and basic simplicity of the laws we have discovered. The language of physics is precise and well defined because many basic systems we study are simple enough that we can perform controlled experiments and discover unambiguous relationships. Our most spectacular successes, such as the prediction of previously unobserved particles, come from the simple underlying patterns we have been able to recognize. But there are systems of interest to physicists that are inherently complex. The simple laws of physics apply, of course, but complex systems may reveal patterns that simple systems do not. The emerging field of complexity is devoted to the study of complex systems, including those outside the traditional bounds of physics. Of particular interest is the ability of complex systems to adapt and evolve. What are some examples of complex adaptive systems? One is the primordial ocean. When the oceans first formed, they were a random mix of elements and compounds that obeyed the laws of physics and chemistry. In a relatively short geological time (about 500 million years), life had emerged. Laboratory simulations indicate that the emergence of life was far too fast to have come from random combinations of compounds, even if driven by lightning and heat. There must be an underlying ability of the complex system to organize itself, resulting in the self-replication we recognize as life. Living entities, even at the unicellular level, are highly organized and systematic. Systems of living organisms are themselves complex adaptive systems. The grandest of these evolved into the biological system we have today, leaving traces in the geological record of steps taken along the way. Complexity as a discipline examines complex systems, how they adapt and evolve, looking for similarities with other complex adaptive systems. Can, for example, parallels be drawn between biological evolution and the evolution of economic systems? Economic systems do emerge quickly, they show tendencies for self-organization, they are complex (in the number and types of transactions), and they adapt and evolve. Biological systems do all the same types of things. There are other examples of complex adaptive systems being studied for fundamental similarities. Cultures show signs of adaptation and evolution. The comparison of different cultural evolutions may bear fruit as well as comparisons to biological evolution. Science also is a complex system of human interactions, like culture and economics, that adapts to new information and political pressure, and evolves, usually becoming more organized rather than less. Those who study creative thinking also see parallels with complex systems. Humans sometimes organize almost random pieces of information, often subconsciously while doing other things, and come up with brilliant creative insights. The development of language is another complex adaptive system that may show similar tendencies. Artificial intelligence is an overt attempt to devise an adaptive system that will self-organize and evolve in the same manner as an intelligent living being learns. These are a few of the broad range of topics being studied by those who investigate complexity. 
There are now institutes, journals, and meetings, as well as popularizations of the emerging topic of complexity. In traditional physics, the discipline of complexity may yield insights in certain areas. Thermodynamics treats systems on the average, while statistical mechanics deals in some detail with complex systems of atoms and molecules in random thermal motion. Yet there is organization, adaptation, and evolution in those complex systems. Non-equilibrium phenomena, such as heat transfer and phase changes, are characteristically complex in detail, and new approaches to them may evolve from complexity as a discipline. Crystal growth is another example of self-organization spontaneously emerging in a complex system. Alloys are also inherently complex mixtures that show certain simple characteristics implying some self-organization. The organization of iron atoms into magnetic domains as they cool is another. Perhaps insights into these difficult areas will emerge from complexity. But at the minimum, the discipline of complexity is another example of human effort to understand and organize the universe around us, partly rooted in the discipline of physics. A predecessor to complexity is the topic of chaos, which has been widely publicized and has become a discipline of its own. It is also based partly in physics and treats broad classes of phenomena from many disciplines. Chaos is a word used to describe systems whose outcomes are extremely sensitive to initial conditions. The orbit of the planet Pluto, for example, may be chaotic in that it can change tremendously due to small interactions with other planets. This makes its long-term behavior impossible to predict with precision, just as we cannot tell precisely where a decaying Earth satellite will land or how many pieces it will break into. But the discipline of chaos has found ways to deal with such systems and has been applied to apparently unrelated systems. For example, the heartbeat of people with certain types of potentially lethal arrhythmias seems to be chaotic, and this knowledge may allow more sophisticated monitoring and recognition of the need for intervention. Chaos is related to complexity. Some chaotic systems are also inherently complex; for example, vortices in a fluid as opposed to a double pendulum. Both are chaotic and not predictable in the same sense as other systems. But there can be organization in chaos and it can also be quantified. Examples of chaotic systems are beautiful fractal patterns such as in . Some chaotic systems exhibit self-organization, a type of stable chaos. The orbits of the planets in our solar system, for example, may be chaotic (we are not certain yet). But they are definitely organized and systematic, with a simple formula describing the orbital radii of the first eight planets and the asteroid belt. Large-scale vortices in Jupiter’s atmosphere are chaotic, but the Great Red Spot is a stable self-organization of rotational energy. (See .) The Great Red Spot has been in existence for at least 400 years and is a complex self-adaptive system. The emerging field of complexity, like the now almost traditional field of chaos, is partly rooted in physics. Both attempt to see similar systematics in a very broad range of phenomena and, hence, generate a better understanding of them. Time will tell what impact these fields have on more traditional areas of physics as well as on the other disciplines they relate to. ### Section Summary 1. 
Complexity is an emerging field, rooted primarily in physics, that considers complex adaptive systems and their evolution, including self-organization. 2. Complexity has applications in physics and many other disciplines, such as biological evolution. 3. Chaos is a field that studies systems whose properties depend extremely sensitively on some variables and whose evolution is impossible to predict. 4. Chaotic systems may be simple or complex. 5. Studies of chaos have led to methods for understanding and predicting certain chaotic behaviors. ### Conceptual Questions
# Frontiers of Physics ## High-temperature Superconductors ### Learning Objectives By the end of this section, you will be able to: 1. Identify superconductors and their uses. 2. Discuss the need for a high-$T_c$ superconductor. Superconductors are materials with a resistivity of zero. They are familiar to the general public because of their practical applications and have been mentioned at a number of points in the text. Because the resistance of a piece of superconductor is zero, there are no heat losses for currents through them; they are used in magnets needing high currents, such as in MRI machines, and could cut energy losses in power transmission. But most superconductors must be cooled to temperatures only a few kelvin above absolute zero, a costly procedure limiting their practical applications. In the past decade, tremendous advances have been made in producing materials that become superconductors at relatively high temperatures. There is hope that room temperature superconductors may someday be manufactured. Superconductivity was discovered accidentally in 1911 by the Dutch physicist H. Kamerlingh Onnes (1853–1926) when he used liquid helium to cool mercury. Onnes had been the first person to liquefy helium a few years earlier and was surprised to observe the resistivity of a mediocre conductor like mercury drop to zero at a temperature of 4.2 K. We define the temperature at which and below which a material becomes a superconductor to be its critical temperature, denoted by $T_c$. (See .) Progress in understanding how and why a material becomes a superconductor was relatively slow, with the first workable theory coming in 1957. Certain other elements were also found to become superconductors, but all had $T_c$s less than 10 K, temperatures that are expensive to maintain. Although Onnes received a Nobel prize in 1913, it was primarily for his work with liquid helium. In 1986, a breakthrough was announced—a ceramic compound was found to have an unprecedented $T_c$ of 35 K. It looked as if much higher critical temperatures could be possible, and by early 1988 another ceramic (this of thallium, calcium, barium, copper, and oxygen) had been found to have a $T_c$ of 125 K (see .) The economic potential of perfect conductors saving electric energy is immense for $T_c$s above 77 K, since that is the temperature of liquid nitrogen. Although liquid helium has a boiling point of 4 K and can be used to make materials superconducting, it costs about $5 per liter. Liquid nitrogen boils at 77 K, but only costs about $0.30 per liter. There was general euphoria at the discovery of these complex ceramic superconductors, but this soon subsided with the sobering difficulty of forming them into usable wires. The first commercial use of a high temperature superconductor is in an electronic filter for cellular phones. High-temperature superconductors are used in experimental apparatus, and they are actively being researched, particularly in thin film applications. The search is on for even higher $T_c$ superconductors, many of them complex and exotic copper oxide ceramics, sometimes including strontium, mercury, or yttrium as well as barium, calcium, and other elements. Room temperature (about 293 K) would be ideal, but any temperature close to room temperature is relatively cheap to produce and maintain. There are persistent reports of $T_c$s over 200 K and some in the vicinity of 270 K. Unfortunately, these observations are not routinely reproducible, with samples losing their superconducting nature once heated and recooled (cycled) a few times (see .)
They are now called USOs or unidentified superconducting objects, out of frustration and the refusal of some samples to show a high $T_c$ even though produced in the same manner as others. Reproducibility is crucial to discovery, and researchers are justifiably reluctant to claim the breakthrough they all seek. Time will tell whether USOs are real or an experimental quirk. The theory of ordinary superconductors is difficult, involving quantum effects for widely separated electrons traveling through a material. Electrons couple in a manner that allows them to get through the material without losing energy to it, making it a superconductor. High-$T_c$ superconductors are more difficult to understand theoretically, but theorists seem to be closing in on a workable theory. The difficulty of understanding how electrons can sneak through materials without losing energy in collisions is even greater at higher temperatures, where vibrating atoms should get in the way. Discoverers of high-$T_c$ superconductors may feel something analogous to what a politician once said upon an unexpected election victory—"I wonder what we did right?" ### Section Summary 1. High-temperature superconductors are materials that become superconducting at temperatures well above a few kelvin. 2. The critical temperature $T_c$ is the temperature below which a material is superconducting. 3. Some high-temperature superconductors have verified $T_c$s above 125 K, and there are reports of $T_c$s as high as 250 K. ### Conceptual Questions ### Problems & Exercises
# Frontiers of Physics ## Some Questions We Know to Ask ### Learning Objectives By the end of this section, you will be able to: 1. Identify sample questions to be asked on the largest scales. 2. Identify sample questions to be asked on the intermediate scale. 3. Identify sample questions to be asked on the smallest scales. Throughout the text we have noted how essential it is to be curious and to ask questions in order to first understand what is known, and then to go a little farther. Some questions may go unanswered for centuries; others may not have answers, but some bear delicious fruit. Part of discovery is knowing which questions to ask. You have to know something before you can even phrase a decent question. As you may have noticed, the mere act of asking a question can give you the answer. The following questions are a sample of those physicists now know to ask and are representative of the forefronts of physics. Although these questions are important, they will be replaced by others if answers are found to them. The fun continues. ### On the Largest Scale 1. Is the universe open or closed? Theorists would like it to be just barely closed and evidence is building toward that conclusion. Recent measurements of the expansion rate of the universe and of the CMBR support a flat universe. There is a connection to small-scale physics in the type and number of particles that may contribute to closing the universe. 2. What is dark matter? It is definitely there, but we really do not know what it is. Conventional possibilities are being ruled out, but one of them still may explain it. The answer could reveal whole new realms of physics and the disturbing possibility that most of what is out there is unknown to us, a completely different form of matter. 3. How do galaxies form? They have existed since very early in the evolution of the universe, and it remains difficult to understand how they evolved so quickly. The recent finer measurements of fluctuations in the CMBR may yet allow us to explain galaxy formation. 4. What is the nature of various-mass black holes? Only recently have we become confident that many black hole candidates cannot be explained by other, less exotic possibilities. But we still do not know much about how they form, what their role in the history of galactic evolution has been, and the nature of space in their vicinity. However, so many black holes are now known that correlations between black hole mass and galactic nuclei characteristics are being studied. 5. What is the mechanism for the energy output of quasars? These distant and extraordinarily energetic objects now seem to be early stages of galactic evolution with a supermassive black hole devouring material. Connections are now being made with galaxies having energetic cores, and there is evidence consistent with supermassive black holes that consume less material at the centers of older galaxies. New instruments are allowing us to see deeper into our own galaxy for evidence of our own massive black hole. 6. Where do the γ-ray bursts come from? We see bursts of γ rays coming from all directions in space, indicating the sources are very distant objects rather than something associated with our own galaxy. Some bursts finally are being correlated with known sources so that the possibility they may originate in binary neutron star interactions or black holes eating a companion neutron star can be explored. ### On the Intermediate Scale 1. How do phase transitions take place on the microscopic scale?
We know a lot about phase transitions, such as water freezing, but the details of how they occur molecule by molecule are not well understood. Similar questions about specific heat a century ago led to early quantum mechanics. It is also an example of a complex adaptive system that may yield insights into other self-organizing systems. 2. Is there a way to deal with nonlinear phenomena that reveals underlying connections? Nonlinear phenomena lack a direct or linear proportionality that makes analysis and understanding a little easier. There are implications for nonlinear optics and broader topics such as chaos. 3. How do high-$T_c$ superconductors work? Understanding how they work may help make them more practical or may result in surprises as unexpected as the discovery of superconductivity itself. 4. There are magnetic effects in materials we do not understand—how do they work? Although beyond the scope of this text, there is a great deal to learn in condensed matter physics (the physics of solids and liquids). We may find surprises analogous to lasing, the quantum Hall effect, and the quantization of magnetic flux. Complexity may play a role here, too. ### On the Smallest Scale 1. Are quarks and leptons fundamental, or do they have a substructure? The higher-energy accelerators just completed or under construction may supply some answers, but there will also be input from cosmology and other systematics. 2. Why do leptons have integral charge while quarks have fractional charge? If both are fundamental and analogous as thought, this question deserves an answer. It is obviously related to the previous question. 3. Why are there three families of quarks and leptons? First, does this imply some relationship? Second, why three and only three families? 4. Are all forces truly equal (unified) under certain circumstances? They don't have to be equal just because we want them to be. The answer may have to be indirectly obtained because of the extreme energy at which we think they are unified. 5. Are there other fundamental forces? There was a flurry of activity with claims of a fifth and even a sixth force a few years ago. Interest has subsided, since those forces have not been detected consistently. Moreover, the proposed forces have strengths similar to gravity, making them extraordinarily difficult to detect in the presence of stronger forces. But the question remains; and if there are no other forces, we need to ask why only four and why these four. 6. Is the proton stable? We have discussed this in some detail, but the question is related to fundamental aspects of the unification of forces. We may never know from experiment that the proton is stable, only that it is very long lived. 7. Are there magnetic monopoles? Many particle theories call for very massive individual north- and south-pole particles—magnetic monopoles. If they exist, why are they so different in mass and elusiveness from electric charges, and if they do not exist, why not? 8. Do neutrinos have mass? Definitive evidence has emerged for neutrinos having mass. The implications are significant, as discussed in this chapter. There are effects on the closure of the universe and on the patterns in particle physics. 9. What are the systematic characteristics of high-$Z$ nuclides? All elements with $Z = 118$ or less (with the exception of 115 and 117) have now been discovered.
It has long been conjectured that there may be an island of relative stability for certain superheavy nuclei, and the study of the most recently discovered nuclei will contribute to our understanding of nuclear forces. These lists of questions are not meant to be complete or consistently important—you can no doubt add to them yourself. There are also important questions in topics not broached in this text, such as certain particle symmetries, that are of current interest to physicists. Hopefully, the point is clear that no matter how much we learn, there always seems to be more to know. Although we are fortunate to have the hard-won wisdom of those who preceded us, we can look forward to new enlightenment, undoubtedly sprinkled with surprise. ### Section Summary 1. On the largest scale, the questions that can be asked may be about dark matter, dark energy, black holes, quasars, and other aspects of the universe. 2. On the intermediate scale, we can query about gravity, phase transitions, nonlinear phenomena, high-$T_c$ superconductors, and magnetic effects on materials. 3. On the smallest scale, questions may be about quarks and leptons, fundamental forces, stability of protons, and existence of monopoles. ### Conceptual Questions
# Introduction: The Nature of Science and Physics ## Connection for AP® Courses What is your first reaction when you hear the word “physics”? Did you imagine working through difficult equations or memorizing formulas that seem to have no real use in life outside the physics classroom? Many people come to the subject of physics with a bit of fear. But as you begin your exploration of this broad-ranging subject, you may soon come to realize that physics plays a much larger role in your life than you first thought, no matter your life goals or career choice. For example, take a look at the image above. This image is of the Andromeda Galaxy, which contains billions of individual stars, huge clouds of gas, and dust. Two smaller galaxies are also visible as bright blue spots in the background. At a staggering 2.5 million light years from Earth, this galaxy is the nearest major galaxy to our own galaxy (which is called the Milky Way). The stars and planets that make up Andromeda might seem to be the furthest thing from most people's regular, everyday lives. But Andromeda is a great starting point to think about the forces that hold together the universe. The forces that cause Andromeda to act as it does are the same forces we contend with here on Earth, whether we are planning to send a rocket into space or simply raise the walls for a new home. The same gravity that causes the stars of Andromeda to rotate and revolve also causes water to flow over hydroelectric dams here on Earth. Tonight, take a moment to look up at the stars. The forces out there are the same as the ones here on Earth. Through a study of physics, you may gain a greater understanding of the interconnectedness of everything we can see and know in this universe. Think now about all of the technological devices that you use on a regular basis. Computers, smart phones, GPS systems, MP3 players, and satellite radio might come to mind. Next, think about the most exciting modern technologies that you have heard about in the news, such as trains that levitate above tracks, “invisibility cloaks” that bend light around them, and microscopic robots that fight cancer cells in our bodies. All of these groundbreaking advancements, commonplace or unbelievable, rely on the principles of physics. Aside from playing a significant role in technology, professionals such as engineers, pilots, physicians, physical therapists, electricians, and computer programmers apply physics concepts in their daily work. For example, a pilot must understand how wind forces affect a flight path and a physical therapist must understand how the muscles in the body experience forces as they move and bend. As you will learn in this text, physics principles are propelling new, exciting technologies, and these principles are applied in a wide range of careers. In this text, you will begin to explore the history of the formal study of physics, beginning with natural philosophy and the ancient Greeks, and leading up through a review of Sir Isaac Newton and the laws of physics that bear his name. You will also be introduced to the standards scientists use when they study physical quantities and the interrelated system of measurements most of the scientific community uses to communicate in a single mathematical language. Finally, you will study the limits of our ability to be accurate and precise, and the reasons scientists go to painstaking lengths to be as clear as possible regarding their own limitations.
Chapter 1 introduces many fundamental skills and understandings needed for success with the AP® Learning Objectives. While this chapter does not directly address any Big Ideas, its content will allow for a more meaningful understanding when these Big Ideas are addressed in future chapters. For instance, the discussion of models, theories, and laws will assist you in understanding the concept of fields as addressed in Big Idea 2, and the section titled 'The Evolution of Natural Philosophy into Modern Physics' will help prepare you for the statistical topics addressed in Big Idea 7. This chapter will also prepare you to understand the Science Practices. In explicitly addressing the role of models in representing and communicating scientific phenomena, Section 1.1 supports Science Practice 1. Additionally, anecdotes about historical investigations and the inset on the scientific method will help you to engage in the scientific questioning referenced in Science Practice 3. The appropriate use of mathematics, as called for in Science Practice 2, is a major focus throughout Sections 1.2, 1.3, and 1.4.
# Introduction: The Nature of Science and Physics ## Physics: An Introduction ### Learning Objectives By the end of this section, you will be able to: 1. Explain the difference between a principle and a law. 2. Explain the difference between a model and a theory. The physical universe is enormously complex in its detail. Every day, each of us observes a great variety of objects and phenomena. Over the centuries, the curiosity of the human race has led us collectively to explore and catalog a tremendous wealth of information. From the flight of birds to the colors of flowers, from lightning to gravity, from quarks to clusters of galaxies, from the flow of time to the mystery of the creation of the universe, we have asked questions and assembled huge arrays of facts. In the face of all these details, we have discovered that a surprisingly small and unified set of physical laws can explain what we observe. As humans, we make generalizations and seek order. We have found that nature is remarkably cooperative—it exhibits the underlying order and simplicity we so value. It is the underlying order of nature that makes science in general, and physics in particular, so enjoyable to study. For example, what do a bag of chips and a car battery have in common? Both contain energy that can be converted to other forms. The law of conservation of energy (which says that energy can change form but is never lost) ties together such topics as food calories, batteries, heat, light, and watch springs. Understanding this law makes it easier to learn about the various forms energy takes and how they relate to one another. Apparently unrelated topics are connected through broadly applicable physical laws, permitting an understanding beyond just the memorization of lists of facts. The unifying aspect of physical laws and the basic simplicity of nature form the underlying themes of this text. In learning to apply these laws, you will, of course, study the most important topics in physics. More importantly, you will gain analytical abilities that will enable you to apply these laws far beyond the scope of what can be included in a single book. These analytical skills will help you to excel academically, and they will also help you to think critically in any professional career you choose to pursue. This module discusses the realm of physics (to define what physics is), some applications of physics (to illustrate its relevance to other disciplines), and more precisely what constitutes a physical law (to illuminate the importance of experimentation to theory). ### Science and the Realm of Physics Science consists of the theories and laws that are the general truths of nature as well as the body of knowledge they encompass. Scientists are continually trying to expand this body of knowledge and to perfect the expression of the laws that describe it. Physics is concerned with describing the interactions of energy, matter, space, and time, and it is especially interested in what fundamental mechanisms underlie every phenomenon. The concern for describing the basic phenomena in nature essentially defines the realm of physics. Physics aims to describe the function of everything around us, from the movement of tiny charged particles to the motion of people, cars, and spaceships. In fact, almost everything around you can be described quite accurately by the laws of physics. Consider a smart phone (). Physics describes how electricity interacts with the various circuits inside the device. 
This knowledge helps engineers select the appropriate materials and circuit layout when building the smart phone. Next, consider a GPS system. Physics describes the relationship between the speed of an object, the distance over which it travels, and the time it takes to travel that distance. GPS relies on precise calculations that account for variations in the Earth's landscapes, the exact distance between orbiting satellites, and even the effect of a complex occurrence of time dilation. Most of these calculations are founded on algorithms developed by Gladys West, a mathematician and computer scientist who programmed the first computers capable of highly accurate remote sensing and positioning. When you use a GPS device, it utilizes these algorithms to recognize where you are and how your position relates to other objects on Earth. ### Applications of Physics You need not be a scientist to use physics. On the contrary, knowledge of physics is useful in everyday situations as well as in nonscientific professions. It can help you understand how microwave ovens work, why metals should not be put into them, and why they might affect pacemakers. (See and .) Physics allows you to understand the hazards of radiation and rationally evaluate these hazards more easily. Physics also explains the reason why a black car radiator helps remove heat in a car engine, and it explains why a white roof helps keep the inside of a house cool. Similarly, the operation of a car’s ignition system as well as the transmission of electrical signals through our body’s nervous system are much easier to understand when you think about them in terms of basic physics. Physics is the foundation of many important disciplines and contributes directly to others. Chemistry, for example—since it deals with the interactions of atoms and molecules—is rooted in atomic and molecular physics. Most branches of engineering are applied physics. In architecture, physics is at the heart of structural stability, and is involved in the acoustics, heating, lighting, and cooling of buildings. Parts of geology rely heavily on physics, such as radioactive dating of rocks, earthquake analysis, and heat transfer in the Earth. Some disciplines, such as biophysics and geophysics, are hybrids of physics and other disciplines. Physics has many applications in the biological sciences. On the microscopic level, it helps describe the properties of cell walls and cell membranes ( and ). On the macroscopic level, it can explain the heat, work, and power associated with the human body. Physics is involved in medical diagnostics, such as x-rays, magnetic resonance imaging (MRI), and ultrasonic blood flow measurements. Medical therapy sometimes directly involves physics; for example, cancer radiotherapy uses ionizing radiation. Physics can also explain sensory phenomena, such as how musical instruments make sound, how the eye detects color, and how lasers can transmit information. It is not necessary to formally study all applications of physics. What is most useful is knowledge of the basic laws of physics and a skill in the analytical methods for applying them. The study of physics also can improve your problem-solving skills. Furthermore, physics has retained the most basic aspects of science, so it is used by all of the sciences, and the study of physics makes other sciences easier to understand. 
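To make the speed–distance–time relationship in the GPS discussion concrete, here is a minimal sketch in code of the timing idea behind satellite ranging. The signal travel time used below is an assumed, illustrative value rather than real GPS data, and the relativistic corrections mentioned above are omitted.

```python
# Minimal sketch of GPS-style ranging: distance = speed x time.
# travel_time is an assumed, illustrative value (not real GPS data).
c = 299_792_458             # speed of light in m/s (exact, by definition of the meter)
travel_time = 0.070         # assumed one-way signal travel time, in seconds
distance = c * travel_time  # satellite-to-receiver distance, in meters

print(f"distance = {distance / 1000:.0f} km")   # about 21,000 km
```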
### Models, Theories, and Laws; The Role of Experimentation The laws of nature are concise descriptions of the universe around us; they are human statements of the underlying laws or rules that all natural processes follow. Such laws are intrinsic to the universe; humans did not create them and so cannot change them. We can only discover and understand them. Their discovery is a very human endeavor, with all the elements of mystery, imagination, struggle, triumph, and disappointment inherent in any creative effort. (See and .) The cornerstone of discovering natural laws is observation; science must describe the universe as it is, not as we may imagine it to be. We all are curious to some extent. We look around, make generalizations, and try to understand what we see—for example, we look up and wonder whether one type of cloud signals an oncoming storm. As we become serious about exploring nature, we become more organized and formal in collecting and analyzing data. We attempt greater precision, perform controlled experiments (if we can), and write down ideas about how the data may be organized and unified. We then formulate models, theories, and laws based on the data we have collected and analyzed to generalize and communicate the results of these experiments. A model is a representation of something that is often too difficult (or impossible) to display directly. While a model is justified with experimental proof, it is only accurate under limited situations. An example is the planetary model of the atom in which electrons are pictured as orbiting the nucleus, analogous to the way planets orbit the Sun. (See .) We cannot observe electron orbits directly, but the mental image helps explain the observations we can make, such as the emission of light from hot gases (atomic spectra). Physicists use models for a variety of purposes. For example, models can help physicists analyze a scenario and perform a calculation, or they can be used to represent a situation in the form of a computer simulation. A theory is an explanation for patterns in nature that is supported by scientific evidence and verified multiple times by various groups of researchers. Some theories include models to help visualize phenomena, whereas others do not. Newton’s theory of gravity, for example, does not require a model or mental image, because we can observe the objects directly with our own senses. The kinetic theory of gases, on the other hand, is a model in which a gas is viewed as being composed of atoms and molecules. Atoms and molecules are too small to be observed directly with our senses—thus, we picture them mentally to understand what our instruments tell us about the behavior of gases. A law uses concise language to describe a generalized pattern in nature that is supported by scientific evidence and repeated experiments. Often, a law can be expressed in the form of a single mathematical equation. Laws and theories are similar in that they are both scientific statements that result from a tested hypothesis and are supported by scientific evidence. However, the designation law is reserved for a concise and very general statement that describes phenomena in nature, such as the law that energy is conserved during any process, or Newton’s second law of motion, which relates force, mass, and acceleration by the simple equation . A theory, in contrast, is a less concise statement of observed phenomena. 
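For concreteness, the two laws cited above can be written in their standard compact forms; the symbols follow the usual conventions, with $E$ the total energy of an isolated system, $F_{\text{net}}$ the net force, $m$ the mass, and $a$ the acceleration.

$$E_{\text{initial}} = E_{\text{final}} \qquad \text{and} \qquad F_{\text{net}} = ma.$$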
For example, the Theory of Evolution and the Theory of Relativity cannot be expressed concisely enough to be considered a law. The biggest difference between a law and a theory is that a theory is much more complex and dynamic. A law describes a single action, whereas a theory explains an entire group of related phenomena. And, whereas a law is a postulate that forms the foundation of the scientific method, a theory is the end result of that process. Less broadly applicable statements are usually called principles (such as Pascal’s principle, which is applicable only in fluids), but the distinction between laws and principles often is not carefully made. The models, theories, and laws we devise sometimes imply the existence of objects or phenomena as yet unobserved. These predictions are remarkable triumphs and tributes to the power of science. It is the underlying order in the universe that enables scientists to make such spectacular predictions. However, if experiment does not verify our predictions, then the theory or law is wrong, no matter how elegant or convenient it is. Laws can never be known with absolute certainty because it is impossible to perform every imaginable experiment in order to confirm a law in every possible scenario. Physicists operate under the assumption that all scientific laws and theories are valid until a counterexample is observed. If a good-quality, verifiable experiment contradicts a well-established law, then the law must be modified or overthrown completely. The study of science in general and physics in particular is an adventure much like the exploration of uncharted ocean. Discoveries are made; models, theories, and laws are formulated; and the beauty of the physical universe is made more sublime for the insights gained. ### The Evolution of Natural Philosophy into Modern Physics Physics was not always a separate and distinct discipline. It remains connected to other sciences to this day. The word physics comes from Greek, meaning nature. The study of nature came to be called “natural philosophy.” From ancient times through the Renaissance, natural philosophy encompassed many fields, including astronomy, biology, chemistry, physics, mathematics, and medicine. Over the last few centuries, the growth of knowledge has resulted in ever-increasing specialization and branching of natural philosophy into separate fields, with physics retaining the most basic facets. (See , , and .) Physics as it developed from the Renaissance to the end of the 19th century is called classical physics. It was transformed into modern physics by revolutionary discoveries made starting at the beginning of the 20th century. Classical physics is not an exact description of the universe, but it is an excellent approximation under the following conditions: Matter must be moving at speeds less than about 1% of the speed of light, the objects dealt with must be large enough to be seen with a microscope, and only weak gravitational fields, such as the field generated by the Earth, can be involved. Because humans live under such circumstances, classical physics seems intuitively reasonable, while many aspects of modern physics seem bizarre. This is why models are so useful in modern physics—they let us conceptualize phenomena we do not ordinarily experience. We can relate to models in human terms and visualize what happens when objects move at high speeds or imagine what objects too small to observe with our senses might be like. 
For example, we can understand an atom’s properties because we can picture it in our minds, although we have never seen an atom with our eyes. New tools, of course, allow us to better picture phenomena we cannot see. In fact, new instrumentation has allowed us in recent years to actually “picture” the atom. Some of the most spectacular advances in science have been made in modern physics. Many of the laws of classical physics have been modified or rejected, and revolutionary changes in technology, society, and our view of the universe have resulted. Like science fiction, modern physics is filled with fascinating objects beyond our normal experiences, but it has the advantage over science fiction of being very real. Why, then, is the majority of this text devoted to topics of classical physics? There are two main reasons: Classical physics gives an extremely accurate description of the universe under a wide range of everyday circumstances, and knowledge of classical physics is necessary to understand modern physics. Modern physics itself consists of the two revolutionary theories, relativity and quantum mechanics. These theories deal with the very fast and the very small, respectively. Relativity must be used whenever an object is traveling at greater than about 1% of the speed of light or experiences a strong gravitational field such as that near the Sun. Quantum mechanics must be used for objects smaller than can be seen with a microscope. The combination of these two theories is relativistic quantum mechanics, and it describes the behavior of small objects traveling at high speeds or experiencing a strong gravitational field. Relativistic quantum mechanics is the best universally applicable theory we have. Because of its mathematical complexity, it is used only when necessary, and the other theories are used whenever they will produce sufficiently accurate results. We will find, however, that we can do a great deal of modern physics with the algebra and trigonometry used in this text. ### Summary 1. Science seeks to discover and describe the underlying order and simplicity in nature. 2. Physics is the most basic of the sciences, concerning itself with energy, matter, space and time, and their interactions. 3. Scientific laws and theories express the general truths of nature and the body of knowledge they encompass. These laws of nature are rules that all natural processes appear to follow. ### Conceptual Questions
# Introduction: The Nature of Science and Physics ## Physical Quantities and Units ### Learning Objectives By the end of this section, you will be able to: 1. Perform unit conversions both in the SI and English units. 2. Explain the most common prefixes in the SI units and be able to write them in scientific notation. The range of objects and phenomena studied in physics is immense. From the incredibly short lifetime of a nucleus to the age of the Earth, from the tiny sizes of sub-nuclear particles to the vast distance to the edges of the known universe, from the force exerted by a jumping flea to the force between Earth and the Sun, there are enough factors of 10 to challenge the imagination of even the most experienced scientist. Giving numerical values for physical quantities and equations for physical principles allows us to understand nature much more deeply than does qualitative description alone. To comprehend these vast ranges, we must also have accepted units in which to express them. And we shall find that (even in the potentially mundane discussion of meters, kilograms, and seconds) a profound simplicity of nature appears—most physical quantities can be expressed as combinations of only four fundamental physical quantities: length, mass, time, and electric current. We define a physical quantity either by specifying how it is measured or by stating how it is calculated from other measurements. For example, we define distance and time by specifying methods for measuring them, whereas we define average speed by stating that it is calculated as distance traveled divided by time of travel. Measurements of physical quantities are expressed in terms of units, which are standardized values. For example, the length of a race, which is a physical quantity, can be expressed in units of meters (for sprinters) or kilometers (for distance runners). Without standardized units, it would be extremely difficult for scientists to express and compare measured values in a meaningful way. (See .) There are two major systems of units used in the world: SI units (also known as the metric system) and English units (also known as the customary or imperial system). English units were historically used in nations once ruled by the British Empire and are still widely used in the United States. Virtually every other country in the world now uses SI units as the standard; the metric system is also the standard system agreed upon by scientists and mathematicians. The acronym “SI” is derived from the French Système International. ### SI Units: Fundamental and Derived Units gives the fundamental SI units that are used throughout this textbook. This text uses non-SI units in a few applications where they are in very common use, such as the measurement of blood pressure in millimeters of mercury (mm Hg). Whenever non-SI units are discussed, they will be tied to SI units through conversions. It is an intriguing fact that some physical quantities are more fundamental than others and that the most fundamental physical quantities can be defined only in terms of the procedure used to measure them. The units in which they are measured are thus called fundamental units. In this textbook, the fundamental physical quantities are taken to be length, mass, time, and electric current. (Note that electric current will not be introduced until much later in this text.) 
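As a small illustration of defining a quantity through a calculation, the average speed just mentioned can be written in symbols; the numbers in the worked example are assumed, illustrative values.

$$\text{average speed} = \frac{\text{distance traveled}}{\text{time of travel}}, \qquad \text{e.g., } \frac{100\ \text{m}}{20\ \text{s}} = 5\ \text{m/s}.$$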
All other physical quantities, such as force and electric charge, can be expressed as algebraic combinations of length, mass, time, and current (for example, speed is length divided by time); these units are called derived units. ### Units of Time, Length, and Mass: The Second, Meter, and Kilogram ### The Second The SI unit for time, the second (abbreviated s), has a long history. For many years it was defined as 1/86,400 of a mean solar day. More recently, a new standard was adopted to gain greater accuracy and to define the second in terms of a non-varying, or constant, physical phenomenon (because the solar day is getting longer due to very gradual slowing of the Earth’s rotation). Cesium atoms can be made to vibrate in a very steady way, and these vibrations can be readily observed and counted. In 1967 the second was redefined as the time required for 9,192,631,770 of these vibrations. (See .) Accuracy in the fundamental units is essential, because all measurements are ultimately expressed in terms of fundamental units and can be no more accurate than are the fundamental units themselves. ### The Meter The SI unit for length is the meter (abbreviated m); its definition has also changed over time to become more accurate and precise. The meter was first defined in 1791 as 1/10,000,000 of the distance from the equator to the North Pole. This measurement was improved in 1889 by redefining the meter to be the distance between two engraved lines on a platinum-iridium bar now kept near Paris. By 1960, it had become possible to define the meter even more accurately in terms of the wavelength of light, so it was again redefined as 1,650,763.73 wavelengths of orange light emitted by krypton atoms. In 1983, the meter was given its present definition (partly for greater accuracy) as the distance light travels in a vacuum in 1/299,792,458 of a second. (See .) This change defines the speed of light to be exactly 299,792,458 meters per second. The length of the meter will change if the speed of light is someday measured with greater accuracy. ### The Kilogram The SI unit for mass is the kilogram (abbreviated kg); it was previously defined to be the mass of a platinum-iridium cylinder kept with the old meter standard at the International Bureau of Weights and Measures near Paris. Exact replicas of the previously defined kilogram are also kept at the United States’ National Institute of Standards and Technology, or NIST, located in Gaithersburg, Maryland outside of Washington D.C., and at other locations around the world. The determination of all other masses could be ultimately traced to a comparison with the standard mass. Even though the platinum-iridium cylinder was resistant to corrosion, airborne contaminants were able to adhere to its surface, slightly changing its mass over time. In May 2019, the scientific community adopted a more stable definition of the kilogram. The kilogram is now defined in terms of the second, the meter, and Planck's constant, h (a quantum mechanical value that relates a photon's energy to its frequency). Electric current and its accompanying unit, the ampere, will be introduced in Electric Current, Resistance, and Ohm's Law when electricity and magnetism are covered. The initial modules in this textbook are concerned with mechanics, fluids, heat, and waves. In these subjects all pertinent physical quantities can be expressed in terms of the fundamental units of length, mass, and time. ### Metric Prefixes SI units are part of the metric system. 
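Two small numerical checks follow directly from the definitions above: the duration of a single cesium vibration, and the speed of light implied by the 1983 meter definition.

$$T_{\text{Cs}} = \frac{1}{9{,}192{,}631{,}770}\ \text{s} \approx 1.09 \times 10^{-10}\ \text{s}, \qquad c = \frac{1\ \text{m}}{\left(1/299{,}792{,}458\right)\ \text{s}} = 299{,}792{,}458\ \text{m/s (exact)}.$$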
The metric system is convenient for scientific and engineering calculations because the units are categorized by factors of 10. gives metric prefixes and symbols used to denote various factors of 10. Metric systems have the advantage that conversions of units involve only powers of 10. There are 100 centimeters in a meter, 1000 meters in a kilometer, and so on. In nonmetric systems, such as the system of U.S. customary units, the relationships are not as simple—there are 12 inches in a foot, 5280 feet in a mile, and so on. Another advantage of the metric system is that the same unit can be used over extremely large ranges of values simply by using an appropriate metric prefix. For example, distances in meters are suitable in construction, while distances in kilometers are appropriate for air travel, and the tiny measure of nanometers is convenient in optical design. With the metric system there is no need to invent new units for particular applications. The term order of magnitude refers to the scale of a value expressed in the metric system. Each power of 10 represents a different order of magnitude; $10^1$, $10^2$, $10^3$, and so forth are all different orders of magnitude. All quantities that can be expressed as a product of a specific power of 10 are said to be of the same order of magnitude. For example, the number 800 can be written as $8 \times 10^2$, and the number 450 can be written as $4.5 \times 10^2$. Thus, the numbers 800 and 450 are of the same order of magnitude: $10^2$. Order of magnitude can be thought of as a ballpark estimate for the scale of a value. The diameter of an atom is on the order of $10^{-10}\ \text{m}$, while the diameter of the Sun is on the order of $10^{9}\ \text{m}$. ### Known Ranges of Length, Mass, and Time The vastness of the universe and the breadth over which physics applies are illustrated by the wide range of examples of known lengths, masses, and times in . Examination of this table will give you some feeling for the range of possible topics and numerical values. (See and .) ### Unit Conversion and Dimensional Analysis It is often necessary to convert from one type of unit to another. For example, if you are reading a European cookbook, some quantities may be expressed in units of liters and you need to convert them to cups. Or, perhaps you are reading walking directions from one location to another and you are interested in how many miles you will be walking. In this case, you will need to convert units of feet to miles. Let us consider a simple example of how to convert units. Let us say that we want to convert 80 meters (m) to kilometers (km). The first thing to do is to list the units that you have and the units that you want to convert to. In this case, we have units in meters and we want to convert to kilometers. Next, we need to determine a conversion factor relating meters to kilometers. A conversion factor is a ratio expressing how many of one unit are equal to another unit. For example, there are 12 inches in 1 foot, 100 centimeters in 1 meter, 60 seconds in 1 minute, and so on. In this case, we know that there are 1,000 meters in 1 kilometer. Now we can set up our unit conversion. We will write the units that we have and then multiply them by the conversion factor so that the units cancel out, as shown: $$80\ \text{m} \times \frac{1\ \text{km}}{1000\ \text{m}} = 0.080\ \text{km}.$$ Note that the unwanted m unit cancels, leaving only the desired km unit. You can use this method to convert between any types of unit. See Appendix C for a more complete list of conversion factors. ### Summary 1. Physical quantities are a characteristic or property of an object that can be measured or calculated from other measurements. 2. 
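The same factor-label conversion can be sketched in a few lines of code; the only physical input is the 1000 m per km conversion factor used above.

```python
# 80 m -> km, written as a factor-label calculation
meters = 80.0
km_per_m = 1 / 1000               # conversion factor: 1 km per 1000 m
kilometers = meters * km_per_m    # the "m" cancels, leaving km

print(f"{meters} m = {kilometers} km")   # 80.0 m = 0.08 km
```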
Units are standards for expressing and comparing the measurement of physical quantities. All units can be expressed as combinations of four fundamental units. 3. The four fundamental units we will use in this text are the meter (for length), the kilogram (for mass), the second (for time), and the ampere (for electric current). These units are part of the metric system, which uses powers of 10 to relate quantities over the vast ranges encountered in nature. 4. The four fundamental units are abbreviated as follows: meter, m; kilogram, kg; second, s; and ampere, A. The metric system also uses a standard set of prefixes to denote each order of magnitude greater than or lesser than the fundamental unit itself. 5. Unit conversions involve changing a value expressed in one type of unit to another type of unit. This is done by using conversion factors, which are ratios relating equal quantities of different units. ### Conceptual Questions ### Problems & Exercises
# Introduction: The Nature of Science and Physics ## Accuracy, Precision, and Significant Figures ### Learning Objectives By the end of this section, you will be able to: 1. Determine the appropriate number of significant figures in both addition and subtraction, as well as multiplication and division calculations. 2. Calculate the percent uncertainty of a measurement. ### Accuracy and Precision of a Measurement Science is based on observation and experiment—that is, on measurements. Accuracy is how close a measurement is to the correct value for that measurement. For example, let us say that you are measuring the length of standard computer paper. The packaging in which you purchased the paper states that it is 11.0 inches long. You measure the length of the paper three times and obtain the following measurements: 11.1 in., 11.2 in., and 10.9 in. These measurements are quite accurate because they are very close to the correct value of 11.0 inches. In contrast, if you had obtained a measurement of 12 inches, your measurement would not be very accurate. The precision of a measurement system refers to how close the agreement is between repeated measurements (which are repeated under the same conditions). Consider the example of the paper measurements. The precision of the measurements refers to the spread of the measured values. One way to analyze the precision of the measurements would be to determine the range, or difference, between the lowest and the highest measured values. In that case, the lowest value was 10.9 in. and the highest value was 11.2 in. Thus, the measured values deviated from each other by at most 0.3 in. These measurements were relatively precise because they did not vary too much in value. However, if the measured values had been 10.9, 11.1, and 11.9, then the measurements would not be very precise because there would be significant variation from one measurement to another. The measurements in the paper example are both accurate and precise, but in some cases, measurements are accurate but not precise, or they are precise but not accurate. Let us consider an example of a GPS system that is attempting to locate the position of a restaurant in a city. Think of the restaurant location as existing at the center of a bull’s-eye target, and think of each GPS attempt to locate the restaurant as a black dot. In , you can see that the GPS measurements are spread out far apart from each other, but they are centered close to the actual location of the restaurant at the center of the target. This indicates a low precision, high accuracy measuring system. However, in , the GPS measurements are concentrated quite closely to one another, but they are far away from the target location. This indicates a high precision, low accuracy measuring system. ### Accuracy, Precision, and Uncertainty The degree of accuracy and precision of a measuring system are related to the uncertainty in the measurements. Uncertainty is a quantitative measure of how much your measured values deviate from a standard or expected value. If your measurements are not very accurate or precise, then the uncertainty of your values will be very high. In more general terms, uncertainty can be thought of as a disclaimer for your measured values. For example, if someone asked you to provide the mileage on your car, you might say that it is 45,000 miles, plus or minus 500 miles. The plus or minus amount is the uncertainty in your value. 
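A quick numerical restatement of the paper-measurement example above, sketched in code: the spread of the three readings gauges precision, and the offset of their mean from the stated 11.0 in. gauges accuracy. Using the mean and the range as summary statistics is just one simple choice made for this sketch.

```python
# Paper-length readings from the example above (inches)
readings = [11.1, 11.2, 10.9]
expected = 11.0                          # length stated on the packaging

mean = sum(readings) / len(readings)     # central value of the readings
spread = max(readings) - min(readings)   # crude measure of precision (0.3 in.)
offset = abs(mean - expected)            # crude measure of accuracy

print(f"mean = {mean:.2f} in., spread = {spread:.1f} in., offset = {offset:.2f} in.")
```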
That is, you are indicating that the actual mileage of your car might be as low as 44,500 miles or as high as 45,500 miles, or anywhere in between. All measurements contain some amount of uncertainty. In our example of measuring the length of the paper, we might say that the length of the paper is 11 in., plus or minus 0.2 in. The uncertainty in a measurement, $A$, is often denoted as $\delta A$ (“delta $A$”), so the measurement result would be recorded as $A \pm \delta A$. In our paper example, the length of the paper could be expressed as $11\ \text{in.} \pm 0.2\ \text{in.}$ The factors contributing to uncertainty in a measurement include: 1. Limitations of the measuring device, 2. The skill of the person making the measurement, 3. Irregularities in the object being measured, 4. Any other factors that affect the outcome (highly dependent on the situation). In our example, such factors contributing to the uncertainty could be the following: the smallest division on the ruler is 0.1 in., the person using the ruler has bad eyesight, or one side of the paper is slightly longer than the other. At any rate, the uncertainty in a measurement must be based on a careful consideration of all the factors that might contribute and their possible effects. ### Percent Uncertainty One method of expressing uncertainty is as a percent of the measured value. If a measurement $A$ is expressed with uncertainty $\delta A$, the percent uncertainty (%unc) is defined to be $$\%\ \text{unc} = \frac{\delta A}{A} \times 100\%.$$ ### Uncertainties in Calculations There is an uncertainty in anything calculated from measured quantities. For example, the area of a floor calculated from measurements of its length and width has an uncertainty because the length and width have uncertainties. How big is the uncertainty in something you calculate by multiplication or division? If the measurements going into the calculation have small uncertainties (a few percent or less), then the method of adding percents can be used for multiplication or division. This method says that the percent uncertainty in a quantity calculated by multiplication or division is the sum of the percent uncertainties in the items used to make the calculation. For example, if a floor has a length of 4.00 m and a width of 3.00 m, with uncertainties of 2% and 1%, respectively, then the area of the floor is 12.0 m² and has an uncertainty of 3%. (Expressed as an area this is 0.36 m², which we round to 0.4 m² since the area of the floor is given to a tenth of a square meter.) ### Precision of Measuring Tools and Significant Figures An important factor in the accuracy and precision of measurements involves the precision of the measuring tool. In general, a precise measuring tool is one that can measure values in very small increments. For example, a standard ruler can measure length to the nearest millimeter, while a caliper can measure length to the nearest 0.01 millimeter. The caliper is a more precise measuring tool because it can measure extremely small differences in length. The more precise the measuring tool, the more precise and accurate the measurements can be. When we express measured values, we can only list as many digits as we initially measured with our measuring tool. For example, if you use a standard ruler to measure the length of a stick, you may measure it to be 36.7 cm. You could not express this value as 36.71 cm because your measuring tool was not precise enough to measure a hundredth of a centimeter. It should be noted that the last digit in a measured value has been estimated in some way by the person performing the measurement. 
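A minimal sketch of the "adding percents" rule in code, using the floor dimensions from the example above; the helper name percent_unc is an arbitrary choice for this sketch.

```python
# Method of adding percents for a product of two measured quantities
def percent_unc(value, uncertainty):
    """Percent uncertainty of a single measurement."""
    return uncertainty / value * 100

length, d_length = 4.00, 0.08    # 4.00 m with 2% uncertainty (0.08 m)
width,  d_width  = 3.00, 0.03    # 3.00 m with 1% uncertainty (0.03 m)

area = length * width                                                    # 12.0 m^2
pct_area = percent_unc(length, d_length) + percent_unc(width, d_width)   # 2% + 1% = 3%
d_area = area * pct_area / 100                                           # 0.36 m^2 -> rounds to 0.4 m^2

print(f"area = {area:.1f} +/- {d_area:.1f} m^2 ({pct_area:.0f}%)")
```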
For example, the person measuring the length of a stick with a ruler notices that the stick length seems to be somewhere in between 36.6 cm and 36.7 cm, and they must estimate the value of the last digit. Using the method of significant figures, the rule is that the last digit written down in a measurement is the first digit with some uncertainty. In order to determine the number of significant digits in a value, start with the first measured value at the left and count the number of digits through the last digit written on the right. For example, the measured value 36.7 cm has three digits, or significant figures. Significant figures indicate the precision of a measuring tool that was used to measure a value. ### Zeros Special consideration is given to zeros when counting significant figures. The zeros in 0.053 are not significant, because they are only placekeepers that locate the decimal point. There are two significant figures in 0.053. The zeros in 10.053 are not placekeepers but are significant—this number has five significant figures. The zeros in 1300 may or may not be significant depending on the style of writing numbers. They could mean the number is known to the last digit, or they could be placekeepers. So 1300 could have two, three, or four significant figures. (To avoid this ambiguity, write 1300 in scientific notation.) Zeros are significant except when they serve only as placekeepers. ### Significant Figures in Calculations When combining measurements with different degrees of accuracy and precision, the number of significant digits in the final answer can be no greater than the number of significant digits in the least precise measured value. There are two different rules, one for multiplication and division and the other for addition and subtraction, as discussed below. 1. For multiplication and division: The result should have the same number of significant figures as the quantity having the least significant figures entering into the calculation. For example, the area of a circle can be calculated from its radius using $A = \pi r^2$. Let us see how many significant figures the area has if the radius has only two—say, $r = 1.2\ \text{m}$. Then, $$A = \pi r^2 = (3.1415927\ldots) \times (1.2\ \text{m})^2 = 4.5238934\ \text{m}^2$$ is what you would get using a calculator that has an eight-digit output. But because the radius has only two significant figures, it limits the calculated quantity to two significant figures, or $A = 4.5\ \text{m}^2$, even though $\pi$ is good to at least eight digits. 2. For addition and subtraction: The answer can contain no more decimal places than the least precise measurement. Suppose that you buy 7.56 kg of potatoes in a grocery store as measured with a scale with precision 0.01 kg. Then you drop off 6.052 kg of potatoes at your laboratory as measured by a scale with precision 0.001 kg. Finally, you go home and add 13.7 kg of potatoes as measured by a bathroom scale with precision 0.1 kg. How many kilograms of potatoes do you now have, and how many significant figures are appropriate in the answer? The mass is found by simple addition and subtraction: $$7.56\ \text{kg} - 6.052\ \text{kg} + 13.7\ \text{kg} = 15.208\ \text{kg}.$$ Next, we identify the least precise measurement: 13.7 kg. This measurement is expressed to the 0.1 decimal place, so our final answer must also be expressed to the 0.1 decimal place. Thus, the answer is rounded to the tenths place, giving us 15.2 kg. ### Significant Figures in this Text In this text, most numbers are assumed to have three significant figures. Furthermore, consistent numbers of significant figures are used in all worked examples. You will note that an answer given to three digits is based on input good to at least three digits, for example. 
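A small helper that applies the multiplication/division rule numerically to the circle example above; the function name round_sig and its implementation are choices made for this sketch, not a standard-library routine.

```python
import math

def round_sig(x, sig):
    """Round x to the given number of significant figures."""
    if x == 0:
        return 0.0
    return round(x, sig - int(math.floor(math.log10(abs(x)))) - 1)

radius = 1.2                      # two significant figures
area = math.pi * radius ** 2      # calculator output: 4.5238934...
print(round_sig(area, 2))         # 4.5 -- limited by the radius, not by pi
```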
If the input has fewer significant figures, the answer will also have fewer significant figures. Care is also taken that the number of significant figures is reasonable for the situation posed. In some topics, particularly in optics, more accurate numbers are needed and more than three significant figures will be used. Finally, if a number is exact, such as the two in the formula for the circumference of a circle, $C = 2\pi r$, it does not affect the number of significant figures in a calculation. ### Summary 1. Accuracy of a measured value refers to how close a measurement is to the correct value. The uncertainty in a measurement is an estimate of the amount by which the measurement result may differ from this value. 2. Precision of measured values refers to how close the agreement is between repeated measurements. 3. The precision of a measuring tool is related to the size of its measurement increments. The smaller the measurement increment, the more precise the tool. 4. Significant figures express the precision of a measuring tool. 5. When multiplying or dividing measured values, the final answer can contain only as many significant figures as the least precise value. 6. When adding or subtracting measured values, the final answer cannot contain more decimal places than the least precise value. ### Conceptual Questions ### Problems & Exercises Express your answers to problems in this section to the correct number of significant figures and proper units.
# Introduction: The Nature of Science and Physics ## Approximation ### Learning Objectives By the end of this section, you will be able to: 1. Make reasonable approximations based on given data. On many occasions, physicists, other scientists, and engineers need to make approximations or “guesstimates” for a particular quantity. What is the distance to a certain destination? What is the approximate density of a given item? About how large a current will there be in a circuit? Many approximate numbers are based on formulae in which the input quantities are known only to a limited accuracy. As you develop problem-solving skills (that can be applied to a variety of fields through a study of physics), you will also develop skills at approximating. You will develop these skills through thinking more quantitatively, and by being willing to take risks. As with any endeavor, experience helps, as well as familiarity with units. These approximations allow us to rule out certain scenarios or unrealistic numbers. Approximations also allow us to challenge others and guide us in our approaches to our scientific world. Let us do two examples to illustrate this concept. ### Summary Scientists often approximate the values of quantities to perform calculations and analyze systems. ### Problems & Exercises
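In that spirit, here is one such guesstimate sketched in code. Every input below is an assumed round number, and only the order of magnitude of the result is meaningful.

```python
# Rough estimate: how many times does a human heart beat in a lifetime?
beats_per_minute = 70              # assumed typical resting heart rate
lifetime_years = 80                # assumed lifespan
minutes_per_year = 60 * 24 * 365   # ignoring leap years

beats = beats_per_minute * minutes_per_year * lifetime_years
print(f"about {beats:.1e} heartbeats")   # ~3 x 10^9, i.e., a few billion
```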
# Kinematics ## Connection for AP® Courses Objects are in motion everywhere we look. Everything from a tennis game to a space-probe flyby of the planet Neptune involves motion. When you are resting, your heart moves blood through your veins. Even in inanimate objects, there is a continuous motion in the vibrations of atoms and molecules. Questions about motion are interesting in and of themselves: How long will it take for a space probe to get to Mars? Where will a football land if it is thrown at a certain angle? Understanding motion will not only provide answers to these questions, but will be key to understanding more advanced concepts in physics. For example, the discussion of force in Chapter 4 will not fully make sense until you understand acceleration. This relationship between force and acceleration is also critical to understanding Big Idea 3. Additionally, this unit will explore the topic of reference frames, a critical component to quantifying how things move. If you have ever waved to a departing friend at a train station, you are likely familiar with this idea. While you see your friend move away from you at a considerable rate, those sitting with her will likely see her as not moving. The effect that the chosen reference frame has on your observations is substantial, and an understanding of this is needed to grasp both Enduring Understanding 3.A and Essential Knowledge 3.A.1. Our formal study of physics begins with kinematics, which is defined as the study of motion without considering its causes. In one- and two-dimensional kinematics we will study only the motion of a football, for example, without worrying about what forces cause or change its motion. In this chapter, we examine the simplest type of motion—namely, motion along a straight line, or one-dimensional motion. Later, in two-dimensional kinematics, we apply concepts developed here to study motion along curved paths (two- and three-dimensional motion), for example, that of a car rounding a curve. The content in this chapter supports: Big Idea 3 The interactions of an object with other objects can be described by forces. Enduring Understanding 3.A All forces share certain common characteristics when considered by observers in inertial reference frames. Essential Knowledge 3.A.1 An observer in a particular reference frame can describe the motion of an object using such quantities as position, displacement, distance, velocity, speed, and acceleration.
# Kinematics ## Displacement ### Learning Objectives By the end of this section, you will be able to: 1. Define position, displacement, distance, and distance traveled. 2. Explain the relationship between position and displacement. 3. Distinguish between displacement and distance traveled. 4. Calculate displacement and distance given initial position, final position, and the path between the two. ### Position In order to describe the motion of an object, you must first be able to describe its position—where it is at any particular time. More precisely, you need to specify its position relative to a convenient reference frame. Earth is often used as a reference frame, and we often describe the position of an object as it relates to stationary objects in that reference frame. For example, a rocket launch would be described in terms of the position of the rocket with respect to the Earth as a whole, while a professor’s position could be described in terms of where she is in relation to the nearby white board. (See .) In other cases, we use reference frames that are not stationary but are in motion relative to the Earth. To describe the position of a person in an airplane, for example, we use the airplane, not the Earth, as the reference frame. (See .) ### Displacement If an object moves relative to a reference frame (for example, if a professor moves to the right relative to a white board or a passenger moves toward the rear of an airplane), then the object’s position changes. This change in position is known as displacement. The word “displacement” implies that an object has moved, or has been displaced. In this text the upper case Greek letter (delta) always means “change in” whatever quantity follows it; thus, means change in position. Always solve for displacement by subtracting initial position from final position . Note that the SI unit for displacement is the meter (m) (see Physical Quantities and Units), but sometimes kilometers, miles, feet, and other units of length are used. Keep in mind that when units other than the meter are used in a problem, you may need to convert them into meters to complete the calculation. Note that displacement has a direction as well as a magnitude. The professor’s displacement is 2.0 m to the right, and the airline passenger’s displacement is 4.0 m toward the rear. In one-dimensional motion, direction can be specified with a plus or minus sign. When you begin a problem, you should select which direction is positive (usually that will be to the right or up, but you are free to select positive as being any direction). The professor’s initial position is and her final position is . Thus her displacement is In this coordinate system, motion to the right is positive, whereas motion to the left is negative. Similarly, the airplane passenger’s initial position is and his final position is , so his displacement is His displacement is negative because his motion is toward the rear of the plane, or in the negative direction in our coordinate system. ### Distance Although displacement is described in terms of direction, distance is not. Distance is defined to be the magnitude or size of displacement between two positions. Note that the distance between two positions is not the same as the distance traveled between them. Distance traveled is the total length of the path traveled between two positions. Distance has no direction and, thus, no sign. For example, the distance the professor walks is 2.0 m. The distance the airplane passenger walks is 4.0 m. 
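In symbols, displacement is the final position minus the initial position, $\Delta x = x_f - x_0$. As a worked illustration, assume the professor moves from $x_0 = 1.5\ \text{m}$ to $x_f = 3.5\ \text{m}$ and the passenger from $x_0 = 6.0\ \text{m}$ to $x_f = 2.0\ \text{m}$; these positions are assumed values chosen to be consistent with the displacements quoted above.

$$\Delta x_{\text{professor}} = 3.5\ \text{m} - 1.5\ \text{m} = +2.0\ \text{m}, \qquad \Delta x_{\text{passenger}} = 2.0\ \text{m} - 6.0\ \text{m} = -4.0\ \text{m}.$$

The corresponding distances traveled are simply 2.0 m and 4.0 m, with no signs attached.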
### Test Prep for AP Courses ### Section Summary 1. Kinematics is the study of motion without considering its causes. In this chapter, it is limited to motion along a straight line, called one-dimensional motion. 2. Displacement is the change in position of an object. 3. In symbols, displacement $\Delta x$ is defined to be $$\Delta x = x_f - x_0,$$ where $x_0$ is the initial position and $x_f$ is the final position. In this text, the Greek letter $\Delta$ (delta) always means “change in” whatever quantity follows it. The SI unit for displacement is the meter (m). Displacement has a direction as well as a magnitude. 4. When you start a problem, assign which direction will be positive. 5. Distance is the magnitude of displacement between two positions. 6. Distance traveled is the total length of the path traveled between two positions. ### Conceptual Questions ### Problems & Exercises
# Kinematics ## Vectors, Scalars, and Coordinate Systems ### Learning Objectives By the end of this section, you will be able to: 1. Define and distinguish between scalar and vector quantities. 2. Assign a coordinate system for a scenario involving one-dimensional motion. What is the difference between distance and displacement? Whereas displacement is defined by both direction and magnitude, distance is defined only by magnitude. Displacement is an example of a vector quantity. Distance is an example of a scalar quantity. A vector is any quantity with both magnitude and direction. Other examples of vectors include a velocity of 90 km/h east and a force of 500 newtons straight down. The direction of a vector in one-dimensional motion is given simply by a plus or minus sign. Vectors are represented graphically by arrows. An arrow used to represent a vector has a length proportional to the vector’s magnitude (e.g., the larger the magnitude, the longer the length of the vector) and points in the same direction as the vector. Some physical quantities, like distance, either have no direction or none is specified. A scalar is any quantity that has a magnitude, but no direction. For example, a temperature, the 250 kilocalories (250 Calories) of energy in a candy bar, a 90 km/h speed limit, a person’s 1.8 m height, and a distance of 2.0 m are all scalars—quantities with no specified direction. Note, however, that a scalar can be negative, such as a temperature. In this case, the minus sign indicates a point on a scale rather than a direction. Scalars are never represented by arrows. ### Coordinate Systems for One-Dimensional Motion In order to describe the direction of a vector quantity, you must designate a coordinate system within the reference frame. For one-dimensional motion, this is a simple coordinate system consisting of a one-dimensional coordinate line. In general, when describing horizontal motion, motion to the right is usually considered positive, and motion to the left is considered negative. With vertical motion, motion up is usually positive and motion down is negative. In some cases, however, as with the jet in , it can be more convenient to switch the positive and negative directions. For example, if you are analyzing the motion of falling objects, it can be useful to define downwards as the positive direction. If people in a race are running to the left, it is useful to define left as the positive direction. It does not matter as long as the system is clear and consistent. Once you assign a positive direction and start solving a problem, you cannot change it. ### Test Prep for AP Courses ### Section Summary 1. A vector is any quantity that has magnitude and direction. 2. A scalar is any quantity that has magnitude but no direction. 3. Displacement and velocity are vectors, whereas distance and speed are scalars. 4. In one-dimensional motion, direction is specified by a plus or minus sign to signify left or right, up or down, and the like. ### Conceptual Questions
# Kinematics ## Time, Velocity, and Speed ### Learning Objectives By the end of this section, you will be able to: 1. Explain the relationships between instantaneous velocity, average velocity, instantaneous speed, average speed, displacement, and time. 2. Calculate velocity and speed given initial position, initial time, final position, and final time. 3. Derive a graph of velocity vs. time given a graph of position vs. time. 4. Interpret a graph of velocity vs. time. There is more to motion than distance and displacement. Questions such as, “How long does a foot race take?” and “What was the runner’s speed?” cannot be answered without an understanding of other concepts. In this section we add definitions of time, velocity, and speed to expand our description of motion. ### Time As discussed in Physical Quantities and Units, the most fundamental physical quantities are defined by how they are measured. This is the case with time. Every measurement of time involves measuring a change in some physical quantity. It may be a number on a digital clock, a heartbeat, or the position of the Sun in the sky. In physics, the definition of time is simple—time is change, or the interval over which change occurs. It is impossible to know that time has passed unless something changes. The amount of time or change is calibrated by comparison with a standard. The SI unit for time is the second, abbreviated s. We might, for example, observe that a certain pendulum makes one full swing every 0.75 s. We could then use the pendulum to measure time by counting its swings or, of course, by connecting the pendulum to a clock mechanism that registers time on a dial. This allows us to not only measure the amount of time, but also to determine a sequence of events. How does time relate to motion? We are usually interested in elapsed time for a particular motion, such as how long it takes an airplane passenger to get from his seat to the back of the plane. To find elapsed time, we note the time at the beginning and end of the motion and subtract the two. For example, a lecture may start at 11:00 A.M. and end at 11:50 A.M., so that the elapsed time would be 50 min. Elapsed time is the difference between the ending time and beginning time, where is the change in time or elapsed time, is the time at the end of the motion, and is the time at the beginning of the motion. (As usual, the delta symbol, , means the change in the quantity that follows it.) Life is simpler if the beginning time is taken to be zero, as when we use a stopwatch. If we were using a stopwatch, it would simply read zero at the start of the lecture and 50 min at the end. If , then . In this text, for simplicity’s sake, 1. motion starts at time equal to zero 2. the symbol is used for elapsed time unless otherwise specified ### Velocity Your notion of velocity is probably the same as its scientific definition. You know that if you have a large displacement in a small amount of time you have a large velocity, and that velocity has units of distance divided by time, such as miles per hour or kilometers per hour. Notice that this definition indicates that velocity is a vector because displacement is a vector. It has both magnitude and direction. The SI unit for velocity is meters per second or m/s, but many other units, such as km/h, mi/h (also written as mph), and cm/s, are in common use. Suppose, for example, an airplane passenger took 5 seconds to move −4 m (the negative sign indicates that displacement is toward the back of the plane). 
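In symbols, using the standard subscripts for initial and final values, the definitions just given are

$$\Delta t = t_f - t_0 \quad (\text{so that } \Delta t = t_f \text{ when } t_0 = 0), \qquad \bar{v} = \frac{\Delta x}{\Delta t} = \frac{x_f - x_0}{t_f - t_0}.$$

With the numbers above, the passenger's average velocity is $\bar{v} = (-4\ \text{m})/(5\ \text{s}) = -0.8\ \text{m/s}$.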
His average velocity would be The minus sign indicates the average velocity is also toward the rear of the plane. The average velocity of an object does not tell us anything about what happens to it between the starting point and ending point, however. For example, we cannot tell from average velocity whether the airplane passenger stops momentarily or backs up before he goes to the back of the plane. To get more details, we must consider smaller segments of the trip over smaller time intervals. The smaller the time intervals considered in a motion, the more detailed the information. When we carry this process to its logical conclusion, we are left with an infinitesimally small interval. Over such an interval, the average velocity becomes the instantaneous velocity or the velocity at a specific instant. A car’s speedometer, for example, shows the magnitude (but not the direction) of the instantaneous velocity of the car. (Police give tickets based on instantaneous velocity, but when calculating how long it will take to get from one place to another on a road trip, you need to use average velocity.) Instantaneous velocity is the average velocity at a specific instant in time (or over an infinitesimally small time interval). Mathematically, finding instantaneous velocity, , at a precise instant can involve taking a limit, a calculus operation beyond the scope of this text. However, under many circumstances, we can find precise values for instantaneous velocity without calculus. ### Speed In everyday language, most people use the terms “speed” and “velocity” interchangeably. In physics, however, they do not have the same meaning and they are distinct concepts. One major difference is that speed has no direction. Thus speed is a scalar. Just as we need to distinguish between instantaneous velocity and average velocity, we also need to distinguish between instantaneous speed and average speed. Instantaneous speed is the magnitude of instantaneous velocity. For example, suppose the airplane passenger at one instant had an instantaneous velocity of −3.0 m/s (the minus meaning toward the rear of the plane). At that same time his instantaneous speed was 3.0 m/s. Or suppose that at one time during a shopping trip your instantaneous velocity is 40 km/h due north. Your instantaneous speed at that instant would be 40 km/h—the same magnitude but without a direction. Average speed, however, is very different from average velocity. Average speed is the distance traveled divided by elapsed time. We have noted that distance traveled can be greater than the magnitude of displacement. So average speed can be greater than average velocity, which is displacement divided by time. For example, if you drive to a store and return home in half an hour, and your car’s odometer shows the total distance traveled was 6 km, then your average speed was 12 km/h. Your average velocity, however, was zero, because your displacement for the round trip is zero. (Displacement is change in position and, thus, is zero for a round trip.) Thus average speed is not simply the magnitude of average velocity. Another way of visualizing the motion of an object is to use a graph. A plot of position or of velocity as a function of time can be very useful. For example, for this trip to the store, the position, velocity, and speed-vs.-time graphs are displayed in . (Note that these graphs depict a very simplified model of the trip. 
We are assuming that speed is constant during the trip, which is unrealistic given that we’ll probably stop at the store. But for simplicity’s sake, we will model it with no stops or changes in speed. We are also assuming that the route between the store and the house is a perfectly straight line.) ### Test Prep for AP Courses ### Section Summary 1. Time is measured in terms of change, and its SI unit is the second (s). Elapsed time for an event is $\Delta t = t_f - t_0$, where $t_f$ is the final time and $t_0$ is the initial time. The initial time is often taken to be zero, as if measured with a stopwatch; the elapsed time is then just $\Delta t = t$. 2. Average velocity $\bar{v}$ is defined as displacement divided by the travel time. In symbols, average velocity is $\bar{v} = \dfrac{\Delta x}{\Delta t} = \dfrac{x_f - x_0}{t_f - t_0}$. 3. The SI unit for velocity is m/s. 4. Velocity is a vector and thus has a direction. 5. Instantaneous velocity is the velocity at a specific instant or the average velocity for an infinitesimal interval. 6. Instantaneous speed is the magnitude of the instantaneous velocity. 7. Instantaneous speed is a scalar quantity, as it has no direction specified. 8. Average speed is the total distance traveled divided by the elapsed time. (Average speed is not the magnitude of the average velocity.) Speed is a scalar quantity; it has no direction associated with it. ### Conceptual Questions ### Problems & Exercises
# Kinematics ## Acceleration ### Learning Objectives By the end of this section, you will be able to: 1. Define and distinguish between instantaneous acceleration, average acceleration, and deceleration. 2. Calculate acceleration given initial time, initial velocity, final time, and final velocity. In everyday conversation, to accelerate means to speed up. The accelerator in a car can in fact cause it to speed up. The greater the acceleration, the greater the change in velocity over a given time. The formal definition of acceleration is consistent with these notions, but more inclusive. Because acceleration is velocity in m/s divided by time in s, the SI units for acceleration are , meters per second squared or meters per second per second, which literally means by how many meters per second the velocity changes every second. Recall that velocity is a vector—it has both magnitude and direction. This means that a change in velocity can be a change in magnitude (or speed), but it can also be a change in direction. For example, if a car turns a corner at constant speed, it is accelerating because its direction is changing. The quicker you turn, the greater the acceleration. So there is an acceleration when velocity changes either in magnitude (an increase or decrease in speed) or in direction, or both. Keep in mind that although acceleration is in the direction of the change in velocity, it is not always in the direction of motion. When an object slows down, its acceleration is opposite to the direction of its motion. This is known as deceleration. ### Instantaneous Acceleration Instantaneous acceleration , or the acceleration at a specific instant in time, is obtained by the same process as discussed for instantaneous velocity in Time, Velocity, and Speed—that is, by considering an infinitesimally small interval of time. How do we find instantaneous acceleration using only algebra? The answer is that we choose an average acceleration that is representative of the motion. shows graphs of instantaneous acceleration versus time for two very different motions. In (a), the acceleration varies slightly and the average over the entire interval is nearly the same as the instantaneous acceleration at any time. In this case, we should treat this motion as if it had a constant acceleration equal to the average (in this case about ). In (b), the acceleration varies drastically over time. In such situations it is best to consider smaller time intervals and choose an average acceleration for each. For example, we could consider motion over the time intervals from 0 to 1.0 s and from 1.0 to 3.0 s as separate motions with accelerations of and , respectively. The next several examples consider the motion of the subway train shown in . In (a) the shuttle moves to the right, and in (b) it moves to the left. The examples are designed to further illustrate aspects of motion and to illustrate some of the reasoning that goes into solving problems. The graphs of position, velocity, and acceleration vs. time for the trains in and are displayed in . (We have taken the velocity to remain constant from 20 to 40 s, after which the train decelerates.) ### Sign and Direction Perhaps the most important thing to note about these examples is the signs of the answers. In our chosen coordinate system, plus means the quantity is to the right and minus means it is to the left. This is easy to imagine for displacement and velocity. But it is a little less obvious for acceleration. 
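The sketch below makes this sign bookkeeping concrete. It is a minimal illustration: the velocities and the 4.0 s interval are invented values, not taken from the subway-train examples.

```python
def average_acceleration(v0, v, t0, t):
    """Average acceleration = change in velocity / change in time."""
    return (v - v0) / (t - t0)

# Hypothetical one-dimensional motions (positive = to the right).
cases = [
    ("speeding up to the right",        0.0, +6.0),   # v goes from 0 to +6 m/s
    ("slowing down while moving right", +6.0, +2.0),
    ("speeding up to the left",         0.0, -6.0),   # a negative acceleration increases a leftward speed
]

for label, v0, v in cases:
    a = average_acceleration(v0, v, 0.0, 4.0)   # each change assumed to take 4.0 s
    trend = "speeding up" if a * v > 0 else "slowing down"
    print(f"{label}: a = {a:+.2f} m/s^2 ({trend})")
```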
Most people interpret negative acceleration as the slowing of an object. This was not the case in , where a positive acceleration slowed a negative velocity. The crucial distinction was that the acceleration was in the opposite direction from the velocity. In fact, a negative acceleration increases the magnitude of a negative velocity. For example, the train moving to the left in is sped up by an acceleration to the left. In that case, both $v$ and $a$ are negative. The plus and minus signs give the directions of the accelerations. If acceleration has the same sign as the velocity, the object is speeding up. If acceleration has the opposite sign from the velocity, the object is slowing down. ### Test Prep for AP Courses ### Section Summary 1. Acceleration is the rate at which velocity changes. In symbols, average acceleration is $\bar{a} = \dfrac{\Delta v}{\Delta t} = \dfrac{v_f - v_0}{t_f - t_0}$. 2. The SI unit for acceleration is $\mathrm{m/s^2}$. 3. Acceleration is a vector, and thus has both a magnitude and a direction. 4. Acceleration can be caused by either a change in the magnitude or the direction of the velocity. 5. Instantaneous acceleration is the acceleration at a specific instant in time. 6. Deceleration is an acceleration with a direction opposite to that of the velocity. ### Conceptual Questions ### Problems & Exercises
# Kinematics ## Motion Equations for Constant Acceleration in One Dimension ### Learning Objectives By the end of this section, you will be able to: 1. Calculate displacement of an object that is not accelerating, given initial position and velocity. 2. Calculate final velocity of an accelerating object, given initial velocity, acceleration, and time. 3. Calculate displacement and final position of an accelerating object, given initial position, initial velocity, time, and acceleration. We might know that the greater the acceleration of, say, a car moving away from a stop sign, the greater the displacement in a given time. But we have not developed a specific equation that relates acceleration and displacement. In this section, we develop some convenient equations for kinematic relationships, starting from the definitions of displacement, velocity, and acceleration already covered. ### Notation: t, x, v, a First, let us make some simplifications in notation. Taking the initial time to be zero, as if time is measured with a stopwatch, is a great simplification. Since elapsed time is , taking means that , the final time on the stopwatch. When initial time is taken to be zero, we use the subscript 0 to denote initial values of position and velocity. That is, is the initial position and is the initial velocity. We put no subscripts on the final values. That is, is the final time, is the final position, and is the final velocity. This gives a simpler expression for elapsed time—now, . It also simplifies the expression for displacement, which is now . Also, it simplifies the expression for change in velocity, which is now . To summarize, using the simplified notation, with the initial time taken to be zero, where the subscript 0 denotes an initial value and the absence of a subscript denotes a final value in whatever motion is under consideration. We now make the important assumption that acceleration is constant. This assumption allows us to avoid using calculus to find instantaneous acceleration. Since acceleration is constant, the average and instantaneous accelerations are equal. That is, so we use the symbol for acceleration at all times. Assuming acceleration to be constant does not seriously limit the situations we can study nor degrade the accuracy of our treatment. For one thing, acceleration is constant in a great number of situations. Furthermore, in many other situations we can accurately describe motion by assuming a constant acceleration equal to the average acceleration for that motion. Finally, in motions where acceleration changes drastically, such as a car accelerating to top speed and then braking to a stop, the motion can be considered in separate parts, each of which has its own constant acceleration. The equation reflects the fact that, when acceleration is constant, is just the simple average of the initial and final velocities. For example, if you steadily increase your velocity (that is, with constant acceleration) from 30 to 60 km/h, then your average velocity during this steady increase is 45 km/h. Using the equation to check this, we see that which seems logical. The equation gives insight into the relationship between displacement, average velocity, and time. It shows, for example, that displacement is a linear function of average velocity. (By linear function, we mean that displacement depends on rather than on raised to some other power, such as . When graphed, linear functions look like straight lines with a constant slope.) 
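These two relationships are compact enough to check numerically. The sketch below is a minimal illustration: the 30 km/h to 60 km/h check comes from the example above, while the 2.0-hour trip length is an assumed value.

```python
# Two relationships for motion with constant acceleration (initial time taken as zero):
#   average velocity:  v_avg = (v0 + v) / 2
#   displacement:      x = x0 + v_avg * t

def v_avg(v0, v):
    return (v0 + v) / 2.0

def position(x0, v_average, t):
    return x0 + v_average * t

# Steadily speeding up from 30 km/h to 60 km/h gives an average velocity of 45 km/h.
print(v_avg(30.0, 60.0))          # 45.0 km/h

# Displacement is a linear function of average velocity: doubling v_avg doubles
# the distance covered in the same time.
print(position(0.0, 45.0, 2.0))   # 90.0 km in 2.0 h
print(position(0.0, 90.0, 2.0))   # 180.0 km in 2.0 h, twice as far
```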
On a car trip, for example, we will get twice as far in a given time if we average 90 km/h than if we average 45 km/h. In addition to being useful in problem solving, the equation gives us insight into the relationships among velocity, acceleration, and time. From it we can see, for example, that 1. final velocity depends on how large the acceleration is and how long it lasts 2. if the acceleration is zero, then the final velocity equals the initial velocity , as expected (i.e., velocity is constant) 3. if is negative, then the final velocity is less than the initial velocity (All of these observations fit our intuition, and it is always useful to examine basic equations in light of our intuition and experiences to check that they do indeed describe nature accurately.) What else can we learn by examining the equation We see that: 1. displacement depends on the square of the elapsed time when acceleration is not zero. In , the dragster covers only one fourth of the total distance in the first half of the elapsed time 2. if acceleration is zero, then the initial velocity equals average velocity () and becomes An examination of the equation can produce further insights into the general relationships among physical quantities: 1. The final velocity depends on how large the acceleration is and the distance over which it acts 2. For a fixed deceleration, a car that is going twice as fast doesn’t simply stop in twice the distance—it takes much further to stop. (This is why we have reduced speed zones near schools.) ### Putting Equations Together In the following examples, we further explore one-dimensional motion, but in situations requiring slightly more algebraic manipulation. The examples also give insight into problem-solving techniques. The box below provides easy reference to the equations needed. With the basics of kinematics established, we can go on to many other interesting examples and applications. In the process of developing kinematics, we have also glimpsed a general approach to problem solving that produces both correct answers and insights into physical relationships. Problem-Solving Basics discusses problem-solving basics and outlines an approach that will help you succeed in this invaluable task. ### Test Prep for AP Courses ### Section Summary 1. To simplify calculations we take acceleration to be constant, so that at all times. 2. We also take initial time to be zero. 3. Initial position and velocity are given a subscript 0; final values have no subscript. Thus, 4. The following kinematic equations for motion with constant are useful: 5. In vertical motion, is substituted for . ### Problems & Exercises
# Kinematics ## Problem-Solving Basics for One-Dimensional Kinematics ### Learning Objectives By the end of this section, you will be able to: 1. Apply problem-solving steps and strategies to solve problems of one-dimensional kinematics. 2. Apply strategies to determine whether or not the result of a problem is reasonable, and if not, determine the cause. Problem-solving skills are obviously essential to success in a quantitative course in physics. More importantly, the ability to apply broad physical principles, usually represented by equations, to specific situations is a very powerful form of knowledge. It is much more powerful than memorizing a list of facts. Analytical skills and problem-solving abilities can be applied to new situations, whereas a list of facts cannot be made long enough to contain every possible circumstance. Such analytical skills are useful both for solving problems in this text and for applying physics in everyday and professional life. ### Problem-Solving Steps While there is no simple step-by-step method that works for every problem, the following general procedures facilitate problem solving and make it more meaningful. A certain amount of creativity and insight is required as well. ### Step 1 Examine the situation to determine which physical principles are involved. It often helps to draw a simple sketch at the outset. You will also need to decide which direction is positive and note that on your sketch. Once you have identified the physical principles, it is much easier to find and apply the equations representing those principles. Although finding the correct equation is essential, keep in mind that equations represent physical principles, laws of nature, and relationships among physical quantities. Without a conceptual understanding of a problem, a numerical solution is meaningless. ### Step 2 Make a list of what is given or can be inferred from the problem as stated (identify the knowns). Many problems are stated very succinctly and require some inspection to determine what is known. A sketch can also be very useful at this point. Formally identifying the knowns is of particular importance in applying physics to real-world situations. Remember, “stopped” means velocity is zero, and we often can take initial time and position as zero. ### Step 3 Identify exactly what needs to be determined in the problem (identify the unknowns). In complex problems, especially, it is not always obvious what needs to be found or in what sequence. Making a list can help. ### Step 4 Find an equation or set of equations that can help you solve the problem. Your list of knowns and unknowns can help here. It is easiest if you can find equations that contain only one unknown—that is, all of the other variables are known, so you can easily solve for the unknown. If the equation contains more than one unknown, then an additional equation is needed to solve the problem. In some problems, several unknowns must be determined to get at the one needed most. In such problems it is especially important to keep physical principles in mind to avoid going astray in a sea of equations. You may have to use two (or more) different equations to get the final answer. ### Step 5 Substitute the knowns along with their units into the appropriate equation, and obtain numerical solutions complete with units. This step produces the numerical answer; it also provides a check on units that can help you find errors. If the units of the answer are incorrect, then an error has been made. 
However, be warned that correct units do not guarantee that the numerical part of the answer is also correct. ### Step 6 Check the answer to see if it is reasonable: Does it make sense? This final step is extremely important—the goal of physics is to accurately describe nature. To see if the answer is reasonable, check both its magnitude and its sign, in addition to its units. Your judgment will improve as you solve more and more physics problems, and it will become possible for you to make finer and finer judgments regarding whether nature is adequately described by the answer to a problem. This step brings the problem back to its conceptual meaning. If you can judge whether the answer is reasonable, you have a deeper understanding of physics than just being able to mechanically solve a problem. When solving problems, we often perform these steps in different order, and we also tend to do several steps simultaneously. There is no rigid procedure that will work every time. Creativity and insight grow with experience, and the basics of problem solving become almost automatic. One way to get practice is to work out the text’s examples for yourself as you read. Another is to work as many end-of-section problems as possible, starting with the easiest to build confidence and progressing to the more difficult. Once you become involved in physics, you will see it all around you, and you can begin to apply it to situations you encounter outside the classroom, just as is done in many of the applications in this text. ### Unreasonable Results Physics must describe nature accurately. Some problems have results that are unreasonable because one premise is unreasonable or because certain premises are inconsistent with one another. The physical principle applied correctly then produces an unreasonable result. For example, if a person starting a foot race accelerates at for 100 s, his final speed will be 40 m/s (about 150 km/h)—clearly unreasonable because the time of 100 s is an unreasonable premise. The physics is correct in a sense, but there is more to describing nature than just manipulating equations correctly. Checking the result of a problem to see if it is reasonable does more than help uncover errors in problem solving—it also builds intuition in judging whether nature is being accurately described. Use the following strategies to determine whether an answer is reasonable and, if it is not, to determine what is the cause. ### Step 1 Solve the problem using strategies as outlined and in the format followed in the worked examples in the text. In the example given in the preceding paragraph, you would identify the givens as the acceleration and time and use the equation below to find the unknown final velocity. That is, ### Step 2 Check to see if the answer is reasonable. Is it too large or too small, or does it have the wrong sign, improper units, …? In this case, you may need to convert meters per second into a more familiar unit, such as miles per hour. This velocity is about four times greater than a person can run—so it is too large. ### Step 3 If the answer is unreasonable, look for what specifically could cause the identified difficulty. In the example of the runner, there are only two assumptions that are suspect. The acceleration could be too great or the time too long. First look at the acceleration and think about what the number means. If someone accelerates at , their velocity is increasing by 0.4 m/s each second. Does this seem reasonable? If so, the time must be too long. 
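Carrying the numbers through confirms the problem. Here is a minimal Python sketch of the check, using the values from the runner example (the unit conversions are standard):

```python
# Unreasonable-result check for the runner example: a = 0.40 m/s^2 held for t = 100 s.
a = 0.40        # acceleration in m/s^2 (from the example)
t = 100.0       # elapsed time in seconds
v0 = 0.0        # the runner starts from rest

v = v0 + a * t                      # final velocity, in m/s
print(f"v = {v:.0f} m/s")           # 40 m/s
print(f"  = {v * 3.6:.0f} km/h")    # about 144 km/h
print(f"  = {v * 2.237:.0f} mi/h")  # about 89 mi/h -- far faster than any sprinter
```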
It is not possible for someone to accelerate at a constant rate of $0.40\ \mathrm{m/s^2}$ for 100 s (almost two minutes). ### Section Summary 1. The six basic problem-solving steps for physics are: examine the situation to determine which physical principles are involved; make a list of what is given or can be inferred from the problem (identify the knowns); identify exactly what needs to be determined (identify the unknowns); find an equation or set of equations that can help you solve the problem; substitute the knowns along with their units into the appropriate equation and obtain numerical solutions complete with units; and check the answer to see if it is reasonable. ### Conceptual Questions
# Kinematics ## Falling Objects ### Learning Objectives By the end of this section, you will be able to: 1. Describe the effects of gravity on objects in motion. 2. Describe the motion of objects that are in free fall. 3. Calculate the position and velocity of objects in free fall. Falling objects form an interesting class of motion problems. For example, we can estimate the depth of a vertical mine shaft by dropping a rock into it and listening for the rock to hit the bottom. By applying the kinematics developed so far to falling objects, we can examine some interesting situations and learn much about gravity in the process. ### Gravity The most remarkable and unexpected fact about falling objects is that, if air resistance and friction are negligible, then in a given location all objects fall toward the center of Earth with the same constant acceleration, independent of their mass. This experimentally determined fact is unexpected, because we are so accustomed to the effects of air resistance and friction that we expect light objects to fall slower than heavy ones. In the real world, air resistance can cause a lighter object to fall slower than a heavier object of the same size. A tennis ball will reach the ground after a hard baseball dropped at the same time. (It might be difficult to observe the difference if the height is not large.) Air resistance opposes the motion of an object through the air, while friction between objects—such as between clothes and a laundry chute or between a stone and a pool into which it is dropped—also opposes motion between them. For the ideal situations of these first few chapters, an object falling without air resistance or friction is defined to be in free-fall. The force of gravity causes objects to fall toward the center of Earth. The acceleration of free-falling objects is therefore called the acceleration due to gravity. The acceleration due to gravity is constant, which means we can apply the kinematics equations to any falling object where air resistance and friction are negligible. This opens a broad class of interesting situations to us. The acceleration due to gravity is so important that its magnitude is given its own symbol, . It is constant at any given location on Earth and has the average value Although varies from to , depending on latitude, altitude, underlying geological formations, and local topography, the average value of will be used in this text unless otherwise specified. The direction of the acceleration due to gravity is downward (towards the center of Earth). In fact, its direction defines what we call vertical. Note that whether the acceleration in the kinematic equations has the value or depends on how we define our coordinate system. If we define the upward direction as positive, then , and if we define the downward direction as positive, then . ### One-Dimensional Motion Involving Gravity The best way to see the basic features of motion involving gravity is to start with the simplest situations and then progress toward more complex ones. So we start by considering straight up and down motion with no air resistance or friction. These assumptions mean that the velocity (if there is any) is vertical. If the object is dropped, we know the initial velocity is zero. Once the object has left contact with whatever held or threw it, the object is in free-fall. Under these circumstances, the motion is one-dimensional and has constant acceleration of magnitude . 
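Tying this back to the mine-shaft example at the start of this section, here is a minimal Python sketch that estimates the shaft depth from the fall time of a dropped rock. The 3.0 s fall time is an assumed value, and the travel time of the sound back up the shaft is ignored.

```python
# Free fall from rest: taking downward as positive, depth = (1/2) g t^2.
g = 9.80        # magnitude of the acceleration due to gravity, in m/s^2
t_fall = 3.0    # assumed time from release until the rock hits the bottom, in seconds

depth = 0.5 * g * t_fall**2
v_impact = g * t_fall            # speed of the rock just before impact

print(f"estimated depth = {depth:.0f} m")       # about 44 m
print(f"impact speed    = {v_impact:.1f} m/s")  # about 29 m/s
```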
We will also represent vertical displacement with the symbol $y$ and use $x$ for horizontal displacement. ### Test Prep for AP Courses ### Section Summary 1. An object in free-fall experiences constant acceleration if air resistance is negligible. 2. On Earth, all free-falling objects have an acceleration due to gravity $g$, which averages $g = 9.80\ \mathrm{m/s^2}$. 3. Whether the acceleration $a$ should be taken as $+g$ or $-g$ is determined by your choice of coordinate system. If you choose the upward direction as positive, $a = -g$ is negative. In the opposite case, $a = +g$ is positive. Since acceleration is constant, the kinematic equations above can be applied with the appropriate $+g$ or $-g$ substituted for $a$. 4. For objects in free-fall, up is normally taken as positive for displacement, velocity, and acceleration. ### Conceptual Questions ### Problems & Exercises Assume air resistance is negligible unless otherwise stated.
# Kinematics ## Graphical Analysis of One-Dimensional Motion ### Learning Objectives By the end of this section, you will be able to: 1. Describe a straight-line graph in terms of its slope and y-intercept. 2. Determine average velocity or instantaneous velocity from a graph of position vs. time. 3. Determine average or instantaneous acceleration from a graph of velocity vs. time. 4. Derive a graph of velocity vs. time from a graph of position vs. time. 5. Derive a graph of acceleration vs. time from a graph of velocity vs. time. A graph, like a picture, is worth a thousand words. Graphs not only contain numerical information; they also reveal relationships between physical quantities. This section uses graphs of position, velocity, and acceleration versus time to illustrate one-dimensional kinematics. ### Slopes and General Relationships First note that graphs in this text have perpendicular axes, one horizontal and the other vertical. When two physical quantities are plotted against one another in such a graph, the horizontal axis is usually considered to be an independent variable and the vertical axis a dependent variable. If we call the horizontal axis the -axis and the vertical axis the -axis, as in , a straight-line graph has the general form Here is the slope, defined to be the rise divided by the run (as seen in the figure) of the straight line. The letter is used for the , which is the point at which the line crosses the vertical axis. ### Graph of Position vs. Time (a = 0, so v is constant) Time is usually an independent variable that other quantities, such as position, depend upon. A graph of position versus time would, thus, have on the vertical axis and on the horizontal axis. is just such a straight-line graph. It shows a graph of position versus time for a jet-powered car on a very flat dry lake bed in Nevada. Using the relationship between dependent and independent variables, we see that the slope in the graph above is average velocity and the intercept is position at time zero—that is, . Substituting these symbols into gives or Thus a graph of position versus time gives a general relationship among displacement(change in position), velocity, and time, as well as giving detailed numerical information about a specific situation. From the figure we can see that the car has a position of 525 m at 0.50 s and 2000 m at 6.40 s. Its position at other times can be read from the graph; furthermore, information about its velocity and acceleration can also be obtained from the graph. ### Graphs of Motion when is constant but The graphs in below represent the motion of the jet-powered car as it accelerates toward its top speed, but only during the time when its acceleration is constant. Time starts at zero for this motion (as if measured with a stopwatch), and the position and velocity are initially 200 m and 15 m/s, respectively. The graph of position versus time in (a) is a curve rather than a straight line. The slope of the curve becomes steeper as time progresses, showing that the velocity is increasing over time. The slope at any point on a position-versus-time graph is the instantaneous velocity at that point. It is found by drawing a straight line tangent to the curve at the point of interest and taking the slope of this straight line. Tangent lines are shown for two points in (a). If this is done at every point on the curve and the values are plotted against time, then the graph of velocity versus time shown in (b) is obtained. 
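This slope-taking procedure can also be carried out numerically from a table of positions and times. The sketch below is a minimal illustration: it uses the initial position (200 m) and initial velocity (15 m/s) quoted above, together with an assumed constant acceleration of 2.5 m/s² (the actual jet-car value is not reproduced here), and estimates the velocity on each interval as rise over run.

```python
# Estimate velocity as the slope of position vs. time: v ~ (x2 - x1) / (t2 - t1).
# Sample data generated from x = 200 + 15 t + 1.25 t^2 (assumed acceleration of 2.5 m/s^2).
times = [0.0, 1.0, 2.0, 3.0, 4.0]
positions = [200.0 + 15.0 * t + 1.25 * t**2 for t in times]

for t1, t2, x1, x2 in zip(times, times[1:], positions, positions[1:]):
    slope = (x2 - x1) / (t2 - t1)   # average velocity over the interval, in m/s
    print(f"between t = {t1:.0f} s and {t2:.0f} s: v ~ {slope:.2f} m/s")
# The slopes increase steadily, just as the steepening position-vs.-time curve indicates.
```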
Furthermore, the slope of the graph of velocity versus time is acceleration, which is shown in (c). Carrying this one step further, we note that the slope of a velocity versus time graph is acceleration. Slope is rise divided by run; on a vs. graph, rise = change in velocity and run = change in time . Since the velocity versus time graph in (b) is a straight line, its slope is the same everywhere, implying that acceleration is constant. Acceleration versus time is graphed in (c). Additional general information can be obtained from and the expression for a straight line, . In this case, the vertical axis is , the intercept is , the slope is , and the horizontal axis is . Substituting these symbols yields A general relationship for velocity, acceleration, and time has again been obtained from a graph. Notice that this equation was also derived algebraically from other motion equations in Motion Equations for Constant Acceleration in One Dimension. It is not accidental that the same equations are obtained by graphical analysis as by algebraic techniques. In fact, an important way to discover physical relationships is to measure various physical quantities and then make graphs of one quantity against another to see if they are correlated in any way. Correlations imply physical relationships and might be shown by smooth graphs such as those above. From such graphs, mathematical relationships can sometimes be postulated. Further experiments are then performed to determine the validity of the hypothesized relationships. ### Graphs of Motion Where Acceleration is Not Constant Now consider the motion of the jet car as it goes from 165 m/s to its top velocity of 250 m/s, graphed in . Time again starts at zero, and the initial velocity is 165 m/s. (This was the final velocity of the car in the motion graphed in .) Acceleration gradually decreases from to zero when the car hits 250 m/s. The velocity increases until 55 s and then becomes constant, since acceleration decreases to zero at 55 s and remains zero afterward. A graph of position versus time can be used to generate a graph of velocity versus time, and a graph of velocity versus time can be used to generate a graph of acceleration versus time. We do this by finding the slope of the graphs at every point. If the graph is linear (i.e., a line with a constant slope), it is easy to find the slope at any point and you have the slope for every point. Graphical analysis of motion can be used to describe both specific and general characteristics of kinematics. Graphs can also be used for other topics in physics. An important aspect of exploring physical relationships is to graph them and look for underlying relationships. ### Section Summary 1. Graphs of motion can be used to analyze motion. 2. Graphical solutions yield identical solutions to mathematical methods for deriving motion equations. 3. The slope of a graph of displacement vs. time is velocity . 4. The slope of a graph of velocity vs. time graph is acceleration . 5. Average velocity, instantaneous velocity, and acceleration can all be obtained by analyzing graphs. ### Conceptual Questions ### Problems & Exercises Note: There is always uncertainty in numbers taken from graphs. If your answers differ from expected values, examine them to see if they are within data extraction uncertainties estimated by you.
# Two-Dimensional Kinematics ## Connection for AP® Courses Most instances of motion in everyday life involve changes in displacement and velocity that occur in more than one direction. For example, when you take a long road trip, you drive on different roads in different directions for different amounts of time at different speeds. How can these motions all be combined to determine information about the trip such as the total displacement and average velocity? If you kick a ball from ground level at some angle above the horizontal, how can you describe its motion? To what maximum height does the object rise above the ground? How long is the object in the air? How much horizontal distance is covered before the ball lands? To answer questions such as these, we need to describe motion in two dimensions. Examining two-dimensional motion requires an understanding of both the scalar and the vector quantities associated with the motion. You will learn how to combine vectors to incorporate both the magnitude and direction of vectors into your analysis. You will learn strategies for simplifying the calculations involved by choosing the appropriate reference frame and by treating each dimension of the motion separately as a one-dimensional problem, but you will also see that the motion itself occurs in the same way regardless of your chosen reference frame (Essential Knowledge 3.A.1). This chapter lays a necessary foundation for examining interactions of objects described by forces (Big Idea 3). Changes in direction result from acceleration, which necessitates force on an object. In this chapter, you will concentrate on describing motion that involves changes in direction. In later chapters, you will apply this understanding as you learn about how forces cause these motions (Enduring Understanding 3.A). The concepts in this chapter support: Big Idea 3 The interactions of an object with other objects can be described by forces. Enduring Understanding 3.A All forces share certain common characteristics when considered by observers in inertial reference frames. Essential Knowledge 3.A.1 An observer in a particular reference frame can describe the motion of an object using such quantities as position, displacement, distance, velocity, speed, and acceleration.
# Two-Dimensional Kinematics ## Kinematics in Two Dimensions: An Introduction ### Learning Objectives By the end of this section, you will be able to: 1. Observe that motion in two dimensions consists of horizontal and vertical components. 2. Understand the independence of horizontal and vertical vectors in two-dimensional motion. ### Two-Dimensional Motion: Walking in a City Suppose you want to walk from one point to another in a city with uniform square blocks, as pictured in . The straight-line path that a helicopter might fly is blocked to you as a pedestrian, and so you are forced to take a two-dimensional path, such as the one shown. You walk 14 blocks in all, 9 east followed by 5 north. What is the straight-line distance? An old adage states that the shortest distance between two points is a straight line. The two legs of the trip and the straight-line path form a right triangle, and so the Pythagorean theorem, , can be used to find the straight-line distance. The hypotenuse of the triangle is the straight-line path, and so in this case its length in units of city blocks is , considerably shorter than the 14 blocks you walked. (Note that we are using three significant figures in the answer. Although it appears that “9” and “5” have only one significant digit, they are discrete numbers. In this case “9 blocks” is the same as “9.0 or 9.00 blocks.” We have decided to use three significant figures in the answer in order to show the result more precisely.) The fact that the straight-line distance (10.3 blocks) in is less than the total distance walked (14 blocks) is one example of a general characteristic of vectors. (Recall that vectors are quantities that have both magnitude and direction.) As for one-dimensional kinematics, we use arrows to represent vectors. The length of the arrow is proportional to the vector’s magnitude. The arrow’s length is indicated by hash marks in and . The arrow points in the same direction as the vector. For two-dimensional motion, the path of an object can be represented with three vectors: one vector shows the straight-line path between the initial and final points of the motion, one vector shows the horizontal component of the motion, and one vector shows the vertical component of the motion. The horizontal and vertical components of the motion add together to give the straight-line path. For example, observe the three vectors in . The first represents a 9-block displacement east. The second represents a 5-block displacement north. These vectors are added to give the third vector, with a 10.3-block total displacement. The third vector is the straight-line path between the two points. Note that in this example, the vectors that we are adding are perpendicular to each other and thus form a right triangle. This means that we can use the Pythagorean theorem to calculate the magnitude of the total displacement. (Note that we cannot use the Pythagorean theorem to add vectors that are not perpendicular. We will develop techniques for adding vectors having any direction, not just those perpendicular to one another, in Vector Addition and Subtraction: Graphical Methods and Vector Addition and Subtraction: Analytical Methods.) ### The Independence of Perpendicular Motions The person taking the path shown in walks east and then north (two perpendicular directions). How far they walk east is only affected by their motion eastward. Similarly, how far they walk north is only affected by their motion northward. 
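Both ideas from this section — perpendicular components that do not affect one another, and a straight-line magnitude found from the Pythagorean theorem — can be checked with a few lines of Python. This is a minimal sketch using the 9-block and 5-block legs of the walk described above:

```python
import math

# Treat the walk as two independent components: blocks east and blocks north.
east = 9.0    # blocks walked east (affected only by eastward motion)
north = 5.0   # blocks walked north (affected only by northward motion)

total_walked = east + north                 # 14 blocks of actual walking
straight_line = math.hypot(east, north)     # Pythagorean theorem: sqrt(9^2 + 5^2)
direction = math.degrees(math.atan2(north, east))

print(f"distance walked      = {total_walked:.0f} blocks")
print(f"straight-line length = {straight_line:.1f} blocks")              # about 10.3 blocks
print(f"direction            = {direction:.1f} degrees north of east")   # about 29.1 degrees
```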
This is true in a simple scenario like that of walking in one direction first, followed by another. It is also true of more complicated motion involving movement in two directions at once. For example, let’s compare the motions of two baseballs. One baseball is dropped from rest. At the same instant, another is thrown horizontally from the same height and follows a curved path. A stroboscope has captured the positions of the balls at fixed time intervals as they fall. It is remarkable that for each flash of the strobe, the vertical positions of the two balls are the same. This similarity implies that the vertical motion is independent of whether or not the ball is moving horizontally. (Assuming no air resistance, the vertical motion of a falling object is influenced by gravity only, and not by any horizontal forces.) Careful examination of the ball thrown horizontally shows that it travels the same horizontal distance between flashes. This is due to the fact that there are no additional forces on the ball in the horizontal direction after it is thrown. This result means that the horizontal velocity is constant, and affected neither by vertical motion nor by gravity (which is vertical). Note that this case is true only for ideal conditions. In the real world, air resistance will affect the speed of the balls in both directions. The two-dimensional curved path of the horizontally thrown ball is composed of two independent one-dimensional motions (horizontal and vertical). The key to analyzing such motion, called projectile motion, is to resolve (break) it into motions along perpendicular directions. Resolving two-dimensional motion into perpendicular components is possible because the components are independent. We shall see how to resolve vectors in Vector Addition and Subtraction: Graphical Methods and Vector Addition and Subtraction: Analytical Methods. We will find such techniques to be useful in many areas of physics. ### Test Prep for AP Courses ### Summary 1. The shortest path between any two points is a straight line. In two dimensions, this path can be represented by a vector with horizontal and vertical components. 2. The horizontal and vertical components of a vector are independent of one another. Motion in the horizontal direction does not affect motion in the vertical direction, and vice versa.
# Two-Dimensional Kinematics ## Vector Addition and Subtraction: Graphical Methods ### Learning Objectives By the end of this section, you will be able to: 1. Understand the rules of vector addition, subtraction, and multiplication. 2. Apply graphical methods of vector addition and subtraction to determine the displacement of moving objects. ### Vectors in Two Dimensions A vector is a quantity that has magnitude and direction. Displacement, velocity, acceleration, and force, for example, are all vectors. In one-dimensional, or straight-line, motion, the direction of a vector can be given simply by a plus or minus sign. In two dimensions (2-d), however, we specify the direction of a vector relative to some reference frame (i.e., coordinate system), using an arrow having length proportional to the vector’s magnitude and pointing in the direction of the vector. shows such a graphical representation of a vector, using as an example the total displacement for the person walking in a city considered in Kinematics in Two Dimensions: An Introduction. We shall use the notation that a boldface symbol, such as , stands for a vector. Its magnitude is represented by the symbol in italics, , and its direction by . ### Vector Addition: Head-to-Tail Method The head-to-tail method is a graphical way to add vectors, described in below and in the steps following. The tail of the vector is the starting point of the vector, and the head (or tip) of a vector is the final, pointed end of the arrow. Draw an arrow to represent the first vector (9 blocks to the east) using a ruler and protractor. Now draw an arrow to represent the second vector (5 blocks to the north). Place the tail of the second vector at the head of the first vector. If there are more than two vectors, continue this process for each vector to be added. Note that in our example, we have only two vectors, so we have finished placing arrows tip to tail. Draw an arrow from the tail of the first vector to the head of the last vector. This is the resultant, or the sum, of the other vectors. To get the magnitude of the resultant, measure its length with a ruler. (Note that in most calculations, we will use the Pythagorean theorem to determine this length.) To get the direction of the resultant, measure the angle it makes with the reference frame using a protractor. (Note that in most calculations, we will use trigonometric relationships to determine this angle.) The graphical addition of vectors is limited in accuracy only by the precision with which the drawings can be made and the precision of the measuring tools. It is valid for any number of vectors. ### Vector Subtraction Vector subtraction is a straightforward extension of vector addition. To define subtraction (say we want to subtract from , written , we must first define what we mean by subtraction. The negative of a vector is defined to be ; that is, graphically the negative of any vector has the same magnitude but the opposite direction, as shown in . In other words, has the same length as , but points in the opposite direction. Essentially, we just flip the vector so it points in the opposite direction. The subtraction of vector from vector is then simply defined to be the addition of to . Note that vector subtraction is the addition of a negative vector. The order of subtraction does not affect the results. This is analogous to the subtraction of scalars (where, for example, ). Again, the result is independent of the order in which the subtraction is made. 
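Although this module is about graphical methods, the two rules just stated — the order of addition does not matter, and subtracting a vector is the same as adding its negative — are easy to verify numerically. Here is a minimal Python sketch that represents each vector by its (east, north) components purely as a bookkeeping device:

```python
def add(v, w):
    """Add two vectors given as (east, north) component pairs."""
    return (v[0] + w[0], v[1] + w[1])

def negate(v):
    """The negative of a vector: same magnitude, opposite direction."""
    return (-v[0], -v[1])

A = (9.0, 0.0)   # 9 blocks east
B = (0.0, 5.0)   # 5 blocks north

print(add(A, B))           # (9.0, 5.0)
print(add(B, A))           # (9.0, 5.0) -- addition is commutative
print(add(A, negate(B)))   # (9.0, -5.0) -- A minus B is A plus the negative of B
```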
When vectors are subtracted graphically, the techniques outlined above are used, as the following example illustrates. ### Multiplication of Vectors and Scalars If we decided to walk three times as far on the first leg of the trip considered in the preceding example, then we would walk , or 82.5 m, in a direction north of east. This is an example of multiplying a vector by a positive scalar. Notice that the magnitude changes, but the direction stays the same. If the scalar is negative, then multiplying a vector by it changes the vector’s magnitude and gives the new vector the opposite direction. For example, if you multiply by –2, the magnitude doubles but the direction changes. We can summarize these rules in the following way: When vector is multiplied by a scalar , 1. the magnitude of the vector becomes the absolute value of , 2. if is positive, the direction of the vector does not change, 3. if is negative, the direction is reversed. In our case, and . Vectors are multiplied by scalars in many situations. Note that division is the inverse of multiplication. For example, dividing by 2 is the same as multiplying by the value (1/2). The rules for multiplication of vectors by scalars are the same for division; simply treat the divisor as a scalar between 0 and 1. ### Resolving a Vector into Components In the examples above, we have been adding vectors to determine the resultant vector. In many cases, however, we will need to do the opposite. We will need to take a single vector and find what other vectors added together produce it. In most cases, this involves determining the perpendicular components of a single vector, for example the x- and y-components, or the north-south and east-west components. For example, we may know that the total displacement of a person walking in a city is 10.3 blocks in a direction north of east and want to find out how many blocks east and north had to be walked. This method is called finding the components (or parts) of the displacement in the east and north directions, and it is the inverse of the process followed to find the total displacement. It is one example of finding the components of a vector. There are many applications in physics where this is a useful thing to do. We will see this soon in Projectile Motion, and much more when we cover forces in Dynamics: Newton’s Laws of Motion. Most of these involve finding components along perpendicular axes (such as north and east), so that right triangles are involved. The analytical techniques presented in Vector Addition and Subtraction: Analytical Methods are ideal for finding vector components. ### Test Prep for AP Courses ### Summary 1. The graphical method of adding vectors and involves drawing vectors on a graph and adding them using the head-to-tail method. The resultant vector is defined such that . The magnitude and direction of are then determined with a ruler and protractor, respectively. 2. The graphical method of subtracting vector from involves adding the opposite of vector , which is defined as . In this case, . Then, the head-to-tail method of addition is followed in the usual way to obtain the resultant vector . 3. Addition of vectors is commutative such that . 4. The head-to-tail method of adding vectors involves drawing the first vector on a graph and then placing the tail of each subsequent vector at the head of the previous vector. The resultant vector is then drawn from the tail of the first vector to the head of the final vector. 5. 
If a vector $\mathbf{A}$ is multiplied by a scalar quantity $c$, the magnitude of the product $c\mathbf{A}$ is given by $|c|A$. If $c$ is positive, the direction of the product points in the same direction as $\mathbf{A}$; if $c$ is negative, the direction of the product is opposite to that of $\mathbf{A}$. ### Conceptual Questions ### Problems & Exercises Use graphical methods to solve these problems. You may assume data taken from graphs is accurate to three digits.
# Two-Dimensional Kinematics ## Vector Addition and Subtraction: Analytical Methods ### Learning Objectives By the end of this section, you will be able to: 1. Understand the rules of vector addition and subtraction using analytical methods. 2. Apply analytical methods to determine vertical and horizontal component vectors. 3. Apply analytical methods to determine the magnitude and direction of a resultant vector. Analytical methods of vector addition and subtraction employ geometry and simple trigonometry rather than the ruler and protractor of graphical methods. Part of the graphical technique is retained, because vectors are still represented by arrows for easy visualization. However, analytical methods are more concise, accurate, and precise than graphical methods, which are limited by the accuracy with which a drawing can be made. Analytical methods are limited only by the accuracy and precision with which physical quantities are known. ### Resolving a Vector into Perpendicular Components Analytical techniques and right triangles go hand-in-hand in physics because (among other things) motions along perpendicular directions are independent. We very often need to separate a vector into perpendicular components. For example, given a vector like in , we may wish to find which two perpendicular vectors, and , add to produce it. and are defined to be the components of along the x- and y-axes. The three vectors , , and form a right triangle: Note that this relationship between vector components and the resultant vector holds only for vector quantities (which include both magnitude and direction). The relationship does not apply for the magnitudes alone. For example, if east, north, and north-east, then it is true that the vectors . However, it is not true that the sum of the magnitudes of the vectors is also equal. That is, Thus, If the vector is known, then its magnitude (its length) and its angle (its direction) are known. To find and , its x- and y-components, we use the following relationships for a right triangle. and Suppose, for example, that is the vector representing the total displacement of the person walking in a city considered in Kinematics in Two Dimensions: An Introduction and Vector Addition and Subtraction: Graphical Methods. Then blocks and , so that ### Calculating a Resultant Vector If the perpendicular components and of a vector are known, then can also be found analytically. To find the magnitude and direction of a vector from its perpendicular components and , relative to the x-axis, we use the following relationships: Note that the equation is just the Pythagorean theorem relating the legs of a right triangle to the length of the hypotenuse. For example, if and are 9 and 5 blocks, respectively, then blocks, again consistent with the example of the person walking in a city. Finally, the direction is , as before. ### Adding Vectors Using Analytical Methods To see how to add vectors using perpendicular components, consider , in which the vectors and are added to produce the resultant . If and represent two legs of a walk (two displacements), then is the total displacement. The person taking the walk ends up at the tip of There are many ways to arrive at the same point. In particular, the person could have walked first in the x-direction and then in the y-direction. Those paths are the x- and y-components of the resultant, and . If we know and , we can find and using the equations and . 
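The relationships just given translate directly into a few lines of Python. The sketch below resolves the 10.3-block city-walk displacement into its components and then reassembles the magnitude and direction from those components (a minimal illustration; the function and variable names are our own):

```python
import math

def components(A, theta_deg):
    """Resolve a vector of magnitude A at angle theta (measured from the x-axis)
    into perpendicular components: Ax = A cos(theta), Ay = A sin(theta)."""
    theta = math.radians(theta_deg)
    return A * math.cos(theta), A * math.sin(theta)

def magnitude_and_direction(Ax, Ay):
    """Rebuild the vector from its components: A = sqrt(Ax^2 + Ay^2), theta = atan2(Ay, Ax)."""
    return math.hypot(Ax, Ay), math.degrees(math.atan2(Ay, Ax))

# The person walking in the city: about 10.3 blocks at about 29.1 degrees north of east.
Ax, Ay = components(10.3, 29.1)
print(f"Ax = {Ax:.1f} blocks east, Ay = {Ay:.1f} blocks north")    # about 9.0 and 5.0

A, theta = magnitude_and_direction(9.0, 5.0)
print(f"A = {A:.1f} blocks at {theta:.1f} degrees north of east")  # about 10.3 at 29.1
```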
When you use the analytical method of vector addition, you can determine the components or the magnitude and direction of a vector. Use the equations and to find the components. In , these components are , , , and . The angles that vectors and make with the x-axis are and , respectively. That is, as shown in , and Components along the same axis, say the x-axis, are vectors along the same line and, thus, can be added to one another like ordinary numbers. The same is true for components along the y-axis. (For example, a 9-block eastward walk could be taken in two legs, the first 3 blocks east and the second 6 blocks east, for a total of 9, because they are along the same direction.) So resolving vectors into components along common axes makes it easier to add them. Now that the components of are known, its magnitude and direction can be found. The following example illustrates this technique for adding vectors using perpendicular components. Analyzing vectors using perpendicular components is very useful in many areas of physics, because perpendicular quantities are often independent of one another. The next module, Projectile Motion, is one of many in which using perpendicular components helps make the picture clear and simplifies the physics. ### Summary 1. The analytical method of vector addition and subtraction involves using the Pythagorean theorem and trigonometric identities to determine the magnitude and direction of a resultant vector. 2. The steps to add vectors and using the analytical method are as follows: Step 1: Determine the coordinate system for the vectors. Then, determine the horizontal and vertical components of each vector using the equations and Step 2: Add the horizontal and vertical components of each vector to determine the components and Step 3: Use the Pythagorean theorem to determine the magnitude, Step 4: Use a trigonometric identity to determine the direction, ### Conceptual Questions ### Problems & Exercises
# Two-Dimensional Kinematics ## Projectile Motion ### Learning Objectives By the end of this section, you will be able to: 1. Identify and explain the properties of a projectile, such as acceleration due to gravity, range, maximum height, and trajectory. 2. Determine the location and velocity of a projectile at different points in its trajectory. 3. Apply the principle of independence of motion to solve projectile motion problems. Projectile motion is the motion of an object thrown or projected into the air, subject to only the acceleration of gravity. The object is called a projectile, and its path is called its trajectory. The motion of falling objects, as covered in Problem-Solving Basics for One-Dimensional Kinematics, is a simple one-dimensional type of projectile motion in which there is no horizontal movement. In this section, we consider two-dimensional projectile motion, such as that of a football or other object for which air resistance is negligible. The most important fact to remember here is that motions along perpendicular axes are independent and thus can be analyzed separately. This fact was discussed in Kinematics in Two Dimensions: An Introduction, where vertical and horizontal motions were seen to be independent. The key to analyzing two-dimensional projectile motion is to break it into two motions, one along the horizontal axis and the other along the vertical. (This choice of axes is the most sensible, because acceleration due to gravity is vertical—thus, there will be no acceleration along the horizontal axis when air resistance is negligible.) As is customary, we call the horizontal axis the x-axis and the vertical axis the y-axis. illustrates the notation for displacement, where is defined to be the total displacement and and are its components along the horizontal and vertical axes, respectively. The magnitudes of these vectors are s, x, and y. (Note that in the last section we used the notation to represent a vector with components and . If we continued this format, we would call displacement with components and . However, to simplify the notation, we will simply represent the component vectors as and .) Of course, to describe motion we must deal with velocity and acceleration, as well as with displacement. We must find their components along the x- and y-axes, too. We will assume all forces except gravity (such as air resistance and friction, for example) are negligible. The components of acceleration are then very simple: . (Note that this definition assumes that the upwards direction is defined as the positive direction. If you arrange the coordinate system instead such that the downwards direction is positive, then acceleration due to gravity takes a positive value.) Because gravity is vertical, . Both accelerations are constant, so the kinematic equations can be used. Given these assumptions, the following steps are then used to analyze projectile motion: Resolve or break the motion into horizontal and vertical components along the x- and y-axes. These axes are perpendicular, so and are used. The magnitude of the components of displacement along these axes are and The magnitudes of the components of the velocity are and where is the magnitude of the velocity and is its direction, as shown in . Initial values are denoted with a subscript 0, as usual. Treat the motion as two independent one-dimensional motions, one horizontal and the other vertical. 
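As a preview of how this works out in practice, here is a minimal Python sketch in which the horizontal and vertical motions are handled completely separately and share only the time $t$. The 20 m/s launch speed and 40° launch angle are assumed values, not taken from a worked example; the kinematic equations themselves are listed next.

```python
import math

g = 9.80                      # magnitude of the acceleration due to gravity, m/s^2
v0 = 20.0                     # assumed launch speed, m/s
theta0 = math.radians(40.0)   # assumed launch angle above the horizontal

# Resolve the initial velocity into independent components.
v0x = v0 * math.cos(theta0)
v0y = v0 * math.sin(theta0)

def x_of_t(t):
    return v0x * t                    # horizontal motion: no acceleration

def y_of_t(t):
    return v0y * t - 0.5 * g * t**2   # vertical motion: free fall (up is positive)

t_flight = 2.0 * v0y / g              # time to return to the launch height
print(f"time of flight = {t_flight:.2f} s")
print(f"range          = {x_of_t(t_flight):.1f} m")
print(f"max height     = {y_of_t(t_flight / 2.0):.1f} m")
```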
The kinematic equations for horizontal and vertical motion take the following forms: Solve for the unknowns in the two separate motions—one horizontal and one vertical. Note that the only common variable between the motions is time . The problem solving procedures here are the same as for one-dimensional kinematics and are illustrated in the solved examples below. Recombine the two motions to find the total displacement and velocity . Because the x - and y -motions are perpendicular, we determine these vectors by using the techniques outlined in the Vector Addition and Subtraction: Analytical Methods and employing and in the following form, where is the direction of the displacement and is the direction of the velocity : Total displacement and velocity In solving part (a) of the preceding example, the expression we found for is valid for any projectile motion where air resistance is negligible. Call the maximum height ; then, This equation defines the maximum height of a projectile and depends only on the vertical component of the initial velocity. One of the most important things illustrated by projectile motion is that vertical and horizontal motions are independent of each other. Galileo was the first person to fully comprehend this characteristic. He used it to predict the range of a projectile. On level ground, we define range to be the horizontal distance traveled by a projectile. Galileo and many others were interested in the range of projectiles primarily for military purposes—such as aiming cannons. However, investigating the range of projectiles can shed light on other interesting phenomena, such as the orbits of satellites around the Earth. Let us consider projectile range further. How does the initial velocity of a projectile affect its range? Obviously, the greater the initial speed , the greater the range, as shown in (a). The initial angle also has a dramatic effect on the range, as illustrated in (b). For a fixed initial speed, such as might be produced by a cannon, the maximum range is obtained with . This is true only for conditions neglecting air resistance. If air resistance is considered, the maximum angle is approximately . Interestingly, for every initial angle except , there are two angles that give the same range—the sum of those angles is . The range also depends on the value of the acceleration of gravity . The lunar astronaut Alan Shepherd was able to drive a golf ball a great distance on the Moon because gravity is weaker there. The range of a projectile on level ground for which air resistance is negligible is given by where is the initial speed and is the initial angle relative to the horizontal. The proof of this equation is left as an end-of-chapter problem (hints are given), but it does fit the major features of projectile range as described. When we speak of the range of a projectile on level ground, we assume that is very small compared with the circumference of the Earth. If, however, the range is large, the Earth curves away below the projectile and acceleration of gravity changes direction along the path. The range is larger than predicted by the range equation given above because the projectile has farther to fall than it would on level ground. (See .) If the initial speed is great enough, the projectile goes into orbit. This possibility was recognized centuries before it could be accomplished. When an object is in orbit, the Earth curves away from underneath the object at the same rate as it falls. 
The object thus falls continuously but never hits the surface. These and other aspects of orbital motion, such as the rotation of the Earth, will be covered analytically and in greater depth later in this text. Once again we see that thinking about one topic, such as the range of a projectile, can lead us to others, such as Earth orbits. In Addition of Velocities, we will examine the addition of velocities, which is another important aspect of two-dimensional kinematics and will also yield insights beyond the immediate topic. ### Test Prep for AP Courses ### Summary 1. Projectile motion is the motion of an object through the air that is subject only to the acceleration of gravity. 2. To solve projectile motion problems, perform the following steps: 3. The maximum height $h$ of a projectile launched with initial vertical velocity $v_{0y}$ is given by $h = \frac{v_{0y}^2}{2g}$. 4. The maximum horizontal distance traveled by a projectile is called the range. The range $R$ of a projectile on level ground launched at an angle $\theta_0$ above the horizontal with initial speed $v_0$ is given by $R = \frac{v_0^2 \sin 2\theta_0}{g}$. ### Conceptual Questions ### Problems & Exercises
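As a numerical companion to the maximum-height and range results above, the short Python sketch below (not part of the original text; the 30 m/s speed and the list of angles are arbitrary example values) evaluates $h = v_{0y}^2/(2g)$ and $R = v_0^2 \sin 2\theta_0 / g$ and shows that launch angles summing to 90° give the same range.

```python
import math

G = 9.80  # acceleration due to gravity, m/s^2

def max_height(v0, angle_deg, g=G):
    """Maximum height above the launch point: h = v_0y^2 / (2 g)."""
    v0y = v0 * math.sin(math.radians(angle_deg))
    return v0y**2 / (2 * g)

def level_ground_range(v0, angle_deg, g=G):
    """Range on level ground, neglecting air resistance: R = v_0^2 sin(2 theta_0) / g."""
    return v0**2 * math.sin(2 * math.radians(angle_deg)) / g

v0 = 30.0  # m/s, an arbitrary example speed
for angle in (15, 30, 45, 60, 75):
    print(f"{angle:2d} deg: R = {level_ground_range(v0, angle):6.1f} m, "
          f"h = {max_height(v0, angle):5.1f} m")
# The 15/75-degree and 30/60-degree pairs (summing to 90 degrees) give equal ranges,
# and 45 degrees gives the maximum range.
```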
# Two-Dimensional Kinematics ## Addition of Velocities ### Learning Objectives By the end of this section, you will be able to: 1. Apply principles of vector addition to determine relative velocity. 2. Explain the significance of the observer in the measurement of velocity. ### Relative Velocity If a person rows a boat across a rapidly flowing river and tries to head directly for the other shore, the boat instead moves diagonally relative to the shore, as in . The boat does not move in the direction in which it is pointed. The reason, of course, is that the river carries the boat downstream. Similarly, if a small airplane flies overhead in a strong crosswind, you can sometimes see that the plane is not moving in the direction in which it is pointed, as illustrated in . The plane is moving straight ahead relative to the air, but the movement of the air mass relative to the ground carries it sideways. In each of these situations, an object has a velocity relative to a medium (such as a river) and that medium has a velocity relative to an observer on solid ground. The velocity of the object relative to the observer is the sum of these velocity vectors, as indicated in and . These situations are only two of many in which it is useful to add velocities. In this module, we first re-examine how to add velocities and then consider certain aspects of what relative velocity means. How do we add velocities? Velocity is a vector (it has both magnitude and direction); the rules of vector addition discussed in Vector Addition and Subtraction: Graphical Methods and Vector Addition and Subtraction: Analytical Methods apply to the addition of velocities, just as they do for any other vectors. In one-dimensional motion, the addition of velocities is simple—they add like ordinary numbers. For example, if a field hockey player is moving at straight toward the goal and drives the ball in the same direction with a velocity of relative to her body, then the velocity of the ball is relative to the stationary, profusely sweating goalkeeper standing in front of the goal. In two-dimensional motion, either graphical or analytical techniques can be used to add velocities. We will concentrate on analytical techniques. The following equations give the relationships between the magnitude and direction of velocity ( and ) and its components ( and ) along the x- and y-axes of an appropriately chosen coordinate system: These equations are valid for any vectors and are adapted specifically for velocity. The first two equations are used to find the components of a velocity when its magnitude and direction are known. The last two are used to find the magnitude and direction of velocity when its components are known. Note that in both of the last two examples, we were able to make the mathematics easier by choosing a coordinate system with one axis parallel to one of the velocities. We will repeatedly find that choosing an appropriate coordinate system makes problem solving easier. For example, in projectile motion we always use a coordinate system with one axis parallel to gravity. ### Relative Velocities and Classical Relativity When adding velocities, we have been careful to specify that the velocity is relative to some reference frame. These velocities are called relative velocities. For example, the velocity of an airplane relative to an air mass is different from its velocity relative to the ground. Both are quite different from the velocity of an airplane relative to its passengers (which should be close to zero). 
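The component relations just described can be applied directly to relative velocity. The following Python sketch is illustrative only (the boat and current speeds are invented numbers, not taken from the text): it adds a boat's velocity relative to the water to the water's velocity relative to the shore, then reports the magnitude and direction of the resultant.

```python
import math

def add_velocities(vx1, vy1, vx2, vy2):
    """Add two velocities component-wise and return the resultant
    magnitude and direction (degrees counterclockwise from the +x axis)."""
    vx, vy = vx1 + vx2, vy1 + vy2
    magnitude = math.hypot(vx, vy)
    direction = math.degrees(math.atan2(vy, vx))
    return magnitude, direction

# Hypothetical example: a boat heads straight across a river at 0.75 m/s (+y)
# while the current carries it downstream at 1.20 m/s (+x).
v_total, angle = add_velocities(1.20, 0.0, 0.0, 0.75)
print(f"speed relative to shore = {v_total:.2f} m/s at {angle:.1f} deg from downstream")
```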
Relative velocities are one aspect of relativity, which is defined to be the study of how different observers moving relative to each other measure the same phenomenon. Nearly everyone has heard of relativity and immediately associates it with Albert Einstein (1879–1955), the greatest physicist of the 20th century. Einstein revolutionized our view of nature with his modern theory of relativity, which we shall study in later chapters. The relative velocities in this section are actually aspects of classical relativity, first discussed correctly by Galileo and Isaac Newton. Classical relativity is limited to situations where speeds are less than about 1% of the speed of light—that is, less than about 3,000 km/s. Most things we encounter in daily life move slower than this speed. Let us consider an example of what two different observers see in a situation analyzed long ago by Galileo. Suppose a sailor at the top of a mast on a moving ship drops their binoculars. Where will they hit the deck? Will they land at the base of the mast, or will they land behind the mast because the ship is moving forward? The answer is that if air resistance is negligible, the binoculars will hit at the base of the mast at a point directly below their point of release. Now let us consider what two different observers see when the binoculars drop. One observer is on the ship and the other on shore. The binoculars have no horizontal velocity relative to the observer on the ship, and so that observer sees them fall straight down the mast. (See .) To the observer on shore, the binoculars and the ship have the same horizontal velocity, so both move the same distance forward while the binoculars are falling. This observer sees the curved path shown in . Although the paths look different to the different observers, each sees the same result—the binoculars hit at the base of the mast and not behind it. To get the correct description, it is crucial to correctly specify the velocities relative to the observer. ### Summary 1. Velocities in two dimensions are added using the same analytical vector techniques, which are rewritten as $v_x = v\cos\theta$, $v_y = v\sin\theta$, $v = \sqrt{v_x^2 + v_y^2}$, and $\theta = \tan^{-1}(v_y/v_x)$. 2. Relative velocity is the velocity of an object as observed from a particular reference frame, and it varies dramatically with reference frame. 3. Relativity is the study of how different observers measure the same phenomenon, particularly when the observers move relative to one another. Classical relativity is limited to situations where speed is less than about 1% of the speed of light (3,000 km/s). ### Conceptual Questions ### Problems & Exercises
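To make the two-observer description concrete, here is a minimal numerical sketch of the dropped-binoculars scenario. It is not from the text, and the ship speed and drop height are assumed values chosen only for illustration; the point is that both frames agree the binoculars land at the base of the mast.

```python
# Sketch of the dropped-binoculars thought experiment: the same fall described
# in the ship frame and in the shore frame.
g = 9.80            # m/s^2
v_ship = 5.0        # assumed ship speed relative to shore, m/s
drop_height = 10.0  # assumed height of release above the deck, m

t = 0.0
dt = 0.05
while 0.5 * g * t**2 < drop_height:  # step forward until the fall is complete
    t += dt

# Ship frame: the binoculars have no horizontal velocity, so they fall straight down.
x_ship_frame = 0.0
# Shore frame: binoculars and mast share the ship's horizontal velocity.
x_binoculars_shore = v_ship * t
x_mast_shore = v_ship * t

print(f"fall time ~ {t:.2f} s")
print(f"horizontal offset from the mast: ship frame {x_ship_frame:.2f} m, "
      f"shore frame {x_binoculars_shore - x_mast_shore:.2f} m")
```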
# Dynamics: Force and Newton's Laws of Motion ## Connection for AP® Courses Motion draws our attention. Motion itself can be beautiful, causing us to marvel at the forces needed to achieve spectacular motion, such as that of a jumping dolphin, a leaping pole vaulter, a bird in flight, or an orbiting satellite. The study of motion is kinematics, but kinematics only describes the way objects move—their velocity and their acceleration. Dynamics considers the forces that affect the motion of moving objects and systems. Newton’s laws of motion are the foundation of dynamics. These laws provide an example of the breadth and simplicity of principles under which nature functions. They are also universal laws in that they apply to situations on Earth as well as in space. Isaac Newton’s (1642–1727) laws of motion were just one part of the monumental work that has made him legendary. The development of Newton’s laws marks the transition from the Renaissance into the modern era. This transition was characterized by a revolutionary change in the way people thought about the physical universe. For many centuries natural philosophers had debated the nature of the universe based largely on certain rules of logic, with great weight given to the thoughts of earlier classical philosophers such as Aristotle (384–322 BC). Among the many great thinkers who contributed to this change were Newton and Galileo Galilei (1564–1642). Galileo was instrumental in establishing observation as the absolute determinant of truth, rather than “logical” argument. Galileo’s use of the telescope was his most notable achievement in demonstrating the importance of observation. He discovered moons orbiting Jupiter and made other observations that were inconsistent with certain ancient ideas and religious dogma. For this reason, and because of the manner in which he dealt with those in authority, Galileo was tried by the Inquisition and punished. He spent the final years of his life under a form of house arrest. Because others before Galileo had also made discoveries by observing the nature of the universe and because repeated observations verified those of Galileo, his work could not be suppressed or denied. After his death, his work was verified by others, and his ideas were eventually accepted by the church and scientific communities. Galileo also contributed to the formulation of what is now called Newton’s first law of motion. Newton made use of the work of his predecessors, which enabled him to develop laws of motion, discover the law of gravity, invent calculus, and make great contributions to the theories of light and color. It is amazing that many of these developments were made by Newton working alone, without the benefit of the usual interactions that take place among scientists today. Newton’s laws are introduced along with Big Idea 3, that interactions can be described by forces. These laws provide a theoretical basis for studying motion depending on interactions between the objects. In particular, Newton's laws are applicable to all forces in inertial frames of reference (Enduring Understanding 3.A). We will find that all forces are vectors; that is, forces always have both a magnitude and a direction (Essential Knowledge 3.A.2). Furthermore, we will learn that all forces are a result of interactions between two or more objects (Essential Knowledge 3.A.3). 
These interactions between any two objects are described by Newton's third law, stating that the forces exerted on these objects are equal in magnitude and opposite in direction to each other (Essential Knowledge 3.A.4). We will discover that there is an empirical cause-effect relationship between the net force exerted on an object of mass m and its acceleration, with this relationship described by Newton's second law (Enduring Understanding 3.B). This supports Big Idea 1, that inertial mass is a property of an object or a system. The mass of an object or a system is one of the factors affecting changes in motion when an object or a system interacts with other objects or systems (Essential Knowledge 1.C.1). Another is the net force on an object, which is the vector sum of all the forces exerted on the object (Essential Knowledge 3.B.1). To analyze this, we use free-body diagrams to visualize the forces exerted on a given object in order to find the net force and analyze the object's motion (Essential Knowledge 3.B.2). Thinking of these objects as systems is a concept introduced in this chapter, where a system is a collection of elements that could be considered as a single object without any internal structure (Essential Knowledge 5.A.1). This will support Big Idea 5, that changes that occur to the system due to interactions are governed by conservation laws. These conservation laws will be the focus of later chapters in this book. They explain whether quantities are conserved in the given system or change due to transfer to or from another system due to interactions between the systems (Enduring Understanding 5.A). Furthermore, when a situation involves more than one object, it is important to define the system and analyze the motion of a whole system, not its elements, based on analysis of external forces on the system. This supports Big Idea 4, that interactions between systems cause changes in those systems. All kinematics variables in this case describe the motion of the center of mass of the system (Essential Knowledge 4.A.1, Essential Knowledge 4.A.2). The internal forces between the elements of the system do not affect the velocity of the center of mass (Essential Knowledge 4.A.3). The velocity of the center of mass will change only if there is a net external force exerted on the system (Enduring Understanding 4.A). We will learn that some of these interactions can be explained by the existence of fields extending through space, supporting Big Idea 2. For example, any object that has mass creates a gravitational field in space (Enduring Understanding 2.B). Any material object (one that has mass) placed in the gravitational field will experience gravitational force (Essential Knowledge 2.B.1). Forces may be categorized as contact or long-distance (Enduring Understanding 3.C). In this chapter we will work with both. An example of a long-distance force is gravitation (Essential Knowledge 3.C.1). Contact forces, such as tension, friction, normal force, and the force of a spring, result from interatomic electric forces at the microscopic level (Essential Knowledge 3.C.4). It was not until the advent of modern physics early in the twentieth century that it was discovered that Newton’s laws of motion produce a good approximation to motion only when the objects are moving at speeds much, much less than the speed of light and when those objects are larger than the size of most molecules (about $10^{-9}$ m in diameter). 
These constraints define the realm of classical mechanics, as discussed in Introduction to the Nature of Science and Physics. At the beginning of the twentieth century, Albert Einstein (1879–1955) developed the theory of relativity and, along with many other scientists, quantum theory. Quantum theory does not have the constraints present in classical physics. All of the situations we consider in this chapter, and all those preceding the introduction of relativity in Special Relativity, are in the realm of classical physics. The development of special relativity and empirical observations at atomic scales led to the idea that there are four basic forces that account for all known phenomena. These forces are called fundamental (Enduring Understanding 3.G). The properties of gravitational (Essential Knowledge 3.G.1) and electromagnetic (Essential Knowledge 3.G.2) forces are explained in more detail. Big Idea 1 Objects and systems have properties such as mass and charge. Systems may have internal structure. Essential Knowledge 1.C.1 Inertial mass is the property of an object or a system that determines how its motion changes when it interacts with other objects or systems. Big Idea 2 Fields existing in space can be used to explain interactions. Enduring Understanding 2.A A field associates a value of some physical quantity with every point in space. Field models are useful for describing interactions that occur at a distance (long-range forces) as well as a variety of other physical phenomena. Essential Knowledge 2.A.1 A vector field gives, as a function of position (and perhaps time), the value of a physical quantity that is described by a vector. Essential Knowledge 2.A.2 A scalar field gives the value of a physical quantity. Enduring Understanding 2.B A gravitational field is caused by an object with mass. Essential Knowledge 2.B.1 A gravitational field g at the location of an object with mass m causes a gravitational force of magnitude mg to be exerted on the object in the direction of the field. Big Idea 3 The interactions of an object with other objects can be described by forces. Enduring Understanding 3.A All forces share certain common characteristics when considered by observers in inertial reference frames. Essential Knowledge 3.A.2 Forces are described by vectors. Essential Knowledge 3.A.3 A force exerted on an object is always due to the interaction of that object with another object. Essential Knowledge 3.A.4 If one object exerts a force on a second object, the second object always exerts a force of equal magnitude on the first object in the opposite direction. Enduring Understanding 3.B Classically, the acceleration of an object interacting with other objects can be predicted by using . Essential Knowledge 3.B.1 If an object of interest interacts with several other objects, the net force is the vector sum of the individual forces. Essential Knowledge 3.B.2 Free-body diagrams are useful tools for visualizing the forces being exerted on a single object and writing the equations that represent a physical situation. Enduring Understanding 3.C At the macroscopic level, forces can be categorized as either long-range (action-at-a-distance) forces or contact forces. Essential Knowledge 3.C.1 Gravitational force describes the interaction of one object that has mass with another object that has mass. Essential Knowledge 3.C.4 Contact forces result from the interaction of one object touching another object, and they arise from interatomic electric forces. 
These forces include tension, friction, normal, spring (Physics 1), and buoyant (Physics 2). Enduring Understanding 3.G Certain types of forces are considered fundamental. Essential Knowledge 3.G.1 Gravitational forces are exerted at all scales and dominate at the largest distance and mass scales. Essential Knowledge 3.G.2 Electromagnetic forces are exerted at all scales and can dominate at the human scale. Big Idea 4 Interactions between systems can result in changes in those systems. Enduring Understanding 4.A The acceleration of the center of mass of a system is related to the net force exerted on the system, where . Essential Knowledge 4.A.1 The linear motion of a system can be described by the displacement, velocity, and acceleration of its center of mass. Essential Knowledge 4.A.2 The acceleration is equal to the rate of change of velocity with time, and velocity is equal to the rate of change of position with time. Essential Knowledge 4.A.3 Forces that systems exert on each other are due to interactions between objects in the systems. If the interacting objects are parts of the same system, there will be no change in the center-of-mass velocity of that system. Big Idea 5 Changes that occur as a result of interactions are constrained by conservation laws. Enduring Understanding 5.A Certain quantities are conserved, in the sense that the changes of those quantities in a given system are always equal to the transfer of that quantity to or from the system by all possible interactions with other systems. Essential Knowledge 5.A.1 A system is an object or a collection of objects. The objects are treated as having no internal structure.
# Dynamics: Force and Newton's Laws of Motion ## Development of Force Concept ### Learning Objectives By the end of this section, you will be able to: 1. Understand the definition of force. Dynamics is the study of the forces that cause objects and systems to move. To understand this, we need a working definition of force. Our intuitive definition of force—that is, a push or a pull—is a good place to start. We know that a push or pull has both magnitude and direction (therefore, it is a vector quantity) and can vary considerably in each regard. For example, a cannon exerts a strong force on a cannonball that is launched into the air. In contrast, Earth exerts only a tiny downward pull on a flea. Our everyday experiences also give us a good idea of how multiple forces add. If two people push in different directions on a third person, as illustrated in , we might expect the total force to be in the direction shown. Since force is a vector, it adds just like other vectors, as illustrated in (a) for two ice skaters. Forces, like other vectors, are represented by arrows and can be added using the familiar head-to-tail method or by trigonometric methods. These ideas were developed in Two-Dimensional Kinematics. (b) is our first example of a free-body diagram, which is a technique used to illustrate all the external forces acting on a body. The body is represented by a single isolated point (or free body), and only those forces acting on the body from the outside (external forces) are shown. (These forces are the only ones shown, because only external forces acting on the body affect its motion. We can ignore any internal forces within the body.) Free-body diagrams are very useful in analyzing forces acting on a system and are employed extensively in the study and application of Newton’s laws of motion. A more quantitative definition of force can be based on some standard force, just as distance is measured in units relative to a standard distance. One possibility is to stretch a spring a certain fixed distance, as illustrated in , and use the force it exerts to pull itself back to its relaxed shape—called a restoring force—as a standard. The magnitude of all other forces can be stated as multiples of this standard unit of force. Many other possibilities exist for standard forces. (One that we will encounter in Magnetism is the magnetic force between two wires carrying electric current.) Some alternative definitions of force will be given later in this chapter. ### Test Prep for AP Courses ### Section Summary 1. Dynamics is the study of how forces affect the motion of objects. 2. Force is a push or pull that can be defined in terms of various standards, and it is a vector having both magnitude and direction. 3. External forces are any outside forces that act on a body. A free-body diagram is a drawing of all external forces acting on a body. ### Conceptual Questions
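Because forces add as vectors, the net force can be computed from components in the same way displacements and velocities were in Two-Dimensional Kinematics. The Python sketch below is illustrative only; the two force magnitudes and directions are invented numbers standing in for the two-skater example.

```python
import math

def net_force(forces):
    """Add a list of (magnitude, direction_deg) forces by components and
    return the net force magnitude and direction (degrees from the +x axis)."""
    fx = sum(f * math.cos(math.radians(a)) for f, a in forces)
    fy = sum(f * math.sin(math.radians(a)) for f, a in forces)
    return math.hypot(fx, fy), math.degrees(math.atan2(fy, fx))

# Hypothetical example: two skaters push on a third with 40 N along +x
# and 30 N at 90 degrees (along +y).
magnitude, direction = net_force([(40.0, 0.0), (30.0, 90.0)])
print(f"net force = {magnitude:.1f} N at {direction:.1f} deg")
```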
# Dynamics: Force and Newton's Laws of Motion ## Newton’s First Law of Motion: Inertia ### Learning Objectives By the end of this section, you will be able to: 1. Define mass and inertia. 2. Understand Newton's first law of motion. Experience suggests that an object at rest will remain at rest if left alone, and that an object in motion tends to slow down and stop unless some effort is made to keep it moving. What Newton’s first law of motion states, however, is the following: A body at rest remains at rest, or, if in motion, remains in motion at a constant velocity unless acted on by a net external force. Note the repeated use of the verb “remains.” We can think of this law as preserving the status quo of motion. Rather than contradicting our experience, Newton’s first law of motion states that there must be a cause (which is a net external force) for there to be any change in velocity (either a change in magnitude or direction). We will define net external force in the next section. An object sliding across a table or floor slows down due to the net force of friction acting on the object. If friction disappeared, would the object still slow down? The idea of cause and effect is crucial in accurately describing what happens in various situations. For example, consider what happens to an object sliding along a rough horizontal surface. The object quickly grinds to a halt. If we spray the surface with talcum powder to make the surface smoother, the object slides farther. If we make the surface even smoother by rubbing lubricating oil on it, the object slides farther yet. Extrapolating to a frictionless surface, we can imagine the object sliding in a straight line indefinitely. Friction is thus the cause of the slowing (consistent with Newton’s first law). The object would not slow down at all if friction were completely eliminated. Consider an air hockey table. When the air is turned off, the puck slides only a short distance before friction slows it to a stop. However, when the air is turned on, it creates a nearly frictionless surface, and the puck glides long distances without slowing down. Additionally, if we know enough about the friction, we can accurately predict how quickly the object will slow down. Friction is an external force. Newton’s first law is completely general and can be applied to anything from an object sliding on a table to a satellite in orbit to blood pumped from the heart. Experiments have thoroughly verified that any change in velocity (speed or direction) must be caused by an external force. The idea of generally applicable or universal laws is important not only here—it is a basic feature of all laws of physics. Identifying these laws is like recognizing patterns in nature from which further patterns can be discovered. The genius of Galileo, who first developed the idea for the first law, and Newton, who clarified it, was to ask the fundamental question, “What is the cause?” Thinking in terms of cause and effect is a worldview fundamentally different from the typical ancient Greek approach when questions such as “Why does a tiger have stripes?” would have been answered in Aristotelian fashion, “That is the nature of the beast.” True perhaps, but not a useful insight. ### Mass The property of a body to remain at rest or to remain in motion with constant velocity is called inertia. Newton’s first law is often called the law of inertia. As we know from experience, some objects have more inertia than others. It is obviously more difficult to change the motion of a large boulder than that of a basketball, for example. The inertia of an object is measured by its mass. 
Roughly speaking, mass is a measure of the amount of “stuff” (or matter) in something. The quantity or amount of matter in an object is determined by the numbers of atoms and molecules of various types it contains. Unlike weight, mass does not vary with location. The mass of an object is the same on Earth, in orbit, or on the surface of the Moon. In practice, it is very difficult to count and identify all of the atoms and molecules in an object, so masses are not often determined in this manner. Operationally, the masses of objects are determined by comparison with the standard kilogram. ### Section Summary 1. Newton’s first law of motion states that a body at rest remains at rest, or, if in motion, remains in motion at a constant velocity unless acted on by a net external force. This is also known as the law of inertia. 2. Inertia is the tendency of an object to remain at rest or remain in motion. Inertia is related to an object’s mass. 3. Mass is the quantity of matter in a substance. ### Conceptual Questions
# Dynamics: Force and Newton's Laws of Motion ## Newton’s Second Law of Motion: Concept of a System ### Learning Objectives By the end of this section, you will be able to: 1. Define net force, external force, and system. 2. Understand Newton’s second law of motion. 3. Apply Newton’s second law to determine the weight of an object. Newton’s second law of motion is closely related to Newton’s first law of motion. It mathematically states the cause and effect relationship between force and changes in motion. Newton’s second law of motion is more quantitative and is used extensively to calculate what happens in situations involving a force. Before we can write down Newton’s second law as a simple equation giving the exact relationship of force, mass, and acceleration, we need to sharpen some ideas that have already been mentioned. First, what do we mean by a change in motion? The answer is that a change in motion is equivalent to a change in velocity. A change in velocity means, by definition, that there is an acceleration. Newton’s first law says that a net external force causes a change in motion; thus, we see that a net external force causes acceleration. Another question immediately arises. What do we mean by an external force? An intuitive notion of external is correct—an external force acts from outside the system (object or collection of objects) of interest. For example, in (a) the system of interest is the wagon plus the child in it. The two forces exerted by the other children are external forces. An internal force acts between elements of the system. Again looking at (a), the force the child in the wagon exerts to hang onto the wagon is an internal force between elements of the system of interest. Only external forces affect the motion of a system, according to Newton’s first law. (The internal forces actually cancel, as we shall see in the next section.) You must define the boundaries of the system before you can determine which forces are external. Sometimes the system is obvious, whereas other times identifying the boundaries of a system is more subtle. The concept of a system is fundamental to many areas of physics, as is the correct application of Newton’s laws. This concept will be revisited many times on our journey through physics. Now, it seems reasonable that acceleration should be directly proportional to and in the same direction as the net (total) external force acting on a system. This assumption has been verified experimentally and is illustrated in . In part (a), a smaller force causes a smaller acceleration than the larger force illustrated in part (c). For completeness, the vertical forces are also shown; they are assumed to cancel since there is no acceleration in the vertical direction. The vertical forces are the weight $w$ and the support of the ground $N$, and the horizontal force $f$ represents the force of friction. These will be discussed in more detail in later sections. For now, we will define friction as a force that opposes the motion past each other of objects that are touching. (b) shows how vectors representing the external forces add together to produce a net force, $F_{\text{net}}$. To obtain an equation for Newton’s second law, we first write the relationship of acceleration and net external force as the proportionality $a \propto F_{\text{net}}$, where the symbol $\propto$ means “proportional to,” and $F_{\text{net}}$ is the net external force. (The net external force is the vector sum of all external forces and can be determined graphically, using the head-to-tail method, or analytically, using components. 
The techniques are the same as for the addition of other vectors, and are covered in Two-Dimensional Kinematics.) This proportionality states what we have said in words—acceleration is directly proportional to the net external force. Once the system of interest is chosen, it is important to identify the external forces and ignore the internal ones. It is a tremendous simplification not to have to consider the numerous internal forces acting between objects within the system, such as muscular forces within the child’s body, let alone the myriad of forces between atoms in the objects. By making this simplification, we can easily solve some very complex problems with only minimal error. Now, it also seems reasonable that acceleration should be inversely proportional to the mass of the system. In other words, the larger the mass (the inertia), the smaller the acceleration produced by a given force. And indeed, as illustrated in , the same net external force applied to a car produces a much smaller acceleration than when applied to a basketball. The proportionality is written as $a \propto \frac{1}{m}$, where $m$ is the mass of the system. Experiments have shown that acceleration is exactly inversely proportional to mass, just as it is exactly linearly proportional to the net external force. It has been found that the acceleration of an object depends only on the net external force and the mass of the object. Combining the two proportionalities just given yields Newton's second law of motion, $a = \frac{F_{\text{net}}}{m}$, which can also be written as $F_{\text{net}} = ma$. Although these last two equations are really the same, the first gives more insight into what Newton’s second law means. The law is a cause and effect relationship among three quantities that is not simply based on their definitions. The validity of the second law is completely based on experimental verification. ### Units of Force The equation $F_{\text{net}} = ma$ is used to define the units of force in terms of the three basic units for mass, length, and time. The SI unit of force is called the newton (abbreviated N) and is the force needed to accelerate a 1-kg system at the rate of $1\ \text{m/s}^2$. That is, since $F_{\text{net}} = ma$, $1\ \text{N} = 1\ \text{kg} \cdot \text{m/s}^2$. While almost the entire world uses the newton for the unit of force, in the United States the most familiar unit of force is the pound (lb), where 1 N = 0.225 lb. ### Weight and the Gravitational Force When an object is dropped, it accelerates toward the center of Earth. Newton’s second law states that a net force on an object is responsible for its acceleration. If air resistance is negligible, the net force on a falling object is the gravitational force, commonly called its weight $w$. Weight can be denoted as a vector because it has a direction; down is, by definition, the direction of gravity, and hence weight is a downward force. The magnitude of weight is denoted as $w$. Galileo was instrumental in showing that, in the absence of air resistance, all objects fall with the same acceleration $g$. Using Galileo’s result and Newton’s second law, we can derive an equation for weight. Consider an object with mass $m$ falling downward toward Earth. It experiences only the downward force of gravity, which has magnitude $w$. Newton’s second law states that the magnitude of the net external force on an object is $F_{\text{net}} = ma$. Since the object experiences only the downward force of gravity, $F_{\text{net}} = w$. We know that the acceleration of an object due to gravity is $g$, or $a = g$. Substituting these into Newton’s second law gives $w = mg$. When the net external force on an object is its weight, we say that it is in free-fall. That is, the only force acting on the object is the force of gravity. 
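The relationships just stated can be checked with a few lines of arithmetic. The following Python sketch is illustrative, not from the text; the 51 N force, 24 kg mass, and the lunar value of $g$ are example numbers (the lunar value is chosen to be consistent with the roughly 1.7 N per kilogram quoted below).

```python
G_EARTH = 9.80  # m/s^2
G_MOON = 1.67   # m/s^2, approximate lunar value (about 1.7 N per kilogram)

def acceleration(net_force, mass):
    """Newton's second law solved for acceleration: a = F_net / m."""
    return net_force / mass

def weight(mass, g=G_EARTH):
    """Weight (gravitational force) on a mass: w = m g."""
    return mass * g

print(f"a = {acceleration(51.0, 24.0):.2f} m/s^2 for F_net = 51 N on m = 24 kg")
print(f"1.0 kg weighs {weight(1.0):.1f} N on Earth and {weight(1.0, G_MOON):.1f} N on the Moon")
```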
In the real world, when objects fall downward toward Earth, they are never truly in free-fall because there is always some upward force from the air acting on the object. The acceleration due to gravity varies slightly over the surface of Earth, so that the weight of an object depends on location and is not an intrinsic property of the object. Weight varies dramatically if one leaves Earth’s surface. On the Moon, for example, the acceleration due to gravity is only . A 1.0-kg mass thus has a weight of 9.8 N on Earth and only about 1.7 N on the Moon. The broadest definition of weight in this sense is that the weight of an object is the gravitational force on it from the nearest large body, such as Earth, the Moon, the Sun, and so on. This is the most common and useful definition of weight in physics. It differs dramatically, however, from the definition of weight used by NASA and the popular media in relation to space travel and exploration. When they speak of “weightlessness” and “microgravity,” they are really referring to the phenomenon we call “free-fall” in physics. We shall use the above definition of weight, and we will make careful distinctions between free-fall and actual weightlessness. It is important to be aware that weight and mass are very different physical quantities, although they are closely related. Mass is the quantity of matter (how much “stuff”) and does not vary in classical physics, whereas weight is the gravitational force and does vary depending on gravity. It is tempting to equate the two, since most of our examples take place on Earth, where the weight of an object only varies a little with the location of the object. Furthermore, the terms mass and weight are used interchangeably in everyday language; for example, our medical records often show our “weight” in kilograms, but never in the correct units of newtons. ### Section Summary 1. Acceleration, , is defined as a change in velocity, meaning a change in its magnitude or direction, or both. 2. An external force is one acting on a system from outside the system, as opposed to internal forces, which act between components within the system. 3. Newton’s second law of motion states that the acceleration of a system is directly proportional to and in the same direction as the net external force acting on the system, and inversely proportional to its mass. 4. In equation form, Newton’s second law of motion is . 5. This is often written in the more familiar form: . 6. The weight of an object is defined as the force of gravity acting on an object of mass . The object experiences an acceleration due to gravity : 7. If the only force acting on an object is due to gravity, the object is in free fall. 8. Friction is a force that opposes the motion past each other of objects that are touching. ### Conceptual Questions ### Problem Exercises You may assume data taken from illustrations is accurate to three digits.
# Dynamics: Force and Newton's Laws of Motion ## Newton’s Third Law of Motion: Symmetry in Forces ### Learning Objectives By the end of this section, you will be able to: 1. Understand Newton's third law of motion. 2. Apply Newton's third law to define systems and solve problems of motion. Baseball relief pitcher Mariano Rivera was so highly regarded that during his retirement year, opposing teams conducted farewell presentations when he played at their stadiums. The Minnesota Twins offered a unique gift: A chair made of broken bats. Any pitch can break a bat, but with Rivera's signature pitch—known as a cutter—the ball and the bat frequently came together at a point that shattered the hardwood. Typically, we think of a baseball or softball hitter exerting a force on the incoming ball, and baseball analysts focus on the resulting "exit velocity" as a key statistic. But the force of the ball can do its own damage. This is exactly what happens whenever one body exerts a force on another—the first also experiences a force (equal in magnitude and opposite in direction). Numerous common experiences, such as stubbing a toe or pushing off the floor during a jump, confirm this. It is precisely stated in Newton’s third law of motion. This law represents a certain symmetry in nature: Forces always occur in pairs, and one body cannot exert a force on another without experiencing a force itself. We sometimes refer to this law loosely as “action-reaction,” where the force exerted is the action and the force experienced as a consequence is the reaction. Newton’s third law has practical uses in analyzing the origin of forces and understanding which forces are external to a system. We can readily see Newton’s third law at work by taking a look at how people move about. Consider a swimmer pushing off from the side of a pool, as illustrated in . She pushes against the pool wall with her feet and accelerates in the direction opposite to that of her push. The wall has exerted an equal and opposite force back on the swimmer. You might think that two equal and opposite forces would cancel, but they do not because they act on different systems. In this case, there are two systems that we could investigate: the swimmer or the wall. If we select the swimmer to be the system of interest, as in the figure, then is an external force on this system and affects its motion. The swimmer moves in the direction of . In contrast, the force acts on the wall and not on our system of interest. Thus does not directly affect the motion of the system and does not cancel . Note that the swimmer pushes in the direction opposite to that in which she wishes to move. The reaction to her push is thus in the desired direction. Other examples of Newton’s third law are easy to find. As a professor walks in front of a whiteboard, she exerts a force backward on the floor. The floor exerts a reaction force forward on the professor that causes her to accelerate forward. Similarly, a car accelerates because the ground pushes forward on the drive wheels in reaction to the drive wheels pushing backward on the ground. You can see evidence of the wheels pushing backward when tires spin on a gravel road and throw rocks backward. In another example, rockets move forward by expelling gas backward at high velocity. This means the rocket exerts a large backward force on the gas in the rocket combustion chamber, and the gas therefore exerts a large reaction force forward on the rocket. This reaction force is called thrust. 
It is a common misconception that rockets propel themselves by pushing on the ground or on the air behind them. They actually work better in a vacuum, where they can more readily expel the exhaust gases. Helicopters similarly create lift by pushing air down, thereby experiencing an upward reaction force. Birds and airplanes also fly by exerting force on air in a direction opposite to that of whatever force they need. For example, the wings of a bird force air downward and backward in order to get lift and move forward. An octopus propels itself in the water by ejecting water through a funnel from its body, similar to a jet ski. Boxers and other martial arts fighters experience reaction forces when they punch, sometimes breaking their hand by hitting an opponent’s body. ### Test Prep for AP Courses ### Section Summary 1. Newton’s third law of motion represents a basic symmetry in nature. It states: Whenever one body exerts a force on a second body, the first body experiences a force that is equal in magnitude and opposite in direction to the force that the first body exerts. 2. A thrust is a reaction force that pushes a body forward in response to a backward force. Rockets, airplanes, and cars are pushed forward by a thrust reaction force. ### Conceptual Questions ### Problem Exercises
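A quick calculation shows why we notice the swimmer's acceleration but not the wall's. This sketch is illustrative only; the 150 N push and the swimmer's mass are assumed values, and the wall is treated as rigidly attached to Earth so that the reaction force effectively acts on the entire Earth.

```python
# Equal and opposite third-law forces act on two different bodies and therefore
# produce very different accelerations, because the masses differ enormously.
force = 150.0                # N, assumed push between swimmer and pool wall
m_swimmer = 60.0             # kg, assumed
m_wall_plus_earth = 5.97e24  # kg, the wall plus the Earth it is anchored to

a_swimmer = force / m_swimmer            # Newton's second law applied to each body separately
a_earth = force / m_wall_plus_earth

print(f"swimmer: a = {a_swimmer:.2f} m/s^2")
print(f"wall + Earth: a = {a_earth:.2e} m/s^2 (far too small to notice)")
```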
# Dynamics: Force and Newton's Laws of Motion ## Normal, Tension, and Other Examples of Forces ### Learning Objectives By the end of this section, you will be able to: 1. Define normal and tension forces. 2. Apply Newton's laws of motion to solve problems involving a variety of forces. 3. Use trigonometric identities to resolve weight into components. Forces are given many names, such as push, pull, thrust, lift, weight, friction, and tension. Traditionally, forces have been grouped into several categories and given names relating to their source, how they are transmitted, or their effects. The most important of these categories are discussed in this section, together with some interesting applications. Further examples of forces are discussed later in this text. ### Normal Force Weight (also called force of gravity) is a pervasive force that acts at all times and must be counteracted to keep an object from falling. You definitely notice that you must support the weight of a heavy object by pushing up on it when you hold it stationary, as illustrated in (a). But how do inanimate objects like a table support the weight of a mass placed on them, such as shown in (b)? When the bag of dog food is placed on the table, the table actually sags slightly under the load. This would be noticeable if the load were placed on a card table, but even rigid objects deform when a force is applied to them. Unless the object is deformed beyond its limit, it will exert a restoring force much like a deformed spring (or trampoline or diving board). The greater the deformation, the greater the restoring force. So when the load is placed on the table, the table sags until the restoring force becomes as large as the weight of the load. At this point the net external force on the load is zero. That is the situation when the load is stationary on the table. The table sags quickly, and the sag is slight so we do not notice it. But it is similar to the sagging of a trampoline when you climb onto it. We must conclude that whatever supports a load, be it animate or not, must supply an upward force equal to the weight of the load, as we assumed in a few of the previous examples. If the force supporting a load is perpendicular to the surface of contact between the load and its support, this force is defined to be a normal force and here is given the symbol . (This is not the unit for force N.) The word normal means perpendicular to a surface. The normal force can be less than the object’s weight if the object is on an incline, as you will see in the next example. ### Tension A tension is a force along the length of a medium, especially a force carried by a flexible medium, such as a rope or cable. The word “tension” comes from a Latin word meaning “to stretch.” Not coincidentally, the flexible cords that carry muscle forces to other parts of the body are called tendons. Any flexible connector, such as a string, rope, chain, wire, or cable, can exert pulls only parallel to its length; thus, a force carried by a flexible connector is a tension with direction parallel to the connector. It is important to understand that tension is a pull in a connector. In contrast, consider the phrase: “You can’t push a rope.” The tension force pulls outward along the two ends of a rope. Consider a person holding a mass on a rope as shown in . Tension in the rope must equal the weight of the supported mass, as we can prove using Newton’s second law. If the 5.00-kg mass in the figure is stationary, then its acceleration is zero, and thus . 
The only external forces acting on the mass are its weight $w$ and the tension $T$ supplied by the rope. Thus, $F_{\text{net}} = T - w = 0$, where $T$ and $w$ are the magnitudes of the tension and weight and their signs indicate direction, with up being positive here. Thus, just as you would expect, the tension equals the weight of the supported mass: $T = w = mg$. For a 5.00-kg mass, then (neglecting the mass of the rope) we see that $T = (5.00\ \text{kg})(9.80\ \text{m/s}^2) = 49.0\ \text{N}$. If we cut the rope and insert a spring, the spring would extend a length corresponding to a force of 49.0 N, providing a direct observation and measure of the tension force in the rope. Flexible connectors are often used to transmit forces around corners, such as in a hospital traction system, a finger joint, or a bicycle brake cable. If there is no friction, the tension is transmitted undiminished. Only its direction changes, and it is always parallel to the flexible connector. This is illustrated in (a) and (b). If we wish to create a very large tension, all we have to do is exert a force perpendicular to a flexible connector, as illustrated in . As we saw in the last example, the weight of the tightrope walker acted as a force perpendicular to the rope. We saw that the tension in the rope related to the weight of the tightrope walker in the following way: $T = \frac{w}{2\sin\theta}$. We can extend this expression to describe the tension $T$ created when a perpendicular force ($F_\perp$) is exerted at the middle of a flexible connector: $T = \frac{F_\perp}{2\sin\theta}$. Note that $\theta$ is the angle between the horizontal and the bent connector. In this case, $T$ becomes very large as $\theta$ approaches zero. Even the relatively small weight of any flexible connector will cause it to sag, since an infinite tension would result if it were horizontal (i.e., $\theta = 0$ and $\sin\theta = 0$). (See .) ### Extended Topic: Real Forces and Inertial Frames There is another distinction among forces in addition to the types already mentioned. Some forces are real, whereas others are not. Real forces are those that have some physical origin, such as the gravitational pull. Contrastingly, fictitious forces are those that arise simply because an observer is in an accelerating frame of reference, such as one that rotates (like a merry-go-round) or undergoes linear acceleration (like a car slowing down). For example, if a satellite is heading due north above Earth’s northern hemisphere, then to an observer on Earth it will appear to experience a force to the west that has no physical origin. Of course, what is happening here is that Earth is rotating toward the east and moves east under the satellite. In Earth’s frame this looks like a westward force on the satellite, or it can be interpreted as a violation of Newton’s first law (the law of inertia). An inertial frame of reference is one in which all forces are real and, equivalently, one in which Newton’s laws have the simple forms given in this chapter. Earth’s rotation is slow enough that Earth is nearly an inertial frame. You ordinarily must perform precise experiments to observe fictitious forces and the slight departures from Newton’s laws, such as the effect just described. On the large scale, such as for the rotation of weather systems and ocean currents, the effects can be easily observed. The crucial factor in determining whether a frame of reference is inertial is whether it accelerates or rotates relative to a known inertial frame. Unless stated otherwise, all phenomena discussed in this text are considered in inertial frames. 
All the forces discussed in this section are real forces, but there are a number of other real forces, such as lift and thrust, that are not discussed in this section. They are more specialized, and it is not necessary to discuss every type of force. It is natural, however, to ask where the basic simplicity we seek to find in physics is in the long list of forces. Are some more basic than others? Are some different manifestations of the same underlying force? The answer to both questions is yes, as will be seen in the next (extended) section and in the treatment of modern physics later in the text. ### Test Prep for AP Courses ### Section Summary 1. When objects rest on a surface, the surface applies a force to the object that supports the weight of the object. This supporting force acts perpendicular to and away from the surface. It is called a normal force, . 2. When objects rest on a non-accelerating horizontal surface, the magnitude of the normal force is equal to the weight of the object: 3. When objects rest on an inclined plane that makes an angle with the horizontal surface, the weight of the object can be resolved into components that act perpendicular () and parallel () to the surface of the plane. These components can be calculated using: 4. The pulling force that acts along a stretched flexible connector, such as a rope or cable, is called tension, . When a rope supports the weight of an object that is at rest, the tension in the rope is equal to the weight of the object: 5. In any inertial frame of reference (one that is not accelerated or rotated), Newton’s laws have the simple forms given in this chapter and all forces are real forces having a physical origin. ### Conceptual Questions ### Problem Exercises
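The two tension results from this section, $T = mg$ for a hanging mass and $T = F_\perp/(2\sin\theta)$ for a force applied at the midpoint of a connector, are easy to evaluate numerically. The sketch below is illustrative; the 686 N weight and the 5° sag are example values chosen only to show how large the tension becomes for a nearly horizontal connector.

```python
import math

G = 9.80  # m/s^2

def tension_hanging(mass, g=G):
    """Tension in a rope supporting a stationary hanging mass: T = w = m g."""
    return mass * g

def tension_midpoint(perpendicular_force, angle_deg):
    """Tension when a perpendicular force is applied at the middle of a flexible
    connector that makes angle theta with the horizontal: T = F_perp / (2 sin theta)."""
    return perpendicular_force / (2 * math.sin(math.radians(angle_deg)))

print(f"5.00 kg hanging mass: T = {tension_hanging(5.00):.1f} N")
# A person weighing 686 N standing at the middle of a wire that sags by only
# 5 degrees produces a much larger tension than their weight:
print(f"tightrope, 5 deg sag: T = {tension_midpoint(686.0, 5.0):.0f} N")
```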
# Dynamics: Force and Newton's Laws of Motion ## Problem-Solving Strategies ### Learning Objectives By the end of this section, you will be able to: 1. Understand and apply a problem-solving procedure to solve problems using Newton's laws of motion. Success in problem solving is obviously necessary to understand and apply physical principles, not to mention the more immediate need of passing exams. The basics of problem solving, presented earlier in this text, are followed here, but specific strategies useful in applying Newton’s laws of motion are emphasized. These techniques also reinforce concepts that are useful in many other areas of physics. Many problem-solving strategies are stated outright in the worked examples, and so the following techniques should reinforce skills you have already begun to develop. ### Problem-Solving Strategy for Newton’s Laws of Motion Step 1. As usual, it is first necessary to identify the physical principles involved. Once it is determined that Newton’s laws of motion are involved (if the problem involves forces), it is particularly important to draw a careful sketch of the situation. Such a sketch is shown in (a). Then, as in (b), use arrows to represent all forces, label them carefully, and make their lengths and directions correspond to the forces they represent (whenever sufficient information exists). Step 2. Identify what needs to be determined and what is known or can be inferred from the problem as stated. That is, make a list of knowns and unknowns. Then carefully determine the system of interest. This decision is a crucial step, since Newton’s second law involves only external forces. Once the system of interest has been identified, it becomes possible to determine which forces are external and which are internal, a necessary step to employ Newton’s second law. (See (c).) Newton’s third law may be used to identify whether forces are exerted between components of a system (internal) or between the system and something outside (external). As illustrated earlier in this chapter, the system of interest depends on what question we need to answer. This choice becomes easier with practice, eventually developing into an almost unconscious process. Skill in clearly defining systems will be beneficial in later chapters as well. A diagram showing the system of interest and all of the external forces is called a free-body diagram. Only forces are shown on free-body diagrams, not acceleration or velocity. We have drawn several of these in worked examples. (c) shows a free-body diagram for the system of interest. Note that no internal forces are shown in a free-body diagram. Step 3. Once a free-body diagram is drawn, Newton’s second law can be applied to solve the problem. This is done in (d) for a particular situation. In general, once external forces are clearly identified in free-body diagrams, it should be a straightforward task to put them into equation form and solve for the unknown, as done in all previous examples. If the problem is one-dimensional—that is, if all forces are parallel—then they add like scalars. If the problem is two-dimensional, then it must be broken down into a pair of one-dimensional problems. This is done by projecting the force vectors onto a set of axes chosen for convenience. As seen in previous examples, the choice of axes can simplify the problem. For example, when an incline is involved, a set of axes with one axis parallel to the incline and one perpendicular to it is most convenient. 
It is almost always convenient to make one axis parallel to the direction of motion, if this is known. Step 4. As always, check the solution to see whether it is reasonable. In some cases, this is obvious. For example, it is reasonable to find that friction causes an object to slide down an incline more slowly than when no friction exists. In practice, intuition develops gradually through problem solving, and with experience it becomes progressively easier to judge whether an answer is reasonable. Another way to check your solution is to check the units. If you are solving for force and end up with units of m/s, then you have made a mistake. ### Test Prep for AP Courses ### Section Summary 1. To solve problems involving Newton’s laws of motion, follow the procedure described: ### Problem Exercises
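As an illustration of Step 3, consider projecting forces onto axes parallel and perpendicular to an incline. The Python sketch below is not from the text; the mass, slope angle, and friction force are invented example values.

```python
import math

G = 9.80  # m/s^2

def acceleration_on_incline(mass, angle_deg, friction_force=0.0, g=G):
    """Acceleration down a frictional incline, using axes parallel and
    perpendicular to the surface: a = (w_parallel - f) / m."""
    w_parallel = mass * g * math.sin(math.radians(angle_deg))  # weight component along the incline
    return (w_parallel - friction_force) / mass

# Hypothetical example: a 55 kg sledder on a 25-degree slope with 45 N of friction.
print(f"with friction: a = {acceleration_on_incline(55.0, 25.0, 45.0):.2f} m/s^2")
print(f"frictionless:  a = {acceleration_on_incline(55.0, 25.0):.2f} m/s^2")
```

As a reasonableness check in the spirit of Step 4, the frictional case gives a smaller acceleration than the frictionless one, as expected.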
# Dynamics: Force and Newton's Laws of Motion ## Further Applications of Newton’s Laws of Motion ### Learning Objectives By the end of this section, you will be able to: 1. Apply problem-solving techniques to solve for quantities in more complex systems of forces. 2. Integrate concepts from kinematics to solve problems using Newton's laws of motion. There are many interesting applications of Newton’s laws of motion, a few more of which are presented in this section. These serve also to illustrate some further subtleties of physics and to help build problem-solving skills. In the earlier example of a tightrope walker we noted that the tensions in wires supporting a mass were equal only because the angles on either side were equal. Consider the following example, where the angles are not equal; slightly more trigonometry is involved. The bathroom scale is an excellent example of a normal force acting on a body. It provides a quantitative reading of how much it must push upward to support the weight of an object. But can you predict what you would see on the dial of a bathroom scale if you stood on it during an elevator ride? Will you see a value greater than your weight when the elevator starts up? What about when the elevator moves upward at a constant speed: will the scale still read more than your weight at rest? Consider the following example. The solution to the previous example also applies to an elevator accelerating downward, as mentioned. When an elevator accelerates downward, is negative, and the scale reading is less than the weight of the person, until a constant downward velocity is reached, at which time the scale reading again becomes equal to the person’s weight. If the elevator is in free-fall and accelerating downward at , then the scale reading will be zero and the person will appear to be weightless. ### Integrating Concepts: Newton’s Laws of Motion and Kinematics Physics is most interesting and most powerful when applied to general situations that involve more than a narrow set of physical principles. Newton’s laws of motion can also be integrated with other concepts that have been discussed previously in this text to solve problems of motion. For example, forces produce accelerations, a topic of kinematics, and hence the relevance of earlier chapters. When approaching problems that involve various types of forces, acceleration, velocity, and/or position, use the following steps to approach the problem: Problem-Solving Strategy Step 1. Identify which physical principles are involved. Listing the givens and the quantities to be calculated will allow you to identify the principles involved. Step 2. Solve the problem using strategies outlined in the text. If these are available for the specific topic, you should refer to them. You should also refer to the sections of the text that deal with a particular topic. The following worked example illustrates how these strategies are applied to an integrated concept problem. ### Test Prep for AP Courses ### Summary 1. Newton’s laws of motion can be applied in numerous situations to solve problems of motion. 2. Some problems will contain multiple force vectors acting in different directions on an object. Be sure to draw diagrams, resolve all force vectors into horizontal and vertical components, and draw a free-body diagram. Always analyze the direction in which an object accelerates so that you can determine whether or . 3. The normal force on an object is not always equal in magnitude to the weight of the object. 
If an object is accelerating vertically, as in an elevator, the normal force will be less than or greater than the weight of the object. Also, if the object is on an inclined plane, the normal force will always be less than the full weight of the object.
4. Some problems will contain various physical quantities, such as forces, acceleration, velocity, or position. You can apply concepts from kinematics and dynamics in order to solve these problems of motion.

### Conceptual Questions

### Problem Exercises
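As a numerical illustration of the elevator discussion above, the following Python sketch (added here; not part of the original text) evaluates the scale reading $N = m(g + a)$ for several vertical accelerations. The 75-kg mass and the acceleration values are assumptions chosen only for the example.

```python
# Scale reading (normal force) on a person of mass m in an elevator with
# vertical acceleration a (positive upward): N = m * (g + a).
g = 9.80          # m/s^2
m = 75.0          # kg, illustrative mass

def scale_reading(a):
    """Return the apparent weight in newtons for vertical acceleration a."""
    return m * (g + a)

for label, a in [("at rest or constant velocity", 0.0),
                 ("accelerating upward at 2 m/s^2", 2.0),
                 ("accelerating downward at 2 m/s^2", -2.0),
                 ("in free fall", -g)]:
    print(f"{label}: N = {scale_reading(a):.0f} N (weight = {m*g:.0f} N)")
```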
# Dynamics: Force and Newton's Laws of Motion ## Extended Topic: The Four Basic Forces—An Introduction ### Learning Objectives By the end of this section, you will be able to: 1. Understand the four basic forces that underlie the processes in nature. One of the most remarkable simplifications in physics is that only four distinct forces account for all known phenomena. In fact, nearly all of the forces we experience directly are due to only one basic force, called the electromagnetic force. (The gravitational force is the only force we experience directly that is not electromagnetic.) This is a tremendous simplification of the myriad of apparently different forces we can list, only a few of which were discussed in the previous section. As we will see, the basic forces are all thought to act through the exchange of microscopic carrier particles, and the characteristics of the basic forces are determined by the types of particles exchanged. Action at a distance, such as the gravitational force of Earth on the Moon, is explained by the existence of a force field rather than by “physical contact.” The four basic forces are the gravitational force, the electromagnetic force, the weak nuclear force, and the strong nuclear force. Their properties are summarized in . Since the weak and strong nuclear forces act over an extremely short range, the size of a nucleus or less, we do not experience them directly, although they are crucial to the very structure of matter. These forces determine which nuclei are stable and which decay, and they are the basis of the release of energy in certain nuclear reactions. Nuclear forces determine not only the stability of nuclei, but also the relative abundance of elements in nature. The properties of the nucleus of an atom determine the number of electrons it has and, thus, indirectly determine the chemistry of the atom. More will be said of all of these topics in later chapters. The gravitational force is surprisingly weak—it is only because gravity is always attractive that we notice it at all. Our weight is the gravitational force due to the entire Earth acting on us. On the very large scale, as in astronomical systems, the gravitational force is the dominant force determining the motions of moons, planets, stars, and galaxies. The gravitational force also affects the nature of space and time. As we shall see later in the study of general relativity, space is curved in the vicinity of very massive bodies, such as the Sun, and time actually slows down near massive bodies. Electromagnetic forces can be either attractive or repulsive. They are long-range forces, which act over extremely large distances, and they nearly cancel for macroscopic objects. (Remember that it is the net external force that is important.) If they did not cancel, electromagnetic forces would completely overwhelm the gravitational force. The electromagnetic force is a combination of electrical forces (such as those that cause static electricity) and magnetic forces (such as those that affect a compass needle). These two forces were thought to be quite distinct until early in the 19th century, when scientists began to discover that they are different manifestations of the same force. This discovery is a classical case of the unification of forces. Similarly, friction, tension, and all of the other classes of forces we experience directly (except gravity, of course) are due to electromagnetic interactions of atoms and molecules. 
It is still convenient to consider these forces separately in specific applications, however, because of the ways they manifest themselves. Physicists are now exploring whether the four basic forces are in some way related. Attempts to unify all forces into one come under the rubric of Grand Unified Theories (GUTs), with which there has been some success in recent years. It is now known that under conditions of extremely high density and temperature, such as existed in the early universe, the electromagnetic and weak nuclear forces are indistinguishable. They can now be considered to be different manifestations of one force, called the electroweak force. So the list of four has been reduced in a sense to only three. Further progress in unifying all forces is proving difficult—especially the inclusion of the gravitational force, which has the special characteristics of affecting the space and time in which the other forces exist. While the unification of forces will not affect how we discuss forces in this text, it is fascinating that such underlying simplicity exists in the face of the overt complexity of the universe. There is no reason that nature must be simple—it simply is. ### Action at a Distance: Concept of a Field All forces act at a distance. This is obvious for the gravitational force. Earth and the Moon, for example, interact without coming into contact. It is also true for all other forces. Friction, for example, is an electromagnetic force between atoms that may not actually touch. What is it that carries forces between objects? One way to answer this question is to imagine that a force field surrounds whatever object creates the force. A second object (often called a test object) placed in this field will experience a force that is a function of location and other variables. The field itself is the “thing” that carries the force from one object to another. The field is defined so as to be a characteristic of the object creating it; the field does not depend on the test object placed in it. Earth’s gravitational field, for example, is a function of the mass of Earth and the distance from its center, independent of the presence of other masses. The concept of a field is useful because equations can be written for force fields surrounding objects (for gravity, this yields at Earth’s surface), and motions can be calculated from these equations. (See .) The field concept has been applied very successfully; we can calculate motions and describe nature to high precision using field equations. As useful as the field concept is, however, it leaves unanswered the question of what carries the force. It has been proposed in recent decades, starting in 1935 with Hideki Yukawa’s (1907–1981) work on the strong nuclear force, that all forces are transmitted by the exchange of elementary particles. We can visualize particle exchange as analogous to macroscopic phenomena such as two people passing a basketball back and forth, thereby exerting a repulsive force without touching one another. (See .) This idea of particle exchange deepens rather than contradicts field concepts. It is more satisfying philosophically to think of something physical actually moving between objects acting at a distance. lists the exchange or carrier particles, both observed and proposed, that carry the four forces. But the real fruit of the particle-exchange proposal is that searches for Yukawa’s proposed particle found it and a number of others that were completely unexpected, stimulating yet more research. 
All of this research eventually led to the proposal of quarks as the underlying substructure of matter, which is a basic tenet of GUTs. If successful, these theories will explain not only forces, but also the structure of matter itself. Yet physics is an experimental science, so the test of these theories must lie in the domain of the real world. As of this writing, scientists at the CERN laboratory in Switzerland are starting to test these theories using the world’s largest particle accelerator: the Large Hadron Collider. This accelerator (27 km in circumference) allows two high-energy proton beams, traveling in opposite directions, to collide. An energy of 14 trillion electron volts will be available. It is anticipated that some new particles, possibly force carrier particles, will be found. (See .) One of the force carriers of high interest that researchers hope to detect is the Higgs boson. The observation of its properties might tell us why different particles have different masses. Tiny particles also have wave-like behavior, something we will explore more in a later chapter. To better understand force-carrier particles from another perspective, let us consider gravity. The search for gravitational waves has been going on for a number of years. Over 100 years ago, Einstein predicted the existence of these waves as part of his general theory of relativity. Gravitational waves are created during the collision of massive stars, in black holes, or in supernova explosions—like shock waves. These gravitational waves will travel through space from such sites much like a pebble dropped into a pond sends out ripples—except these waves move at the speed of light. A detector apparatus has been built in the U.S., consisting of two large installations nearly 3000 km apart—one in Washington state and one in Louisiana! The facility is called the Laser Interferometer Gravitational-Wave Observatory (LIGO). Each installation is designed to use optical lasers to examine any slight shift in the relative positions of two masses due to the effect of gravity waves. The two sites allow simultaneous measurements of these small effects to be separated from other natural phenomena, such as earthquakes. Initial operation of the detectors began in 2002, and work is proceeding on increasing their sensitivity. Similar installations have been built in Italy (VIRGO), Germany (GEO600), and Japan (TAMA300) to provide a worldwide network of gravitational wave detectors. In September, 2015, LIGO fulfilled its promise and helped prove Einstein's predictions. The system detected the first gravitational waves arising from the merger of two black holes—one 29 times the mass of our Sun and the other 36 times the mass of our Sun—that occurred 1.3 billion years ago. About 3 times the mass of the Sun was converted into gravitational waves in a fraction of a second—with a peak power output about 50 times that of the whole visible universe. Due to the 7 millisecond delay in detection, researchers established that the merger occurred on the southern hemisphere side of Earth. Since then, LIGO and VIRGO have combined to detect about a dozen similar events, with better and more precise measurements. Waves from neutron star mergers and different-sized black holes have deepened our understanding of these objects and their impact on the universe. International collaboration in this area is moving into space with the joint EU/US project LISA (Laser Interferometer Space Antenna). 
Earthquakes and other Earthly noises will be no problem for these monitoring spacecraft. LISA will complement LIGO by looking at much more massive black holes through the observation of gravitational-wave sources emitting much larger wavelengths. Three satellites will be placed in space above Earth in an equilateral triangle (with 5,000,000-km sides) (). The system will measure the relative positions of each satellite to detect passing gravitational waves. Accuracy to within 10% of the size of an atom will be needed to detect any waves. The launch of this project will likely be in the 2030s. As you can see above, some of the most groundbreaking developments in physics are made with a relatively long gap from theoretical prediction to experimental detection. This pattern continues the process of science from its earliest days, where early thinkers and researchers made discoveries that only led to more questions. Einstein was unique in many ways, but he was not unique in that later scientists, building on his and each other's work, would prove his theories. Evidence for black holes became more and more concrete as scientists developed new and better ways to look for them. Some of the most prominent have been Roger Penrose, who developed new mathematical models related to black holes, as well as Reinhard Genzel and Andrea Ghez, who independently used telescope observations to identify a region of our galaxy where a massive unseen gravity source (4 million times the size of our Sun) was pulling on stars. And soon after, collaborators on the Event Horizon Telescope project produced the first actual image of a black hole. ### Test Prep for AP Courses ### Summary 1. The various types of forces that are categorized for use in many applications are all manifestations of the four basic forces in nature. 2. The properties of these forces are summarized in . 3. Everything we experience directly without sensitive instruments is due to either electromagnetic forces or gravitational forces. The nuclear forces are responsible for the submicroscopic structure of matter, but they are not directly sensed because of their short ranges. Attempts are being made to show all four forces are different manifestations of a single unified force. 4. A force field surrounds an object creating a force and is the carrier of that force. ### Conceptual Questions ### Problem Exercises
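The field concept described in this section can be made quantitative for gravity. The brief Python sketch below is an added illustration (not part of the original text); it evaluates Earth's field strength $g = GM/r^2$ at several distances to show the inverse-square falloff, using standard values for the constants.

```python
# Gravitational field strength g = G*M / r^2 for Earth, evaluated at several
# radii to show the inverse-square behavior. Constants are standard values.
G = 6.674e-11        # N*m^2/kg^2, gravitational constant
M_earth = 5.97e24    # kg
R_earth = 6.37e6     # m

def g_field(r):
    """Field strength (N/kg) a distance r from Earth's center."""
    return G * M_earth / r**2

for r in [R_earth, 2 * R_earth, 10 * R_earth]:
    print(f"r = {r/R_earth:.0f} R_E: g = {g_field(r):.3f} N/kg")
```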
# Further Applications of Newton's Laws: Friction, Drag, and Elasticity ## Connection for AP® Courses Have you ever wondered why it is difficult to walk on a smooth surface like ice? The interaction between you and the surface is a result of forces that affect your motion. In the previous chapter, you learned Newton's laws of motion and examined how net force affects the motion, position and shape of an object. Now we will look at some interesting and common forces that will provide further applications of Newton's laws of motion. The information presented in this chapter supports learning objectives covered under Big Idea 3 of the AP Physics Curriculum Framework, which refer to the nature of forces and their roles in interactions among objects. The chapter discusses examples of specific contact forces, such as friction, air or liquid drag, and elasticity that may affect the motion or shape of an object. It also discusses the nature of forces on both macroscopic and microscopic levels (Enduring Understanding 3.C and Essential Knowledge 3.C.4). In addition, Newton's laws are applied to describe the motion of an object (Enduring Understanding 3.B) and to examine relationships between contact forces and other forces exerted on an object (Enduring Understanding 3.A, 3.A.3 and Essential Knowledge 3.A.4). The examples in this chapter give you practice in using vector properties of forces (Essential Knowledge 3.A.2) and free-body diagrams (Essential Knowledge 3.B.2) to determine net force (Essential Knowledge 3.B.1). Big Idea 3 The interactions of an object with other objects can be described by forces. Enduring Understanding 3.A All forces share certain common characteristics when considered by observers in inertial reference frames. Essential Knowledge 3.A.2 Forces are described by vectors. Essential Knowledge 3.A.3 A force exerted on an object is always due to the interaction of that object with another object. Essential Knowledge 3.A.4 If one object exerts a force on a second object, the second object always exerts a force of equal magnitude on the first object in the opposite direction. Enduring Understanding 3.B Classically, the acceleration of an object interacting with other objects can be predicted by using . Essential Knowledge 3.B.1 If an object of interest interacts with several other objects, the net force is the vector sum of the individual forces. Essential Knowledge 3.B.2 Free-body diagrams are useful tools for visualizing forces being exerted on a single object and writing the equations that represent a physical situation. Enduring Understanding 3.C At the macroscopic level, forces can be categorized as either long-range (action-at-a-distance) forces or contact forces. Essential Knowledge 3.C.4 Contact forces result from the interaction of one object touching another object, and they arise from interatomic electric forces. These forces include tension, friction, normal, spring (Physics 1), and buoyant (Physics 2).
# Further Applications of Newton's Laws: Friction, Drag, and Elasticity ## Friction ### Learning Objectives By the end of this section, you will be able to: 1. Discuss the general characteristics of friction. 2. Describe the various types of friction. 3. Calculate the magnitude of static and kinetic friction. Friction is a force that is around us all the time that opposes relative motion between surfaces in contact but also allows us to move (which you have discovered if you have ever tried to walk on ice). While a common force, the behavior of friction is actually very complicated and is still not completely understood. We have to rely heavily on observations for whatever understandings we can gain. However, we can still deal with its more elementary general characteristics and understand the circumstances in which it behaves. One of the simpler characteristics of friction is that it is parallel to the contact surface between surfaces and always in a direction that opposes motion or attempted motion of the systems relative to each other. If two surfaces are in contact and moving relative to one another, then the friction between them is called kinetic friction. For example, friction slows a hockey puck sliding on ice. But when objects are stationary, static friction can act between them; the static friction is usually greater than the kinetic friction between the surfaces. Imagine, for example, trying to slide a heavy crate across a concrete floor—you may push harder and harder on the crate and not move it at all. This means that the static friction responds to what you do—it increases to be equal to and in the opposite direction of your push. But if you finally push hard enough, the crate seems to slip suddenly and starts to move. Once in motion it is easier to keep it in motion than it was to get it started, indicating that the kinetic friction force is less than the static friction force. If you add mass to the crate, say by placing a box on top of it, you need to push even harder to get it started and also to keep it moving. Furthermore, if you oiled the concrete you would find it to be easier to get the crate started and keep it going (as you might expect). is a crude pictorial representation of how friction occurs at the interface between two objects. Close-up inspection of these surfaces shows them to be rough. So when you push to get an object moving (in this case, a crate), you must raise the object until it can skip along with just the tips of the surface hitting, break off the points, or do both. A considerable force can be resisted by friction with no apparent motion. The harder the surfaces are pushed together (such as if another box is placed on the crate), the more force is needed to move them. Part of the friction is due to adhesive forces between the surface molecules of the two objects, which explain the dependence of friction on the nature of the substances. Adhesion varies with substances in contact and is a complicated aspect of surface physics. Once an object is moving, there are fewer points of contact (fewer molecules adhering), so less force is required to keep the object moving. At small but nonzero speeds, friction is nearly independent of speed. The magnitude of the frictional force has two forms: one for static situations (static friction), the other for when there is motion (kinetic friction). 
When there is no motion between the objects, the magnitude of static friction $f_s$ is

$f_s \le \mu_s N$,

where $\mu_s$ is the coefficient of static friction and $N$ is the magnitude of the normal force (the force perpendicular to the surface). The symbol $\le$ means less than or equal to, implying that static friction can have any value from zero up to a maximum value of $\mu_s N$. Static friction is a responsive force that increases to be equal and opposite to whatever force is exerted, up to its maximum limit. Once the applied force exceeds $f_{s(\text{max})}$, the object will move. Thus

$f_{s(\text{max})} = \mu_s N$.

Once an object is moving, the magnitude of kinetic friction $f_k$ is given by

$f_k = \mu_k N$,

where $\mu_k$ is the coefficient of kinetic friction. A system in which $f_k = \mu_k N$ is described as a system in which friction behaves simply. As seen in , the coefficients of kinetic friction are less than their static counterparts. That values of $\mu$ in are stated to only one or, at most, two digits is an indication of the approximate description of friction given by the above two equations.

The equations given earlier include the dependence of friction on materials and the normal force. The direction of friction is always opposite that of motion, parallel to the surface between objects, and perpendicular to the normal force. For example, if the crate you try to push (with a force parallel to the floor) has a mass of 100 kg, then the normal force would be equal to its weight, $w = mg = 980\ \text{N}$, perpendicular to the floor. If the coefficient of static friction is 0.45, you would have to exert a force parallel to the floor greater than $f_{s(\text{max})} = \mu_s N = (0.45)(980\ \text{N}) = 441\ \text{N}$ to move the crate. Once there is motion, friction is less and the coefficient of kinetic friction might be 0.30, so that a force of only 290 N ($\mu_k N$) would keep it moving at a constant speed. If the floor is lubricated, both coefficients are considerably less than they would be without lubrication. The coefficient of friction is a unitless quantity with a magnitude usually between 0 and 1.0. The coefficient of friction depends on the two surfaces that are in contact.

Many people have experienced the slipperiness of walking on ice. However, many parts of the body, especially the joints, have much smaller coefficients of friction—often three or four times less than ice. A joint is formed by the ends of two bones, which are connected by thick tissues. The knee joint is formed by the lower leg bone (the tibia) and the thighbone (the femur). The hip is a ball (at the end of the femur) and socket (part of the pelvis) joint. The ends of the bones in the joint are covered by cartilage, which provides a smooth, almost glassy surface. The joints also produce a fluid (synovial fluid) that reduces friction and wear. A damaged or arthritic joint can be replaced by an artificial joint (). These replacements can be made of metals (stainless steel or titanium) or plastic (polyethylene), also with very small coefficients of friction.

Other natural lubricants include saliva produced in our mouths to aid in the swallowing process, and the slippery mucus found between organs in the body, allowing them to move freely past each other during heartbeats, during breathing, and when a person moves. Artificial lubricants are also common in hospitals and doctor’s clinics. For example, when ultrasonic imaging is carried out, the gel that couples the transducer to the skin also serves to lubricate the surface between the transducer and the skin—thereby reducing the coefficient of friction between the two surfaces. This allows the transducer to move freely over the skin.
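As a quick check of the crate example above, the following Python sketch (an added illustration, not part of the original text) applies the relations $f_s \le \mu_s N$ and $f_k = \mu_k N$ to decide whether a given horizontal push moves the crate and what force keeps it moving at constant speed.

```python
# Does a horizontal push move a crate resting on a level floor?
# Static friction resists up to mu_s * N; once moving, kinetic friction is mu_k * N.
g = 9.80            # m/s^2
m = 100.0           # kg, crate mass from the example above
mu_s, mu_k = 0.45, 0.30

N = m * g                      # on a level floor, the normal force equals the weight
f_s_max = mu_s * N             # maximum static friction
f_k = mu_k * N                 # kinetic friction once sliding

def crate_response(push):
    """Describe the crate's response to a horizontal push (in newtons)."""
    if push <= f_s_max:
        # static friction simply matches the push; the crate stays put
        return f"stays put (static friction = {push:.0f} N)"
    net = push - f_k
    return f"slides with net force {net:.0f} N (kinetic friction = {f_k:.0f} N)"

for push in [300.0, 440.0, 450.0]:
    print(f"push = {push:.0f} N: {crate_response(push)}")

print(f"Force needed to start motion: just over {f_s_max:.0f} N")
print(f"Force for constant velocity once moving: about {f_k:.0f} N")
```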
We have discussed that when an object rests on a horizontal surface, there is a normal force supporting it equal in magnitude to its weight. Furthermore, simple friction is always proportional to the normal force.

 illustrates one macroscopic characteristic of friction that is explained by microscopic (small-scale) research. We have noted that friction is proportional to the normal force, but not to the area in contact, a somewhat counterintuitive notion. When two rough surfaces are in contact, the actual contact area is a tiny fraction of the total area since only high spots touch. When a greater normal force is exerted, the actual contact area increases, and it is found that the friction is proportional to this area.

But the atomic-scale view promises to explain far more than the simpler features of friction. The mechanism for how heat is generated is now being determined. In other words, why do surfaces get warmer when rubbed? Essentially, atoms are linked with one another to form lattices. When surfaces rub, the surface atoms adhere and cause atomic lattices to vibrate—essentially creating sound waves that penetrate the material. The sound waves diminish with distance and their energy is converted into heat. Chemical reactions that are related to frictional wear can also occur between atoms and molecules on the surfaces. shows how the tip of a probe drawn across another material is deformed by atomic-scale friction. The force needed to drag the tip can be measured and is found to be related to shear stress, which will be discussed later in this chapter. The variation in shear stress is remarkable (more than a factor of ) and difficult to predict theoretically, but shear stress is yielding a fundamental understanding of a large-scale phenomenon known since ancient times—friction.

### Test Prep for AP Courses

### Section Summary

1. Friction is a contact force between systems that opposes the motion or attempted motion between them. Simple friction is proportional to the normal force $N$ pushing the systems together. (A normal force is always perpendicular to the contact surface between systems.) Friction depends on both of the materials involved. The magnitude of static friction $f_s$ between systems stationary relative to one another is given by $f_s \le \mu_s N$, where $\mu_s$ is the coefficient of static friction, which depends on both of the materials.
2. The kinetic friction force $f_k$ between systems moving relative to one another is given by $f_k = \mu_k N$, where $\mu_k$ is the coefficient of kinetic friction, which also depends on both materials.

### Conceptual Questions

### Problems & Exercises
# Further Applications of Newton's Laws: Friction, Drag, and Elasticity

## Drag Forces

### Learning Objectives

By the end of this section, you will be able to:
1. Express mathematically the drag force.
2. Discuss the applications of drag force.
3. Define terminal velocity.
4. Determine the terminal velocity given mass.

Another interesting force in everyday life is the force of drag on an object when it is moving in a fluid (either a gas or a liquid). You feel the drag force when you move your hand through water. You might also feel it if you move your hand during a strong wind. The faster you move your hand, the harder it is to move. You feel a smaller drag force when you tilt your hand so only the side goes through the air—you have decreased the area of your hand that faces the direction of motion. Like friction, the drag force always opposes the motion of an object. Unlike simple friction, the drag force is proportional to some function of the velocity of the object in that fluid. This functionality is complicated and depends upon the shape of the object, its size, its velocity, and the fluid it is in. For most large objects such as bicyclists, cars, and baseballs not moving too slowly, the magnitude of the drag force $F_D$ is found to be proportional to the square of the speed of the object. We can write this relationship mathematically as $F_D \propto v^2$. When taking into account other factors, this relationship becomes

$F_D = \frac{1}{2} C \rho A v^2$,

where $C$ is the drag coefficient, $A$ is the area of the object facing the fluid, and $\rho$ is the density of the fluid. (Recall that density is mass per unit volume.) This equation can also be written in a more generalized fashion as $F_D = b v^2$, where $b$ is a constant equivalent to $\frac{1}{2} C \rho A$. We have set the exponent for these equations as 2 because, when an object is moving at high velocity through air, the magnitude of the drag force is proportional to the square of the speed. As we shall see in a few pages on fluid dynamics, for small particles moving at low speeds in a fluid, the exponent is equal to 1.

Athletes as well as car designers seek to reduce the drag force to lower their race times. (See ). “Aerodynamic” shaping of an automobile can reduce the drag force and so increase a car’s gas mileage. The value of the drag coefficient, $C$, is determined empirically, usually with the use of a wind tunnel. (See ). The drag coefficient can depend upon velocity, but we will assume that it is a constant here. lists some typical drag coefficients for a variety of objects. Notice that the drag coefficient is a dimensionless quantity. At highway speeds, over 50% of the power of a car is used to overcome air drag. The most fuel-efficient cruising speed is about 70–80 km/h (about 45–50 mi/h). For this reason, during the 1970s oil crisis in the United States, maximum speeds on highways were set at about 90 km/h (55 mi/h).

Substantial research is under way in the sporting world to minimize drag. The dimples on golf balls are being redesigned, as are the clothes that athletes wear. Bicycle racers and some swimmers and runners wear full bodysuits. Australian Cathy Freeman wore a full body suit in the 2000 Sydney Olympics, and won the gold medal for the 400 m race. Many swimmers in the 2008 Beijing Olympics wore (Speedo) body suits; it might have made a difference in breaking many world records (See ). Most elite swimmers (and cyclists) shave their body hair. Such innovations can have the effect of slicing away milliseconds in a race, sometimes making the difference between a gold and a silver medal.
One consequence is that careful and precise guidelines must be continuously developed to maintain the integrity of the sport.

Some interesting situations connected to Newton’s second law occur when considering the effects of drag forces upon a moving object. For instance, consider a skydiver falling through air under the influence of gravity. The two forces acting on him are the force of gravity and the drag force (ignoring the buoyant force). The downward force of gravity remains constant regardless of the velocity at which the person is moving. However, as the person’s velocity increases, the magnitude of the drag force increases until the magnitude of the drag force is equal to the gravitational force, thus producing a net force of zero. A zero net force means that there is no acceleration, as given by Newton’s second law. At this point, the person’s velocity remains constant and we say that the person has reached his terminal velocity (). Since $F_D$ is proportional to the square of the speed, a heavier skydiver must go faster for $F_D$ to equal his weight. Let’s see how this works out more quantitatively.

At the terminal velocity,

$F_{\text{net}} = mg - F_D = ma = 0$.

Thus,

$mg = F_D$.

Using the equation for drag force, we have

$mg = \frac{1}{2} \rho C A v^2$.

Solving for the velocity, we obtain

$v = \sqrt{\dfrac{2mg}{\rho C A}}$.

Assume the density of air is $\rho = 1.21\ \text{kg/m}^3$. A 75-kg skydiver descending head first will have an area approximately $A = 0.18\ \text{m}^2$ and a drag coefficient of approximately $C = 0.70$. We find that

$v = \sqrt{\dfrac{2(75\ \text{kg})(9.80\ \text{m/s}^2)}{(1.21\ \text{kg/m}^3)(0.70)(0.18\ \text{m}^2)}} \approx 98\ \text{m/s} \approx 350\ \text{km/h}$.

This means a skydiver with a mass of 75 kg achieves a maximum terminal velocity of about 350 km/h while traveling in a headfirst position, minimizing the area and his drag. In a spread-eagle position, that terminal velocity may decrease to about 200 km/h as the area increases. This terminal velocity becomes much smaller after the parachute opens.

The size of the object that is falling through air presents another interesting application of air drag. If you fall from a 5-m high branch of a tree, you will likely get hurt—possibly fracturing a bone. However, a small squirrel does this all the time, without getting hurt. You don’t reach a terminal velocity in such a short distance, but the squirrel does. The following interesting quote on animal size and terminal velocity is from a 1928 essay by a British biologist, J.B.S. Haldane, titled “On Being the Right Size.”

To the mouse and any smaller animal, [gravity] presents practically no dangers. You can drop a mouse down a thousand-yard mine shaft; and, on arriving at the bottom, it gets a slight shock and walks away, provided that the ground is fairly soft. A rat is killed, a man is broken, and a horse splashes. For the resistance presented to movement by the air is proportional to the surface of the moving object. Divide an animal’s length, breadth, and height each by ten; its weight is reduced to a thousandth, but its surface only to a hundredth. So the resistance to falling in the case of the small animal is relatively ten times greater than the driving force.

The above quadratic dependence of air drag upon velocity does not hold if the object is very small, is going very slowly, or is in a denser medium than air. Then we find that the drag force is proportional just to the velocity. This relationship is given by Stokes’ law, which states that

$F_s = 6\pi r \eta v$,

where $r$ is the radius of the object, $\eta$ is the viscosity of the fluid, and $v$ is the object’s velocity. Good examples of this law are provided by microorganisms, pollen, and dust particles. Because each of these objects is so small, we find that many of these objects travel unaided only at a constant (terminal) velocity.
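The skydiver estimate above can be reproduced numerically before turning to the small-particle regime. The Python sketch below is an added illustration (not part of the original text); it solves $\frac{1}{2}\rho C A v^2 = mg$ for the terminal speed using the head-first values quoted above, and the spread-eagle figure uses an assumed combined value of $CA$ chosen only to show the effect of a larger area.

```python
# Terminal speed from (1/2)*rho*C*A*v^2 = m*g  =>  v = sqrt(2*m*g / (rho*C*A))
from math import sqrt

g = 9.80        # m/s^2
rho = 1.21      # kg/m^3, density of air (value used in the example above)
m = 75.0        # kg, skydiver mass
A = 0.18        # m^2, frontal area, head-first position
C = 0.70        # drag coefficient, head-first position

v_terminal = sqrt(2 * m * g / (rho * C * A))
print(f"head-first terminal speed: {v_terminal:.0f} m/s "
      f"= {v_terminal * 3.6:.0f} km/h")

# Spread-eagle: assume a larger combined C*A of about 0.40 m^2 (an assumption),
# which gives a result in the neighborhood of the ~200 km/h figure quoted above.
v_spread = sqrt(2 * m * g / (rho * 0.40))
print(f"spread-eagle estimate: {v_spread * 3.6:.0f} km/h")
```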
Terminal velocities for bacteria (size about ) can be about . To move at a greater speed, many bacteria swim using flagella (organelles shaped like little tails) that are powered by little motors embedded in the cell. Sediment in a lake can move at a greater terminal velocity (about ), so it can take days to reach the bottom of the lake after being deposited on the surface.

If we compare animals living on land with those in water, you can see how drag has influenced evolution. Fishes, dolphins, and even massive whales are streamlined in shape to reduce drag forces. Birds are streamlined, and migratory species that fly large distances often have particular features such as long necks. Flocks of birds fly in the shape of a spearhead as the flock forms a streamlined pattern (see ). In humans, one important example of streamlining is the shape of sperm, which need to be efficient in their use of energy.

### Section Summary

1. Drag forces acting on an object moving in a fluid oppose the motion. For larger objects (such as a baseball) moving at a velocity $v$ in air, the drag force is given by $F_D = \frac{1}{2} C \rho A v^2$, where $C$ is the drag coefficient (typical values are given in ), $A$ is the area of the object facing the fluid, and $\rho$ is the fluid density.
2. For small objects (such as a bacterium) moving in a denser medium (such as water), the drag force is given by Stokes’ law, $F_s = 6\pi r \eta v$, where $r$ is the radius of the object, $\eta$ is the fluid viscosity, and $v$ is the object’s velocity.

### Conceptual Questions

### Problems & Exercises
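For the small-particle regime summarized above, the short Python sketch below (an added illustration, not part of the original text) evaluates the Stokes drag $F_s = 6\pi r \eta v$ on a micrometer-sized sphere drifting slowly through water; the radius, speed, and viscosity are assumed example values.

```python
# Stokes' law drag on a small sphere moving slowly through a fluid:
# F_s = 6 * pi * r * eta * v
from math import pi

r = 1.0e-6      # m, radius of a ~1 micrometer particle (assumed)
eta = 1.0e-3    # Pa*s, approximate viscosity of water at room temperature
v = 2.0e-6      # m/s, a slow drift speed (assumed)

F_stokes = 6 * pi * r * eta * v
print(f"Stokes drag force: {F_stokes:.2e} N")   # a few times 10^-14 N
```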
# Further Applications of Newton's Laws: Friction, Drag, and Elasticity

## Elasticity: Stress and Strain

### Learning Objectives

By the end of this section, you will be able to:
1. State Hooke’s law.
2. Explain Hooke’s law using graphical representation between deformation and applied force.
3. Discuss the three types of deformations: changes in length, sideways shear, and changes in volume.
4. Describe with examples Young’s modulus, shear modulus, and bulk modulus.
5. Determine the change in length given mass, length, and radius.

We now move from consideration of forces that affect the motion of an object (such as friction and drag) to those that affect an object’s shape. If a bulldozer pushes a car into a wall, the car will not move but it will noticeably change shape. A change in shape due to the application of a force is a deformation. Even very small forces are known to cause some deformation. For small deformations, two important characteristics are observed. First, the object returns to its original shape when the force is removed—that is, the deformation is elastic for small deformations. Second, the size of the deformation is proportional to the force—that is, for small deformations, Hooke’s law is obeyed. In equation form, Hooke’s law is given by

$F = k \Delta L$,

where $\Delta L$ is the amount of deformation (the change in length, for example) produced by the force $F$, and $k$ is a proportionality constant that depends on the shape and composition of the object and the direction of the force. Note that this force is a function of the deformation $\Delta L$—it is not constant as a kinetic friction force is. Rearranging this to $\Delta L = \frac{F}{k}$ makes it clear that the deformation is proportional to the applied force.

 shows the Hooke’s law relationship between the extension $\Delta L$ of a spring or of a human bone and the applied force. For metals or springs, the straight line region in which Hooke’s law pertains is much larger. Bones are brittle and the elastic region is small and the fracture abrupt. Eventually a large enough stress to the material will cause it to break or fracture. Tensile strength is the breaking stress that will cause permanent deformation or fracture of a material.

The proportionality constant $k$ depends upon a number of factors for the material. For example, a guitar string made of nylon stretches when it is tightened, and the elongation $\Delta L$ is proportional to the force applied (at least for small deformations). Thicker nylon strings and ones made of steel stretch less for the same applied force, implying they have a larger $k$ (see ). Finally, all three strings return to their normal lengths when the force is removed, provided the deformation is small. Most materials will behave in this manner if the deformation is less than about 0.1% or about 1 part in $10^3$.

We now consider three specific types of deformations: changes in length (tension and compression), sideways shear (stress), and changes in volume. All deformations are assumed to be small unless otherwise stated.

### Changes in Length—Tension and Compression: Elastic Modulus

A change in length $\Delta L$ is produced when a force is applied to a wire or rod parallel to its length $L_0$, either stretching it (a tension) or compressing it. (See .)

Experiments have shown that the change in length ($\Delta L$) depends on only a few variables. As already noted, $\Delta L$ is proportional to the force $F$ and depends on the substance from which the object is made. Additionally, the change in length is proportional to the original length $L_0$ and inversely proportional to the cross-sectional area of the wire or rod.
For example, a long guitar string will stretch more than a short one, and a thick string will stretch less than a thin one. We can combine all these factors into one equation for $\Delta L$:

$\Delta L = \frac{1}{Y} \frac{F}{A} L_0$,

where $\Delta L$ is the change in length, $F$ the applied force, $Y$ is a factor, called the elastic modulus or Young’s modulus, that depends on the substance, $A$ is the cross-sectional area, and $L_0$ is the original length. lists values of $Y$ for several materials—those with a large $Y$ are said to have a large tensile stiffness because they deform less for a given tension or compression. Young’s moduli are not listed for liquids and gases in because they cannot be stretched or compressed in only one direction. Note that there is an assumption that the object does not accelerate, so that there are actually two applied forces of magnitude $F$ acting in opposite directions. For example, the strings in are being pulled down by a force of magnitude $F$ and held up by the ceiling, which also exerts a force of magnitude $F$.

Bones, on the whole, do not fracture due to tension or compression. Rather they generally fracture due to sideways impact or bending, resulting in the bone shearing or snapping. The behavior of bones under tension and compression is important because it determines the load the bones can carry. Bones are classified as weight-bearing structures such as columns in buildings and trees. Weight-bearing structures have special features; columns in buildings have steel-reinforcing rods while trees and bones are fibrous. The bones in different parts of the body serve different structural functions and are prone to different stresses. Thus the bone in the top of the femur is arranged in thin sheets separated by marrow while in other places the bones can be cylindrical and filled with marrow or just solid. Overweight people have a tendency toward bone damage due to sustained compressions in bone joints and tendons.

Another biological example of Hooke’s law occurs in tendons. Functionally, the tendon (the tissue connecting muscle to bone) must stretch easily at first when a force is applied, but offer a much greater restoring force for a greater strain. shows a stress-strain relationship for a human tendon. Some tendons have a high collagen content so there is relatively little strain, or length change; others, like support tendons (as in the leg) can change length up to 10%. Note that this stress-strain curve is nonlinear, since the slope of the line changes in different regions. In the first part of the stretch called the toe region, the fibers in the tendon begin to align in the direction of the stress—this is called uncrimping. In the linear region, the fibrils will be stretched, and in the failure region individual fibers begin to break. A simple model of this relationship can be illustrated by springs in parallel: different springs are activated at different lengths of stretch. Examples of this are given in the problems at the end of this chapter. Ligaments (tissue connecting bone to bone) behave in a similar way.

Unlike bones and tendons, which need to be strong as well as elastic, the arteries and lungs need to be very stretchable. The elastic properties of the arteries are essential for blood flow. The pressure in the arteries increases and arterial walls stretch when the blood is pumped out of the heart. When the aortic valve shuts, the pressure in the arteries drops and the arterial walls relax to maintain the blood flow.
When you feel your pulse, you are feeling exactly this—the elastic behavior of the arteries as the blood gushes through with each pump of the heart. If the arteries were rigid, you would not feel a pulse. The heart is also an organ with special elastic properties. The lungs expand with muscular effort when we breathe in but relax freely and elastically when we breathe out. Our skins are particularly elastic, especially for the young. A young person can go from 100 kg to 60 kg with no visible sag in their skin. The elasticity of all organs reduces with age. Gradual physiological aging through reduction in elasticity starts in the early 20s.

The equation for change in length is traditionally rearranged and written in the following form:

$\frac{F}{A} = Y \frac{\Delta L}{L_0}$.

The ratio of force to area, $\frac{F}{A}$, is defined as stress (measured in $\text{N/m}^2$), and the ratio of the change in length to length, $\frac{\Delta L}{L_0}$, is defined as strain (a unitless quantity). In other words,

$\text{stress} = Y \times \text{strain}$.

In this form, the equation is analogous to Hooke’s law, with stress analogous to force and strain analogous to deformation. If we again rearrange this equation to the form

$F = Y A \frac{\Delta L}{L_0}$,

we see that it is the same as Hooke’s law with a proportionality constant $k = \frac{YA}{L_0}$. This general idea—that force and the deformation it causes are proportional for small deformations—applies to changes in length, sideways bending, and changes in volume.

### Sideways Stress: Shear Modulus

 illustrates what is meant by a sideways stress or a shearing force. Here the deformation is called $\Delta x$ and it is perpendicular to $L_0$, rather than parallel as with tension and compression. Shear deformation behaves similarly to tension and compression and can be described with similar equations. The expression for shear deformation is

$\Delta x = \frac{1}{S} \frac{F}{A} L_0$,

where $S$ is the shear modulus (see ) and $F$ is the force applied perpendicular to $L_0$ and parallel to the cross-sectional area $A$. Again, to keep the object from accelerating, there are actually two equal and opposite forces $F$ applied across opposite faces, as illustrated in . The equation is logical—for example, it is easier to bend a long thin pencil (small $A$) than a short thick one, and both are more easily bent than similar steel rods (large $S$).

Examination of the shear moduli in reveals some telling patterns. For example, shear moduli are less than Young’s moduli for most materials. Bone is a remarkable exception. Its shear modulus is not only greater than its Young’s modulus, but it is as large as that of steel. This is why bones are so rigid.

The spinal column (consisting of 26 vertebral segments separated by discs) provides the main support for the head and upper part of the body. The spinal column has normal curvature for stability, but this curvature can be increased, leading to increased shearing forces on the lower vertebrae. Discs are better at withstanding compressional forces than shear forces. Because the spine is not vertical, the weight of the upper body exerts some of both. Pregnant women and people who are overweight (with large abdomens) need to move their shoulders back to maintain balance, thereby increasing the curvature in their spine and so increasing the shear component of the stress. An increased angle due to more curvature increases the shear forces along the plane. These higher shear forces increase the risk of back injury through ruptured discs. The lumbosacral disc (the wedge shaped disc below the last vertebrae) is particularly at risk because of its location.

The shear moduli for concrete and brick are very small; they are too highly variable to be listed.
Concrete used in buildings can withstand compression, as in pillars and arches, but is very poor against shear, as might be encountered in heavily loaded floors or during earthquakes. Modern structures were made possible by the use of steel and steel-reinforced concrete. Almost by definition, liquids and gases have shear moduli near zero, because they flow in response to shearing forces.

### Changes in Volume: Bulk Modulus

An object will be compressed in all directions if inward forces are applied evenly on all its surfaces as in . It is relatively easy to compress gases and extremely difficult to compress liquids and solids. For example, air in a wine bottle is compressed when it is corked. But if you try corking a brim-full bottle, you cannot compress the wine—some must be removed if the cork is to be inserted. The reason for these different compressibilities is that atoms and molecules are separated by large empty spaces in gases but packed close together in liquids and solids. To compress a gas, you must force its atoms and molecules closer together. To compress liquids and solids, you must actually compress their atoms and molecules, and very strong electromagnetic forces in them oppose this compression.

We can describe the compression or volume deformation of an object with an equation. First, we note that a force “applied evenly” is defined to have the same stress, or ratio of force to area $\frac{F}{A}$, on all surfaces. The deformation produced is a change in volume $\Delta V$, which is found to behave very similarly to the shear, tension, and compression previously discussed. (This is not surprising, since a compression of the entire object is equivalent to compressing each of its three dimensions.) The relationship of the change in volume to other physical quantities is given by

$\Delta V = \frac{1}{B} \frac{F}{A} V_0$,

where $B$ is the bulk modulus (see ), $V_0$ is the original volume, and $\frac{F}{A}$ is the force per unit area applied uniformly inward on all surfaces. Note that no bulk moduli are given for gases.

What are some examples of bulk compression of solids and liquids? One practical example is the manufacture of industrial-grade diamonds by compressing carbon with an extremely large force per unit area. The carbon atoms rearrange their crystalline structure into the more tightly packed pattern of diamonds. In nature, a similar process occurs deep underground, where extremely large forces result from the weight of overlying material. Another natural source of large compressive forces is the pressure created by the weight of water, especially in deep parts of the oceans. Water exerts an inward force on all surfaces of a submerged object, and even on the water itself. At great depths, water is measurably compressed, as the following example illustrates.

Conversely, very large forces are created by liquids and solids when they try to expand but are constrained from doing so—which is equivalent to compressing them to less than their normal volume. This often occurs when a contained material warms up, since most materials expand when their temperature increases. If the materials are tightly constrained, they deform or break their container. Another very common example occurs when water freezes. Water, unlike most materials, expands when it freezes, and it can easily fracture a boulder, rupture a biological cell, or crack an engine block that gets in its way.

Other types of deformations, such as torsion or twisting, behave analogously to the tension, shear, and bulk deformations considered here.

### Section Summary

1. Hooke’s law is given by $F = k \Delta L$, where $\Delta L$ is the amount of deformation (the change in length, for example) produced by the force $F$, and $k$ is a proportionality constant that depends on the shape and composition of the object and the direction of the force.
2. The ratio of force to area, $\frac{F}{A}$, is defined as stress, measured in $\text{N/m}^2$.
3. The ratio of the change in length to length, $\frac{\Delta L}{L_0}$, is defined as strain (a unitless quantity). In other words, $\text{stress} = Y \times \text{strain}$.
4. The expression for shear deformation is $\Delta x = \frac{1}{S} \frac{F}{A} L_0$, where $S$ is the shear modulus, $F$ is the force applied parallel to the cross-sectional area $A$, and $L_0$ is the length of the object perpendicular to $A$.
5. The relationship of the change in volume to other physical quantities is given by $\Delta V = \frac{1}{B} \frac{F}{A} V_0$, where $B$ is the bulk modulus, $V_0$ is the original volume, and $\frac{F}{A}$ is the force per unit area applied uniformly inward on all surfaces.

### Conceptual Questions

### Problems & Exercises
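To tie the relations in this section together, here is a short Python sketch (an added illustration, not part of the original text) that computes the stress, strain, and stretch of a rod under tension using $\Delta L = \frac{1}{Y}\frac{F}{A}L_0$; the rod dimensions, the load, and the Young's modulus value are assumptions made for the example.

```python
# Stretch of a rod under tension: delta_L = (1/Y) * (F/A) * L0,
# with stress = F/A and strain = delta_L / L0.
from math import pi

Y_steel = 210e9        # N/m^2, approximate Young's modulus of steel (assumed)
L0 = 2.0               # m, original length (assumed)
radius = 1.0e-2        # m, rod radius (assumed)
F = 5000.0             # N, stretching force (assumed)

A = pi * radius**2             # cross-sectional area
stress = F / A                 # N/m^2
strain = stress / Y_steel      # dimensionless
delta_L = strain * L0          # m

print(f"stress  = {stress:.3e} N/m^2")
print(f"strain  = {strain:.3e}")
print(f"stretch = {delta_L * 1000:.3f} mm")
```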
# Uniform Circular Motion and Gravitation ## Connection for AP® Courses Many motions, such as the arc of a bird's flight or Earth's path around the Sun, are curved. Recall that Newton's first law tells us that motion is along a straight line at constant speed unless there is a net external force. We will therefore study not only motion along curves, but also the forces that cause it, including gravitational forces. This chapter supports Big Idea 3 that interactions between objects are described by forces, and thus change in motion is a result of a net force exerted on an object. In this chapter, this idea is applied to uniform circular motion. In some ways, this chapter is a continuation of Dynamics: Newton's Laws of Motion as we study more applications of Newton's laws of motion. This chapter deals with the simplest form of curved motion, uniform circular motion, which is motion in a circular path at constant speed. As an object moves on a circular path, the magnitude of its velocity remains constant, but the direction of the velocity is changing. This means there is an acceleration that we will refer to as a “centripetal” acceleration caused by a net external force, also called the “centripetal” force (Enduring Understanding 3.B). The centripetal force is the net force totaling all external forces acting on the object (Essential Knowledge 3.B.1). In order to determine the net force, a free-body diagram may be useful (Essential Knowledge 3.B.2). Studying this topic illustrates most of the concepts associated with rotational motion and leads to many new topics we group under the name rotation. This motion can be described using kinematics variables (Essential Knowledge 3.A.1), but in addition to linear variables, we will introduce angular variables. We use various ways to describe motion, namely, verbally, algebraically and graphically (Learning Objective 3.A.1.1). Pure rotational motion occurs when points in an object move in circular paths centered on one point. Pure translational motion is motion with no rotation. Some motion combines both types, such as a rotating hockey puck moving over ice. Some combinations of both types of motion are conveniently described with fictitious forces which appear as a result of using a non-inertial frame of reference (Enduring Understanding 3.A). Furthermore, the properties of uniform circular motion can be applied to the motion of massive objects in a gravitational field. Thus, this chapter supports Big Idea 1 that gravitational mass is an important property of an object or a system. We have experimental evidence that gravitational and inertial masses are equal (Enduring Understanding 1.C), and that gravitational mass is a measure of the strength of the gravitational interaction (Essential Knowledge 1.C.2). Therefore, this chapter will support Big Idea 2 that fields existing in space can be used to explain interactions, because any massive object creates a gravitational field in space (Enduring Understanding 2.B). Mathematically, we use Newton's universal law of gravitation to provide a model for the gravitational interaction between two massive objects (Essential Knowledge 2.B.2). We will discover that this model describes the interaction of one object with mass with another object with mass (Essential Knowledge 3.C.1), and also that gravitational force is a long-range force (Enduring Understanding 3.C). The concepts in this chapter support: Big Idea 1 Objects and systems have properties such as mass and charge. Systems may have internal structure. 
Enduring Understanding 1.C Objects and systems have properties of inertial mass and gravitational mass that are experimentally verified to be the same and that satisfy conservation principles. Essential Knowledge 1.C.2 Gravitational mass is the property of an object or a system that determines the strength of the gravitational interaction with other objects, systems, or gravitational fields. Essential Knowledge 1.C.3 Objects and systems have properties of inertial mass and gravitational mass that are experimentally verified to be the same and that satisfy conservation principles. Big Idea 2 Fields existing in space can be used to explain interactions. Enduring Understanding 2.B A gravitational field is caused by an object with mass. Essential Knowledge 2.B.2. The gravitational field caused by a spherically symmetric object with mass is radial and, outside the object, varies as the inverse square of the radial distance from the center of that object. Big Idea 3 The interactions of an object with other objects can be described by forces. Enduring Understanding 3.A All forces share certain common characteristics when considered by observers in inertial reference frames. Essential Knowledge 3.A.1. An observer in a particular reference frame can describe the motion of an object using such quantities as position, displacement, distance, velocity, speed, and acceleration. Essential Knowledge 3.A.3. A force exerted on an object is always due to the interaction of that object with another object. Enduring Understanding 3.B Classically, the acceleration of an object interacting with other objects can be predicted by using . Essential Knowledge 3.B.1 If an object of interest interacts with several other objects, the net force is the vector sum of the individual forces. Essential Knowledge 3.B.2 Free-body diagrams are useful tools for visualizing forces being exerted on a single object and writing the equations that represent a physical situation. Enduring Understanding 3.C At the macroscopic level, forces can be categorized as either long-range (action-at-a-distance) forces or contact forces. Essential Knowledge 3.C.1. Gravitational force describes the interaction of one object that has mass with another object that has mass.
# Uniform Circular Motion and Gravitation

## Rotation Angle and Angular Velocity

### Learning Objectives

By the end of this section, you will be able to:
1. Define arc length, rotation angle, radius of curvature and angular velocity.
2. Calculate the angular velocity of a car wheel spin.

In Kinematics, we studied motion along a straight line and introduced such concepts as displacement, velocity, and acceleration. Two-Dimensional Kinematics dealt with motion in two dimensions. Projectile motion is a special case of two-dimensional kinematics in which the object is projected into the air, while being subject to the gravitational force, and lands a distance away. In this chapter, we consider situations where the object does not land but moves in a curve. We begin the study of uniform circular motion by defining two angular quantities needed to describe rotational motion.

### Rotation Angle

When objects rotate about some axis—for example, when the CD (compact disc) in rotates about its center—each point in the object follows a circular arc. Consider a line from the center of the CD to its edge. Each pit used to record sound along this line moves through the same angle in the same amount of time. The rotation angle is the amount of rotation and is analogous to linear distance. We define the rotation angle $\Delta\theta$ to be the ratio of the arc length to the radius of curvature:

$\Delta\theta = \frac{\Delta s}{r}$.

The arc length $\Delta s$ is the distance traveled along a circular path, as shown in . Note that $r$ is the radius of curvature of the circular path.

We know that for one complete revolution, the arc length is the circumference of a circle of radius $r$. The circumference of a circle is $2\pi r$. Thus for one complete revolution the rotation angle is

$\Delta\theta = \frac{2\pi r}{r} = 2\pi$.

This result is the basis for defining the units used to measure rotation angles, to be radians (rad), defined so that $2\pi\ \text{rad} = 1$ revolution.

A comparison of some useful angles expressed in both degrees and radians is shown in .

If $\Delta\theta = 2\pi$ rad, then the CD has made one complete revolution, and every point on the CD is back at its original position. Because there are $360^\circ$ in a circle or one revolution, the relationship between radians and degrees is thus

$2\pi\ \text{rad} = 360^\circ$,

so that

$1\ \text{rad} = \frac{360^\circ}{2\pi} \approx 57.3^\circ$.

### Angular Velocity

How fast is an object rotating? We define angular velocity $\omega$ as the rate of change of an angle. In symbols, this is

$\omega = \frac{\Delta\theta}{\Delta t}$,

where an angular rotation $\Delta\theta$ takes place in a time $\Delta t$. The greater the rotation angle in a given amount of time, the greater the angular velocity. The units for angular velocity are radians per second (rad/s).

Angular velocity $\omega$ is analogous to linear velocity $v$. To get the precise relationship between angular and linear velocity, we again consider a pit on the rotating CD. This pit moves an arc length $\Delta s$ in a time $\Delta t$, and so it has a linear velocity

$v = \frac{\Delta s}{\Delta t}$.

From $\Delta\theta = \frac{\Delta s}{r}$ we see that $\Delta s = r\,\Delta\theta$. Substituting this into the expression for $v$ gives

$v = \frac{r\,\Delta\theta}{\Delta t} = r\omega$.

We write this relationship in two different ways and gain two different insights:

$v = r\omega$ or $\omega = \frac{v}{r}$.

The first relationship states that the linear velocity $v$ is proportional to the distance from the center of rotation, thus, it is largest for a point on the rim (largest $r$), as you might expect. We can also call this linear speed $v$ of a point on the rim the tangential speed. The second relationship can be illustrated by considering the tire of a moving car. Note that the speed of a point on the rim of the tire is the same as the speed $v$ of the car. See . So the faster the car moves, the faster the tire spins—large $v$ means a large $\omega$, because $v = r\omega$. Similarly, a larger-radius tire rotating at the same angular velocity ($\omega$) will produce a greater linear speed ($v$) for the car.
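As a worked illustration of $v = r\omega$ (added here; not part of the original text), the following Python sketch finds the angular velocity of a car tire of assumed radius when the car travels at an assumed highway speed, and converts the result to revolutions per second.

```python
# Angular velocity of a rolling tire: omega = v / r, with v the car's speed.
from math import pi

v = 25.0       # m/s, about 90 km/h (assumed car speed)
r = 0.30       # m, assumed tire radius

omega = v / r                      # rad/s
rev_per_s = omega / (2 * pi)       # one revolution is 2*pi radians

print(f"omega = {omega:.1f} rad/s = {rev_per_s:.1f} rev/s")
```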
Both $\omega$ and $v$ have directions (hence they are angular and linear velocities, respectively). Angular velocity has only two directions with respect to the axis of rotation—it is either clockwise or counterclockwise. Linear velocity is tangent to the path, as illustrated in . ### Section Summary 1. Uniform circular motion is motion in a circle at constant speed. The rotation angle $\Delta\theta$ is defined as the ratio of the arc length to the radius of curvature: $\Delta\theta = \Delta s / r$, where $\Delta s$ is the arc length and $r$ is the radius of curvature. 2. The conversion between radians and degrees is $1\ \text{rad} = 57.3^\circ$. 3. Angular velocity $\omega$ is the rate of change of an angle, $\omega = \Delta\theta / \Delta t$, where a rotation $\Delta\theta$ takes place in a time $\Delta t$. Linear velocity and angular velocity are related by $v = r\omega$. ### Conceptual Questions ### Problem Exercises
# Uniform Circular Motion and Gravitation ## Centripetal Acceleration ### Learning Objectives By the end of this section, you will be able to: 1. Establish the expression for centripetal acceleration. 2. Explain the centrifuge. We know from kinematics that acceleration is a change in velocity, either in its magnitude or in its direction, or both. In uniform circular motion, the direction of the velocity changes constantly, so there is always an associated acceleration, even though the magnitude of the velocity might be constant. You experience this acceleration yourself when you turn a corner in your car. (If you hold the wheel steady during a turn and move at constant speed, you are in uniform circular motion.) What you notice is a sideways acceleration because you and the car are changing direction. The sharper the curve and the greater your speed, the more noticeable this acceleration will become. In this section we examine the direction and magnitude of that acceleration. shows an object moving in a circular path at constant speed. The direction of the instantaneous velocity is shown at two points along the path. Acceleration is in the direction of the change in velocity, which points directly toward the center of rotation (the center of the circular path). This pointing is shown with the vector diagram in the figure. We call the acceleration of an object moving in uniform circular motion (resulting from a net external force) the centripetal acceleration(); centripetal means “toward the center” or “center seeking.” The direction of centripetal acceleration is toward the center of curvature, but what is its magnitude? Note that the triangle formed by the velocity vectors and the one formed by the radii and are similar. Both the triangles ABC and PQR are isosceles triangles (two equal sides). The two equal sides of the velocity vector triangle are the speeds . Using the properties of two similar triangles, we obtain Acceleration is , and so we first solve this expression for : Then we divide this by , yielding Finally, noting that and that , the linear or tangential speed, we see that the magnitude of the centripetal acceleration is which is the acceleration of an object in a circle of radius at a speed . So, centripetal acceleration is greater at high speeds and in sharp curves (smaller radius), as you have noticed when driving a car. But it is a bit surprising that is proportional to speed squared, implying, for example, that it is four times as hard to take a curve at 100 km/h than at 50 km/h. A sharp corner has a small radius, so that is greater for tighter turns, as you have probably noticed. It is also useful to express in terms of angular velocity. Substituting into the above expression, we find . We can express the magnitude of centripetal acceleration using either of two equations: Recall that the direction of is toward the center. You may use whichever expression is more convenient, as illustrated in examples below. A centrifuge (see b) is a rotating device used to separate specimens of different densities. High centripetal acceleration significantly decreases the time it takes for separation to occur, and makes separation possible with small samples. Centrifuges are used in a variety of applications in science and medicine, including the separation of single cell suspensions such as bacteria, viruses, and blood cells from a liquid medium and the separation of macromolecules, such as DNA and protein, from a solution. 
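The two equivalent expressions for the magnitude of centripetal acceleration can be checked numerically. The rotor radius and spin rate below are assumed illustrative values for a laboratory centrifuge, not numbers from the text; the result is also expressed as a multiple of the acceleration due to gravity, which is how centrifuge ratings are described next.

```python
import math

g = 9.80            # acceleration due to gravity, m/s^2

# Assumed illustrative centrifuge parameters (not from the text)
rpm = 7500.0        # rotation rate, revolutions per minute
r = 0.120           # radius of the rotor, m

# Convert rpm to angular velocity in rad/s
omega = rpm * 2 * math.pi / 60.0

# Centripetal acceleration from a_c = r * omega^2
a_c = r * omega**2

# The same result from a_c = v^2 / r, using v = r * omega
v = r * omega
a_c_alt = v**2 / r

print(f"a_c = {a_c:.3e} m/s^2  (check: {a_c_alt:.3e} m/s^2)")
print(f"that is about {a_c / g:.0f} g")
```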
Centrifuges are often rated in terms of their centripetal acceleration relative to the acceleration due to gravity $g$; maximum centripetal accelerations of several hundred thousand $g$ are possible in a vacuum. Human centrifuges, extremely large centrifuges, have been used to test the tolerance of astronauts to the effects of accelerations larger than that of Earth’s gravity. Of course, a net external force is needed to cause any acceleration, just as Newton proposed in his second law of motion. So a net external force is needed to cause a centripetal acceleration. In Centripetal Force, we will consider the forces involved in circular motion. ### Section Summary 1. Centripetal acceleration $a_c$ is the acceleration experienced while in uniform circular motion. It always points toward the center of rotation. It is perpendicular to the linear velocity and has the magnitude $a_c = v^2/r = r\omega^2$. 2. The unit of centripetal acceleration is $\text{m/s}^2$. ### Conceptual Questions ### Problem Exercises
# Uniform Circular Motion and Gravitation ## Centripetal Force ### Learning Objectives By the end of this section, you will be able to: 1. Calculate coefficient of friction on a car tire. 2. Calculate ideal speed and angle of a car on a turn. Any force or combination of forces can cause a centripetal or radial acceleration. Just a few examples are the tension in the rope on a tether ball, the force of Earth’s gravity on the Moon, friction between roller skates and a rink floor, a banked roadway’s force on a car, and forces on the tube of a spinning centrifuge. Any net force causing uniform circular motion is called a centripetal force. The direction of a centripetal force is toward the center of curvature, the same as the direction of centripetal acceleration. According to Newton’s second law of motion, net force is mass times acceleration: net . For uniform circular motion, the acceleration is the centripetal acceleration— . Thus, the magnitude of centripetal force is By using the expressions for centripetal acceleration from , we get two expressions for the centripetal force in terms of mass, velocity, angular velocity, and radius of curvature: You may use whichever expression for centripetal force is more convenient. Centripetal force is always perpendicular to the path and pointing to the center of curvature, because is perpendicular to the velocity and pointing to the center of curvature. Note that if you solve the first expression for , you get This implies that for a given mass and velocity, a large centripetal force causes a small radius of curvature—that is, a tight curve. Let us now consider banked curves, where the slope of the road helps you negotiate the curve. See . The greater the angle , the faster you can take the curve. Race tracks for bikes as well as cars, for example, often have steeply banked curves. In an “ideally banked curve,” the angle is such that you can negotiate the curve at a certain speed without the aid of friction between the tires and the road. We will derive an expression for for an ideally banked curve and consider an example related to it. For ideal banking, the net external force equals the horizontal centripetal force in the absence of friction. The components of the normal force N in the horizontal and vertical directions must equal the centripetal force and the weight of the car, respectively. In cases in which forces are not parallel, it is most convenient to consider components along perpendicular axes—in this case, the vertical and horizontal directions. shows a free body diagram for a car on a frictionless banked curve. If the angle is ideal for the speed and radius, then the net external force will equal the necessary centripetal force. The only two external forces acting on the car are its weight and the normal force of the road . (A frictionless surface can only exert a force perpendicular to the surface—that is, a normal force.) These two forces must add to give a net external force that is horizontal toward the center of curvature and has magnitude . Because this is the crucial force and it is horizontal, we use a coordinate system with vertical and horizontal axes. Only the normal force has a horizontal component, and so this must equal the centripetal force—that is, Because the car does not leave the surface of the road, the net vertical force must be zero, meaning that the vertical components of the two external forces must be equal in magnitude and opposite in direction. 
From the figure, we see that the vertical component of the normal force is $N\cos\theta$, and the only other vertical force is the car’s weight. These must be equal in magnitude; thus, $N\cos\theta = mg$. Now we can combine the last two equations to eliminate $N$ and get an expression for $\theta$, as desired. Solving the second equation for $N = mg/\cos\theta$, and substituting this into the first yields $\tan\theta = \dfrac{v^2}{rg}$. Taking the inverse tangent gives $\theta = \tan^{-1}\!\left(\dfrac{v^2}{rg}\right)$ (ideally banked curve, no friction). This expression can be understood by considering how $\theta$ depends on $v$ and $r$. A large $\theta$ will be obtained for a large $v$ and a small $r$. That is, roads must be steeply banked for high speeds and sharp curves. Friction helps, because it allows you to take the curve at greater or lower speed than if the curve is frictionless. Note that $\theta$ does not depend on the mass of the vehicle. ### Section Summary 1. Centripetal force $F_c$ is any force causing uniform circular motion. It is a “center-seeking” force that always points toward the center of rotation. It is perpendicular to linear velocity and has magnitude $F_c = ma_c$, which can also be expressed as $F_c = \dfrac{mv^2}{r}$ or $F_c = mr\omega^2$. ### Conceptual Questions ### Problem Exercises
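Before leaving this section, the results above can be tried with numbers. The radius and speed below are assumed illustrative values, not taken from the text; the sketch computes the ideal banking angle from tan θ = v²/(rg) and, for comparison, the minimum coefficient of static friction that would be needed to take the same curve on a flat road (a standard result consistent with the section's first learning objective, though not derived explicitly above).

```python
import math

g = 9.80      # acceleration due to gravity, m/s^2

# Assumed illustrative values (not from the text)
r = 100.0     # radius of curvature of the turn, m
v = 25.0      # speed of the car, m/s

# Ideal banking angle: tan(theta) = v^2 / (r g), so theta = atan(v^2 / (r g))
theta = math.atan(v**2 / (r * g))

# On a flat (unbanked) curve, friction alone must supply the centripetal force:
# mu_s * m * g >= m * v^2 / r, so the minimum coefficient is mu_s = v^2 / (r g)
mu_s_min = v**2 / (r * g)

print(f"ideal banking angle: {math.degrees(theta):.1f} degrees")
print(f"minimum static friction coefficient on a flat curve: {mu_s_min:.2f}")
```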
# Uniform Circular Motion and Gravitation ## Fictitious Forces and Non-inertial Frames: The Coriolis Force ### Learning Objectives By the end of this section, you will be able to: 1. Discuss the inertial frame of reference. 2. Discuss the non-inertial frame of reference. 3. Describe the effects of the Coriolis force. What do taking off in a jet airplane, turning a corner in a car, riding a merry-go-round, and the circular motion of a tropical cyclone have in common? Each exhibits fictitious forces—unreal forces that arise from motion and may seem real, because the observer’s frame of reference is accelerating or rotating. When taking off in a jet, most people would agree it feels as if you are being pushed back into the seat as the airplane accelerates down the runway. Yet a physicist would say that you tend to remain stationary while the seat pushes forward on you, and there is no real force backward on you. An even more common experience occurs when you make a tight curve in your car—say, to the right. You feel as if you are thrown (that is, forced) toward the left relative to the car. Again, a physicist would say that you are going in a straight line but the car moves to the right, and there is no real force on you to the left. Recall Newton’s first law. We can reconcile these points of view by examining the frames of reference used. Let us concentrate on people in a car. Passengers instinctively use the car as a frame of reference, while a physicist uses Earth. The physicist chooses Earth because it is very nearly an inertial frame of reference—one in which all forces are real (that is, in which all forces have an identifiable physical origin). In such a frame of reference, Newton’s laws of motion take the form given in Dynamics: Newton's Laws of Motion The car is a non-inertial frame of reference because it is accelerated to the side. The force to the left sensed by car passengers is a fictitious force having no physical origin. There is nothing real pushing them left—the car, as well as the driver, is actually accelerating to the right. Let us now take a mental ride on a merry-go-round—specifically, a rapidly rotating playground merry-go-round. You take the merry-go-round to be your frame of reference because you rotate together. In that non-inertial frame, you feel a fictitious force, named centrifugal force (not to be confused with centripetal force), trying to throw you off. You must hang on tightly to counteract the centrifugal force. In Earth’s frame of reference, there is no force trying to throw you off. Rather you must hang on to make yourself go in a circle because otherwise you would go in a straight line, right off the merry-go-round. This inertial effect, carrying you away from the center of rotation if there is no centripetal force to cause circular motion, is put to good use in centrifuges (see ). A centrifuge spins a sample very rapidly, as mentioned earlier in this chapter. Viewed from the rotating frame of reference, the fictitious centrifugal force throws particles outward, hastening their sedimentation. The greater the angular velocity, the greater the centrifugal force. But what really happens is that the inertia of the particles carries them along a line tangent to the circle while the test tube is forced in a circular path by a centripetal force. Let us now consider what happens if something moves in a frame of reference that rotates. For example, what if you slide a ball directly away from the center of the merry-go-round, as shown in ? 
The ball follows a straight path relative to Earth (assuming negligible friction) and a path curved to the right on the merry-go-round’s surface. A person standing next to the merry-go-round sees the ball moving straight and the merry-go-round rotating underneath it. In the merry-go-round’s frame of reference, we explain the apparent curve to the right by using a fictitious force, called the Coriolis force, that causes the ball to curve to the right. The fictitious Coriolis force can be used by anyone in that frame of reference to explain why objects follow curved paths and allows us to apply Newton’s Laws in non-inertial frames of reference. Up until now, we have considered Earth to be an inertial frame of reference with little or no worry about effects due to its rotation. Yet such effects do exist—in the rotation of weather systems, for example. Most consequences of Earth’s rotation can be qualitatively understood by analogy with the merry-go-round. Viewed from above the North Pole, Earth rotates counterclockwise, as does the merry-go-round in . As on the merry-go-round, any motion in Earth’s northern hemisphere experiences a Coriolis force to the right. Just the opposite occurs in the southern hemisphere; there, the force is to the left. Because Earth’s angular velocity is small, the Coriolis force is usually negligible, but for large-scale motions, such as wind patterns, it has substantial effects. The Coriolis force causes hurricanes in the northern hemisphere to rotate in the counterclockwise direction, while the tropical cyclones (what hurricanes are called below the equator) in the southern hemisphere rotate in the clockwise direction. The terms hurricane, typhoon, and tropical storm are regionally-specific names for tropical cyclones, storm systems characterized by low pressure centers, strong winds, and heavy rains. helps show how these rotations take place. Air flows toward any region of low pressure, and tropical cyclones contain particularly low pressures. Thus winds flow toward the center of a tropical cyclone or a low-pressure weather system at the surface. In the northern hemisphere, these inward winds are deflected to the right, as shown in the figure, producing a counterclockwise circulation at the surface for low-pressure zones of any type. Low pressure at the surface is associated with rising air, which also produces cooling and cloud formation, making low-pressure patterns quite visible from space. Conversely, wind circulation around high-pressure zones is clockwise in the northern hemisphere but is less visible because high pressure is associated with sinking air, producing clear skies. The rotation of tropical cyclones and the path of a ball on a merry-go-round can just as well be explained by inertia and the rotation of the system underneath. When non-inertial frames are used, fictitious forces, such as the Coriolis force, must be invented to explain the curved path. There is no identifiable physical source for these fictitious forces. In an inertial frame, inertia explains the path, and no force is found to be without an identifiable source. Either view allows us to describe nature, but a view in an inertial frame is the simplest and truest, in the sense that all forces have real origins and explanations. ### Section Summary 1. Rotating and accelerated frames of reference are non-inertial. 2. Fictitious forces, such as the Coriolis force, are needed to explain motion in such frames. ### Conceptual Questions
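The claim that the ball's path is straight in Earth's frame but curved in the merry-go-round's frame can be illustrated with a short simulation. Everything below is an assumed toy setup, not an example from the text: the ball slides outward at constant velocity, and its position is simply re-expressed in coordinates that rotate with the (counterclockwise) merry-go-round. No force is applied, yet in the rotating coordinates the ball drifts steadily to the right of its outward motion, which is exactly the deflection the fictitious Coriolis force is invented to explain.

```python
import math

# Assumed illustrative values (not from the text)
omega = 1.0        # angular velocity of the merry-go-round, rad/s (counterclockwise)
v_ball = 0.5       # speed of the ball sliding outward, m/s
dt = 0.1           # time step, s
steps = 20

for i in range(steps + 1):
    t = i * dt
    # Inertial (Earth) frame: the ball moves in a straight line along the x-axis
    x, y = v_ball * t, 0.0

    # Rotating (merry-go-round) frame: rotate the coordinates by -omega*t
    angle = -omega * t
    x_rot = x * math.cos(angle) - y * math.sin(angle)
    y_rot = x * math.sin(angle) + y * math.cos(angle)

    if i % 5 == 0:
        print(f"t={t:.1f} s  inertial=({x:.2f}, {y:.2f})  rotating=({x_rot:.2f}, {y_rot:.2f})")
```

The printed rotating-frame y coordinate becomes increasingly negative, showing the rightward curve seen by a rider on the merry-go-round.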
# Uniform Circular Motion and Gravitation ## Newton’s Universal Law of Gravitation ### Learning Objectives By the end of this section, you will be able to: 1. Explain Earth’s gravitational force. 2. Describe the gravitational effect of the Moon on Earth. 3. Discuss weightlessness in space. 4. Examine the Cavendish experiment What do aching feet, a falling apple, and the orbit of the Moon have in common? Each is caused by the gravitational force. Our feet are strained by supporting our weight—the force of Earth’s gravity on us. An apple falls from a tree because of the same force acting a few meters above Earth’s surface. And the Moon orbits Earth because gravity is able to supply the necessary centripetal force at a distance of hundreds of millions of meters. In fact, the same force causes planets to orbit the Sun, stars to orbit the center of the galaxy, and galaxies to cluster together. Gravity is another example of underlying simplicity in nature. It is the weakest of the four basic forces found in nature, and in some ways the least understood. It is a force that acts at a distance, without physical contact, and is expressed by a formula that is valid everywhere in the universe, for masses and distances that vary from the tiny to the immense. Sir Isaac Newton was the first scientist to precisely define the gravitational force, and to show that it could explain both falling bodies and astronomical motions. See . But Newton was not the first to suspect that the same force caused both our weight and the motion of planets. His forerunner Galileo Galilei had contended that falling bodies and planetary motions had the same cause. Some of Newton’s contemporaries, such as Robert Hooke, Christopher Wren, and Edmund Halley, had also made some progress toward understanding gravitation. But Newton was the first to propose an exact mathematical form and to use that form to show that the motion of heavenly bodies should be conic sections—circles, ellipses, parabolas, and hyperbolas. This theoretical prediction was a major triumph—it had been known for some time that moons, planets, and comets follow such paths, but no one had been able to propose a mechanism that caused them to follow these paths and not others. Other prominent scientists and mathematicians of the time, particularly those outside of England, were reluctant to accept Newton's principles. It took the work of another prominent philosopher, writer, and scientist, Émilie du Châtelet, to establish the Newtonian gravitation as the accurate and overarching law. Du Châtelet, who had earlier laid the foundation for the understanding of conservation of energy as well as the principle that light had no mass, translated and augmented Newton's key work. She also utilized calculus to explain gravity, which helped lead to its acceptance. The gravitational force is relatively simple. It is always attractive, and it depends only on the masses involved and the distance between them. Stated in modern language, Newton’s universal law of gravitation states that every particle in the universe attracts every other particle with a force along a line joining them. The force is directly proportional to the product of their masses and inversely proportional to the square of the distance between them. The bodies we are dealing with tend to be large. To simplify the situation we assume that the body acts as if its entire mass is concentrated at one specific point called the center of mass (CM). 
The concept of center of mass applies both to a single, extended object and to a system of objects that may be moving relative to each other. The center of mass is the average location of the mass if we divide the object (or system) into pieces of equal mass. The center of mass of a system of objects will respond to external forces as if all of the mass of the system were concentrated there. For two bodies having masses and with a distance between their centers of mass, the equation for Newton’s universal law of gravitation is where is the magnitude of the gravitational force and is a proportionality factor called the gravitational constant. is a universal gravitational constant—that is, it is thought to be the same everywhere in the universe. It has been measured experimentally to be in SI units. Note that the units of are such that a force in newtons is obtained from , when considering masses in kilograms and distance in meters. For example, two 1.000 kg masses separated by 1.000 m will experience a gravitational attraction of . This is an extraordinarily small force. The small magnitude of the gravitational force is consistent with everyday experience. We are unaware that even large objects like mountains exert gravitational forces on us. In fact, our body weight is the force of attraction of the entire Earth on us with a mass of . Recall that the acceleration due to gravity is about on Earth. We can now determine why this is so. The weight of an object mg is the gravitational force between it and Earth. Substituting mg for in Newton’s universal law of gravitation gives where is the mass of the object, is the mass of Earth, and is the distance to the center of Earth (the distance between the centers of mass of the object and Earth). See . The mass of the object cancels, leaving an equation for : Substituting known values for Earth’s mass and radius (to three significant figures), and we obtain a value for the acceleration of a falling body: This is the expected value and is independent of the body’s mass. Newton’s law of gravitation takes Galileo’s observation that all masses fall with the same acceleration a step further, explaining the observation in terms of a force that causes objects to fall—in fact, in terms of a universally existing force of attraction between masses. In the following example, we make a comparison similar to one made by Newton himself. He noted that if the gravitational force caused the Moon to orbit Earth, then the acceleration due to gravity should equal the centripetal acceleration of the Moon in its orbit. Newton found that the two accelerations agreed “pretty nearly.” Why does Earth not remain stationary as the Moon orbits it? This is because, as expected from Newton’s third law, if Earth exerts a force on the Moon, then the Moon should exert an equal and opposite force on Earth (see ). We do not sense the Moon’s effect on Earth’s motion, because the Moon’s gravity moves our bodies right along with Earth but there are other signs on Earth that clearly show the effect of the Moon’s gravitational force as discussed in Satellites and Kepler's Laws: An Argument for Simplicity. ### Tides Ocean tides are one very observable result of the Moon’s gravity acting on Earth. is a simplified drawing of the Moon’s position relative to the tides. Because water easily flows on Earth’s surface, a high tide is created on the side of Earth nearest to the Moon, where the Moon’s gravitational pull is strongest. Why is there also a high tide on the opposite side of Earth? 
The answer is that Earth is pulled toward the Moon more than the water on the far side, because Earth is closer to the Moon. So the water on the side of Earth closest to the Moon is pulled away from Earth, and Earth is pulled away from water on the far side. As Earth rotates, the tidal bulge (an effect of the tidal forces between an orbiting natural satellite and the primary planet that it orbits) keeps its orientation with the Moon. Thus there are two tides per day (the actual tidal period is about 12 hours and 25.2 minutes, because the Moon moves in its orbit each day as well). The Sun also affects tides, although it has about half the effect of the Moon. However, the largest tides, called spring tides, occur when Earth, the Moon, and the Sun are aligned. The smallest tides, called neap tides, occur when the Sun is at a $90^\circ$ angle to the Earth-Moon alignment. Tides are not unique to Earth but occur in many astronomical systems. The most extreme tides occur where the gravitational force is the strongest and varies most rapidly, such as near black holes (see ). A few likely candidates for black holes have been observed in our galaxy. These have masses greater than the Sun but have diameters only a few kilometers across. The tidal forces near them are so great that they can actually tear matter from a companion star. ### “Weightlessness” and Microgravity In contrast to the tremendous gravitational force near black holes is the apparent gravitational field experienced by astronauts orbiting Earth. What is the effect of “weightlessness” upon an astronaut who is in orbit for months? Or what about the effect of weightlessness upon plant growth? Weightlessness doesn’t mean that an astronaut is not being acted upon by the gravitational force. There is no “zero gravity” in an astronaut’s orbit. The term just means that the astronaut is in free-fall, accelerating with the acceleration due to gravity. If an elevator cable breaks, the passengers inside will be in free fall and will experience weightlessness. You can experience short periods of weightlessness in some rides in amusement parks. Microgravity refers to an environment in which the apparent net acceleration of a body is small compared with that produced by Earth at its surface. Many interesting biology and physics topics have been studied over the past three decades in the presence of microgravity. Of immediate concern is the effect on astronauts of extended times in outer space, such as at the International Space Station. Researchers have observed that muscles will atrophy (waste away) in this environment. There is also a corresponding loss of bone mass. Study continues on cardiovascular adaptation to space flight. On Earth, blood pressure is usually higher in the feet than in the head, because the higher column of blood exerts a downward force on it, due to gravity. When standing, 70% of your blood is below the level of the heart, while in a horizontal position, just the opposite occurs. What difference does the absence of this pressure differential have upon the heart? Some findings in human physiology in space can be clinically important to the management of diseases back on Earth. On a somewhat negative note, spaceflight is known to affect the human immune system, possibly making the crew members more vulnerable to infectious diseases. Experiments flown in space also have shown that some bacteria grow faster in microgravity than they do on Earth.
However, on a positive note, studies indicate that microbial antibiotic production can increase by a factor of two in space-grown cultures. One hopes to be able to understand these mechanisms so that similar successes can be achieved on the ground. In another area of physics space research, inorganic crystals and protein crystals have been grown in outer space that have much higher quality than any grown on Earth, so crystallography studies on their structure can yield much better results. Plants have evolved with the stimulus of gravity and with gravity sensors. Roots grow downward and shoots grow upward. Plants might be able to provide a life support system for long duration space missions by regenerating the atmosphere, purifying water, and producing food. Some studies have indicated that plant growth and development are not affected by gravity, but there is still uncertainty about structural changes in plants grown in a microgravity environment. ### The Cavendish Experiment: Then and Now As previously noted, the universal gravitational constant is determined experimentally. This definition was first done accurately by Henry Cavendish (1731–1810), an English scientist, in 1798, more than 100 years after Newton published his universal law of gravitation. The measurement of is very basic and important because it determines the strength of one of the four forces in nature. Cavendish’s experiment was very difficult because he measured the tiny gravitational attraction between two ordinary-sized masses (tens of kilograms at most), using apparatus like that in . Remarkably, his value for differs by less than 1% from the best modern value. One important consequence of knowing was that an accurate value for Earth’s mass could finally be obtained. This was done by measuring the acceleration due to gravity as accurately as possible and then calculating the mass of Earth from the relationship Newton’s universal law of gravitation gives where is the mass of the object, is the mass of Earth, and is the distance to the center of Earth (the distance between the centers of mass of the object and Earth). See . The mass of the object cancels, leaving an equation for : Rearranging to solve for yields So can be calculated because all quantities on the right, including the radius of Earth , are known from direct measurements. We shall see in Satellites and Kepler's Laws: An Argument for Simplicity that knowing also allows for the determination of astronomical masses. Interestingly, of all the fundamental constants in physics, is by far the least well determined. The Cavendish experiment is also used to explore other aspects of gravity. One of the most interesting questions is whether the gravitational force depends on substance as well as mass—for example, whether one kilogram of lead exerts the same gravitational pull as one kilogram of water. A Hungarian scientist named Roland von Eötvös pioneered this inquiry early in the 20th century. He found, with an accuracy of five parts per billion, that the gravitational force does not depend on the substance. Such experiments continue today, and have improved upon Eötvös’ measurements. Cavendish-type experiments such as those of Eric Adelberger and others at the University of Washington, have also put severe limits on the possibility of a fifth force and have verified a major prediction of general relativity—that gravitational energy contributes to rest mass. 
Ongoing measurements there use a torsion balance and a parallel plate (not spheres, as Cavendish used) to examine how Newton’s law of gravitation works over sub-millimeter distances. On this small-scale, do gravitational effects depart from the inverse square law? So far, no deviation has been observed. ### Test Prep for AP Courses ### Section Summary 1. Newton’s universal law of gravitation: Every particle in the universe attracts every other particle with a force along a line joining them. The force is directly proportional to the product of their masses and inversely proportional to the square of the distance between them. In equation form, this is where F is the magnitude of the gravitational force. 2. Newton’s law of gravitation applies universally. ### Conceptual Questions ### Problem Exercises
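As a numeric sketch of this section's reasoning, the snippet below evaluates Newton's law for the two 1.000 kg masses mentioned above, computes g from Earth's mass and radius, and then inverts the relation, Cavendish-style, to recover Earth's mass from g. The extracted text omits the numerical constants, so the standard three-significant-figure values (G ≈ 6.674 × 10⁻¹¹ N·m²/kg², M ≈ 5.98 × 10²⁴ kg, r ≈ 6.38 × 10⁶ m) are filled in here.

```python
G = 6.674e-11        # gravitational constant, N*m^2/kg^2

# Force between two 1.000 kg masses separated by 1.000 m: F = G m M / r^2
F_small = G * 1.000 * 1.000 / 1.000**2
print(f"force between two 1 kg masses 1 m apart: {F_small:.3e} N")

# Acceleration due to gravity at Earth's surface: g = G M / r^2
M_earth = 5.98e24    # mass of Earth, kg (three significant figures)
r_earth = 6.38e6     # radius of Earth, m (three significant figures)
g = G * M_earth / r_earth**2
print(f"g at Earth's surface: {g:.2f} m/s^2")

# Cavendish-style inversion: once G and g are measured, M = g r^2 / G
M_from_g = g * r_earth**2 / G
print(f"Earth's mass recovered from g, r, and G: {M_from_g:.2e} kg")
```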
# Uniform Circular Motion and Gravitation ## Satellites and Kepler’s Laws: An Argument for Simplicity ### Learning Objectives By the end of this section, you will be able to: 1. State Kepler’s laws of planetary motion. 2. Derive the third Kepler’s law for circular orbits. 3. Discuss the Ptolemaic model of the universe. Examples of gravitational orbits abound. Hundreds of artificial satellites orbit Earth together with thousands of pieces of debris. The Moon’s orbit about Earth has intrigued humans from time immemorial. The orbits of planets, asteroids, meteors, and comets about the Sun are no less interesting. If we look further, we see almost unimaginable numbers of stars, galaxies, and other celestial objects orbiting one another and interacting through gravity. All these motions are governed by gravitational force, and it is possible to describe them to various degrees of precision. Precise descriptions of complex systems must be made with large computers. However, we can describe an important class of orbits without the use of computers, and we shall find it instructive to study them. These orbits have the following characteristics: 1. A small mass . This allows us to view the motion as if were stationary—in fact, as if from an inertial frame of reference placed on —without significant error. Mass is the satellite of , if the orbit is gravitationally bound. 2. The system is isolated from other masses. This allows us to neglect any small effects due to outside masses. The conditions are satisfied, to good approximation, by Earth’s satellites (including the Moon), by objects orbiting the Sun, and by the satellites of other planets. Historically, planets were studied first, and there is a classical set of three laws, called Kepler’s laws of planetary motion, that describe the orbits of all bodies satisfying the two previous conditions (not just planets in our solar system). These descriptive laws are named for the German astronomer Johannes Kepler (1571–1630), who devised them after careful study (over some 20 years) of a large amount of meticulously recorded observations of planetary motion done by Tycho Brahe (1546–1601). Such careful collection and detailed recording of methods and data are hallmarks of good science. Data constitute the evidence from which new interpretations and meanings can be constructed. ### Kepler’s Laws of Planetary Motion Kepler’s First Law The orbit of each planet about the Sun is an ellipse with the Sun at one focus. Kepler’s Second Law Each planet moves so that an imaginary line drawn from the Sun to the planet sweeps out equal areas in equal times (see ). Kepler’s Third Law The ratio of the squares of the periods of any two planets about the Sun is equal to the ratio of the cubes of their average distances from the Sun. In equation form, this is where is the period (time for one orbit) and is the average radius. This equation is valid only for comparing two small masses orbiting the same large one. Most importantly, this is a descriptive equation only, giving no information as to the cause of the equality. Note again that while, for historical reasons, Kepler’s laws are stated for planets orbiting the Sun, they are actually valid for all bodies satisfying the two previously stated conditions. People immediately search for deeper meaning when broadly applicable laws, like Kepler’s, are discovered. It was Newton who took the next giant step when he proposed the law of universal gravitation. 
While Kepler was able to discover what was happening, Newton discovered that gravitational force was the cause. ### Derivation of Kepler’s Third Law for Circular Orbits We shall derive Kepler’s third law, starting with Newton’s laws of motion and his universal law of gravitation. The point is to demonstrate that the force of gravity is the cause for Kepler’s laws (although we will only derive the third one). Let us consider a circular orbit of a small mass around a large mass , satisfying the two conditions stated at the beginning of this section. Gravity supplies the centripetal force to mass . Starting with Newton’s second law applied to circular motion, The net external force on mass is gravity, and so we substitute the force of gravity for : The mass cancels, yielding The fact that cancels out is another aspect of the oft-noted fact that at a given location all masses fall with the same acceleration. Here we see that at a given orbital radius , all masses orbit at the same speed. (This was implied by the result of the preceding worked example.) Now, to get at Kepler’s third law, we must get the period into the equation. By definition, period is the time for one complete orbit. Now the average speed is the circumference divided by the period—that is, Substituting this into the previous equation gives Solving for yields Using subscripts 1 and 2 to denote two different satellites, and taking the ratio of the last equation for satellite 1 to satellite 2 yields This is Kepler’s third law. Note that Kepler’s third law is valid only for comparing satellites of the same parent body, because only then does the mass of the parent body cancel. Now consider what we get if we solve for the ratio . We obtain a relationship that can be used to determine the mass of a parent body from the orbits of its satellites: If and are known for a satellite, then the mass of the parent can be calculated. This principle has been used extensively to find the masses of heavenly bodies that have satellites. Furthermore, the ratio should be a constant for all satellites of the same parent body (because ). (See ). It is clear from that the ratio of is constant, at least to the third digit, for all listed satellites of the Sun, and for those of Jupiter. Small variations in that ratio have two causes—uncertainties in the and data, and perturbations of the orbits due to other bodies. Interestingly, those perturbations can be—and have been—used to predict the location of new planets and moons. This is another verification of Newton’s universal law of gravitation. ### The Case for Simplicity The development of the universal law of gravitation by Newton played a pivotal role in the history of ideas. While it is beyond the scope of this text to cover that history in any detail, we note some important points. The definition of planet set in 2006 by the International Astronomical Union (IAU) states that in the solar system, a planet is a celestial body that: 1. is in orbit around the Sun, 2. has sufficient mass to assume hydrostatic equilibrium and 3. has cleared the neighborhood around its orbit. A non-satellite body fulfilling only the first two of the above criteria is classified as “dwarf planet.” In 2006, Pluto was demoted to a ‘dwarf planet’ after scientists revised their definition of what constitutes a “true” planet. The universal law of gravitation is a good example of a physical principle that is very broadly applicable. That single equation for the gravitational force describes all situations in which gravity acts. 
It gives a cause for a vast number of effects, such as the orbits of the planets and moons in the solar system. It epitomizes the underlying unity and simplicity of physics. Before the discoveries of Kepler, Copernicus, Galileo, Newton, and others, the solar system was thought to revolve around Earth as shown in (a). This is called the Ptolemaic view, for the Greek philosopher who lived in the second century AD. This model is characterized by a list of facts for the motions of planets with no cause and effect explanation. There tended to be a different rule for each heavenly body and a general lack of simplicity. (b) represents the modern or Copernican model. In this model, a small set of rules and a single underlying force explain not only all motions in the solar system, but all other situations involving gravity. The breadth and simplicity of the laws of physics are compelling. As our knowledge of nature has grown, the basic simplicity of its laws has become ever more evident. ### Section Summary 1. Kepler’s laws are stated for a small mass orbiting a larger mass in near-isolation. Kepler’s laws of planetary motion are then as follows: Kepler’s first law The orbit of each planet about the Sun is an ellipse with the Sun at one focus. Kepler’s second law Each planet moves so that an imaginary line drawn from the Sun to the planet sweeps out equal areas in equal times. Kepler’s third law The ratio of the squares of the periods of any two planets about the Sun is equal to the ratio of the cubes of their average distances from the Sun: where 2. The period and radius of a satellite’s orbit about a larger body are related by or ### Conceptual Questions ### Problem Exercises
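A short calculation makes the argument of this section concrete: the ratio r³/T² is essentially the same for every body orbiting the Sun, and solving that ratio for the parent mass gives the mass of the Sun. The orbital radii and periods below are standard approximate values supplied for illustration; they are not quoted in the extracted text.

```python
import math

G = 6.674e-11                      # gravitational constant, N*m^2/kg^2

# Approximate orbital data (illustrative standard values, not from the text):
# average orbital radius r in meters, period T in seconds
orbits = {
    "Earth": (1.496e11, 3.156e7),
    "Mars": (2.279e11, 5.94e7),
    "Jupiter": (7.78e11, 3.74e8),
}

for name, (r, T) in orbits.items():
    # Kepler's third law in the form r^3 / T^2 = G M / (4 pi^2): constant for one parent body
    ratio = r**3 / T**2
    # Solving for the parent's mass: M = 4 pi^2 r^3 / (G T^2)
    M_sun = 4 * math.pi**2 * r**3 / (G * T**2)
    print(f"{name}: r^3/T^2 = {ratio:.3e} m^3/s^2, implied solar mass = {M_sun:.2e} kg")
```

All three rows give nearly the same ratio and nearly the same solar mass, which is the point of the derivation above.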
# Work, Energy, and Energy Resources ## Connection for AP® Courses Energy plays an essential role both in everyday events and in scientific phenomena. You can no doubt name many forms of energy, from that provided by our foods to the energy we use to run our cars and the sunlight that warms us on the beach. You can also cite examples of what people call “energy” that may not be scientific, such as someone having an energetic personality. Not only does energy have many interesting forms, it is involved in almost all phenomena, and is one of the most important concepts of physics. There is no simple and accurate scientific definition for energy. Energy is characterized by its many forms and the fact that it is conserved. We can loosely define energy as the ability to do work, admitting that in some circumstances not all energy is available to do work. Because of the association of energy with work, we begin the chapter with a discussion of work. Work is intimately related to energy and how energy moves from one system to another or changes form. The work-energy theorem supports Big Idea 3, that interactions between objects are described by forces. In particular, exerting a force on an object may do work on it, changing its energy (Enduring Understanding 3.E). The work-energy theorem, introduced in this chapter, establishes the relationship between work done on an object by an external force and changes in the object’s kinetic energy (Essential Knowledge 3.E.1). Similarly, systems can do work on each other, supporting Big Idea 4, that interactions between systems can result in changes in those systems—in this case, changes in the total energy of the system (Enduring Understanding 4.C). The total energy of the system is the sum of its kinetic energy, potential energy, and microscopic internal energy (Essential Knowledge 4.C.1). In this chapter students learn how to calculate kinetic, gravitational, and elastic potential energy in order to determine the total mechanical energy of a system. The transfer of mechanical energy into or out of a system is equal to the work done on the system by an external force with a nonzero component parallel to the displacement (Essential Knowledge 4.C.2). An important aspect of energy is that the total amount of energy in the universe is constant. Energy can change forms, but it cannot appear from nothing or disappear without a trace. Energy is thus one of a handful of physical quantities that we say is “conserved.” Conservation of energy (as physicists call the principle that energy can neither be created nor destroyed) is based on experiment. Even as scientists discovered new forms of energy, conservation of energy has always been found to apply. Perhaps the most dramatic example of this was supplied by Einstein when he suggested that mass is equivalent to energy (his famous equation $E = mc^2$). This is one of the most important applications of Big Idea 5, that changes that occur as a result of interactions are constrained by conservation laws. Specifically, there are many situations where conservation of energy (Enduring Understanding 5.B) is both a useful concept and starting point for calculations related to the system. Note, however, that conservation doesn’t necessarily mean that energy in a system doesn’t change. Energy may be transferred into or out of the system, and the change must be equal to the amount transferred (Enduring Understanding 5.A). 
This may occur if there is an external force or a transfer between external objects and the system (Essential Knowledge 5.A.3). Energy is one of the fundamental quantities that are conserved for all systems (Essential Knowledge 5.A.2). The chapter introduces concepts of kinetic energy and potential energy. Kinetic energy is introduced as an energy of motion that can be changed by the amount of work done by an external force. Potential energy can only exist when objects interact with each other via conservative forces according to classical physics (Essential Knowledge 5.B.3). Because of this, a single object can only have kinetic energy and no potential energy (Essential Knowledge 5.B.1). The chapter also introduces the idea that the energy transfer is equal to the work done on the system by external forces and the rate of energy transfer is defined as power (Essential Knowledge 5.B.5). From a societal viewpoint, energy is one of the major building blocks of modern civilization. Energy resources are key limiting factors to economic growth. The world use of energy resources, especially oil, continues to grow, with ominous consequences economically, socially, politically, and environmentally. We will briefly examine the world’s energy use patterns at the end of this chapter. The concepts in this chapter support: Big Idea 3 The interactions of an object with other objects can be described by forces. Enduring Understanding 3.E A force exerted on an object can change the kinetic energy of the object. Essential Knowledge 3.E.1 The change in the kinetic energy of an object depends on the force exerted on the object and on the displacement of the object during the interval that the force is exerted. Big Idea 4 Interactions between systems can result in changes in those systems. Enduring Understanding 4.C Interactions with other objects or systems can change the total energy of a system. Essential Knowledge 4.C.1 The energy of a system includes its kinetic energy, potential energy, and microscopic internal energy. Examples should include gravitational potential energy, elastic potential energy, and kinetic energy. Essential Knowledge 4.C.2 Mechanical energy (the sum of kinetic and potential energy) is transferred into or out of a system when an external force is exerted on a system such that a component of the force is parallel to its displacement. The process through which the energy is transferred is called work. Big Idea 5 Changes that occur as a result of interactions are constrained by conservation laws. Enduring Understanding 5.A Certain quantities are conserved, in the sense that the changes of those quantities in a given system are always equal to the transfer of that quantity to or from the system by all possible interactions with other systems. Essential Knowledge 5.A.2 For all systems under all circumstances, energy, charge, linear momentum, and angular momentum are conserved. Essential Knowledge 5.A.3 An interaction can be either a force exerted by objects outside the system or the transfer of some quantity with objects outside the system. Enduring Understanding 5.B The energy of a system is conserved. Essential Knowledge 5.B.1 Classically, an object can only have kinetic energy since potential energy requires an interaction between two or more objects. Essential Knowledge 5.B.3 A system with internal structure can have potential energy. Potential energy exists within a system if the objects within that system interact with conservative forces. 
Essential Knowledge 5.B.5 Energy can be transferred by an external force exerted on an object or system that moves the object or system through a distance; this energy transfer is called work. Energy transfer in mechanical or electrical systems may occur at different rates. Power is defined as the rate of energy transfer into, out of, or within a system.
# Work, Energy, and Energy Resources ## Work: The Scientific Definition ### Learning Objectives By the end of this section, you will be able to: 1. Explain how an object must be displaced for a force on it to do work. 2. Explain how relative directions of force and displacement determine whether the work done is positive, negative, or zero. ### What It Means to Do Work The scientific definition of work differs in some ways from its everyday meaning. Certain things we think of as hard work, such as writing an exam or carrying a heavy load on level ground, are not work as defined by a scientist. The scientific definition of work reveals its relationship to energy—whenever work is done, energy is transferred. For work, in the scientific sense, to be done, a force must be exerted and there must be displacement in the direction of the force. Formally, the work done on a system by a constant force is defined to be the product of the component of the force in the direction of motion times the distance through which the force acts. For one-way motion in one dimension, this is expressed in equation form as where is work, is the displacement of the system, and is the angle between the force vector and the displacement vector , as in . We can also write this as To find the work done on a system that undergoes motion that is not one-way or that is in two or three dimensions, we divide the motion into one-way one-dimensional segments and add up the work done over each segment. To examine what the definition of work means, let us consider the other situations shown in . The person holding the briefcase in (b) does no work, for example. Here , so . Why is it you get tired just holding a load? The answer is that your muscles are doing work against one another, but they are doing no work on the system of interest (the “briefcase-Earth system”—see Gravitational Potential Energy for more details). There must be displacement for work to be done, and there must be a component of the force in the direction of the motion. For example, the person carrying the briefcase on level ground in (c) does no work on it, because the force is perpendicular to the motion. That is, , and so . In contrast, when a force exerted on the system has a component in the direction of motion, such as in (d), work is done—energy is transferred to the briefcase. Finally, in (e), energy is transferred from the briefcase to a generator. There are two good ways to interpret this energy transfer. One interpretation is that the briefcase’s weight does work on the generator, giving it energy. The other interpretation is that the generator does negative work on the briefcase, thus removing energy from it. The drawing shows the latter, with the force from the generator upward on the briefcase, and the displacement downward. This makes , and ; therefore, is negative. ### Calculating Work Work and energy have the same units. From the definition of work, we see that those units are force times distance. Thus, in SI units, work and energy are measured in newton-meters. A newton-meter is given the special name joule (J), and . One joule is not a large amount of energy; it would lift a small 100-gram apple a distance of about 1 meter. ### Test Prep for AP Courses ### Section Summary 1. Work is the transfer of energy by a force acting on an object as it is displaced. 2. The work that a force does on an object is the product of the magnitude of the force, times the magnitude of the displacement, times the cosine of the angle between them. In symbols, 3. 
The SI unit for work and energy is the joule (J), where $1\ \text{J} = 1\ \text{N}\cdot\text{m} = 1\ \text{kg}\cdot\text{m}^2/\text{s}^2$. 4. The work done by a force is zero if the displacement is either zero or perpendicular to the force. 5. The work done is positive if the force and displacement have the same direction, and negative if they have opposite directions. ### Conceptual Questions ### Problems & Exercises
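The definition of work and the size of a joule translate directly into a few lines of Python. The 50.0 N force, 30° angle, and 2.00 m displacement are assumed illustrative numbers, not from the text; the last lines check the text's statement that one joule is roughly the work needed to lift a 100-gram apple through one meter.

```python
import math

# Work done by a constant force: W = F * d * cos(theta)
def work(force_N, displacement_m, angle_deg):
    return force_N * displacement_m * math.cos(math.radians(angle_deg))

# Assumed illustrative case: a 50.0 N force at 30 degrees to the motion over 2.00 m
print(f"W = {work(50.0, 2.00, 30.0):.1f} J")

# Force perpendicular to the motion (carrying a briefcase on level ground): no work
print(f"W = {work(50.0, 2.00, 90.0):.1f} J")

# One joule is roughly the work needed to lift a 100-gram apple through 1 meter
m, g, h = 0.100, 9.80, 1.00
print(f"lifting the apple: W = {m * g * h:.2f} J")
```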
# Work, Energy, and Energy Resources ## Kinetic Energy and the Work-Energy Theorem ### Learning Objectives By the end of this section, you will be able to: 1. Explain work as a transfer of energy and net work as the work done by the net force. 2. Explain and apply the work-energy theorem. ### Work Transfers Energy What happens to the work done on a system? Energy is transferred into the system, but in what form? Does it remain in the system or move on? The answers depend on the situation. For example, if the lawn mower in (a) is pushed just hard enough to keep it going at a constant speed, then energy put into the mower by the person is removed continuously by friction, and eventually leaves the system in the form of heat transfer. In contrast, work done on the briefcase by the person carrying it up stairs in (d) is stored in the briefcase-Earth system and can be recovered at any time, as shown in (e). In fact, the building of the pyramids in ancient Egypt is an example of storing energy in a system by doing work on the system. Some of the energy imparted to the stone blocks in lifting them during construction of the pyramids remains in the stone-Earth system and has the potential to do work. In this section we begin the study of various types of work and forms of energy. We will find that some types of work leave the energy of a system constant, for example, whereas others change the system in some way, such as making it move. We will also develop definitions of important forms of energy, such as the energy of motion. ### Net Work and the Work-Energy Theorem We know from the study of Newton’s laws in Dynamics: Force and Newton's Laws of Motion that net force causes acceleration. We will see in this section that work done by the net force gives a system energy of motion, and in the process we will also find an expression for the energy of motion. Let us start by considering the total, or net, work done on a system. Net work is defined to be the sum of work on an object. The net work can be written in terms of the net force on an object. . In equation form, this is where is the angle between the force vector and the displacement vector. (a) shows a graph of force versus displacement for the component of the force in the direction of the displacement—that is, an vs. graph. In this case, is constant. You can see that the area under the graph is , or the work done. (b) shows a more general process where the force varies. The area under the curve is divided into strips, each having an average force . The work done is for each strip, and the total work done is the sum of the . Thus the total work done is the total area under the curve, a useful property to which we shall refer later. Net work will be simpler to examine if we consider a one-dimensional situation where a force is used to accelerate an object in a direction parallel to its initial velocity. Such a situation occurs for the package on the roller belt conveyor system shown in . The force of gravity and the normal force acting on the package are perpendicular to the displacement and do no work. Moreover, they are also equal in magnitude and opposite in direction so they cancel in calculating the net force. The net force arises solely from the horizontal applied force and the horizontal friction force . Thus, as expected, the net force is parallel to the displacement, so that and , and the net work is given by The effect of the net force is to accelerate the package from to . 
The kinetic energy of the package increases, indicating that the net work done on the system is positive. (See .) By using Newton’s second law, and doing some algebra, we can reach an interesting conclusion. Substituting from Newton’s second law gives To get a relationship between net work and the speed given to a system by the net force acting on it, we take and use the equation studied in Motion Equations for Constant Acceleration in One Dimension for the change in speed over a distance if the acceleration has the constant value ; namely, (note that appears in the expression for the net work). Solving for acceleration gives . When is substituted into the preceding expression for , we obtain The cancels, and we rearrange this to obtain This expression is called the work-energy theorem, and it actually applies in general (even for forces that vary in direction and magnitude), although we have derived it for the special case of a constant force parallel to the displacement. The theorem implies that the net work on a system equals the change in the quantity . This quantity is our first example of a form of energy. The quantity in the work-energy theorem is defined to be the translational kinetic energy (KE) of a mass moving at a speed . (Translational kinetic energy is distinct from rotational kinetic energy, which is considered later.) In equation form, the translational kinetic energy, is the energy associated with translational motion. Kinetic energy is a form of energy associated with the motion of a particle, single body, or system of objects moving together. We are aware that it takes energy to get an object, like a car or the package in , up to speed, but it may be a bit surprising that kinetic energy is proportional to speed squared. This proportionality means, for example, that a car traveling at 100 km/h has four times the kinetic energy it has at 50 km/h, helping to explain why high-speed collisions are so devastating. We will now consider a series of examples to illustrate various aspects of work and energy. Some of the examples in this section can be solved without considering energy, but at the expense of missing out on gaining insights about what work and energy are doing in this situation. On the whole, solutions involving energy are generally shorter and easier than those using kinematics and dynamics alone. ### Test Prep for AP Courses ### Section Summary 1. The net work is the work done by the net force acting on an object. 2. Work done on an object transfers energy to the object. 3. The translational kinetic energy of an object of mass moving at speed is . 4. The work-energy theorem states that the net work on a system changes its kinetic energy, . ### Conceptual Questions ### Problems & Exercises
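A minimal numeric sketch of the work-energy theorem follows. The package-style values (mass, applied force, friction force, displacement) are assumed for illustration rather than taken from the text's own worked example; the sketch computes the net work and then the final speed the theorem predicts for a package that starts from rest.

```python
import math

# Assumed illustrative values (not from the extracted text)
m = 30.0        # mass of the package, kg
F_app = 120.0   # horizontal applied force, N
f = 5.00        # opposing friction force, N
d = 0.800       # displacement, m
v0 = 0.0        # initial speed, m/s

# Net work: W_net = F_net * d, with the net force parallel to the displacement
F_net = F_app - f
W_net = F_net * d

# Work-energy theorem: W_net = (1/2) m v^2 - (1/2) m v0^2
KE_initial = 0.5 * m * v0**2
KE_final = KE_initial + W_net
v_final = math.sqrt(2 * KE_final / m)

print(f"net work: {W_net:.1f} J")
print(f"final kinetic energy: {KE_final:.1f} J")
print(f"final speed: {v_final:.2f} m/s")
```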
# Work, Energy, and Energy Resources
## Gravitational Potential Energy

### Learning Objectives

By the end of this section, you will be able to:
1. Explain gravitational potential energy in terms of work done against gravity.
2. Show that the gravitational potential energy of an object of mass $m$ at height $h$ on Earth is given by $\text{PE}_g = mgh$.
3. Show how knowledge of the potential energy as a function of position can be used to simplify calculations and explain physical phenomena.

### Work Done Against Gravity

Climbing stairs and lifting objects is work in both the scientific and everyday sense—it is work done against the gravitational force. When there is work, there is a transformation of energy. The work done against the gravitational force goes into an important form of stored energy that we will explore in this section.

Let us calculate the work done in lifting an object of mass $m$ through a height $h$, such as in , near Earth's surface. If the object is lifted straight up at constant speed, then the force needed to lift it is equal to its weight $mg$. The work done on the mass is then $W = Fd = mgh$. We define this to be the gravitational potential energy $(\text{PE}_g)$ put into (or gained by) the object-Earth system. This energy is associated with the state of separation between two objects that attract each other by the gravitational force. For convenience, we refer to this as the $\text{PE}_g$ gained by the object, recognizing that this is energy stored in the gravitational field of Earth.

Why do we use the word “system”? Potential energy is a property of a system rather than of a single object—due to its physical position. An object’s gravitational potential energy is due to its position relative to the surroundings within the Earth-object system. The force applied to the object is an external force, from outside the system. When it does positive work it increases the gravitational potential energy of the system. Because gravitational potential energy depends on relative position, we need a reference level at which to set the potential energy equal to 0. We usually choose this point to be Earth’s surface, but this point is arbitrary; what is important is the difference in gravitational potential energy, because this difference is what relates to the work done. The difference in gravitational potential energy of an object (in the Earth-object system) between two rungs of a ladder will be the same for the first two rungs as for the last two rungs.

### Converting Between Potential Energy and Kinetic Energy

Gravitational potential energy may be converted to other forms of energy, such as kinetic energy. If we release the mass, gravitational force will do an amount of work equal to $mgh$ on it, thereby increasing its kinetic energy by that same amount (by the work-energy theorem). We will find it more useful to consider just the conversion of $\text{PE}_g$ to $\text{KE}$ without explicitly considering the intermediate step of work. (See .) This shortcut makes it easier to solve problems using energy (if possible) rather than explicitly using forces.

More precisely, we define the change in gravitational potential energy $\Delta \text{PE}_g$ to be

$$\Delta \text{PE}_g = mgh,$$

where, for simplicity, we denote the change in height by $h$ rather than the usual $\Delta h$. Note that $h$ is positive when the final height is greater than the initial height, and vice versa. For example, if a 0.500-kg mass hung from a cuckoo clock is raised 1.00 m, then its change in gravitational potential energy is

$$mgh = (0.500\ \text{kg})(9.80\ \text{m/s}^2)(1.00\ \text{m}) = 4.90\ \text{J}.$$

Note that the units of gravitational potential energy turn out to be joules, the same as for work and other forms of energy. As the clock runs, the mass is lowered.
We can think of the mass as gradually giving up its 4.90 J of gravitational potential energy, without directly considering the force of gravity that does the work. ### Using Potential Energy to Simplify Calculations The equation applies for any path that has a change in height of , not just when the mass is lifted straight up, as long as is small compared to the radius of Earth. Note that, as we learned in Uniform Circular Motion and Gravitation, the force of Earth’s gravity does decrease with distance from Earth. The change is negligible for small changes in distance, but if you want to use potential energy in problems involving travel to the moon or even further, Newton’s universal law of gravity must be taken into account. This more complete treatment is beyond the scope of this text and is not necessary for the problems we consider here. (See .) It is much easier to calculate (a simple multiplication) than it is to calculate the work done along a complicated path. The idea of gravitational potential energy has the double advantage that it is very broadly applicable and it makes calculations easier. From now on, we will consider that any change in vertical position of a mass is accompanied by a change in gravitational potential energy , and we will avoid the equivalent but more difficult task of calculating work done by or against the gravitational force. We have seen that work done by or against the gravitational force depends only on the starting and ending points, and not on the path between, allowing us to define the simplifying concept of gravitational potential energy. We can do the same thing for a few other forces, and we will see that this leads to a formal definition of the law of conservation of energy. ### Test Prep for AP Courses ### Section Summary 1. Work done against gravity in lifting an object becomes potential energy of the object-Earth system. 2. The change in gravitational potential energy, , is , with being the increase in height and the acceleration due to gravity. 3. The gravitational potential energy of an object near Earth’s surface is due to its position in the mass-Earth system. Only differences in gravitational potential energy, , have physical significance. 4. As an object descends without friction, its gravitational potential energy changes into kinetic energy corresponding to increasing speed, so that . ### Conceptual Questions ### Problems & Exercises
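As a quick arithmetic check of the cuckoo-clock example above, the short Python sketch below evaluates $\Delta \text{PE}_g = mgh$ for the 0.500-kg mass raised 1.00 m, taking $g = 9.80\ \text{m/s}^2$.

```python
# Change in gravitational potential energy: delta_PE = m * g * h
m = 0.500   # mass hung from the clock (kg), from the example above
g = 9.80    # acceleration due to gravity (m/s^2)
h = 1.00    # increase in height (m)

delta_PE = m * g * h
print(f"Change in gravitational PE: {delta_PE:.2f} J")  # prints 4.90 J
```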
# Work, Energy, and Energy Resources ## Conservative Forces and Potential Energy ### Learning Objectives By the end of this section, you will be able to: 1. Define conservative force, potential energy, and mechanical energy. 2. Explain the potential energy of a spring in terms of its compression when Hooke’s law applies. 3. Use the work-energy theorem to show how having only conservative forces implies conservation of mechanical energy. ### Potential Energy and Conservative Forces Work is done by a force, and some forces, such as weight, have special characteristics. A conservative force is one, like the gravitational force, for which work done by or against it depends only on the starting and ending points of a motion and not on the path taken. We can define a potential energy for any conservative force, just as we did for the gravitational force. For example, when you wind up a toy, an egg timer, or an old-fashioned watch, you do work against its spring and store energy in it. (We treat these springs as ideal, in that we assume there is no friction and no production of thermal energy.) This stored energy is recoverable as work, and it is useful to think of it as potential energy contained in the spring. Indeed, the reason that the spring has this characteristic is that its force is conservative. That is, a conservative force results in stored or potential energy. Gravitational potential energy is one example, as is the energy stored in a spring. We will also see how conservative forces are related to the conservation of energy. ### Potential Energy of a Spring First, let us obtain an expression for the potential energy stored in a spring (). We calculate the work done to stretch or compress a spring that obeys Hooke’s law. (Hooke’s law was examined in Elasticity: Stress and Strain, and states that the magnitude of force on the spring and the resulting deformation are proportional, .) (See .) For our spring, we will replace (the amount of deformation produced by a force ) by the distance that the spring is stretched or compressed along its length. So the force needed to stretch the spring has magnitude , where is the spring’s force constant. The force increases linearly from 0 at the start to in the fully stretched position. The average force is . Thus the work done in stretching or compressing the spring is . Alternatively, we noted in Kinetic Energy and the Work-Energy Theorem that the area under a graph of vs. is the work done by the force. In (c) we see that this area is also . We therefore define the potential energy of a spring, , to be where is the spring’s force constant and is the displacement from its undeformed position. The potential energy represents the work done on the spring and the energy stored in it as a result of stretching or compressing it a distance . The potential energy of the spring does not depend on the path taken; it depends only on the stretch or squeeze in the final configuration. The equation has general validity beyond the special case for which it was derived. Potential energy can be stored in any elastic medium by deforming it. Indeed, the general definition of potential energy is energy due to position, shape, or configuration. For shape or position deformations, stored energy is , where is the force constant of the particular system and is its deformation. Another example is seen in for a guitar string. ### Conservation of Mechanical Energy Let us now consider what form the work-energy theorem takes when only conservative forces are involved. 
This will lead us to the conservation of energy principle. The work-energy theorem states that the net work done by all forces acting on a system equals its change in kinetic energy. In equation form, this is If only conservative forces act, then where is the total work done by all conservative forces. Thus, Now, if the conservative force, such as the gravitational force or a spring force, does work, the system loses potential energy. That is, . Therefore, or This equation means that the total kinetic and potential energy is constant for any process involving only conservative forces. That is, where i and f denote initial and final values. This equation is a form of the work-energy theorem for conservative forces; it is known as the conservation of mechanical energy principle. Remember that this applies to the extent that all the forces are conservative, so that friction is negligible. The total kinetic plus potential energy of a system is defined to be its mechanical energy, . In a system that experiences only conservative forces, there is a potential energy associated with each force, and the energy only changes form between and the various types of , with the total energy remaining constant. Note that, for conservative forces, we do not directly calculate the work they do; rather, we consider their effects through their corresponding potential energies, just as we did in . Note also that we do not consider details of the path taken—only the starting and ending points are important (as long as the path is not impossible). This assumption is usually a tremendous simplification, because the path may be complicated and forces may vary along the way. ### Test Prep for AP Courses ### Section Summary 1. A conservative force is one for which work depends only on the starting and ending points of a motion, not on the path taken. 2. We can define potential energy for any conservative force, just as we defined for the gravitational force. 3. The potential energy of a spring is , where is the spring’s force constant and is the displacement from its undeformed position. 4. Mechanical energy is defined to be for a conservative force. 5. When only conservative forces act on and within a system, the total mechanical energy is constant. In equation form, where i and f denote initial and final values. This is known as the conservation of mechanical energy. ### Conceptual Questions ### Problems & Exercises
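The following Python sketch ties the spring potential energy $\frac{1}{2}kx^2$ to the conservation of mechanical energy principle for a block launched by a spring along a frictionless, level surface. The spring constant, compression, and mass are assumed values used only for illustration.

```python
# Spring potential energy and conservation of mechanical energy:
# PE_s = (1/2) k x^2 converts entirely to KE on a frictionless, level surface.
# All numbers are assumed values for illustration.
k = 250.0   # spring force constant (N/m), assumed
x = 0.040   # compression from the undeformed position (m), assumed
m = 0.100   # mass of the object launched by the spring (kg), assumed

PE_s = 0.5 * k * x**2          # energy stored in the compressed spring
KE_f = PE_s                    # KE_i + PE_i = KE_f + PE_f, with KE_i = PE_f = 0
v_f = (2 * KE_f / m) ** 0.5    # final speed of the object
print(f"Stored energy: {PE_s:.3f} J, launch speed: {v_f:.2f} m/s")
```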
# Work, Energy, and Energy Resources ## Nonconservative Forces ### Learning Objectives By the end of this section, you will be able to: 1. Define nonconservative forces and explain how they affect mechanical energy. 2. Show how the principle of conservation of energy can be applied by treating the conservative forces in terms of their potential energies and any nonconservative forces in terms of the work they do. ### Nonconservative Forces and Friction Forces are either conservative or nonconservative. Conservative forces were discussed in Conservative Forces and Potential Energy. A nonconservative force is one for which work depends on the path taken. Friction is a good example of a nonconservative force. As illustrated in , work done against friction depends on the length of the path between the starting and ending points. Because of this dependence on path, there is no potential energy associated with nonconservative forces. An important characteristic is that the work done by a nonconservative force adds or removes mechanical energy from a system. Friction, for example, creates thermal energy that dissipates, removing energy from the system. Furthermore, even if the thermal energy is retained or captured, it cannot be fully converted back to work, so it is lost or not recoverable in that sense as well. ### How Nonconservative Forces Affect Mechanical Energy Mechanical energy may not be conserved when nonconservative forces act. For example, when a car is brought to a stop by friction on level ground, it loses kinetic energy, which is dissipated as thermal energy, reducing its mechanical energy. compares the effects of conservative and nonconservative forces. We often choose to understand simpler systems such as that described in (a) first before studying more complicated systems as in (b). ### How the Work-Energy Theorem Applies Now let us consider what form the work-energy theorem takes when both conservative and nonconservative forces act. We will see that the work done by nonconservative forces equals the change in the mechanical energy of a system. As noted in Kinetic Energy and the Work-Energy Theorem, the work-energy theorem states that the net work on a system equals the change in its kinetic energy, or . The net work is the sum of the work by nonconservative forces plus the work by conservative forces. That is, so that where is the total work done by all nonconservative forces and is the total work done by all conservative forces. Consider , in which a person pushes a crate up a ramp and is opposed by friction. As in the previous section, we note that work done by a conservative force comes from a loss of gravitational potential energy, so that . Substituting this equation into the previous one and solving for gives This equation means that the total mechanical energy changes by exactly the amount of work done by nonconservative forces. In , this is the work done by the person minus the work done by friction. So even if energy is not conserved for the system of interest (such as the crate), we know that an equal amount of work was done to cause the change in total mechanical energy. We rearrange to obtain This means that the amount of work done by nonconservative forces adds to the mechanical energy of a system. If is positive, then mechanical energy is increased, such as when the person pushes the crate up the ramp in . If is negative, then mechanical energy is decreased, such as when the rock hits the ground in (b). 
If is zero, then mechanical energy is conserved, and nonconservative forces are balanced. For example, when you push a lawn mower at constant speed on level ground, your work done is removed by the work of friction, and the mower has a constant energy. ### Applying Energy Conservation with Nonconservative Forces When no change in potential energy occurs, applying amounts to applying the work-energy theorem by setting the change in kinetic energy to be equal to the net work done on the system, which in the most general case includes both conservative and nonconservative forces. But when seeking instead to find a change in total mechanical energy in situations that involve changes in both potential and kinetic energy, the previous equation says that you can start by finding the change in mechanical energy that would have resulted from just the conservative forces, including the potential energy changes, and add to it the work done, with the proper sign, by any nonconservative forces involved. ### Test Prep for AP Courses ### Section Summary 1. A nonconservative force is one for which work depends on the path. 2. Friction is an example of a nonconservative force that changes mechanical energy into thermal energy. 3. Work done by a nonconservative force changes the mechanical energy of a system. In equation form, or, equivalently, . 4. When both conservative and nonconservative forces act, energy conservation can be applied and used to calculate motion in terms of the known potential energies of the conservative forces and the work done by nonconservative forces, instead of finding the net work from the net force, or having to directly apply Newton’s laws. ### Problems & Exercises
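As a numerical illustration of the relation between nonconservative work and mechanical energy, the sketch below treats a crate pushed up a ramp against friction, in the spirit of the crate-and-ramp situation discussed above. Every value here is an assumption chosen for illustration rather than a number from this text.

```python
# Work by nonconservative forces equals the change in mechanical energy:
# W_nc = delta_KE + delta_PE
# Crate pushed up a ramp; all values below are illustrative assumptions.
m = 40.0          # crate mass (kg)
g = 9.80          # m/s^2
h = 1.5           # vertical rise of the ramp (m)
d = 4.0           # distance pushed along the ramp (m)
F_person = 200.0  # person's push along the ramp (N)
f = 30.0          # kinetic friction force along the ramp (N)
v0 = 0.0          # crate starts from rest

W_nc = (F_person - f) * d        # work done by the nonconservative forces
delta_PE = m * g * h             # change in gravitational potential energy
delta_KE = W_nc - delta_PE       # from W_nc = delta_KE + delta_PE
v = (v0**2 + 2 * delta_KE / m) ** 0.5
print(f"W_nc = {W_nc:.0f} J, delta_PE = {delta_PE:.0f} J, "
      f"delta_KE = {delta_KE:.0f} J, final speed = {v:.2f} m/s")
```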
# Work, Energy, and Energy Resources ## Conservation of Energy ### Learning Objectives By the end of this section, you will be able to: 1. Explain the law of the conservation of energy. 2. Describe some of the many forms of energy. 3. Define efficiency of an energy conversion process as the fraction left as useful energy or work, rather than being transformed, for example, into thermal energy. ### Law of Conservation of Energy Energy, as we have noted, is conserved, making it one of the most important physical quantities in nature. The law of conservation of energy can be stated as follows: Total energy is constant in any process. It may change in form or be transferred from one system to another, but the total remains the same. We have explored some forms of energy and some ways it can be transferred from one system to another. This exploration led to the definition of two major types of energy—mechanical energy and energy transferred via work done by nonconservative forces . But energy takes many other forms, manifesting itself in many different ways, and we need to be able to deal with all of these before we can write an equation for the above general statement of the conservation of energy. ### Other Forms of Energy than Mechanical Energy At this point, we deal with all other forms of energy by lumping them into a single group called other energy (). Then we can state the conservation of energy in equation form as All types of energy and work can be included in this very general statement of conservation of energy. Kinetic energy is , work done by a conservative force is represented by , work done by nonconservative forces is , and all other energies are included as . This equation applies to all previous examples; in those situations was constant, and so it subtracted out and was not directly considered. When does play a role? One example occurs when a person eats. Food is oxidized with the release of carbon dioxide, water, and energy. Some of this chemical energy is converted to kinetic energy when the person moves, to potential energy when the person changes altitude, and to thermal energy (another form of ). ### Some of the Many Forms of Energy What are some other forms of energy? You can probably name a number of forms of energy not yet discussed. Many of these will be covered in later chapters, but let us detail a few here. Electrical energy is a common form that is converted to many other forms and does work in a wide range of practical situations. Fuels, such as gasoline and food, carry chemical energy that can be transferred to a system through oxidation. Chemical fuel can also produce electrical energy, such as in batteries. Batteries can in turn produce light, which is a very pure form of energy. Most energy sources on Earth are in fact stored energy from the energy we receive from the Sun. We sometimes refer to this as radiant energy, or electromagnetic radiation, which includes visible light, infrared, and ultraviolet radiation. Nuclear energy comes from processes that convert measurable amounts of mass into energy. Nuclear energy is transformed into the energy of sunlight, into electrical energy in power plants, and into the energy of the heat transfer and blast in weapons. Atoms and molecules inside all objects are in random motion. This internal mechanical energy from the random motions is called thermal energy, because it is related to the temperature of the object. These and all other forms of energy can be converted into one another and can do work. 
gives the amount of energy stored, used, or released from various objects and in various phenomena. The range of energies and the variety of types and situations is impressive. ### Transformation of Energy The transformation of energy from one form into others is happening all the time. The chemical energy in food is converted into thermal energy through metabolism; light energy is converted into chemical energy through photosynthesis. In a larger example, the chemical energy contained in coal is converted into thermal energy as it burns to turn water into steam in a boiler. This thermal energy in the steam in turn is converted to mechanical energy as it spins a turbine, which is connected to a generator to produce electrical energy. (In all of these examples, not all of the initial energy is converted into the forms mentioned. This important point is discussed later in this section.) Another example of energy conversion occurs in a solar cell. Sunlight impinging on a solar cell (see ) produces electricity, which in turn can be used to run an electric motor. Energy is converted from the primary source of solar energy into electrical energy and then into mechanical energy. ### Efficiency Even though energy is conserved in an energy conversion process, the output of useful energy or work will be less than the energy input. The efficiency of an energy conversion process is defined as lists some efficiencies of mechanical devices and human activities. In a coal-fired power plant, for example, about 40% of the chemical energy in the coal becomes useful electrical energy. The other 60% transforms into other (perhaps less useful) energy forms, such as thermal energy, which is then released to the environment through combustion gases and cooling towers. ### Test Prep for AP Courses ### Section Summary 1. The law of conservation of energy states that the total energy is constant in any process. Energy may change in form or be transferred from one system to another, but the total remains the same. 2. When all forms of energy are considered, conservation of energy is written in equation form as , where is all other forms of energy besides mechanical energy. 3. Commonly encountered forms of energy include electric energy, chemical energy, radiant energy, nuclear energy, and thermal energy. 4. Energy is often utilized to do work, but it is not possible to convert all the energy of a system to work. 5. The efficiency of a machine or human is defined to be , where is useful work output and is the energy consumed. ### Conceptual Questions ### Problems & Exercises
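Efficiency is simply useful output divided by total input, so it is easy to tabulate where the energy goes. The sketch below uses the roughly 40% figure quoted above for a coal-fired plant together with an assumed energy input chosen only for illustration.

```python
# Efficiency = useful energy (or work) output / total energy input
E_in = 2.5e9        # chemical energy input (J), assumed for illustration
efficiency = 0.40   # ~40% for a coal-fired plant, as noted above

E_useful = efficiency * E_in   # useful electrical energy out
E_other = E_in - E_useful      # remainder, mostly thermal energy released
print(f"Useful output: {E_useful:.2e} J, other forms: {E_other:.2e} J")
```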
# Work, Energy, and Energy Resources
## Power

### Learning Objectives

By the end of this section, you will be able to:
1. Calculate power by calculating changes in energy over time.
2. Examine power consumption and calculations of the cost of energy consumed.

### What is Power?

Power—the word conjures up many images: a professional football player muscling aside his opponent, a dragster roaring away from the starting line, a volcano blowing its lava into the atmosphere, or a rocket blasting off, as in . These images of power have in common the rapid performance of work, consistent with the scientific definition of power ($P$) as the rate at which work is done. Because work is energy transfer, power is also the rate at which energy is expended. A 60-W light bulb, for example, expends 60 J of energy per second. Great power means a large amount of work or energy developed in a short time. For example, when a powerful car accelerates rapidly, it does a large amount of work and consumes a large amount of fuel in a short time.

### Calculating Power from Energy

It is impressive that this woman’s useful power output is slightly less than 1 horsepower (746 W)! People can generate more than a horsepower with their leg muscles for short periods of time by rapidly converting available blood sugar and oxygen into work output. (A horse can put out 1 hp for hours on end.) Once oxygen is depleted, power output decreases and the person begins to breathe rapidly to obtain oxygen to metabolize more food—this is known as the aerobic stage of exercise. If the woman climbed the stairs slowly, then her power output would be much less, although the amount of work done would be the same.

### Examples of Power

Examples of power are limited only by the imagination, because there are as many types as there are forms of work and energy. (See for some examples.) Sunlight reaching Earth’s surface carries a maximum power of about 1.3 kilowatts per square meter. A tiny fraction of this is retained by Earth over the long term. Our consumption rate of fossil fuels is far greater than the rate at which they are stored, so it is inevitable that they will be depleted.

Power implies that energy is transferred, perhaps changing form. It is never possible to change one form completely into another without losing some of it as thermal energy. For example, a 60-W incandescent bulb converts only 5 W of electrical power to light, with 55 W dissipating into thermal energy. Furthermore, the typical electric power plant converts only 35 to 40% of its fuel into electricity. The remainder becomes a huge amount of thermal energy that must be dispersed as heat transfer, as rapidly as it is created. A coal-fired power plant may produce 1000 megawatts; 1 megawatt (MW) is $10^6$ W of electric power. But the power plant consumes chemical energy at a rate of about 2500 MW, creating heat transfer to the surroundings at a rate of 1500 MW. (See .)

### Power and Energy Consumption

We usually have to pay for the energy we use. It is interesting and easy to estimate the cost of energy for an electrical appliance if its power consumption rate and time used are known. The higher the power consumption rate and the longer the appliance is used, the greater the cost of that appliance. The power consumption rate is $P = W/t = E/t$, where $E$ is the energy supplied by the electricity company. So the energy consumed over a time $t$ is $E = Pt$. Electricity bills state the energy used in units of kilowatt-hours (kW·h), which is the product of power in kilowatts and time in hours.
This unit is convenient because electrical power consumption at the kilowatt level for hours at a time is typical. The motivation to save energy has become more compelling with its ever-increasing price. Armed with the knowledge that energy consumed is the product of power and time, you can estimate costs for yourself and make the necessary value judgments about where to save energy. Either power or time must be reduced. It is most cost-effective to limit the use of high-power devices that normally operate for long periods of time, such as water heaters and air conditioners. This would not include relatively high power devices like toasters, because they are on only a few minutes per day. It would also not include electric clocks, in spite of their 24-hour-per-day usage, because they are very low power devices. It is sometimes possible to use devices that have greater efficiencies—that is, devices that consume less power to accomplish the same task. One example is the compact fluorescent light bulb, which produces over four times more light per watt of power consumed than its incandescent cousin. Modern civilization depends on energy, but current levels of energy consumption and production are not sustainable. The likelihood of a link between global warming and fossil fuel use (with its concomitant production of carbon dioxide), has made reduction in energy use as well as a shift to non-fossil fuels of the utmost importance. Even though energy in an isolated system is a conserved quantity, the final result of most energy transformations is waste heat transfer to the environment, which is no longer useful for doing work. As we will discuss in more detail in Thermodynamics, the potential for energy to produce useful work has been “degraded” in the energy transformation. ### Section Summary 1. Power is the rate at which work is done, or in equation form, for the average power for work done over a time , . 2. The SI unit for power is the watt (W), where . 3. The power of many devices such as electric motors is also often expressed in horsepower (hp), where . ### Conceptual Questions ### Problems & Exercises
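Because energy consumed is $E = Pt$ and utilities bill in kilowatt-hours, estimating a monthly cost takes only a few lines. In the sketch below, the hours of use, the electricity price, and the 15-W comparison bulb are assumptions chosen for illustration; the 60-W bulb is the one mentioned earlier in this section.

```python
# Energy consumed: E = P t, billed in kilowatt-hours (kW·h).
# Usage hours and electricity price are assumed values for illustration.
price_per_kwh = 0.12   # $/kW·h, assumed
hours_per_day = 5.0    # hours of use per day, assumed
days = 30

def monthly_cost(power_watts):
    energy_kwh = (power_watts / 1000.0) * hours_per_day * days
    return energy_kwh * price_per_kwh

# 60-W incandescent bulb versus a lower-power bulb giving similar light
# (the 15-W figure is an assumption, not a value from this text)
print(f"60-W bulb: ${monthly_cost(60.0):.2f} per month")
print(f"15-W bulb: ${monthly_cost(15.0):.2f} per month")
```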
# Work, Energy, and Energy Resources
## Work, Energy, and Power in Humans

### Learning Objectives

By the end of this section, you will be able to:
1. Explain the human body’s consumption of energy when at rest vs. when engaged in activities that do useful work.
2. Calculate the conversion of chemical energy in food into useful work.

### Energy Conversion in Humans

Our own bodies, like all living organisms, are energy conversion machines. Conservation of energy implies that the chemical energy stored in food is converted into work, thermal energy, and/or stored as chemical energy in fatty tissue. (See .) The fraction going into each form depends both on how much we eat and on our level of physical activity. If we eat more than is needed to do work and stay warm, the remainder goes into body fat.

### Power Consumed at Rest

The rate at which the body uses food energy to sustain life and to do different activities is called the metabolic rate. The total energy conversion rate of a person at rest is called the basal metabolic rate (BMR) and is divided among various systems in the body, as shown in . The largest fraction goes to the liver and spleen, with the brain coming next. Of course, during vigorous exercise, the energy consumption of the skeletal muscles and heart increases markedly. About 75% of the calories burned in a day go into these basic functions. The BMR is a function of age, gender, total body weight, and amount of muscle mass (which burns more calories than body fat). Athletes have a greater BMR due to this last factor.

Energy consumption is directly proportional to oxygen consumption because the digestive process is basically one of oxidizing food. We can measure the energy people use during various activities by measuring their oxygen use. (See .) Approximately 20 kJ of energy are produced for each liter of oxygen consumed, independent of the type of food. shows energy and oxygen consumption rates (power expended) for a variety of activities.

### Power of Doing Useful Work

Work done by a person is sometimes called useful work, which is work done on the outside world, such as lifting weights. Useful work requires a force exerted through a distance on the outside world, and so it excludes internal work, such as that done by the heart when pumping blood. Useful work does include that done in climbing stairs or accelerating to a full run, because these are accomplished by exerting forces on the outside world. Forces exerted by the body are nonconservative, so that they can change the mechanical energy ($\text{KE} + \text{PE}$) of the system worked upon, and this is often the goal. A baseball player throwing a ball, for example, increases both the ball’s kinetic and potential energy.

If a person needs more energy than they consume, such as when doing vigorous work, the body must draw upon the chemical energy stored in fat. So exercise can be helpful in losing fat. However, the amount of exercise needed to produce a loss in fat, or to burn off extra calories consumed that day, can be large, as illustrates.

All bodily functions, from thinking to lifting weights, require energy. (See .) The many small muscle actions accompanying all quiet activity, from sleeping to head scratching, ultimately become thermal energy, as do less visible muscle actions by the heart, lungs, and digestive tract. Shivering, in fact, is an involuntary response to low body temperature that pits muscles against one another to produce thermal energy in the body (and do no work).
The kidneys and liver consume a surprising amount of energy, but the biggest surprise of all is that a full 25% of all energy consumed by the body is used to maintain electrical potentials in all living cells. (Nerve cells use this electrical potential in nerve impulses.) This bioelectrical energy ultimately becomes mostly thermal energy, but some is utilized to power chemical processes such as in the kidneys and liver, and in fat production. ### Section Summary 1. The human body converts energy stored in food into work, thermal energy, and/or chemical energy that is stored in fatty tissue. 2. The rate at which the body uses food energy to sustain life and to do different activities is called the metabolic rate, and the corresponding rate when at rest is called the basal metabolic rate (BMR) 3. The energy included in the basal metabolic rate is divided among various systems in the body, with the largest fraction going to the liver and spleen, and the brain coming next. 4. About 75% of food calories are used to sustain basic body functions included in the basal metabolic rate. 5. The energy consumption of people during various activities can be determined by measuring their oxygen use, because the digestive process is basically one of oxidizing food. ### Conceptual Questions ### Problems & Exercises
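Since roughly 20 kJ of energy are released per liter of oxygen consumed, as noted above, a measured oxygen consumption rate converts directly to metabolic power. The oxygen rate in this sketch is an assumed value chosen only for illustration.

```python
# Metabolic power from oxygen consumption:
# about 20 kJ of energy per liter of O2, independent of food type (see above)
energy_per_liter_O2 = 20e3   # J per liter of oxygen
O2_rate = 0.50               # oxygen consumption (liters per minute), assumed

power = energy_per_liter_O2 * O2_rate / 60.0   # convert per-minute to per-second
print(f"Metabolic power: {power:.0f} W")        # ~167 W for this assumed rate
```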
# Work, Energy, and Energy Resources ## World Energy Use ### Learning Objectives By the end of this section, you will be able to: 1. Describe the distinction between renewable and nonrenewable energy sources. 2. Explain why the inevitable conversion of energy to less useful forms makes it necessary to conserve energy resources. Energy is an important ingredient in all phases of society. We live in a very interdependent world, and access to adequate and reliable energy resources is crucial for economic growth and for maintaining the quality of our lives. But current levels of energy consumption and production are not sustainable. Depending on the data source, estimates indicate that about 31–35% of the world’s energy comes from oil, and much of that goes to transportation uses. This is a reduction by a few percentage points from ten years ago. Oil prices are dependent as much upon new (or foreseen) discoveries as they are upon political events and situations around the world. The U.S., with 4.25% of the world’s population, consumes 21% of the world’s oil production per year. ### Renewable and Nonrenewable Energy Sources The principal energy resources used in the world are shown in . The fuel mix has changed over the years but now is dominated by oil, although natural gas and solar contributions are increasing. Renewable forms of energy are those sources that cannot be used up, such as water, wind, solar, and biomass. About 85% of our energy comes from nonrenewable fossil fuels—oil, natural gas, coal. The likelihood of a link between global warming and fossil fuel use, with its production of carbon dioxide through combustion, has made, in the eyes of many scientists, a shift to non-fossil fuels of utmost importance—but it will not be easy. ### The World’s Growing Energy Needs World energy consumption continues to rise, especially in the developing countries. (See .) Global demand for energy has tripled in the past 50 years and might triple again in the next 30 years. While much of this growth will come from the rapidly booming economies of China and India, many of the developed countries, especially those in Europe, are hoping to meet their energy needs by expanding the use of renewable sources. Although presently only a small percentage, renewable energy is growing very fast, especially wind energy. For example, Germany plans to meet 65% of its power and 30% of its overall energy needs with renewable resources by the year 2030. (See .) Energy is a key constraint in the rapid economic growth of China and India. In 2003, China surpassed Japan as the world’s second largest consumer of oil. However, over 1/3 of this is imported. Unlike most Western countries, coal dominates the commercial energy resources of China, accounting for 2/3 of its energy consumption. In 2009 China surpassed the United States as the largest generator of . In India, the main energy resources are biomass (wood and dung) and coal. Half of India’s oil is imported. About 70% of India’s electricity is generated by highly polluting coal. Yet there are sizeable strides being made in renewable energy. India has a rapidly growing wind energy base, and it has the largest solar cooking program in the world. China has invested substantially in building solar collection farms as well as hydroelectric plants. displays the 2020 commercial energy mix by country for some of the prime energy users in the world. While non-renewable sources dominate, some countries get a sizeable percentage of their electricity from renewable resources. 
For example, about two-thirds of New Zealand’s electricity demand is met by hydroelectric. Only 10% of the U.S. electricity is generated by renewable resources, primarily hydroelectric. It is difficult to determine total sources and consumers of energy in many countries, and estimates vary somewhat by data source and type of measurement. ### Energy and Economic Well-being Economic well-being is dependent upon energy use, and in most countries higher standards of living, as measured by GDP (gross domestic product) per capita, are matched by higher levels of energy consumption per capita. This is borne out in . Increased efficiency of energy use will change this dependency. A global problem is balancing energy resource development against the harmful effects upon the environment in its extraction and use. New and diversified energy sources do, however, greatly increase economic opportunity and stability. First, the extensive employment opportunities in renewable energy make it one of the most sustainable and secure fields to enter. Second, renewable energy provides countries and localities with increased levels of resiliency in the face of natural disasters, conflict, or other disruptions. The 21st century has already seen major economic impacts from energy disruptions: Hurricane Katrina, Superstorm Sandy, various wildfires, Hurricane Maria, and the 2021 Texas Winter Storm demonstrate the vulnerability of United States power systems. Diversifying energy sources through renewables and other fossil-fuel alternatives brings power grids and transportation systems back online much more quickly, saving lives and enabling a more swift return to economic operations. And as critical emerging information infrastructure, such as data centers, requires more of the world's energy, supplying those growing systems during normal operations and crises will be increasingly important. ### Conserving Energy As we finish this chapter on energy and work, it is relevant to draw some distinctions between two sometimes misunderstood terms in the area of energy use. As has been mentioned elsewhere, the “law of the conservation of energy” is a very useful principle in analyzing physical processes. It is a statement that cannot be proven from basic principles, but is a very good bookkeeping device, and no exceptions have ever been found. It states that the total amount of energy in an isolated system will always remain constant. Related to this principle, but remarkably different from it, is the important philosophy of energy conservation. This concept has to do with seeking to decrease the amount of energy used by an individual or group through (1) reduced activities (e.g., turning down thermostats, driving fewer kilometers) and/or (2) increasing conversion efficiencies in the performance of a particular task—such as developing and using more efficient room heaters, cars that have greater miles-per-gallon ratings, energy-efficient compact fluorescent lights, etc. Since energy in an isolated system is not destroyed or created or generated, one might wonder why we need to be concerned about our energy resources, since energy is a conserved quantity. The problem is that the final result of most energy transformations is waste heat transfer to the environment and conversion to energy forms no longer useful for doing work. To state it in another way, the potential for energy to produce useful work has been “degraded” in the energy transformation. (This will be discussed in more detail in Thermodynamics.) 
### Section Summary 1. The relative use of different fuels to provide energy has changed over the years, but fuel use is currently dominated by oil, although natural gas and solar contributions are increasing. 2. Although non-renewable sources dominate, some countries meet a sizeable percentage of their electricity needs from renewable resources. 3. The United States obtains only about 10% of its energy from renewable sources, mostly hydroelectric power. 4. Economic well-being is dependent upon energy use, and in most countries higher standards of living, as measured by GDP (Gross Domestic Product) per capita, are matched by higher levels of energy consumption per capita. 5. Even though, in accordance with the law of conservation of energy, energy can never be created or destroyed, energy that can be used to do work is always partly converted to less useful forms, such as waste heat to the environment, in all of our uses of energy for practical purposes. ### Conceptual Questions ### Problems & Exercises
# Linear Momentum and Collisions ## Connection for AP® courses In this chapter, you will learn about the concept of momentum and the relationship between momentum and force (both vector quantities) applied over a time interval. Have you ever considered why a glass dropped on a tile floor will often break, but a glass dropped on carpet will often remain intact? Both involve changes in momentum, but the actual collision with the floor is different in each case, just as an automobile collision without the benefit of an airbag can have a significantly different outcome than one with an airbag. You will learn that the interaction of objects (like a glass and the floor or two automobiles) results in forces, which in turn result in changes in the momentum of each object. At the same time, you will see how the law of momentum conservation can be applied to a system to help determine the outcome of a collision. The content in this chapter supports: Big Idea 3 The interactions of an object with other objects can be described by forces. Enduring Understanding 3.D A force exerted on an object can change the momentum of the object. Essential Knowledge 3.D.2 The change in momentum of an object occurs over a time interval. Big Idea 4: Interactions between systems can result in changes in those systems. Enduring Understanding 4.B Interactions with other objects or systems can change the total linear momentum of a system. Essential Knowledge 4.B.1 The change in linear momentum for a constant-mass system is the product of the mass of the system and the change in velocity of the center of mass. Big Idea 5 Changes that occur as a result of interactions are constrained by conservation laws. Enduring Understanding 5.A Certain quantities are conserved, in the sense that the changes of those quantities in a given system are always equal to the transfer of that quantity to or from the system by all possible interactions with other systems. Essential Knowledge 5.A.2 For all systems under all circumstances, energy, charge, linear momentum, and angular momentum are conserved. Essential Knowledge 5.D.1 In a collision between objects, linear momentum is conserved. In an elastic collision, kinetic energy is the same before and after. Essential Knowledge 5.D.2 In a collision between objects, linear momentum is conserved. In an inelastic collision, kinetic energy is not the same before and after the collision.
# Linear Momentum and Collisions
## Linear Momentum and Force

### Learning Objectives

By the end of this section, you will be able to:
1. Define linear momentum.
2. Explain the relationship between momentum and force.
3. State Newton’s second law of motion in terms of momentum.
4. Calculate momentum given mass and velocity.

### Linear Momentum

The scientific definition of linear momentum is consistent with most people’s intuitive understanding of momentum: a large, fast-moving object has greater momentum than a smaller, slower object. Linear momentum is defined as the product of a system’s mass multiplied by its velocity. In symbols, linear momentum is expressed as

$$\mathbf{p} = m\mathbf{v}.$$

Momentum is directly proportional to the object’s mass and also its velocity. Thus the greater an object’s mass or the greater its velocity, the greater its momentum. Momentum is a vector having the same direction as the velocity $\mathbf{v}$. The SI unit for momentum is $\text{kg} \cdot \text{m/s}$.

### Momentum and Newton’s Second Law

The importance of momentum, unlike the importance of energy, was recognized early in the development of classical physics. Momentum was deemed so important that it was called the “quantity of motion.” Newton actually stated his second law of motion in terms of momentum: The net external force equals the change in momentum of a system divided by the time over which it changes. Using symbols, this law is

$$\mathbf{F}_{\text{net}} = \frac{\Delta \mathbf{p}}{\Delta t},$$

where $\mathbf{F}_{\text{net}}$ is the net external force, $\Delta \mathbf{p}$ is the change in momentum, and $\Delta t$ is the change in time. This statement of Newton’s second law of motion includes the more familiar $\mathbf{F}_{\text{net}} = m\mathbf{a}$ as a special case. We can derive this form as follows. First, note that the change in momentum is given by $\Delta \mathbf{p} = \Delta(m\mathbf{v})$. If the mass of the system is constant, then $\Delta(m\mathbf{v}) = m\Delta\mathbf{v}$. So that for constant mass, Newton’s second law of motion becomes

$$\mathbf{F}_{\text{net}} = \frac{\Delta \mathbf{p}}{\Delta t} = \frac{m\Delta\mathbf{v}}{\Delta t}.$$

Because $\frac{\Delta\mathbf{v}}{\Delta t} = \mathbf{a}$, we get the familiar equation $\mathbf{F}_{\text{net}} = m\mathbf{a}$ when the mass of the system is constant. Newton’s second law of motion stated in terms of momentum is more generally applicable because it can be applied to systems where the mass is changing, such as rockets, as well as to systems of constant mass. We will consider systems with varying mass in some detail; however, the relationship between momentum and force remains useful when mass is constant, such as in the following example.

### Test Prep for AP Courses

### Section Summary

1. Linear momentum (momentum for brevity) is defined as the product of a system’s mass multiplied by its velocity.
2. In symbols, linear momentum is defined to be $\mathbf{p} = m\mathbf{v}$, where $m$ is the mass of the system and $\mathbf{v}$ is its velocity.
3. The SI unit for momentum is $\text{kg} \cdot \text{m/s}$.
4. Newton’s second law of motion in terms of momentum states that the net external force equals the change in momentum of a system divided by the time over which it changes.
5. In symbols, Newton’s second law of motion is defined to be $\mathbf{F}_{\text{net}} = \frac{\Delta \mathbf{p}}{\Delta t}$, where $\mathbf{F}_{\text{net}}$ is the net external force, $\Delta \mathbf{p}$ is the change in momentum, and $\Delta t$ is the change in time.

### Conceptual Questions

### Problems & Exercises
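Here is a minimal Python sketch of momentum, $p = mv$, and the momentum form of Newton's second law, $F_{\text{net}} = \Delta p / \Delta t$. The mass, velocities, and time interval are assumed values (roughly a thrown baseball), not data from this text.

```python
# Linear momentum p = m v and Newton's second law F_net = delta_p / delta_t
m = 0.145        # mass (kg), assumed (roughly a baseball)
v0 = 0.0         # initial velocity (m/s), assumed
v = 40.0         # final velocity (m/s), assumed
delta_t = 0.020  # time over which the velocity changes (s), assumed

p0 = m * v0
p = m * v
F_net = (p - p0) / delta_t   # average net external force during the throw
print(f"Momentum change: {p - p0:.2f} kg·m/s, average force: {F_net:.0f} N")
```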
# Linear Momentum and Collisions ## Impulse ### Learning Objectives By the end of this section, you will be able to: 1. Define impulse. 2. Describe effects of impulses in everyday life. 3. Determine the average effective force using graphical representation. 4. Calculate average force and impulse given mass, velocity, and time. The effect of a force on an object depends on how long it acts, as well as how great the force is. In , a very large force acting for a short time had a great effect on the momentum of the tennis ball. A small force could cause the same change in momentum, but it would have to act for a much longer time. For example, if the ball were thrown upward, the gravitational force (which is much smaller than the tennis racquet’s force) would eventually reverse the momentum of the ball. Quantitatively, the effect we are talking about is the change in momentum . By rearranging the equation to be we can see how the change in momentum equals the average net external force multiplied by the time this force acts. The quantity is given the name impulse. Impulse is the same as the change in momentum. Our definition of impulse includes an assumption that the force is constant over the time interval . Forces are usually not constant. Forces vary considerably even during the brief time intervals considered. It is, however, possible to find an average effective force that produces the same result as the corresponding time-varying force. shows a graph of what an actual force looks like as a function of time for a ball bouncing off the floor. The area under the curve has units of momentum and is equal to the impulse or change in momentum between times and . That area is equal to the area inside the rectangle bounded by , , and . Thus the impulses and their effects are the same for both the actual and effective forces. ### Test Prep for AP Courses ### Section Summary 1. Impulse, or change in momentum, equals the average net external force multiplied by the time this force acts: 2. Forces are usually not constant over a period of time. ### Conceptual Questions ### Problems & Exercises
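The claim that impulse equals the area under the force-versus-time curve can be checked numerically. The sketch below sums an invented force profile for a bounce (a rectangle-rule approximation of the area) and then finds the constant effective force that would produce the same impulse; all of the force samples are assumptions for illustration.

```python
# Impulse as the (approximate) area under an F(t) curve, and the
# equivalent constant effective force. Force samples are invented.
dt = 0.001  # time step between samples (s)
forces = [0, 40, 120, 220, 300, 340, 300, 220, 120, 40, 0]  # force samples (N)

impulse = sum(F * dt for F in forces)   # rectangle-rule estimate of the area (N·s)
duration = dt * (len(forces) - 1)
F_avg = impulse / duration              # constant force giving the same impulse
print(f"Impulse: {impulse:.3f} N·s = change in momentum (kg·m/s)")
print(f"Effective average force: {F_avg:.0f} N over {duration*1000:.0f} ms")
```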
# Linear Momentum and Collisions
## Conservation of Momentum

### Learning Objectives

By the end of this section, you will be able to:
1. Describe the principle of conservation of momentum.
2. Derive an expression for the conservation of momentum.
3. Explain conservation of momentum with examples.
4. Explain the principle of conservation of momentum as it relates to atomic and subatomic particles.

Momentum is an important quantity because it is conserved. Yet it was not conserved in the examples in Impulse and Linear Momentum and Force, where large changes in momentum were produced by forces acting on the system of interest. Under what circumstances is momentum conserved? The answer to this question entails considering a sufficiently large system. It is always possible to find a larger system in which total momentum is constant, even if momentum changes for components of the system. If a football player runs into the goalpost in the end zone, there will be a force on him that causes him to bounce backward. The backward momentum felt by an object or person exerting force on another object is often called a recoil. However, the Earth also recoils—conserving momentum—because of the force applied to it through the goalpost. Because Earth is many orders of magnitude more massive than the player, its recoil is immeasurably small and can be neglected in any practical sense, but it is real nevertheless.

Consider what happens if the masses of two colliding objects are more similar than the masses of a football player and Earth—for example, one car bumping into another, as shown in . Both cars are coasting in the same direction when the lead car (labeled car 2) is bumped by the trailing car (labeled car 1). The only unbalanced force on each car is the force of the collision. (Assume that the effects due to friction are negligible.) Car 1 slows down as a result of the collision, losing some momentum, while car 2 speeds up and gains some momentum. We shall now show that the total momentum of the two-car system remains constant.

Using the definition of impulse, the change in momentum of car 1 is given by

$$\Delta p_1 = F_1 \Delta t,$$

where $F_1$ is the force on car 1 due to car 2, and $\Delta t$ is the time the force acts (the duration of the collision). Intuitively, it seems obvious that the collision time is the same for both cars, but this is only true for objects traveling at ordinary speeds. This assumption must be modified for objects traveling near the speed of light, without affecting the result that momentum is conserved. Similarly, the change in momentum of car 2 is

$$\Delta p_2 = F_2 \Delta t,$$

where $F_2$ is the force on car 2 due to car 1, and we assume the duration of the collision is the same for both cars. We know from Newton’s third law that $F_2 = -F_1$, and so

$$\Delta p_2 = -F_1 \Delta t = -\Delta p_1.$$

Thus, the changes in momentum are equal and opposite, and

$$\Delta p_1 + \Delta p_2 = 0.$$

Because the changes in momentum add to zero, the total momentum of the two-car system is constant. That is,

$$p_1 + p_2 = p_1' + p_2',$$

where $p_1'$ and $p_2'$ are the momenta of cars 1 and 2 after the collision. (We often use primes to denote the final state.)

This result—that momentum is conserved—has validity far beyond the preceding one-dimensional case. It can be similarly shown that total momentum is conserved for any isolated system, with any number of objects in it. In equation form, the conservation of momentum principle for an isolated system is written

$$p_{\text{tot}} = \text{constant},$$

or

$$p_{\text{tot}} = p'_{\text{tot}},$$

where $p_{\text{tot}}$ is the total momentum (the sum of the momenta of the individual objects in the system) and $p'_{\text{tot}}$ is the total momentum some time later.
(Recall in Uniform Circular Motion and Gravitation you learned that the center of mass of a system of objects is the effective average location of the mass of the system. The total momentum can be shown to be the momentum of the center of mass of the system.) An isolated system is defined to be one for which the net external force is zero ($\mathbf{F}_{\text{net}} = 0$).

Perhaps an easier way to see that momentum is conserved for an isolated system is to consider Newton’s second law in terms of momentum, $\mathbf{F}_{\text{net}} = \frac{\Delta \mathbf{p}}{\Delta t}$. For an isolated system, $\mathbf{F}_{\text{net}} = 0$; thus, $\Delta \mathbf{p} = 0$, and $\mathbf{p}$ is constant.

We have noted that the three length dimensions in nature—$x$, $y$, and $z$—are independent, and it is interesting to note that momentum can be conserved in different ways along each dimension. For example, during projectile motion and where air resistance is negligible, momentum is conserved in the horizontal direction because horizontal forces are zero and momentum is unchanged. But along the vertical direction, the net vertical force is not zero and the momentum of the projectile is not conserved. (See .) However, if the momentum of the projectile-Earth system is considered in the vertical direction, we find that the total momentum is conserved.

The conservation of momentum principle can be applied to systems as different as a comet striking Earth and a gas containing huge numbers of atoms and molecules. Conservation of momentum is violated only when the net external force is not zero. But another larger system can always be considered in which momentum is conserved by simply including the source of the external force. For example, in the collision of two cars considered above, the two-car system conserves momentum while each one-car system does not.

### Subatomic Collisions and Momentum

The conservation of momentum principle not only applies to macroscopic objects, it is also essential to our explorations of atomic and subatomic particles. Giant machines hurl subatomic particles at one another, and researchers evaluate the results by assuming conservation of momentum (among other things). On the small scale, we find that particles and their properties are invisible to the naked eye but can be measured with our instruments, and models of these subatomic particles can be constructed to describe the results.

Momentum is found to be a property of all subatomic particles including massless particles such as photons that compose light. Momentum being a property of particles hints that momentum may have an identity beyond the description of an object’s mass multiplied by the object’s velocity. Indeed, momentum relates to wave properties and plays a fundamental role in what measurements are taken and how we take these measurements. Furthermore, we find that the conservation of momentum principle is valid when considering systems of particles. We use this principle to analyze the masses and other properties of previously undetected particles, such as the nucleus of an atom and the existence of quarks that make up particles of nuclei. below illustrates how a particle scattering backward from another implies that its target is massive and dense. Experiments seeking evidence that quarks make up protons (one type of particle that makes up nuclei) scattered high-energy electrons off of protons (nuclei of hydrogen atoms). Electrons occasionally scattered straight backward in a manner that implied a very small and very dense particle makes up the proton—this observation is considered nearly direct evidence of quarks.
The analysis was based partly on the same conservation of momentum principle that works so well on the large scale. ### Test Prep for AP Courses ### Section Summary 1. The conservation of momentum principle is written or is the initial total momentum and is the total momentum some time later. 2. An isolated system is defined to be one for which the net external force is zero 3. During projectile motion and where air resistance is negligible, momentum is conserved in the horizontal direction because horizontal forces are zero. 4. Conservation of momentum applies only when the net external force is zero. 5. The conservation of momentum principle is valid when considering systems of particles. ### Conceptual Questions ### Problems & Exercises
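For a one-dimensional bump like the two-car collision analyzed above, conservation of momentum fixes one unknown final velocity once the masses and the other three velocities are known. Every number in the sketch below is an assumed value for illustration.

```python
# Conservation of momentum for the two-car bump:
# m1*v1 + m2*v2 = m1*v1p + m2*v2p  (primes denote values after the collision)
m1, m2 = 1200.0, 900.0   # masses of trailing car 1 and lead car 2 (kg), assumed
v1, v2 = 8.0, 5.0        # velocities before the bump (m/s), assumed
v1p = 6.0                # trailing car slows down (m/s), assumed

# Solve for the lead car's final velocity from momentum conservation
v2p = (m1 * v1 + m2 * v2 - m1 * v1p) / m2
p_before = m1 * v1 + m2 * v2
p_after = m1 * v1p + m2 * v2p
print(f"v2' = {v2p:.2f} m/s; total momentum {p_before:.0f} -> {p_after:.0f} kg·m/s")
```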
# Linear Momentum and Collisions ## Elastic Collisions in One Dimension ### Learning Objectives By the end of this section, you will be able to: 1. Describe an elastic collision of two objects in one dimension. 2. Define internal kinetic energy. 3. Derive an expression for conservation of internal kinetic energy in a one dimensional collision. 4. Determine the final velocities in an elastic collision given masses and initial velocities. Let us consider various types of two-object collisions. These collisions are the easiest to analyze, and they illustrate many of the physical principles involved in collisions. The conservation of momentum principle is very useful here, and it can be used whenever the net external force on a system is zero. We start with the elastic collision of two objects moving along the same line—a one-dimensional problem. An elastic collision is one that also conserves internal kinetic energy. Internal kinetic energy is the sum of the kinetic energies of the objects in the system. illustrates an elastic collision in which internal kinetic energy and momentum are conserved. Truly elastic collisions can only be achieved with subatomic particles, such as electrons striking nuclei. Macroscopic collisions can be very nearly, but not quite, elastic—some kinetic energy is always converted into other forms of energy such as heat transfer due to friction and sound. One macroscopic collision that is nearly elastic is that of two steel blocks on ice. Another nearly elastic collision is that between two carts with spring bumpers on an air track. Icy surfaces and air tracks are nearly frictionless, more readily allowing nearly elastic collisions on them. Now, to solve problems involving one-dimensional elastic collisions between two objects we can use the equations for conservation of momentum and conservation of internal kinetic energy. First, the equation for conservation of momentum for two objects in a one-dimensional collision is or where the primes (') indicate values after the collision. By definition, an elastic collision conserves internal kinetic energy, and so the sum of kinetic energies before the collision equals the sum after the collision. Thus, expresses the equation for conservation of internal kinetic energy in a one-dimensional collision. ### Test Prep for AP Courses ### Section Summary 1. An elastic collision is one that conserves internal kinetic energy. 2. Conservation of kinetic energy and momentum together allow the final velocities to be calculated in terms of initial velocities and masses in one dimensional two-body collisions. ### Conceptual Questions ### Problems & Exercises
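Solving the momentum and internal kinetic energy equations simultaneously gives the standard one-dimensional elastic-collision results for the final velocities, which the sketch below implements and checks. The masses and initial velocities are assumed values chosen only for illustration.

```python
# One-dimensional elastic collision: conserve momentum and internal kinetic energy.
# Solving the two conservation equations gives the standard final velocities:
#   v1' = ((m1 - m2) v1 + 2 m2 v2) / (m1 + m2)
#   v2' = ((m2 - m1) v2 + 2 m1 v1) / (m1 + m2)
def elastic_1d(m1, v1, m2, v2):
    v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1p, v2p

# Illustrative (assumed) values: a light object strikes a heavier one at rest.
m1, v1 = 0.50, 4.0   # kg, m/s
m2, v2 = 3.50, 0.0   # kg, m/s
v1p, v2p = elastic_1d(m1, v1, m2, v2)

# Check that momentum and internal kinetic energy are unchanged
p_before = m1 * v1 + m2 * v2
p_after = m1 * v1p + m2 * v2p
ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
ke_after = 0.5 * m1 * v1p**2 + 0.5 * m2 * v2p**2
print(f"v1' = {v1p:.2f} m/s, v2' = {v2p:.2f} m/s")
print(f"p: {p_before:.2f} -> {p_after:.2f} kg·m/s, KE: {ke_before:.2f} -> {ke_after:.2f} J")
```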