# Electric Current, Resistance, and Ohm's Law
## Electric Power and Energy
### Learning Objectives
By the end of this section, you will be able to:
1. Calculate the power dissipated by a resistor and power supplied by a power supply.
2. Calculate the cost of electricity under various circumstances.
### Power in Electric Circuits
Power is associated by many people with electricity. Knowing that power is the rate of energy use or energy conversion, what is the expression for electric power? Power transmission lines might come to mind. We also think of lightbulbs in terms of their power ratings in watts. Let us compare a 25-W bulb with a 60-W bulb. (See (a).) Since both operate on the same voltage, the 60-W bulb must draw more current to have a greater power rating. Thus the 60-W bulb’s resistance must be lower than that of a 25-W bulb. If we increase voltage, we also increase power. For example, when a 25-W bulb that is designed to operate on 120 V is connected to 240 V, it briefly glows very brightly and then burns out. Precisely how are voltage, current, and resistance related to electric power?
Electric energy depends on both the voltage involved and the charge moved. This is expressed most simply as $\text{PE} = qV$, where $q$ is the charge moved and $V$ is the voltage (or more precisely, the potential difference the charge moves through). Power is the rate at which energy is moved, and so electric power is

$$P = \frac{\text{PE}}{t} = \frac{qV}{t}.$$

Recognizing that current is $I = q/t$ (note that $\Delta t = t$ here), the expression for power becomes

$$P = IV.$$

Electric power ($P$) is simply the product of current times voltage. Power has familiar units of watts. Since the SI unit for potential energy (PE) is the joule, power has units of joules per second, or watts. Thus, $1\ \text{A} \cdot \text{V} = 1\ \text{W}$. For example, cars often have one or more auxiliary power outlets with which you can charge a cell phone or other electronic devices. These outlets may be rated at 20 A, so that the circuit can deliver a maximum power $P = IV = (20\ \text{A})(12\ \text{V}) = 240\ \text{W}$. In some applications, electric power may be expressed as volt-amperes or even kilovolt-amperes ($1\ \text{kA} \cdot \text{V} = 1\ \text{kW}$).

To see the relationship of power to resistance, we combine Ohm’s law with $P = IV$. Substituting $I = V/R$ gives $P = (V/R)V = V^2/R$. Similarly, substituting $V = IR$ gives $P = I(IR) = I^2R$. Three expressions for electric power are listed together here for convenience:

$$P = IV,$$
$$P = \frac{V^2}{R},$$
$$P = I^2R.$$
Note that the first equation is always valid, whereas the other two can be used only for resistors. In a simple circuit, with one voltage source and a single resistor, the power supplied by the voltage source and that dissipated by the resistor are identical. (In more complicated circuits, $P$ can be the power dissipated by a single device and not the total power in the circuit.)
Different insights can be gained from the three different expressions for electric power. For example, $P = V^2/R$ implies that the lower the resistance connected to a given voltage source, the greater the power delivered. Furthermore, since voltage is squared in $P = V^2/R$, the effect of applying a higher voltage is perhaps greater than expected. Thus, when the voltage is doubled to a 25-W bulb, its power nearly quadruples to about 100 W, burning it out. If the bulb’s resistance remained constant, its power would be exactly 100 W, but at the higher temperature its resistance is higher, too.
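These three expressions are easy to check numerically. The following short calculation is a sketch, not part of the original text; the 25-W, 120-V bulb matches the discussion above, and everything else is computed from it.

```python
# Sketch: the three equivalent power expressions for a resistor.
# The 25-W, 120-V bulb matches the example in the text above.
V = 120.0                 # operating voltage (V)
P_rated = 25.0            # rated power (W)

R = V**2 / P_rated        # bulb resistance from P = V^2/R  -> 576 ohms
I = V / R                 # current from Ohm's law          -> ~0.208 A

print(f"R = {R:.0f} ohm, I = {I:.3f} A")
print(f"P = IV     = {I*V:.1f} W")
print(f"P = V^2/R  = {V**2/R:.1f} W")
print(f"P = I^2 R  = {I**2*R:.1f} W")

# Doubling the voltage (constant R) quadruples the power:
print(f"At 240 V: P = {240.0**2/R:.0f} W")   # ~100 W, enough to burn out the bulb
```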
### The Cost of Electricity
The more electric appliances you use and the longer they are left on, the higher your electric bill. This familiar fact is based on the relationship between energy and power. You pay for the energy used. Since $P = E/t$, we see that

$$E = Pt$$

is the energy used by a device using power $P$ for a time interval $t$. For example, the more lightbulbs burning, the greater $P$ used; the longer they are on, the greater $E$ is. The energy unit on electric bills is the kilowatt-hour ($\text{kW} \cdot \text{h}$), consistent with the relationship $E = Pt$. It is easy to estimate the cost of operating electric appliances if you have some idea of their power consumption rate in watts or kilowatts, the time they are on in hours, and the cost per kilowatt-hour for your electric utility. Kilowatt-hours, like all other specialized energy units such as food calories, can be converted to joules. You can prove to yourself that $1\ \text{kW} \cdot \text{h} = 3.6 \times 10^{6}\ \text{J}$.
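As a sketch of such an estimate, the following calculation applies $E = Pt$ and the kilowatt-hour conversion; the bulb power, hours of use, and utility rate are assumed values chosen only for illustration.

```python
# Sketch: energy and cost from E = Pt.  The 60-W bulb, hours of use, and
# price per kW·h below are assumed values for illustration only.
P_watts = 60.0            # power of the device (W)
hours = 5.0 * 30          # 5 h per day for 30 days
rate = 0.12               # assumed utility rate in $ per kW·h

E_kwh = (P_watts / 1000.0) * hours        # E = Pt, in kilowatt-hours
E_joules = E_kwh * 3.6e6                  # 1 kW·h = 3.6e6 J

print(f"E = {E_kwh:.1f} kW·h = {E_joules:.2e} J")
print(f"Cost = ${E_kwh * rate:.2f}")
```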
The electrical energy ($E$) used can be reduced either by reducing the time of use or by reducing the power consumption of that appliance or fixture. This will not only reduce the cost, but it will also result in a reduced impact on the environment. Improvements to lighting are some of the fastest ways to reduce the electrical energy used in a home or business. About 20% of a home’s use of energy goes to lighting, while the number for commercial establishments is closer to 40%. Fluorescent lights are about four times more efficient than incandescent lights—this is true for both the long tubes and the compact fluorescent lights (CFL). (See (b).) Thus, a 60-W incandescent bulb can be replaced by a 15-W CFL, which has the same brightness and color. CFLs have a bent tube inside a globe or a spiral-shaped tube, all connected to a standard screw-in base that fits standard incandescent light sockets. (Original problems with color, flicker, shape, and high initial investment for CFLs have been addressed in recent years.) The heat transfer from these CFLs is less, and they last up to 10 times longer. The significance of an investment in such bulbs is addressed in the next example. New white LED lights (which are clusters of small LED bulbs) are even more efficient (twice that of CFLs) and last 5 times longer than CFLs. However, their cost is still high.
### Test Prep for AP Courses
### Section Summary
1. Electric power is the rate (in watts) that energy is supplied by a source or dissipated by a device.
2. Three expressions for electrical power are $P = IV$, $P = \frac{V^2}{R}$, and $P = I^2R$.
3. The energy used by a device with a power $P$ over a time $t$ is $E = Pt$.
### Conceptual Questions
### Problem Exercises
# Electric Current, Resistance, and Ohm's Law
## Alternating Current versus Direct Current
### Learning Objectives
By the end of this section, you will be able to:
1. Explain the differences and similarities between AC and DC current.
2. Calculate rms voltage, current, and average power.
3. Explain why AC current is used for power transmission.
### Alternating Current
Most of the examples dealt with so far, and particularly those utilizing batteries, have constant voltage sources. Once the current is established, it is thus also a constant. Direct current (DC) is the flow of electric charge in only one direction. It is the steady state of a constant-voltage circuit. Most well-known applications, however, use a time-varying voltage source. Alternating current (AC) is the flow of electric charge that periodically reverses direction. If the source varies periodically, particularly sinusoidally, the circuit is known as an alternating current circuit. Examples include the commercial and residential power that serves so many of our needs. shows graphs of voltage and current versus time for typical DC and AC power. The AC voltages and frequencies commonly used in homes and businesses vary around the world.
shows a schematic of a simple circuit with an AC voltage source. The voltage between the terminals fluctuates as shown, with the AC voltage given by

$$V = V_0\sin(2\pi ft),$$

where $V$ is the voltage at time $t$, $V_0$ is the peak voltage, and $f$ is the frequency in hertz. For this simple resistance circuit, $I = V/R$, and so the AC current is

$$I = I_0\sin(2\pi ft),$$

where $I$ is the current at time $t$, and $I_0 = V_0/R$ is the peak current. For this example, the voltage and current are said to be in phase, as seen in (b).
Current in the resistor alternates back and forth just like the driving voltage, since $I = V/R$. If the resistor is a fluorescent light bulb, for example, it brightens and dims 120 times per second as the current repeatedly goes through zero. A 120-Hz flicker is too rapid for your eyes to detect, but if you wave your hand back and forth between your face and a fluorescent light, you will see a stroboscopic effect evidencing AC. The fact that the light output fluctuates means that the power is fluctuating. The power supplied is $P = IV$. Using the expressions for $I$ and $V$ above, we see that the time dependence of power is $P = I_0V_0\sin^2(2\pi ft)$, as shown in .
We are most often concerned with average power rather than its fluctuations—that 60-W light bulb in your desk lamp has an average power consumption of 60 W, for example. As illustrated in , the average power is

$$P_{\text{ave}} = \frac{1}{2}I_0V_0.$$

This is evident from the graph, since the areas above and below the $\frac{1}{2}I_0V_0$ line are equal, but it can also be proven using trigonometric identities. Similarly, we define an average or rms current $I_{\text{rms}}$ and average or rms voltage $V_{\text{rms}}$ to be, respectively,

$$I_{\text{rms}} = \frac{I_0}{\sqrt{2}}$$

and

$$V_{\text{rms}} = \frac{V_0}{\sqrt{2}},$$

where rms stands for root mean square, a particular kind of average. In general, to obtain a root mean square, the particular quantity is squared, its mean (or average) is found, and the square root is taken. This is useful for AC, since the average value is zero. Now,

$$P_{\text{ave}} = I_{\text{rms}}V_{\text{rms}} = \frac{I_0}{\sqrt{2}} \cdot \frac{V_0}{\sqrt{2}},$$

which gives

$$P_{\text{ave}} = \frac{1}{2}I_0V_0,$$

as stated above. It is standard practice to quote $I_{\text{rms}}$, $V_{\text{rms}}$, and $P_{\text{ave}}$ rather than the peak values. For example, most household electricity is 120 V AC, which means that $V_{\text{rms}}$ is 120 V. The common 10-A circuit breaker will interrupt a sustained $I_{\text{rms}}$ greater than 10 A. Your 1.0-kW microwave oven consumes $P_{\text{ave}} = 1.0\ \text{kW}$, and so on. You can think of these rms and average values as the equivalent DC values for a simple resistive circuit.
To summarize, when dealing with AC, Ohm’s law and the equations for power are completely analogous to those for DC, but rms and average values are used for AC. Thus, for AC, Ohm’s law is written

$$I_{\text{rms}} = \frac{V_{\text{rms}}}{R}.$$

The various expressions for AC power are

$$P_{\text{ave}} = I_{\text{rms}}V_{\text{rms}},$$
$$P_{\text{ave}} = \frac{V_{\text{rms}}^2}{R},$$

and

$$P_{\text{ave}} = I_{\text{rms}}^2R.$$
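The rms relations can be checked numerically. The sketch below assumes a 120-V rms, 60-Hz source driving a 100-ohm resistor (illustrative values) and verifies that the time average of $I_0V_0\sin^2(2\pi ft)$ equals $\frac{1}{2}I_0V_0 = I_{\text{rms}}V_{\text{rms}}$.

```python
# Sketch: numerically check that the average of sin^2 over a cycle is 1/2,
# so that P_ave = (1/2) I0 V0 = I_rms * V_rms.  The 120-V rms, 60-Hz,
# 100-ohm values are assumed for illustration.
import math

V_rms = 120.0
R = 100.0
f = 60.0
V0 = V_rms * math.sqrt(2)          # peak voltage, ~170 V
I0 = V0 / R                        # peak current

# Sample one full cycle of p(t) = I0 V0 sin^2(2 pi f t)
N = 100_000
T = 1.0 / f
p_avg = sum(I0 * V0 * math.sin(2 * math.pi * f * (k * T / N))**2 for k in range(N)) / N

print(f"V0 = {V0:.1f} V, I0 = {I0:.2f} A")
print(f"numerical P_ave = {p_avg:.1f} W")
print(f"(1/2) I0 V0     = {0.5 * I0 * V0:.1f} W")
print(f"V_rms^2 / R     = {V_rms**2 / R:.1f} W")
```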
### Why Use AC for Power Distribution?
Most large power-distribution systems are AC. Moreover, the power is transmitted at much higher voltages than the 120-V AC (240 V in most parts of the world) we use in homes and on the job. Economies of scale make it cheaper to build a few very large electric power-generation plants than to build numerous small ones. This necessitates sending power long distances, and it is obviously important that energy losses en route be minimized. High voltages can be transmitted with much smaller power losses than low voltages, as we shall see. (See .) For safety reasons, the voltage at the user is reduced to familiar values. The crucial factor is that it is much easier to increase and decrease AC voltages than DC, so AC is used in most large power distribution systems.
It is widely recognized that high voltages pose greater hazards than low voltages. But, in fact, some high voltages, such as those associated with common static electricity, can be harmless. So it is not voltage alone that determines a hazard. It is not so widely recognized that AC shocks are often more harmful than similar DC shocks. Thomas Edison thought that AC shocks were more harmful and set up a DC power-distribution system in New York City in the late 1800s. There were bitter fights, in particular between Edison and George Westinghouse and Nikola Tesla, who were advocating the use of AC in early power-distribution systems. AC has prevailed largely due to transformers and lower power losses with high-voltage transmission.
### Section Summary
1. Direct current (DC) is the flow of electric current in only one direction. It refers to systems where the source voltage is constant.
2. The voltage source of an alternating current (AC) system puts out $V = V_0\sin(2\pi ft)$, where $V$ is the voltage at time $t$, $V_0$ is the peak voltage, and $f$ is the frequency in hertz.
3. In a simple circuit, $I = V/R$ and the AC current is $I = I_0\sin(2\pi ft)$, where $I$ is the current at time $t$, and $I_0 = V_0/R$ is the peak current.
4. The average AC power is $P_{\text{ave}} = \frac{1}{2}I_0V_0$.
5. Average (rms) current $I_{\text{rms}}$ and average (rms) voltage $V_{\text{rms}}$ are $I_{\text{rms}} = I_0/\sqrt{2}$ and $V_{\text{rms}} = V_0/\sqrt{2}$, where rms stands for root mean square.
6. Thus, $P_{\text{ave}} = I_{\text{rms}}V_{\text{rms}}$.
7. Ohm’s law for AC is $I_{\text{rms}} = V_{\text{rms}}/R$.
8. Expressions for the average power of an AC circuit are $P_{\text{ave}} = I_{\text{rms}}V_{\text{rms}}$, $P_{\text{ave}} = V_{\text{rms}}^2/R$, and $P_{\text{ave}} = I_{\text{rms}}^2R$, analogous to the expressions for DC circuits.
### Conceptual Questions
### Problem Exercises
# Electric Current, Resistance, and Ohm's Law
## Electric Hazards and the Human Body
### Learning Objectives
By the end of this section, you will be able to:
1. Define thermal hazard, shock hazard, and short circuit.
2. Explain what effects various levels of current have on the human body.
There are two known hazards of electricity—thermal and shock. A thermal hazard is one where excessive electric power causes undesired thermal effects, such as starting a fire in the wall of a house. A shock hazard occurs when electric current passes through a person. Shocks range in severity from painful, but otherwise harmless, to heart-stopping lethality. This section considers these hazards and the various factors affecting them in a quantitative manner. Electrical Safety: Systems and Devices will consider systems and devices for preventing electrical hazards.
### Thermal Hazards
Electric power causes undesired heating effects whenever electric energy is converted to thermal energy at a rate faster than it can be safely dissipated. A classic example of this is the short circuit, a low-resistance path between terminals of a voltage source. An example of a short circuit is shown in . Insulation on wires leading to an appliance has worn through, allowing the two wires to come into contact. Such an undesired contact with a high voltage is called a short. Since the resistance of the short, $r$, is very small, the power dissipated in the short, $P = V^2/r$, is very large. For example, if $V$ is 120 V and $r$ is $0.100\ \Omega$, then the power is 144 kW, much greater than that used by a typical household appliance. Thermal energy delivered at this rate will very quickly raise the temperature of surrounding materials, melting or perhaps igniting them.
One particularly insidious aspect of a short circuit is that its resistance may actually be decreased due to the increase in temperature. This can happen if the short creates ionization. These charged atoms and molecules are free to move and, thus, lower the resistance $r$. Since $P = V^2/r$, the power dissipated in the short rises, possibly causing more ionization, more power, and so on. High voltages, such as the 480-V AC used in some industrial applications, lend themselves to this hazard, because higher voltages create higher initial power production in a short.
Another serious, but less dramatic, thermal hazard occurs when wires supplying power to a user are overloaded with too great a current. As discussed in the previous section, the power dissipated in the supply wires is $P = I^2R_{\text{w}}$, where $R_{\text{w}}$ is the resistance of the wires and $I$ is the current flowing through them. If either $I$ or $R_{\text{w}}$ is too large, the wires overheat. For example, a worn appliance cord (with some of its braided wires broken) may have a resistance many times higher than it should. If 10.0 A of current passes through the cord, the power $P = I^2R_{\text{w}}$ dissipated in the cord is then also many times greater than is safe. Similarly, if a wire meant to carry a few amps is instead carrying 100 A, it will severely overheat; since the dissipated power grows as the square of the current, it is in that case roughly a thousand times greater than intended. Fuses and circuit breakers are used to limit excessive currents. (See and .) Each device opens the circuit automatically when a sustained current exceeds safe limits.
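A quick sketch of these estimates follows. The 120-V, 0.100-ohm short comes from the discussion above; the worn-cord resistance is an assumed illustrative value, since the original figure is not reproduced here.

```python
# Sketch of the thermal-hazard estimates above.  The 120-V short with a
# 0.100-ohm resistance comes from the text; the cord resistance is assumed.
V = 120.0

# Short circuit: P = V^2 / r
r_short = 0.100
print(f"Short-circuit power: {V**2 / r_short / 1000:.0f} kW")   # 144 kW

# Overloaded worn cord: P = I^2 * R_w
R_wire = 0.5          # assumed worn-cord resistance (ohms)
I = 10.0              # current through the cord (A)
print(f"Power dissipated in cord: {I**2 * R_wire:.0f} W")       # 50 W with these values
```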
Fuses and circuit breakers for typical household voltages and currents are relatively simple to produce, but those for large voltages and currents experience special problems. For example, when a circuit breaker tries to interrupt the flow of high-voltage electricity, a spark can jump across its points that ionizes the air in the gap and allows the current to continue flowing. Large circuit breakers found in power-distribution systems employ insulating gas and even use jets of gas to blow out such sparks. Here AC is safer than DC, since AC current goes through zero 120 times per second, giving a quick opportunity to extinguish these arcs.
### Shock Hazards
Electrical currents through people produce tremendously varied effects. An electrical current can be used to block back pain. The possibility of using electrical current to stimulate muscle action in paralyzed limbs, perhaps allowing paraplegics to walk, is under study. TV dramatizations in which electrical shocks are used to bring a heart attack victim out of ventricular fibrillation (a massively irregular, often fatal, beating of the heart) are more than common. Yet most electrical shock fatalities occur because a current put the heart into fibrillation. A pacemaker uses electrical shocks to stimulate the heart to beat properly. Some fatal shocks do not produce burns, but warts can be safely burned off with electric current (though freezing using liquid nitrogen is now more common). Of course, there are consistent explanations for these disparate effects. The major factors upon which the effects of electrical shock depend are
1. The amount of current
2. The path taken by the current
3. The duration of the shock
4. The frequency of the current (0 Hz for DC)
gives the effects of electrical shocks as a function of current for a typical accidental shock. The effects are for a shock that passes through the trunk of the body, has a duration of 1 s, and is caused by 60-Hz power.
Our bodies are relatively good conductors due to the water in our bodies. Given that larger currents will flow through sections with lower resistance (to be further discussed in the next chapter), electric currents preferentially flow through paths in the human body that have a minimum resistance in a direct path to earth. The earth is a natural electron sink. Wearing insulating shoes, a requirement in many professions, prohibits a pathway for electrons by providing a large resistance in that path. Whenever working with high-power tools (drills), or in risky situations, ensure that you do not provide a pathway for current flow (especially through the heart).
Very small currents pass harmlessly and unfelt through the body. This happens to you regularly without your knowledge. The threshold of sensation is only 1 mA and, although unpleasant, shocks are apparently harmless for currents less than 5 mA. A great number of safety rules take the 5-mA value for the maximum allowed shock. At 10 to 20 mA and above, the current can stimulate sustained muscular contractions much as regular nerve impulses do. People sometimes say they were knocked across the room by a shock, but what really happened was that certain muscles contracted, propelling them in a manner not of their own choosing. (See (a).) More frightening, and potentially more dangerous, is the “can’t let go” effect illustrated in (b). The muscles that close the fingers are stronger than those that open them, so the hand closes involuntarily on the wire shocking it. This can prolong the shock indefinitely. It can also be a danger to a person trying to rescue the victim, because the rescuer’s hand may close about the victim’s wrist. Usually the best way to help the victim is to give the fist a hard knock/blow/jar with an insulator or to throw an insulator at the fist. Modern electric fences, used in animal enclosures, are now pulsed on and off to allow people who touch them to get free, rendering them less lethal than in the past.
Greater currents may affect the heart. Its electrical patterns can be disrupted, so that it beats irregularly and ineffectively in a condition called “ventricular fibrillation.” This condition often lingers after the shock and is fatal due to a lack of blood circulation. The threshold for ventricular fibrillation is between 100 and 300 mA. At about 300 mA and above, the shock can cause burns, depending on the concentration of current—the more concentrated, the greater the likelihood of burns.
Very large currents cause the heart and diaphragm to contract for the duration of the shock. Both the heart and breathing stop. Interestingly, both often return to normal following the shock. The electrical patterns on the heart are completely erased in a manner that the heart can start afresh with normal beating, as opposed to the permanent disruption caused by smaller currents that can put the heart into ventricular fibrillation. The latter is something like scribbling on a blackboard, whereas the former completely erases it. TV dramatizations of electric shock used to bring a heart attack victim out of ventricular fibrillation also show large paddles. These are used to spread out current passed through the victim to reduce the likelihood of burns.
Current is the major factor determining shock severity (given that other conditions such as path, duration, and frequency are fixed, such as in the table and preceding discussion). A larger voltage is more hazardous, but since $I = V/R$, the severity of the shock depends on the combination of voltage and resistance. For example, a person with dry skin has a very high resistance, on the order of $10^5\ \Omega$ or more. If he comes into contact with 120-V AC, a current well below the threshold of sensation passes harmlessly through him. The same person soaking wet may have a resistance of only $10\ \text{k}\Omega$, and the same 120 V will produce a current of 12 mA—above the “can’t let go” threshold and potentially dangerous.
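The comparison can be sketched numerically as follows; the dry-skin resistance is an assumed representative value, while the wet-skin value follows from the 12 mA at 120 V quoted above.

```python
# Sketch: shock current I = V/R for dry versus wet skin.  The dry-skin
# resistance is an assumed representative value; the wet value follows from
# the 12 mA at 120 V quoted in the text (120 V / 0.012 A = 10 kilohms).
V = 120.0
R_dry = 2.0e5        # assumed dry-skin resistance (ohms)
R_wet = 1.0e4        # wet-skin resistance (ohms)

print(f"Dry: I = {V / R_dry * 1000:.2f} mA (below the 1-mA threshold of sensation)")
print(f"Wet: I = {V / R_wet * 1000:.1f} mA (above the 'can't let go' threshold)")
```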
Most of the body’s resistance is in its dry skin. When wet, salts go into ion form, lowering the resistance significantly. The interior of the body has a much lower resistance than dry skin because of all the ionic solutions and fluids it contains. If skin resistance is bypassed, such as by an intravenous infusion, a catheter, or exposed pacemaker leads, a person is rendered microshock sensitive. In this condition, currents about 1/1000 those listed in produce similar effects. During open-heart surgery, currents in the microampere range can be used to still the heart. Stringent electrical safety requirements in hospitals, particularly in surgery and intensive care, are related to the doubly disadvantaged microshock-sensitive patient. The break in the skin has reduced his resistance, and so the same voltage causes a greater current, and a much smaller current has a greater effect.
Factors other than current that affect the severity of a shock are its path, duration, and AC frequency. Path has obvious consequences. For example, the heart is unaffected by an electric shock through the brain, such as may be used to treat manic depression. And it is a general truth that the longer the duration of a shock, the greater its effects. presents a graph that illustrates the effects of frequency on a shock. The curves show the minimum current for two different effects, as a function of frequency. The lower the current needed, the more sensitive the body is at that frequency. Ironically, the body is most sensitive to frequencies near the 50- or 60-Hz frequencies in common use. The body is slightly less sensitive for DC (0 Hz), mildly confirming Edison’s claims that AC presents a greater hazard. At higher and higher frequencies, the body becomes progressively less sensitive to any effects that involve nerves. This is related to the maximum rates at which nerves can fire or be stimulated. At very high frequencies, electrical current travels only on the surface of a person. Thus a wart can be burned off with very high frequency current without causing the heart to stop. (Do not try this at home with 60-Hz AC!) Some of the spectacular demonstrations of electricity, in which high-voltage arcs are passed through the air and over people’s bodies, employ high frequencies and low currents. (See .) Electrical safety devices and techniques are discussed in detail in Electrical Safety: Systems and Devices.
### Section Summary
1. The two types of electric hazards are thermal (excessive power) and shock (current through a person).
2. Shock severity is determined by current, path, duration, and AC frequency.
3. lists shock hazards as a function of current.
4. graphs the threshold current for two hazards as a function of frequency.
### Conceptual Questions
### Problem Exercises
# Electric Current, Resistance, and Ohm's Law
## Nerve Conduction–Electrocardiograms
### Learning Objectives
By the end of this section, you will be able to:
1. Explain the process by which electric signals are transmitted along a neuron.
2. Explain the effects myelin sheaths have on signal propagation.
3. Explain what the features of an ECG signal indicate.
### Nerve Conduction
Electric currents in the vastly complex system of billions of nerves in our body allow us to sense the world, control parts of our body, and think. These are representative of the three major functions of nerves. First, nerves carry messages from our sensory organs and others to the central nervous system, consisting of the brain and spinal cord. Second, nerves carry messages from the central nervous system to muscles and other organs. Third, nerves transmit and process signals within the central nervous system. The sheer number of nerve cells and the incredibly greater number of connections between them makes this system the subtle wonder that it is. Nerve conduction is a general term for electrical signals carried by nerve cells. It is one aspect of bioelectricity, or electrical effects in and created by biological systems.
Nerve cells, properly called neurons, look different from other cells—they have tendrils, some of them many centimeters long, connecting them with other cells. (See .) Signals arrive at the cell body across synapses or through dendrites, stimulating the neuron to generate its own signal, sent along its long axon to other nerve or muscle cells. Signals may arrive from many other locations and be transmitted to yet others, conditioning the synapses by use, giving the system its complexity and its ability to learn.
The method by which these electric currents are generated and transmitted is more complex than the simple movement of free charges in a conductor, but it can be understood with principles already discussed in this text. The most important of these are the Coulomb force and diffusion.
illustrates how a voltage (potential difference) is created across the cell membrane of a neuron in its resting state. This thin membrane separates electrically neutral fluids having differing concentrations of ions, the most important varieties being $\text{Na}^{+}$, $\text{K}^{+}$, and $\text{Cl}^{-}$ (these are sodium, potassium, and chlorine ions with single plus or minus charges as indicated). As discussed in Molecular Transport Phenomena: Diffusion, Osmosis, and Related Processes, free ions will diffuse from a region of high concentration to one of low concentration. But the cell membrane is semipermeable, meaning that some ions may cross it while others cannot. In its resting state, the cell membrane is permeable to $\text{K}^{+}$ and $\text{Cl}^{-}$, and impermeable to $\text{Na}^{+}$. Diffusion of $\text{K}^{+}$ and $\text{Cl}^{-}$ thus creates the layers of positive and negative charge on the outside and inside of the membrane. The Coulomb force prevents the ions from diffusing across in their entirety. Once the charge layer has built up, the repulsion of like charges prevents more from moving across, and the attraction of unlike charges prevents more from leaving either side. The result is two layers of charge right on the membrane, with diffusion being balanced by the Coulomb force. A tiny fraction of the charges move across and the fluids remain neutral (other ions are present), while a separation of charge and a voltage have been created across the membrane.
The separation of charge creates a potential difference of 70 to 90 mV across the cell membrane. While this is a small voltage, the resulting electric field ($E = V/d$) across the only 8-nm-thick membrane is immense (on the order of 11 MV/m!) and has fundamental effects on its structure and permeability. Now, if the exterior of a neuron is taken to be at 0 V, then the interior has a resting potential of about –90 mV. Such voltages are created across the membranes of almost all types of animal cells but are largest in nerve and muscle cells. In fact, fully 25% of the energy used by cells goes toward creating and maintaining these potentials.
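A one-line estimate of that field strength, using the values quoted above, is sketched below.

```python
# Sketch: magnitude of the electric field across the cell membrane, E = V/d,
# using the values quoted in the text (about 90 mV across an 8-nm membrane).
V_membrane = 90e-3      # resting potential magnitude (V)
d_membrane = 8e-9       # membrane thickness (m)

E = V_membrane / d_membrane
print(f"E = {E / 1e6:.1f} MV/m")    # on the order of 11 MV/m
```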
Electric currents along the cell membrane are created by any stimulus that changes the membrane’s permeability. The membrane thus temporarily becomes permeable to $\text{Na}^{+}$, which then rushes in, driven both by diffusion and the Coulomb force. This inrush of $\text{Na}^{+}$ first neutralizes the inside membrane, or depolarizes it, and then makes it slightly positive. The depolarization causes the membrane to again become impermeable to $\text{Na}^{+}$, and the movement of $\text{K}^{+}$ quickly returns the cell to its resting potential, or repolarizes it. This sequence of events results in a voltage pulse, called the action potential. (See .) Only small fractions of the ions move, so that the cell can fire many hundreds of times without depleting the excess concentrations of $\text{Na}^{+}$ and $\text{K}^{+}$. Eventually, the cell must replenish these ions to maintain the concentration differences that create bioelectricity. This sodium-potassium pump is an example of active transport, wherein cell energy is used to move ions across membranes against diffusion gradients and the Coulomb force.
The action potential is a voltage pulse at one location on a cell membrane. How does it get transmitted along the cell membrane, and in particular down an axon, as a nerve impulse? The answer is that the changing voltage and electric fields affect the permeability of the adjacent cell membrane, so that the same process takes place there. The adjacent membrane depolarizes, affecting the membrane further down, and so on, as illustrated in . Thus the action potential stimulated at one location triggers a nerve impulse that moves slowly (about 1 m/s) along the cell membrane.
Some axons, like that in , are sheathed with myelin, consisting of fat-containing cells. shows an enlarged view of an axon having myelin sheaths characteristically separated by unmyelinated gaps (called nodes of Ranvier). This arrangement gives the axon a number of interesting properties. Since myelin is an insulator, it prevents signals from jumping between adjacent nerves (cross talk). Additionally, the myelinated regions transmit electrical signals at a very high speed, as an ordinary conductor or resistor would. There is no action potential in the myelinated regions, so that no cell energy is used in them. There is some signal loss in the myelin, but the signal is regenerated in the gaps, where the voltage pulse triggers the action potential at full voltage. So a myelinated axon transmits a nerve impulse faster, with less energy consumption, and is better protected from cross talk than an unmyelinated one. Not all axons are myelinated, so that cross talk and slow signal transmission are a characteristic of the normal operation of these axons, another variable in the nervous system.
The degeneration or destruction of the myelin sheaths that surround the nerve fibers impairs signal transmission and can lead to numerous neurological effects. One of the most prominent of these diseases comes from the body’s own immune system attacking the myelin in the central nervous system—multiple sclerosis. MS symptoms include fatigue, vision problems, weakness of arms and legs, loss of balance, and tingling or numbness in one’s extremities (neuropathy). It is more apt to strike younger adults, especially females. Causes might come from infection, environmental or geographic effects, or genetics. At the moment there is no known cure for MS.
Most animal cells can fire or create their own action potential. Muscle cells contract when they fire and are often induced to do so by a nerve impulse. In fact, nerve and muscle cells are physiologically similar, and there are even hybrid cells, such as in the heart, that have characteristics of both nerves and muscles. Some animals, like the infamous electric eel (see ), use muscles ganged so that their voltages add in order to create a shock great enough to stun prey.
### Electrocardiograms
Just as nerve impulses are transmitted by depolarization and repolarization of adjacent membrane, the depolarization that causes muscle contraction can also stimulate adjacent muscle cells to depolarize (fire) and contract. Thus, a depolarization wave can be sent across the heart, coordinating its rhythmic contractions and enabling it to perform its vital function of propelling blood through the circulatory system. is a simplified graphic of a depolarization wave spreading across the heart from the sinoatrial (SA) node, the heart’s natural pacemaker.
An electrocardiogram (ECG) is a record of the voltages created by the wave of depolarization and subsequent repolarization in the heart. (They are also abbreviated EKG.) Voltages between pairs of electrodes placed on the chest are vector components of the voltage wave on the heart. Standard ECGs have 12 or more electrodes, but only three are shown in for clarity. Decades ago, three-electrode ECGs were performed by placing electrodes on the left and right arms and the left leg. The voltage between the right arm and the left leg is called the lead II potential and is the most often graphed. We shall examine the lead II potential as an indicator of heart-muscle function and see that it is coordinated with arterial blood pressure as well.
Heart function and its four-chamber action are explored in Viscosity and Laminar Flow; Poiseuille’s Law. Basically, the right and left atria receive blood from the body and lungs, respectively, and pump the blood into the ventricles. The right and left ventricles, in turn, pump blood through the lungs and the rest of the body, respectively. Depolarization of the heart muscle causes it to contract. After contraction it is repolarized to ready it for the next beat. The ECG measures components of depolarization and repolarization of the heart muscle and can yield significant information on the functioning and malfunctioning of the heart.
shows an ECG of the lead II potential and a graph of the corresponding arterial blood pressure. The major features are labeled P, Q, R, S, and T. The P wave is generated by the depolarization and contraction of the atria as they pump blood into the ventricles. The QRS complex is created by the depolarization of the ventricles as they pump blood to the lungs and body. Since the shape of the heart and the path of the depolarization wave are not simple, the QRS complex has this typical shape and time span. The lead II QRS signal also masks the repolarization of the atria, which occurs at the same time. Finally, the T wave is generated by the repolarization of the ventricles and is followed by the next P wave in the next heartbeat. Arterial blood pressure varies with each part of the heartbeat, with systolic (maximum) pressure occurring closely after the QRS complex, which signals contraction of the ventricles.
Taken together, the 12 leads of a state-of-the-art ECG can yield a wealth of information about the heart. For example, regions of damaged heart tissue, called infarcts, reflect electrical waves and are apparent in one or more lead potentials. Subtle changes due to slight or gradual damage to the heart are most readily detected by comparing a recent ECG to an older one. This is particularly the case since individual heart shape, size, and orientation can cause variations in ECGs from one individual to another. ECG technology has advanced to the point where a portable ECG monitor can be incorporated into wearable devices and other small objects. See .
### Section Summary
1. Electric potentials in neurons and other cells are created by ionic concentration differences across semipermeable membranes.
2. Stimuli change the permeability and create action potentials that propagate along neurons.
3. Myelin sheaths speed this process and reduce the needed energy input.
4. This process in the heart can be measured with an electrocardiogram (ECG).
### Conceptual Questions
### Problems & Exercises
# Circuits and DC Instruments
## Connection for AP® Courses
Electric circuits are commonplace in our everyday lives. Some circuits are simple, such as those in flashlights, while others are extremely complex, such as those used in supercomputers. This chapter takes the topic of electric circuits a step beyond simple circuits by addressing both changes that result from interactions between systems (Big Idea 4) and constraints on such changes due to laws of conservation (Big Idea 5). When the circuit is purely resistive, everything in this chapter applies to both DC and AC. However, matters become more complex when capacitance is involved. We do consider what happens when capacitors are connected to DC voltage sources, but the interaction of capacitors (and other nonresistive devices) with AC sources is left for a later chapter. In addition, a number of important DC instruments, such as meters that measure voltage and current, are covered in this chapter.
Information and examples presented in the chapter examine cause-effect relationships inherent in interactions involving electrical systems. The electrical properties of an electric circuit can change due to other systems (Enduring Understanding 4.E). More specifically, values of currents and potential differences in electric circuits depend on arrangements of individual circuit components (Essential Knowledge 4.E.5). In this chapter several series and parallel combinations of resistors are discussed and their effects on currents and potential differences are analyzed.
In electric circuits the total energy (Enduring Understanding 5.B) and the total electric charge (Enduring Understanding 5.C) are conserved. Kirchhoff’s rules describe both energy conservation (Essential Knowledge 5.B.9) and charge conservation (Essential Knowledge 5.C.3). Energy conservation is discussed in terms of the loop rule, which specifies that the sum of the changes in potential around any closed circuit path must be zero. Charge conservation is applied as conservation of current by equating the sum of all currents entering a junction to the sum of all currents leaving the junction (also known as the junction rule). Kirchhoff’s rules are used to calculate currents and potential differences in circuits that combine resistors in series and parallel, and resistors and capacitors.
The concepts in this chapter support:
Big Idea 4 Interactions between systems can result in changes in those systems.
Enduring Understanding 4.E The electric and magnetic properties of a system can change in response to the presence of, or changes in, other objects or systems.
Essential Knowledge 4.E.5 The values of currents and electric potential differences in an electric circuit are determined by the properties and arrangement of the individual circuit elements such as sources of emf, resistors, and capacitors.
Big Idea 5 Changes that occur as a result of interactions are constrained by conservation laws.
Enduring Understanding 5.B The energy of a system is conserved.
Essential Knowledge 5.B.9 Kirchhoff’s loop rule describes conservation of energy in electrical circuits.
Enduring Understanding 5.C The electric charge of a system is conserved.
Essential Knowledge 5.C.3 Kirchhoff’s junction rule describes the conservation of electric charge in electrical circuits. Since charge is conserved, current must be conserved at each junction in the circuit. Examples should include circuits that combine resistors in series and parallel.
# Circuits and DC Instruments
## Resistors in Series and Parallel
### Learning Objectives
By the end of this section, you will be able to:
1. Draw a circuit with resistors in parallel and in series.
2. Calculate the voltage drop of a current across a resistor using Ohm’s law.
3. Contrast the way total resistance is calculated for resistors in series and in parallel.
4. Explain why total resistance of a parallel circuit is less than the smallest resistance of any of the resistors in that circuit.
5. Calculate total resistance of a circuit that contains a mixture of resistors connected in series and in parallel.
Most circuits have more than one component, called a resistor, that limits the flow of charge in the circuit. A measure of this limit on charge flow is called resistance. The simplest combinations of resistors are the series and parallel connections illustrated in . The total resistance of a combination of resistors depends on both their individual values and how they are connected.
### Resistors in Series
When are resistors in series? Resistors are in series whenever the flow of charge, called the current, must flow through devices sequentially. For example, if current flows through a person holding a screwdriver and into the Earth, then $R_1$ in (a) could be the resistance of the screwdriver’s shaft, $R_2$ the resistance of its handle, $R_3$ the person’s body resistance, and $R_4$ the resistance of her shoes.
shows resistors in series connected to a voltage source. It seems reasonable that the total resistance is the sum of the individual resistances, considering that the current has to pass through each resistor in sequence. (This fact would be an advantage to a person wishing to avoid an electrical shock, who could reduce the current by wearing high-resistance rubber-soled shoes. It could be a disadvantage if one of the resistances were a faulty high-resistance cord to an appliance that would reduce the operating current.)
To verify that resistances in series do indeed add, let us consider the loss of electrical power, called a voltage drop, in each resistor in .
According to Ohm’s law, the voltage drop, $V$, across a resistor when a current, $I$, flows through it is calculated using the equation $V = IR$, where $I$ equals the current in amps (A) and $R$ is the resistance in ohms ($\Omega$). Another way to think of this is that $V$ is the voltage necessary to make a current $I$ flow through a resistance $R$.

So the voltage drop across $R_1$ is $V_1 = IR_1$, that across $R_2$ is $V_2 = IR_2$, and that across $R_3$ is $V_3 = IR_3$. The sum of these voltages equals the voltage output of the source; that is,

$$V = V_1 + V_2 + V_3.$$

This equation is based on the conservation of energy and conservation of charge. Electrical potential energy can be described by the equation $\text{PE} = qV$, where $q$ is the electric charge and $V$ is the voltage. Thus the energy supplied by the source is $qV$, while that dissipated by the resistors is

$$qV_1 + qV_2 + qV_3.$$

These energies must be equal, because there is no other source and no other destination for energy in the circuit. Thus, $qV = qV_1 + qV_2 + qV_3$. The charge $q$ cancels, yielding $V = V_1 + V_2 + V_3$, as stated. (Note that the same amount of charge passes through the battery and each resistor in a given amount of time, since there is no capacitance to store charge, there is no place for charge to leak, and charge is conserved.)

Now substituting the values for the individual voltages gives

$$V = IR_1 + IR_2 + IR_3 = I(R_1 + R_2 + R_3).$$

Note that for the equivalent single series resistance $R_{\text{s}}$, we have

$$V = IR_{\text{s}}.$$

This implies that the total or equivalent series resistance of three resistors is $R_{\text{s}} = R_1 + R_2 + R_3$.

This logic is valid in general for any number of resistors in series; thus, the total resistance of a series connection is

$$R_{\text{s}} = R_1 + R_2 + R_3 + \dots,$$

as proposed. Since all of the current must pass through each resistor, it experiences the resistance of each, and resistances in series simply add up.
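A small sketch of this rule follows; the resistor and source values are assumed for illustration, and the individual voltage drops are shown to sum to the source voltage.

```python
# Sketch: series resistances simply add.  Resistor and source values are
# assumed for illustration.
def series_resistance(resistors):
    """Total resistance of resistors connected in series."""
    return sum(resistors)

R = [1.00, 6.00, 13.0]          # resistances in ohms
V = 12.0                        # source voltage (V)

R_s = series_resistance(R)      # 20 ohms
I = V / R_s                     # same current through every resistor
drops = [I * r for r in R]      # individual voltage drops V_i = I * R_i

print(f"R_s = {R_s:.1f} ohm, I = {I:.2f} A")
print("Voltage drops:", [f"{v:.2f} V" for v in drops])
print(f"Sum of drops = {sum(drops):.2f} V (equals the source voltage)")
```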
### Resistors in Parallel
shows resistors in parallel, wired to a voltage source. Resistors are in parallel when each resistor is connected directly to the voltage source by connecting wires having negligible resistance. Each resistor thus has the full voltage of the source applied to it.
Each resistor draws the same current it would if it alone were connected to the voltage source (provided the voltage source is not overloaded). For example, an automobile’s headlights, radio, and so on, are wired in parallel, so that they utilize the full voltage of the source and can operate completely independently. The same is true in your house, or any building. (See (b).)
To find an expression for the equivalent parallel resistance $R_{\text{p}}$, let us consider the currents that flow and how they are related to resistance. Since each resistor in the circuit has the full voltage, the currents flowing through the individual resistors are $I_1 = \frac{V}{R_1}$, $I_2 = \frac{V}{R_2}$, and $I_3 = \frac{V}{R_3}$. Conservation of charge implies that the total current produced by the source is the sum of these currents:

$$I = I_1 + I_2 + I_3.$$

Substituting the expressions for the individual currents gives

$$I = \frac{V}{R_1} + \frac{V}{R_2} + \frac{V}{R_3} = V\left(\frac{1}{R_1} + \frac{1}{R_2} + \frac{1}{R_3}\right).$$

Note that Ohm’s law for the equivalent single resistance gives

$$I = \frac{V}{R_{\text{p}}} = V\left(\frac{1}{R_{\text{p}}}\right).$$

The terms inside the parentheses in the last two equations must be equal. Generalizing to any number of resistors, the total resistance $R_{\text{p}}$ of a parallel connection is related to the individual resistances by

$$\frac{1}{R_{\text{p}}} = \frac{1}{R_1} + \frac{1}{R_2} + \frac{1}{R_3} + \dots$$

This relationship results in a total resistance $R_{\text{p}}$ that is less than the smallest of the individual resistances. (This is seen in the next example.) When resistors are connected in parallel, more current flows from the source than would flow for any of them individually, and so the total resistance is lower.
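The sketch below applies this formula to assumed resistor values and confirms that the equivalent resistance is smaller than the smallest individual resistance.

```python
# Sketch: the equivalent parallel resistance is less than the smallest
# individual resistance.  Resistor and source values are assumed.
def parallel_resistance(resistors):
    """Total resistance of resistors connected in parallel."""
    return 1.0 / sum(1.0 / r for r in resistors)

R = [1.00, 6.00, 13.0]          # resistances in ohms
V = 12.0                        # source voltage (V)

R_p = parallel_resistance(R)
currents = [V / r for r in R]   # each resistor sees the full source voltage

print(f"R_p = {R_p:.3f} ohm (less than min(R) = {min(R)} ohm)")
print(f"Total current = {sum(currents):.2f} A = V / R_p = {V / R_p:.2f} A")
```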
### Combinations of Series and Parallel
More complex connections of resistors are sometimes just combinations of series and parallel. These are commonly encountered, especially when wire resistance is considered. In that case, wire resistance is in series with other resistances that are in parallel.
Combinations of series and parallel can be reduced to a single equivalent resistance using the technique illustrated in . Various parts are identified as either series or parallel, reduced to their equivalents, and further reduced until a single resistance is left. The process is more time consuming than difficult.
The simplest combination of series and parallel resistance, shown in , is also the most instructive, since it is found in many applications. For example, $R_1$ could be the resistance of wires from a car battery to its electrical devices, which are in parallel. $R_2$ and $R_3$ could be the starter motor and a passenger compartment light. We have previously assumed that wire resistance is negligible, but, when it is not, it has important effects, as the next example indicates.
### Practical Implications
One implication of this last example is that resistance in wires reduces the current and power delivered to a resistor. If wire resistance is relatively large, as in a worn (or a very long) extension cord, then this loss can be significant. If a large current is drawn, the $IR$ drop in the wires can also be significant.
For example, when you are rummaging in the refrigerator and the motor comes on, the refrigerator light dims momentarily. Similarly, you can see the passenger compartment light dim when you start the engine of your car (although this may be due to resistance inside the battery itself).
What is happening in these high-current situations is illustrated in . The device represented by $R_3$ has a very low resistance, and so when it is switched on, a large current flows. This increased current causes a larger $IR$ drop in the wires represented by $R_1$, reducing the voltage across the light bulb (which is $R_2$), which then dims noticeably.
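The effect can be sketched numerically. In the snippet below, all resistances and the source voltage are assumed illustrative values: a low-resistance motor switched on in parallel with a bulb increases the drop across the wiring and lowers the bulb’s voltage.

```python
# Sketch of the dimming-lights effect: wire resistance R1 in series with a
# bulb R2 that is in parallel with a motor R3.  All values are assumed for
# illustration only.
R1 = 0.50        # wiring resistance (ohms)
R2 = 144.0       # light bulb (ohms)
R3 = 0.75        # motor, very low resistance (ohms)
V = 12.0         # source voltage (V)

def bulb_voltage(motor_on):
    # parallel combination of the bulb with the motor (if running)
    R_par = 1.0 / (1.0 / R2 + (1.0 / R3 if motor_on else 0.0))
    I = V / (R1 + R_par)           # total current from the source
    return V - I * R1              # voltage left across the parallel section

print(f"Bulb voltage, motor off: {bulb_voltage(False):.2f} V")
print(f"Bulb voltage, motor on:  {bulb_voltage(True):.2f} V (bulb dims)")
```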
### Test Prep for AP Courses
### Section Summary
1. The total resistance of an electrical circuit with resistors wired in a series is the sum of the individual resistances: $R_{\text{s}} = R_1 + R_2 + R_3 + \dots$
2. Each resistor in a series circuit has the same amount of current flowing through it.
3. The voltage drop, or power dissipation, across each individual resistor in a series is different, and their combined total adds up to the power source input.
4. The total resistance of an electrical circuit with resistors wired in parallel is less than the lowest resistance of any of the components and can be determined using the formula $\frac{1}{R_{\text{p}}} = \frac{1}{R_1} + \frac{1}{R_2} + \frac{1}{R_3} + \dots$
5. Each resistor in a parallel circuit has the same full voltage of the source applied to it.
6. The current flowing through each resistor in a parallel circuit is different, depending on the resistance.
7. If a more complex connection of resistors is a combination of series and parallel, it can be reduced to a single equivalent resistance by identifying its various parts as series or parallel, reducing each to its equivalent, and continuing until a single resistance is eventually reached.
### Conceptual Questions
### Problem Exercises
Note: Data taken from figures can be assumed to be accurate to three significant digits.
# Circuits and DC Instruments
## Electromotive Force: Terminal Voltage
### Learning Objectives
By the end of this section, you will be able to:
1. Compare and contrast the voltage and the electromotive force of an electric power source.
2. Describe what happens to the terminal voltage, current, and power delivered to a load as internal resistance of the voltage source increases (due to aging of batteries, for example).
3. Explain why it is beneficial to use more than one voltage source connected in parallel.
When you forget to turn off your car lights, they slowly dim as the battery runs down. Why don’t they simply blink off when the battery’s energy is gone? Their gradual dimming implies that battery output voltage decreases as the battery is depleted.
Furthermore, if you connect an excessive number of 12-V lights in parallel to a car battery, they will be dim even when the battery is fresh and even if the wires to the lights have very low resistance. This implies that the battery’s output voltage is reduced by the overload.
The reason for the decrease in output voltage for depleted or overloaded batteries is that all voltage sources have two fundamental parts—a source of electrical energy and an internal resistance. Let us examine both.
### Electromotive Force
You can think of many different types of voltage sources. Batteries themselves come in many varieties. There are many types of mechanical/electrical generators, driven by many different energy sources, ranging from nuclear to wind. Solar cells create voltages directly from light, while thermoelectric devices create voltage from temperature differences.
A few voltage sources are shown in . All such devices create a potential difference and can supply current if connected to a resistance. On the small scale, the potential difference creates an electric field that exerts force on charges, causing current. We thus use the name electromotive force, abbreviated emf.
Emf is not a force at all; it is a special type of potential difference. To be precise, the electromotive force (emf) is the potential difference of a source when no current is flowing. Units of emf are volts.
Electromotive force is directly related to the source of potential difference, such as the particular combination of chemicals in a battery. However, emf differs from the voltage output of the device when current flows. The voltage across the terminals of a battery, for example, is less than the emf when the battery supplies current, and it declines further as the battery is depleted or loaded down. However, if the device’s output voltage can be measured without drawing current, then output voltage will equal emf (even for a very depleted battery).
### Internal Resistance
As noted before, a 12-V truck battery is physically larger, contains more charge and energy, and can deliver a larger current than a 12-V motorcycle battery. Both are lead-acid batteries with identical emf, but, because of its size, the truck battery has a smaller internal resistance $r$. Internal resistance is the inherent resistance to the flow of current within the source itself.
is a schematic representation of the two fundamental parts of any voltage source. The emf (represented by a script E in the figure) and internal resistance $r$ are in series. The smaller the internal resistance for a given emf, the more current and the more power the source can supply.
The internal resistance $r$ can behave in complex ways. As noted, $r$ increases as a battery is depleted. But internal resistance may also depend on the magnitude and direction of the current through a voltage source, its temperature, and even its history. The internal resistance of rechargeable nickel-cadmium cells, for example, depends on how many times and how deeply they have been depleted.
Why are the chemicals able to produce a unique potential difference? Quantum mechanical descriptions of molecules, which take into account the types of atoms and numbers of electrons in them, are able to predict the energy states they can have and the energies of reactions between them.
In the case of a lead-acid battery, an energy of 2 eV is given to each electron sent to the anode. Voltage is defined as the electrical potential energy divided by charge: $V = \frac{\text{PE}}{q}$. An electron volt is the energy given to a single electron by a voltage of 1 V. So the voltage here is 2 V, since 2 eV is given to each electron. It is the energy produced in each molecular reaction that produces the voltage. A different reaction produces a different energy and, hence, a different voltage.
### Terminal Voltage
The voltage output of a device is measured across its terminals and, thus, is called its terminal voltage $V$. Terminal voltage is given by

$$V = \text{emf} - Ir,$$

where $r$ is the internal resistance and $I$ is the current flowing at the time of the measurement.

$I$ is positive if current flows away from the positive terminal, as shown in . You can see that the larger the current, the smaller the terminal voltage. And it is likewise true that the larger the internal resistance, the smaller the terminal voltage.

Suppose a load resistance $R_{\text{load}}$ is connected to a voltage source, as in . Since the resistances are in series, the total resistance in the circuit is $R_{\text{load}} + r$. Thus the current is given by Ohm’s law to be

$$I = \frac{\text{emf}}{R_{\text{load}} + r}.$$

We see from this expression that the smaller the internal resistance $r$, the greater the current the voltage source supplies to its load $R_{\text{load}}$. As batteries are depleted, $r$ increases. If $r$ becomes a significant fraction of the load resistance, then the current is significantly reduced, as the following example illustrates.
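The following sketch shows how the terminal voltage and delivered power fall as the internal resistance grows; the emf and load resistance are assumed values for illustration.

```python
# Sketch: terminal voltage V = emf - I*r as the internal resistance grows
# (for example, as a battery ages).  The emf and load are assumed values.
emf = 12.0          # volts
R_load = 10.0       # ohms

for r in (0.05, 0.5, 2.0, 5.0):            # internal resistance (ohms)
    I = emf / (R_load + r)                 # current through the circuit
    V_terminal = emf - I * r               # voltage actually seen by the load
    P_load = I**2 * R_load                 # power delivered to the load
    print(f"r = {r:>4} ohm: I = {I:.2f} A, V = {V_terminal:.2f} V, P = {P_load:.1f} W")
```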
Battery testers, such as those in , use small load resistors to intentionally draw current to determine whether the terminal voltage drops below an acceptable level. They really test the internal resistance of the battery. If internal resistance is high, the battery is weak, as evidenced by its low terminal voltage.
Some batteries can be recharged by passing a current through them in the direction opposite to the current they supply to a resistance. This is done routinely in cars and batteries for small electrical appliances and electronic devices, and is represented pictorially in . The voltage output of the battery charger must be greater than the emf of the battery to reverse current through it. This will cause the terminal voltage of the battery to be greater than the emf, since $V = \text{emf} - Ir$, and $I$ is now negative.
### Multiple Voltage Sources
There are two voltage sources when a battery charger is used. Voltage sources connected in series are relatively simple. When voltage sources are in series, their internal resistances add and their emfs add algebraically. (See .) Series connections of voltage sources are common—for example, in flashlights, toys, and other appliances. Usually, the cells are in series in order to produce a larger total emf.
But if the cells oppose one another, such as when one is put into an appliance backward, the total emf is less, since it is the algebraic sum of the individual emfs.
A battery is a multiple connection of voltaic cells, as shown in . The disadvantage of series connections of cells is that their internal resistances add. One of the authors once owned a 1957 MGA that had two 6-V batteries in series, rather than a single 12-V battery. This arrangement produced a large internal resistance that caused him many problems in starting the engine.
If the series connection of two voltage sources is made into a complete circuit with the emfs in opposition, then a current of magnitude

$$I = \frac{\text{emf}_1 - \text{emf}_2}{r_1 + r_2}$$

flows. See , for example, which shows a circuit exactly analogous to the battery charger discussed above. If two voltage sources in series with emfs in the same sense are connected to a load $R_{\text{load}}$, as in , then

$$I = \frac{\text{emf}_1 + \text{emf}_2}{r_1 + r_2 + R_{\text{load}}}$$

flows.

shows two voltage sources with identical emfs in parallel and connected to a load resistance. In this simple case, the total emf is the same as the individual emfs. But the total internal resistance is reduced, since the internal resistances are in parallel. The parallel connection thus can produce a larger current.

Here,

$$I = \frac{\text{emf}}{r_{\text{tot}} + R_{\text{load}}}$$

flows through the load, and $r_{\text{tot}}$ is less than those of the individual batteries. For example, some diesel-powered cars use two 12-V batteries in parallel; they produce a total emf of 12 V but can deliver the larger current needed to start a diesel engine.
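A brief sketch comparing the two arrangements follows; the per-cell emf, internal resistance, and load resistance are assumed values.

```python
# Sketch: combining two identical cells in series versus in parallel.
# The cell emf, internal resistance, and load are assumed values.
emf, r = 1.5, 0.1          # one cell: 1.5 V, 0.1 ohm (assumed)
R_load = 2.0               # ohms

# Series: emfs add, internal resistances add
I_series = (emf + emf) / (r + r + R_load)

# Parallel (identical cells): emf unchanged, internal resistances combine in parallel
r_tot = 1.0 / (1.0 / r + 1.0 / r)          # r/2 for two identical cells
I_parallel = emf / (r_tot + R_load)

print(f"Series:   I = {I_series:.2f} A at a total emf of {2 * emf} V")
print(f"Parallel: I = {I_parallel:.2f} A at an emf of {emf} V")
```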
### Animals as Electrical Detectors
A number of animals both produce and detect electrical signals. Fish, sharks, platypuses, and echidnas (spiny anteaters) all detect electric fields generated by nerve activity in prey. Electric eels produce their own emf through biological cells (electric organs) called electroplaques, which are arranged in both series and parallel as a set of batteries.
Electroplaques are flat, disk-like cells; those of the electric eel have a voltage of 0.15 V across each one. These cells are usually located toward the head or tail of the animal, although in the case of the electric eel, they are found along the entire body. The electroplaques in the South American eel are arranged in 140 rows, with each row stretching horizontally along the body and containing 5,000 electroplaques. This can yield an emf of approximately 600 V, and a current of 1 A—deadly.
The mechanism for detection of external electric fields is similar to that for producing nerve signals in the cell through depolarization and repolarization—the movement of ions across the cell membrane. Within the fish, weak electric fields in the water produce a current in a gel-filled canal that runs from the skin to sensing cells, producing a nerve signal. The Australian platypus, one of the very few mammals that lay eggs, can detect extremely weak electric fields, while sharks have been found to be able to sense fields in their snouts that are weaker still. Electric eels use their own electric fields produced by the electroplaques to stun their prey or enemies.
### Solar Cell Arrays
Another example dealing with multiple voltage sources is that of combinations of solar cells—wired in both series and parallel combinations to yield a desired voltage and current. Photovoltaic generation (PV), the conversion of sunlight directly into electricity, is based upon the photoelectric effect, in which photons hitting the surface of a solar cell create an electric current in the cell.
Most solar cells are made from pure silicon—either as single-crystal silicon, or as a thin film of silicon deposited upon a glass or metal backing. Most single cells have a voltage output of about 0.5 V, while the current output is a function of the amount of sunlight upon the cell (the incident solar radiation—the insolation). Under bright noon sunlight, a current of about of cell surface area is produced by typical single-crystal cells.
Individual solar cells are connected electrically in modules to meet electrical-energy needs. They can be wired together in series or in parallel—connected like the batteries discussed earlier. A solar-cell array or module usually consists of between 36 and 72 cells, with a power output of 50 W to 140 W.
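To see how series and parallel wiring set a module’s output, here is a small sketch. The per-cell voltage and current are assumed values for illustration; real cells vary with insolation, temperature, and cell area.

```python
# Estimate the output of a solar module from its series/parallel wiring.
# Per-cell values are assumptions for illustration.
v_cell = 0.5      # V per cell (typical order of magnitude)
i_cell = 3.0      # A per cell under bright sun (assumed for a given cell area)

def module_output(n_series, n_parallel, v_cell, i_cell):
    """Series strings add voltage; parallel strings add current."""
    voltage = n_series * v_cell
    current = n_parallel * i_cell
    return voltage, current, voltage * current

V, I, P = module_output(n_series=36, n_parallel=2, v_cell=v_cell, i_cell=i_cell)
print(f"Module: {V:.1f} V, {I:.1f} A, about {P:.0f} W")
```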
The output of the solar cells is direct current. For most uses in a home, AC is required, so a device called an inverter must be used to convert the DC to AC. Any extra output can then be passed on to the outside electrical grid for sale to the utility.
### Test Prep for AP Courses
### Section Summary
1. All voltage sources have two fundamental parts—a source of electrical energy that has a characteristic electromotive force (emf), and an internal resistance $r$.
2. The emf is the potential difference of a source when no current is flowing.
3. The numerical value of the emf depends on the source of potential difference.
4. The internal resistance of a voltage source affects the output voltage when a current flows.
5. The voltage output of a device is called its terminal voltage $V$ and is given by $V = \text{emf} - Ir$, where $I$ is the electric current and is positive when flowing away from the positive terminal of the voltage source.
6. When multiple voltage sources are in series, their internal resistances add and their emfs add algebraically.
7. Solar cells can be wired in series or parallel to provide increased voltage or current, respectively.
### Conceptual Questions
### Problem Exercises
# Circuits and DC Instruments
## Kirchhoff’s Rules
### Learning Objectives
By the end of this section, you will be able to:
1. Analyze a complex circuit using Kirchhoff’s rules, using the conventions for determining the correct signs of various terms.
Many complex circuits, such as the one in , cannot be analyzed with the series-parallel techniques developed in Resistors in Series and Parallel and Electromotive Force: Terminal Voltage. There are, however, two circuit analysis rules that can be used to analyze any circuit, simple or complex. These rules are special cases of the laws of conservation of charge and conservation of energy. The rules are known as Kirchhoff’s rules, after their inventor Gustav Kirchhoff (1824–1887).
Explanations of the two rules will now be given, followed by problem-solving hints for applying Kirchhoff’s rules, and a worked example that uses them.
### Kirchhoff’s First Rule
Kirchhoff’s first rule (the junction rule) is an application of the conservation of charge to a junction; it is illustrated in . Current is the flow of charge, and charge is conserved; thus, whatever charge flows into the junction must flow out. Kirchhoff’s first rule requires that $I_1 = I_2 + I_3$ (see figure). Equations like this can and will be used to analyze circuits and to solve circuit problems.
### Kirchhoff’s Second Rule
Kirchhoff’s second rule (the loop rule) is an application of conservation of energy. The loop rule is stated in terms of potential, $V$, rather than potential energy, but the two are related since $\text{PE}_{\text{elec}} = qV$. Recall that emf is the potential difference of a source when no current is flowing. In a closed loop, whatever energy is supplied by emf must be transferred into other forms by devices in the loop, since there are no other ways in which energy can be transferred into or out of the circuit. illustrates the changes in potential in a simple series circuit loop.
Kirchhoff’s second rule requires $\text{emf} - Ir - IR_1 - IR_2 = 0$. Rearranged, this is $\text{emf} = Ir + IR_1 + IR_2$, which means the emf equals the sum of the $IR$ (voltage) drops in the loop.
### Applying Kirchhoff’s Rules
By applying Kirchhoff’s rules, we generate equations that allow us to find the unknowns in circuits. The unknowns may be currents, emfs, or resistances. Each time a rule is applied, an equation is produced. If there are as many independent equations as unknowns, then the problem can be solved. There are two decisions you must make when applying Kirchhoff’s rules. These decisions determine the signs of various quantities in the equations you obtain from applying the rules.
1. When applying Kirchhoff’s first rule, the junction rule, you must label the current in each branch and decide in what direction it is going. For example, in , , and , currents are labeled $I_1$, $I_2$, and $I_3$, and arrows indicate their directions. There is no risk here, for if you choose the wrong direction, the current will be of the correct magnitude but negative.
2. When applying Kirchhoff’s second rule, the loop rule, you must identify a closed loop and decide in which direction to go around it, clockwise or counterclockwise. For example, in the loop was traversed in the same direction as the current (clockwise). Again, there is no risk; going around the circuit in the opposite direction reverses the sign of every term in the equation, which is like multiplying both sides of the equation by $-1$.
The following points will help you get the plus or minus signs right when applying the loop rule; a short numerical sketch applying them follows the list. Note that the resistors and emfs are traversed by going from a to b. In many circuits, it will be necessary to construct more than one loop. In traversing each loop, one needs to be consistent for the sign of the change in potential. (See .)
1. When a resistor $R$ is traversed in the same direction as the current, the change in potential is $-IR$. (See .)
2. When a resistor $R$ is traversed in the direction opposite to the current, the change in potential is $+IR$. (See .)
3. When an emf is traversed from $-$ to $+$ (the same direction it moves positive charge), the change in potential is $+\text{emf}$. (See .)
4. When an emf is traversed from $+$ to $-$ (opposite to the direction it moves positive charge), the change in potential is $-\text{emf}$. (See .)
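Applying these sign conventions around each loop, together with the junction rule, produces a set of simultaneous linear equations. The sketch below solves one such set for a hypothetical two-loop circuit (two emfs, three resistors, and currents $I_1 = I_2 + I_3$); the component values and loop orientations are invented purely to show the bookkeeping, not taken from a figure in this text.

```python
import numpy as np

# Hypothetical two-loop circuit: emf1 and R1 in one branch, R2 in the middle
# branch, emf2 and R3 in the third branch.  I1 splits into I2 and I3.
# All component values are assumed.
emf1, emf2 = 12.0, 6.0          # V
R1, R2, R3 = 4.0, 6.0, 3.0      # ohms

# Unknowns: I1, I2, I3
# Junction rule:          I1 - I2 - I3          = 0
# Loop 1 (emf1, R1, R2):  emf1 - I1*R1 - I2*R2  = 0
# Loop 2 (R2, R3, emf2):  I2*R2 - I3*R3 - emf2  = 0
A = np.array([[1.0, -1.0, -1.0],
              [R1,   R2,   0.0],
              [0.0, -R2,   R3]])
b = np.array([0.0, emf1, -emf2])

I1, I2, I3 = np.linalg.solve(A, b)
print(f"I1 = {I1:.3f} A, I2 = {I2:.3f} A, I3 = {I3:.3f} A")
```

A negative result would simply mean the true current flows opposite to the direction assumed when labeling it.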
The material in this section is correct in theory. We should be able to verify it by making measurements of current and voltage. In fact, some of the devices used to make such measurements are straightforward applications of the principles covered so far and are explored in the next modules. As we shall see, a very basic, even profound, fact results—making a measurement alters the quantity being measured.
### Test Prep for AP Courses
### Section Summary
1. Kirchhoff’s rules can be used to analyze any circuit, simple or complex.
2. Kirchhoff’s first rule—the junction rule: The sum of all currents entering a junction must equal the sum of all currents leaving the junction.
3. Kirchhoff’s second rule—the loop rule: The algebraic sum of changes in potential around any closed circuit path (loop) must be zero.
4. The two rules are based, respectively, on the laws of conservation of charge and energy.
5. When calculating potential and current using Kirchhoff’s rules, a set of conventions must be followed for determining the correct signs of various terms.
6. The simpler series and parallel rules are special cases of Kirchhoff’s rules.
### Conceptual Questions
### Problem Exercises
# Circuits and DC Instruments
## DC Voltmeters and Ammeters
### Learning Objectives
By the end of this section, you will be able to:
1. Explain why a voltmeter must be connected in parallel with the circuit.
2. Draw a diagram showing an ammeter correctly connected in a circuit.
3. Describe how a galvanometer can be used as either a voltmeter or an ammeter.
4. Find the resistance that must be placed in series with a galvanometer to allow it to be used as a voltmeter with a given reading.
5. Explain why measuring the voltage or current in a circuit can never be exact.
Voltmeters measure voltage, whereas ammeters measure current. Some of the meters in automobile dashboards, digital cameras, cell phones, and tuner-amplifiers are voltmeters or ammeters. (See .) The internal construction of the simplest of these meters and how they are connected to the system they monitor give further insight into applications of series and parallel connections.
Voltmeters are connected in parallel with whatever device’s voltage is to be measured. A parallel connection is used because objects in parallel experience the same potential difference. (See , where the voltmeter is represented by the symbol V.)
Ammeters are connected in series with whatever device’s current is to be measured. A series connection is used because objects in series have the same current passing through them. (See , where the ammeter is represented by the symbol A.)
### Analog Meters: Galvanometers
Analog meters have a needle that swivels to point at numbers on a scale, as opposed to digital meters, which have numerical readouts similar to a hand-held calculator. The heart of most analog meters is a device called a galvanometer, denoted by G. Current flow through a galvanometer, $I_G$, produces a proportional needle deflection. (This deflection is due to the force of a magnetic field upon a current-carrying wire.)
The two crucial characteristics of a given galvanometer are its resistance and current sensitivity. Current sensitivity is the current that gives a full-scale deflection of the galvanometer’s needle, the maximum current that the instrument can measure. For example, a galvanometer with a current sensitivity of $50\ \mu\text{A}$ has a maximum deflection of its needle when $50\ \mu\text{A}$ flows through it, reads half-scale when $25\ \mu\text{A}$ flows through it, and so on.
If such a galvanometer has a $25\text{-}\Omega$ resistance, then a voltage of only $V = IR = (50\ \mu\text{A})(25\ \Omega) = 1.25\ \text{mV}$ produces a full-scale reading. By connecting resistors to this galvanometer in different ways, you can use it as either a voltmeter or ammeter that can measure a broad range of voltages or currents.
### Galvanometer as Voltmeter
shows how a galvanometer can be used as a voltmeter by connecting it in series with a large resistance, $R$. The value of the resistance is determined by the maximum voltage to be measured. Suppose you want 10 V to produce a full-scale deflection of a voltmeter containing a $25\text{-}\Omega$ galvanometer with a $50\text{-}\mu\text{A}$ sensitivity. Then 10 V applied to the meter must produce a current of $50\ \mu\text{A}$. The total resistance must be
$R_{\text{tot}} = R + r = \dfrac{V}{I} = \dfrac{10\ \text{V}}{50\ \mu\text{A}} = 200\ \text{k}\Omega.$
($R$ is so large that the galvanometer resistance, $r$, is nearly negligible.) Note that 5 V applied to this voltmeter produces a half-scale deflection by producing a $25\text{-}\mu\text{A}$ current through the meter, and so the voltmeter’s reading is proportional to voltage as desired.
This voltmeter would not be useful for voltages less than about half a volt, because the meter deflection would be small and difficult to read accurately. For other voltage ranges, other resistances are placed in series with the galvanometer. Many meters have a choice of scales. That choice involves switching an appropriate resistance into series with the galvanometer.
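A short sketch of this resistance calculation for a few full-scale ranges appears below. It assumes the galvanometer values used in the example above (50-μA sensitivity, 25-Ω internal resistance), which are reconstructed figures rather than measured ones.

```python
# Series resistance needed to turn a galvanometer into a voltmeter.
# Assumed galvanometer: full-scale current 50 uA, internal resistance 25 ohms.
I_fs = 50e-6   # A, full-scale (sensitivity)
r_g = 25.0     # ohms, galvanometer resistance (assumed)

def series_resistance(V_fullscale, I_fs=I_fs, r_g=r_g):
    """R such that V_fullscale drives exactly I_fs through R + r_g."""
    return V_fullscale / I_fs - r_g

for V in (1.0, 10.0, 100.0):
    print(f"{V:6.1f} V range: R = {series_resistance(V):,.0f} ohms")
```

Switching ranges on a real meter amounts to switching between such precomputed series resistances.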
### Galvanometer as Ammeter
The same galvanometer can also be made into an ammeter by placing it in parallel with a small resistance $R$, often called the shunt resistance, as shown in . Since the shunt resistance is small, most of the current passes through it, allowing an ammeter to measure currents much greater than those producing a full-scale deflection of the galvanometer.
Suppose, for example, an ammeter is needed that gives a full-scale deflection for 1.0 A and contains the same $25\text{-}\Omega$ galvanometer with its $50\text{-}\mu\text{A}$ sensitivity. Since $R$ and $r$ are in parallel, the voltage across them is the same.
These $IR$ drops are equal, so that $I_G r = I_R R$. Solving for $R$, and noting that $I_G$ is $50\ \mu\text{A}$ and $I_R$ is 0.999950 A, we have
$R = r\dfrac{I_G}{I_R} = (25\ \Omega)\dfrac{50\ \mu\text{A}}{0.999950\ \text{A}} = 1.25 \times 10^{-3}\ \Omega.$
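The same arithmetic in a short sketch, again assuming the galvanometer values used above and the 1.0-A full-scale target:

```python
# Shunt resistance that turns the galvanometer into a 1.0-A ammeter.
I_fs_meter = 1.0     # A, desired full-scale current of the ammeter
I_g = 50e-6          # A, galvanometer full-scale current (assumed)
r_g = 25.0           # ohms, galvanometer resistance (assumed)

# Parallel combination: same voltage across shunt and galvanometer,
# so I_g * r_g = (I_fs_meter - I_g) * R_shunt.
R_shunt = I_g * r_g / (I_fs_meter - I_g)
print(f"R_shunt = {R_shunt:.2e} ohms")   # about 1.25e-3 ohms
```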
### Taking Measurements Alters the Circuit
When you use a voltmeter or ammeter, you are connecting another resistor to an existing circuit and, thus, altering the circuit. Ideally, voltmeters and ammeters do not appreciably affect the circuit, but it is instructive to examine the circumstances under which they do or do not interfere.
First, consider the voltmeter, which is always placed in parallel with the device being measured. Very little current flows through the voltmeter if its resistance is a few orders of magnitude greater than the device, and so the circuit is not appreciably affected. (See (a).) (A large resistance in parallel with a small one has a combined resistance essentially equal to the small one.) If, however, the voltmeter’s resistance is comparable to that of the device being measured, then the two in parallel have a smaller resistance, appreciably affecting the circuit. (See (b).) The voltage across the device is not the same as when the voltmeter is out of the circuit.
An ammeter is placed in series in the branch of the circuit being measured, so that its resistance adds to that branch. Normally, the ammeter’s resistance is very small compared with the resistances of the devices in the circuit, and so the extra resistance is negligible. (See (a).) However, if very small load resistances are involved, or if the ammeter is not as low in resistance as it should be, then the total series resistance is significantly greater, and the current in the branch being measured is reduced. (See (b).)
A practical problem can occur if the ammeter is connected incorrectly. If it were put in parallel with the resistor to measure the current in it, you could possibly damage the meter; the low resistance of the ammeter would allow most of the current in the circuit to go through the galvanometer, and this current would be larger since the effective resistance is smaller.
One solution to the problem of voltmeters and ammeters interfering with the circuits being measured is to use galvanometers with greater sensitivity. This allows construction of voltmeters with greater resistance and ammeters with smaller resistance than when less sensitive galvanometers are used.
There are practical limits to galvanometer sensitivity, but it is possible to get analog meters that make measurements accurate to a few percent. Note that the inaccuracy comes from altering the circuit, not from a fault in the meter.
### Section Summary
1. Voltmeters measure voltage, and ammeters measure current.
2. A voltmeter is placed in parallel with the voltage source to receive full voltage and must have a large resistance to limit its effect on the circuit.
3. An ammeter is placed in series to get the full current flowing through a branch and must have a small resistance to limit its effect on the circuit.
4. Both can be based on the combination of a resistor and a galvanometer, a device that gives an analog reading of current.
5. Standard voltmeters and ammeters alter the circuit being measured and are thus limited in accuracy.
### Conceptual Questions
### Problem Exercises
# Circuits and DC Instruments
## Null Measurements
### Learning Objectives
By the end of this section, you will be able to:
1. Explain why a null measurement device is more accurate than a standard voltmeter or ammeter.
2. Demonstrate how a Wheatstone bridge can be used to accurately calculate the resistance in a circuit.
Standard measurements of voltage and current alter the circuit being measured, introducing uncertainties in the measurements. Voltmeters draw some extra current, whereas ammeters reduce current flow. Null measurements balance voltages so that there is no current flowing through the measuring device and, therefore, no alteration of the circuit being measured.
Null measurements are generally more accurate but are also more complex than the use of standard voltmeters and ammeters, and they still have limits to their precision. In this module, we shall consider a few specific types of null measurements, because they are common and interesting, and they further illuminate principles of electric circuits.
### The Potentiometer
Suppose you wish to measure the emf of a battery. Consider what happens if you connect the battery directly to a standard voltmeter as shown in . (Once we note the problems with this measurement, we will examine a null measurement that improves accuracy.) As discussed before, the actual quantity measured is the terminal voltage $V$, which is related to the emf of the battery by $V = \text{emf} - Ir$, where $I$ is the current that flows and $r$ is the internal resistance of the battery.
The emf could be accurately calculated if $r$ were very accurately known, but it usually is not. If the current $I$ could be made zero, then $V = \text{emf}$, and so emf could be directly measured. However, standard voltmeters need a current to operate; thus, another technique is needed.
A potentiometer is a null measurement device for measuring potentials (voltages). (See .) A voltage source is connected to a resistor $R$, say, a long wire, and passes a constant current through it. There is a steady drop in potential (an $IR$ drop) along the wire, so that a variable potential can be obtained by making contact at varying locations along the wire.
(b) shows an unknown $\text{emf}_x$ (represented by script $\mathcal{E}_x$ in the figure) connected in series with a galvanometer. Note that $\text{emf}_x$ opposes the other voltage source. The location of the contact point (see the arrow on the drawing) is adjusted until the galvanometer reads zero. When the galvanometer reads zero, $\text{emf}_x = IR_x$, where $R_x$ is the resistance of the section of wire up to the contact point. Since no current flows through the galvanometer, none flows through the unknown emf, and so $\text{emf}_x$ is directly sensed.
Now, a very precisely known standard $\text{emf}_s$ is substituted for $\text{emf}_x$, and the contact point is adjusted until the galvanometer again reads zero, so that $\text{emf}_s = IR_s$. In both cases, no current passes through the galvanometer, and so the current $I$ through the long wire is the same. Upon taking the ratio $\dfrac{\text{emf}_x}{\text{emf}_s}$, $I$ cancels, giving
$\dfrac{\text{emf}_x}{\text{emf}_s} = \dfrac{IR_x}{IR_s} = \dfrac{R_x}{R_s}.$
Solving for $\text{emf}_x$ gives
$\text{emf}_x = \text{emf}_s \dfrac{R_x}{R_s}.$
Because a long uniform wire is used for $R$, the ratio of resistances $R_x / R_s$ is the same as the ratio of the lengths of wire that zero the galvanometer for each emf. The three quantities on the right-hand side of the equation are now known or measured, and $\text{emf}_x$ can be calculated. The uncertainty in this calculation can be considerably smaller than when using a voltmeter directly, but it is not zero. There is always some uncertainty in the ratio of resistances $R_x / R_s$ and in the standard $\text{emf}_s$. Furthermore, it is not possible to tell when the galvanometer reads exactly zero, which introduces error into both $R_x$ and $R_s$, and may also affect the current $I$.
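Since the resistance ratio reduces to a length ratio for a uniform wire, the calculation is a one-liner. The sketch below applies $\text{emf}_x = \text{emf}_s (l_x / l_s)$; the balance lengths are invented, and the standard-cell value is merely a typical Weston-cell figure used as an assumption.

```python
# Potentiometer: unknown emf from the standard emf and the two balance lengths.
# Numbers are illustrative assumptions.
emf_s = 1.0186     # V, standard cell emf (typical Weston-cell value, assumed here)
l_s = 50.9         # cm, wire length that balances the standard (assumed)
l_x = 73.5         # cm, wire length that balances the unknown (assumed)

emf_x = emf_s * (l_x / l_s)   # resistance ratio = length ratio for a uniform wire
print(f"emf_x = {emf_x:.4f} V")
```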
### Resistance Measurements and the Wheatstone Bridge
There is a variety of so-called ohmmeters that purport to measure resistance. What the most common ohmmeters actually do is to apply a voltage to a resistance, measure the current, and calculate the resistance using Ohm’s law. Their readout is this calculated resistance. Two configurations for ohmmeters using standard voltmeters and ammeters are shown in . Such configurations are limited in accuracy, because the meters alter both the voltage applied to the resistor and the current that flows through it.
The Wheatstone bridge is a null measurement device for calculating resistance by balancing potential drops in a circuit. (See .) The device is called a bridge because the galvanometer forms a bridge between two branches. A variety of bridge devices are used to make null measurements in circuits.
Resistors $R_1$ and $R_2$ are precisely known, while the arrow through $R_3$ indicates that it is a variable resistance. The value of $R_3$ can be precisely read. With the unknown resistance $R_x$ in the circuit, $R_3$ is adjusted until the galvanometer reads zero. The potential difference between points b and d is then zero, meaning that b and d are at the same potential. With no current running through the galvanometer, it has no effect on the rest of the circuit. So the branches abc and adc are in parallel, and each branch has the full voltage of the source. That is, the $IR$ drops along abc and adc are the same. Since b and d are at the same potential, the $IR$ drop along ad must equal the $IR$ drop along ab. Thus,
$I_1 R_1 = I_2 R_3.$
Again, since b and d are at the same potential, the $IR$ drop along dc must equal the $IR$ drop along bc. Thus,
$I_1 R_2 = I_2 R_x.$
Taking the ratio of these last two expressions gives
$\dfrac{I_1 R_1}{I_1 R_2} = \dfrac{I_2 R_3}{I_2 R_x}.$
Canceling the currents and solving for $R_x$ yields
$R_x = R_3 \dfrac{R_2}{R_1}.$
This equation is used to calculate the unknown resistance when current through the galvanometer is zero. This method can be very accurate (often to four significant digits), but it is limited by two factors. First, it is not possible to get the current through the galvanometer to be exactly zero. Second, there are always uncertainties in $R_1$, $R_2$, and $R_3$, which contribute to the uncertainty in $R_x$.
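A minimal numerical sketch of the balance condition, with invented resistor values:

```python
# Wheatstone bridge at balance: Rx = R3 * (R2 / R1).
# Resistor values are invented for illustration.
R1 = 100.0      # ohms, precisely known
R2 = 1000.0     # ohms, precisely known
R3 = 47.3       # ohms, variable-resistor reading at balance (assumed)

Rx = R3 * R2 / R1
print(f"Rx = {Rx:.1f} ohms")   # 473.0 ohms
```

Because the result depends only on a resistance ratio and the setting of $R_3$, the accuracy is limited by how well those three resistances are known, exactly as stated above.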
### Section Summary
1. Null measurement techniques achieve greater accuracy by balancing a circuit so that no current flows through the measuring device.
2. One such device, for determining voltage, is a potentiometer.
3. Another null measurement device, for determining resistance, is the Wheatstone bridge.
4. Other physical quantities can also be measured with null measurement techniques.
### Conceptual Questions
### Problem Exercises
# Circuits and DC Instruments
## DC Circuits Containing Resistors and Capacitors
When you use a flash camera, it takes a few seconds to charge the capacitor that powers the flash. The light flash discharges the capacitor in a tiny fraction of a second. Why does charging take longer than discharging? This question and a number of other phenomena that involve charging and discharging capacitors are discussed in this module.
### RC Circuits
An $RC$ circuit is one containing a resistor $R$ and a capacitor $C$. The capacitor is an electrical component that stores electric charge.
shows a simple RC circuit that employs a DC (direct current) voltage source. The capacitor is initially uncharged. As soon as the switch is closed, current flows to and from the initially uncharged capacitor. As charge increases on the capacitor plates, there is increasing opposition to the flow of charge by the repulsion of like charges on each plate.
In terms of voltage, this is because voltage across the capacitor is given by $V_C = Q/C$, where $Q$ is the amount of charge stored on each plate and $C$ is the capacitance. This voltage opposes the battery, growing from zero to the maximum emf when fully charged. The current thus decreases from its initial value of $I_0 = \dfrac{\text{emf}}{R}$ to zero as the voltage on the capacitor reaches the same value as the emf. When there is no current, there is no $IR$ drop, and so the voltage on the capacitor must then equal the emf of the voltage source. This can also be explained with Kirchhoff’s second rule (the loop rule), discussed in Kirchhoff’s Rules, which says that the algebraic sum of changes in potential around any closed loop must be zero.
The initial current is $I_0 = \dfrac{\text{emf}}{R}$, because all of the $IR$ drop is in the resistance. Therefore, the smaller the resistance, the faster a given capacitor will be charged. Note that the internal resistance of the voltage source is included in $R$, as are the resistances of the capacitor and the connecting wires. In the flash camera scenario above, when the batteries powering the camera begin to wear out, their internal resistance rises, reducing the current and lengthening the time it takes to get ready for the next flash.
Voltage on the capacitor is initially zero and rises rapidly at first, since the initial current is a maximum. (b) shows a graph of capacitor voltage versus time ($t$) starting when the switch is closed at $t = 0$. The voltage approaches emf asymptotically, since the closer it gets to emf the less current flows. The equation for voltage versus time when charging a capacitor $C$ through a resistor $R$, derived using calculus, is
$V = \text{emf}\left(1 - e^{-t/RC}\right) \quad \text{(charging)},$
where $V$ is the voltage across the capacitor, emf is equal to the emf of the DC voltage source, and the exponential $e = 2.718\ldots$ is the base of the natural logarithm. Note that the units of $RC$ are seconds. We define
$\tau = RC,$
where $\tau$ (the Greek letter tau) is called the time constant for an $RC$ circuit. As noted before, a small resistance $R$ allows the capacitor to charge faster. This is reasonable, since a larger current flows through a smaller resistance. It is also reasonable that the smaller the capacitor $C$, the less time needed to charge it. Both factors are contained in $\tau = RC$.
More quantitatively, consider what happens when $t = \tau = RC$. Then the voltage on the capacitor is
$V = \text{emf}\left(1 - e^{-1}\right) = 0.632\,\text{emf}.$
This means that in the time $\tau = RC$, the voltage rises to 0.632 of its final value. The voltage will rise 0.632 of the remainder in the next time $\tau$. It is a characteristic of the exponential function that the final value is never reached, but 0.632 of the remainder to that value is achieved in every time interval $\tau$. In just a few multiples of the time constant $\tau$, then, the final value is very nearly achieved, as the graph in (b) illustrates.
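The charging law is easy to tabulate. The sketch below uses assumed component values and prints the capacitor voltage at whole multiples of the time constant, showing the approach to the emf.

```python
import math

# Charging a capacitor through a resistor; component values are assumed.
emf = 9.0        # V, source emf
R = 10e3         # ohms
C = 100e-6       # farads
tau = R * C      # time constant, seconds (1.0 s here)

def v_charging(t):
    """Voltage across the capacitor while charging: V = emf * (1 - e^(-t/RC))."""
    return emf * (1.0 - math.exp(-t / (R * C)))

for n in range(6):
    t = n * tau
    print(f"t = {n} tau = {t:.1f} s: V = {v_charging(t):.3f} V")
```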
### Discharging a Capacitor
Discharging a capacitor through a resistor proceeds in a similar fashion, as illustrates. Initially, the current is $I_0 = \dfrac{V_0}{R}$, driven by the initial voltage $V_0$ on the capacitor. As the voltage decreases, the current and hence the rate of discharge decreases, implying another exponential formula for $V$. Using calculus, the voltage $V$ on a capacitor $C$ being discharged through a resistor $R$ is found to be
$V = V_0\, e^{-t/RC} \quad \text{(discharging)}.$
The graph in (b) is an example of this exponential decay. Again, the time constant is $\tau = RC$. A small resistance $R$ allows the capacitor to discharge in a small time, since the current is larger. Similarly, a small capacitance requires less time to discharge, since less charge is stored. In the first time interval $\tau = RC$ after the switch is closed, the voltage falls to 0.368 of its initial value, since $V = V_0\, e^{-1} = 0.368\,V_0$.
During each successive time $\tau$, the voltage falls to 0.368 of its preceding value. In a few multiples of $\tau$, the voltage becomes very close to zero, as indicated by the graph in (b).
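Discharge follows the mirror-image law $V = V_0 e^{-t/RC}$. A short sketch, again with assumed values, estimates how long the capacitor takes to fall below a chosen threshold.

```python
import math

# Discharging a capacitor through a resistor; values are assumed.
V0 = 9.0         # V, initial capacitor voltage
R = 500.0        # ohms, discharge resistance
C = 100e-6       # farads
tau = R * C      # 0.05 s

def v_discharging(t):
    """Voltage across the capacitor while discharging."""
    return V0 * math.exp(-t / tau)

# Time for the voltage to fall below 1% of its initial value:
t_1pct = -tau * math.log(0.01)
print(f"tau = {tau*1000:.0f} ms; V falls below 1% of V0 after {t_1pct*1000:.0f} ms")
```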
Now we can explain why the flash camera in our scenario takes so much longer to charge than discharge; the resistance while charging is significantly greater than while discharging. The internal resistance of the battery accounts for most of the resistance while charging. As the battery ages, the increasing internal resistance makes the charging process even slower. (You may have noticed this.)
The flash discharge is through a low-resistance ionized gas in the flash tube and proceeds very rapidly. Flash photographs, such as in , can capture a brief instant of a rapid motion because the flash can be less than a microsecond in duration. Such flashes can be made extremely intense.
During World War II, nighttime reconnaissance photographs were made from the air with a single flash illuminating more than a square kilometer of enemy territory. The brevity of the flash eliminated blurring due to the surveillance aircraft’s motion. Today, an important use of intense flash lamps is to pump energy into a laser. The short intense flash can rapidly energize a laser and allow it to reemit the energy in another form.
### RC Circuits for Timing
$RC$ circuits are commonly used for timing purposes. A mundane example of this is found in the ubiquitous intermittent wiper systems of modern cars. The time between wipes is varied by adjusting the resistance in an $RC$ circuit. Another example of an $RC$ circuit is found in novelty jewelry, Halloween costumes, and various toys that have battery-powered flashing lights. (See for a timing circuit.)
A more crucial use of $RC$ circuits for timing purposes is in the artificial pacemaker, used to control heart rate. The heart rate is normally controlled by electrical signals generated by the sino-atrial (SA) node, which is on the wall of the right atrium chamber. This causes the muscles to contract and pump blood. Sometimes the heart rhythm is abnormal and the heartbeat is too high or too low.
The artificial pacemaker is inserted near the heart to provide electrical signals to the heart when needed with the appropriate time constant. Pacemakers have sensors that detect body motion and breathing to increase the heart rate during exercise to meet the body’s increased needs for blood and oxygen.
### Test Prep for AP Courses
### Section Summary
1. An $RC$ circuit is one that has both a resistor and a capacitor.
2. The time constant $\tau$ for an $RC$ circuit is $\tau = RC$.
3. When an initially uncharged ($V_0 = 0$ at $t = 0$) capacitor in series with a resistor is charged by a DC voltage source, the voltage rises, asymptotically approaching the emf of the voltage source; as a function of time,
$V = \text{emf}\left(1 - e^{-t/RC}\right) \quad \text{(charging)}.$
4. Within the span of each time constant $\tau$, the voltage rises by 0.632 of the remaining value, approaching the final voltage asymptotically.
5. If a capacitor with an initial voltage $V_0$ is discharged through a resistor starting at $t = 0$, then its voltage decreases exponentially as given by
$V = V_0\, e^{-t/RC} \quad \text{(discharging)}.$
6. In each time constant $\tau$, the voltage falls by 0.368 of its remaining initial value, approaching zero asymptotically.
### Conceptual Questions
### Problem Exercises
# Magnetism
## Connection for AP® Courses
Magnetism plays a major role in your everyday life. All electric motors, with uses as diverse as powering refrigerators, starting cars, and moving elevators, contain magnets. Magnetic resonance imaging (MRI) has become an important diagnostic tool in the field of medicine, and the use of magnetism to explore brain activity is a subject of contemporary research and development. Other applications of magnetism include computer memory, levitation of high-speed trains, the aurora borealis, and, of course, the first important historical use of magnetism: navigation. You will find all of these applications of magnetism linked by a small number of underlying principles.
In this chapter, you will learn that both the internal properties of an object and the movement of charged particles can generate a magnetic field, and you will learn why all magnetic fields have a north and south pole. You will also learn how magnetic fields exert forces on objects, resulting in the magnetic alignment that makes a compass work. You will learn how we use this principle to weigh the smallest of subatomic particles with precision and contain superheated plasma to facilitate nuclear fusion.
Big Idea 1 Objects and systems have properties such as mass and charge. Systems may have internal structure.
Enduring Understanding 1.E Materials have many macroscopic properties that result from the arrangement and interactions of the atoms and molecules that make up the material.
Essential Knowledge 1.E.5 Matter has a property called magnetic permeability.
Essential Knowledge 1.E.6 Matter has a property called magnetic dipole moment.
Big Idea 2 Fields existing in space can be used to explain interactions.
Enduring Understanding 2.D A magnetic field is caused by a magnet or a moving electrically charged object. Magnetic fields observed in nature always seem to be produced either by moving charged objects or by magnetic dipoles or combinations of dipoles and never by single poles.
Essential Knowledge 2.D.1 The magnetic field exerts a force on a moving electrically charged object. That magnetic force is perpendicular to the direction of the velocity of the object and to the magnetic field and is proportional to the magnitude of the charge, the magnitude of the velocity, and the magnitude of the magnetic field. It also depends on the angle between the velocity and the magnetic field vectors. Treatment is quantitative for angles of 0°, 90°, or 180° and qualitative for other angles.
Essential Knowledge 2.D.2 The magnetic field vectors around a straight wire that carries electric current are tangent to concentric circles centered on that wire. The field has no component toward the current-carrying wire.
Essential Knowledge 2.D.3 A magnetic dipole placed in a magnetic field, such as the ones created by a magnet or the Earth, will tend to align with the magnetic field vector.
Essential Knowledge 2.D.4 Ferromagnetic materials contain magnetic domains that are themselves magnets.
Big Idea 3 The interactions of an object with other objects can be described by forces.
Enduring Understanding 3.C At the macroscopic level, forces can be categorized as either long-range (action-at-a-distance) forces or contact forces.
Essential Knowledge 3.C.3 A magnetic force results from the interaction of a moving charged object or a magnet with other moving charged objects or another magnet.
Big Idea 4 Interactions between systems can result in changes in those systems.
Enduring Understanding 4.E The electric and magnetic properties of a system can change in response to the presence of, or changes in, other objects or systems.
Essential Knowledge 4.E.1 The magnetic properties of some materials can be affected by magnetic fields at the system. Students should focus on the underlying concepts and not the use of the vocabulary.
# Magnetism
## Magnets
### Learning Objectives
By the end of this section, you will be able to:
1. Describe the difference between the north and south poles of a magnet.
2. Describe how magnetic poles interact with each other.
All magnets attract iron, such as that in a refrigerator door. However, magnets may attract or repel other magnets. Experimentation shows that all magnets have two poles. If freely suspended, one pole will point toward the north. The two poles are thus named the north magnetic pole and the south magnetic pole (or more properly, north-seeking and south-seeking poles, for the attractions in those directions).
The fact that magnetic poles always occur in pairs of north and south is true from the very large scale—for example, sunspots always occur in pairs that are north and south magnetic poles—all the way down to the very small scale. Magnetic atoms have both a north pole and a south pole, as do many types of subatomic particles, such as electrons, protons, and neutrons.
### Section Summary
1. Magnetism is a subject that includes the properties of magnets, the effect of the magnetic force on moving charges and currents, and the creation of magnetic fields by currents.
2. There are two types of magnetic poles, called the north magnetic pole and south magnetic pole.
3. North magnetic poles are those that are attracted toward the Earth’s geographic north pole.
4. Like poles repel and unlike poles attract.
5. Magnetic poles always occur in pairs of north and south—it is not possible to isolate north and south poles.
### Conceptual Questions
# Magnetism
## Ferromagnets and Electromagnets
### Learning Objectives
By the end of this section, you will be able to:
1. Define ferromagnet.
2. Describe the role of magnetic domains in magnetization.
3. Explain the significance of the Curie temperature.
4. Describe the relationship between electricity and magnetism.
### Ferromagnets
Only certain materials, such as iron, cobalt, nickel, and gadolinium, exhibit strong magnetic effects. Such materials are called ferromagnetic, after the Latin word for iron, ferrum. A group of materials made from the alloys of the rare earth elements are also used as strong and permanent magnets; a popular one is neodymium. Other materials exhibit weak magnetic effects, which are detectable only with sensitive instruments. Not only do ferromagnetic materials respond strongly to magnets (the way iron is attracted to magnets), they can also be magnetized themselves—that is, they can be induced to be magnetic or made into permanent magnets.
When a magnet is brought near a previously unmagnetized ferromagnetic material, it causes local magnetization of the material with unlike poles closest, as in . (This results in the attraction of the previously unmagnetized material to the magnet.) What happens on a microscopic scale is illustrated in . The regions within the material called domains act like small bar magnets. Within domains, the poles of individual atoms are aligned. Each atom acts like a tiny bar magnet. Domains are small and randomly oriented in an unmagnetized ferromagnetic object. In response to an external magnetic field, the domains may grow to millimeter size, aligning themselves as shown in (b). This induced magnetization can be made permanent if the material is heated and then cooled, or simply tapped in the presence of other magnets.
Conversely, a permanent magnet can be demagnetized by hard blows or by heating it in the absence of another magnet. Increased thermal motion at higher temperature can disrupt and randomize the orientation and the size of the domains. There is a well-defined temperature for ferromagnetic materials, which is called the Curie temperature, above which they cannot be magnetized. The Curie temperature for iron is 1043 K, which is well above room temperature. There are several elements and alloys that have Curie temperatures much lower than room temperature and are ferromagnetic only below those temperatures.
### Electromagnets
Early in the 19th century, it was discovered that electrical currents cause magnetic effects. The first significant observation was by the Danish scientist Hans Christian Oersted (1777–1851), who found that a compass needle was deflected by a current-carrying wire. This was the first significant evidence that the movement of charges had any connection with magnets. Electromagnetism is the use of electric current to make magnets. These temporarily induced magnets are called electromagnets. Electromagnets are employed for everything from a wrecking yard crane that lifts scrapped cars to controlling the beam of a 90-km-circumference particle accelerator to the magnets in medical imaging machines (See ).
shows the response of iron filings to a current-carrying coil and to a permanent bar magnet. The patterns are similar. In fact, electromagnets and ferromagnets have the same basic characteristics—for example, they have north and south poles that cannot be separated and for which like poles repel and unlike poles attract.
Combining a ferromagnet with an electromagnet can produce particularly strong magnetic effects. (See .) Whenever strong magnetic effects are needed, such as lifting scrap metal, or in particle accelerators, electromagnets are enhanced by ferromagnetic materials. Limits to how strong the magnets can be made are imposed by coil resistance (it will overheat and melt at sufficiently high current), and so superconducting magnets may be employed. These are still limited, because superconducting properties are destroyed by too great a magnetic field.
shows a few uses of combinations of electromagnets and ferromagnets. Ferromagnetic materials can act as memory devices, because the orientation of the magnetic fields of small domains can be reversed or erased. Magnetic information storage on videotapes and computer hard drives are among the most common applications. This property is vital in our digital world.
### Current: The Source of All Magnetism
An electromagnet creates magnetism with an electric current. In later sections we explore this more quantitatively, finding the strength and direction of magnetic fields created by various currents. But what about ferromagnets? shows models of how electric currents create magnetism at the submicroscopic level. (Note that we cannot directly observe the paths of individual electrons about atoms, and so a model or visual image, consistent with all direct observations, is made. We can directly observe the electron’s orbital angular momentum, its spin momentum, and subsequent magnetic moments, all of which are explained with electric-current-creating subatomic magnetism.) Currents, including those associated with other submicroscopic particles like protons, allow us to explain ferromagnetism and all other magnetic effects. Ferromagnetism, for example, results from an internal cooperative alignment of electron spins, possible in some materials but not in others.
Crucial to the statement that electric current is the source of all magnetism is the fact that it is impossible to separate north and south magnetic poles. (This is far different from the case of positive and negative charges, which are easily separated.) A current loop always produces a magnetic dipole—that is, a magnetic field that acts like a north pole and south pole pair. Since isolated north and south magnetic poles, called magnetic monopoles, are not observed, currents are used to explain all magnetic effects. If magnetic monopoles did exist, then we would have to modify this underlying connection that all magnetism is due to electrical current. There is no known reason that magnetic monopoles should not exist—they are simply never observed—and so searches at the subnuclear level continue. If they do not exist, we would like to find out why not. If they do exist, we would like to see evidence of them.
### Test Prep for AP Courses
### Section Summary
1. Magnetic poles always occur in pairs of north and south—it is not possible to isolate north and south poles.
2. All magnetism is created by electric current.
3. Ferromagnetic materials, such as iron, are those that exhibit strong magnetic effects.
4. The atoms in ferromagnetic materials act like small magnets (due to currents within the atoms) and can be aligned, usually in millimeter-sized regions called domains.
5. Domains can grow and align on a larger scale, producing permanent magnets. Such a material is magnetized, or induced to be magnetic.
6. Above a material’s Curie temperature, thermal agitation destroys the alignment of atoms, and ferromagnetism disappears.
7. Electromagnets employ electric currents to make magnetic fields, often aided by induced fields in ferromagnetic materials.
# Magnetism
## Magnetic Fields and Magnetic Field Lines
### Learning Objectives
By the end of this section, you will be able to:
1. Define magnetic field and describe the magnetic field lines of various magnetic fields.
Einstein is said to have been fascinated by a compass as a child, perhaps musing on how the needle felt a force without direct physical contact. His ability to think deeply and clearly about action at a distance, particularly for gravitational, electric, and magnetic forces, later enabled him to create his revolutionary theory of relativity. Since magnetic forces act at a distance, we define a magnetic field to represent magnetic forces. The pictorial representation of magnetic field lines is very useful in visualizing the strength and direction of the magnetic field. As shown in , the direction of magnetic field lines is defined to be the direction in which the north end of a compass needle points. The magnetic field is traditionally called the $B$-field.
Small compasses used to test a magnetic field will not disturb it. (This is analogous to the way we tested electric fields with a small test charge. In both cases, the fields represent only the object creating them and not the probe testing them.) shows how the magnetic field appears for a current loop and a long straight wire, as could be explored with small compasses. A small compass placed in these fields will align itself parallel to the field line at its location, with its north pole pointing in the direction of B. Note the symbols used for field into and out of the paper.
Extensive exploration of magnetic fields has revealed a number of hard-and-fast rules. We use magnetic field lines to represent the field (the lines are a pictorial tool, not a physical entity in and of themselves). The properties of magnetic field lines can be summarized by these rules:
1. The direction of the magnetic field is tangent to the field line at any point in space. A small compass will point in the direction of the field line.
2. The strength of the field is proportional to the closeness of the lines. It is exactly proportional to the number of lines per unit area perpendicular to the lines (called the areal density).
3. Magnetic field lines can never cross, meaning that the field is unique at any point in space.
4. Magnetic field lines are continuous, forming closed loops without beginning or end. They go from the north pole to the south pole.
The last property is related to the fact that the north and south poles cannot be separated. It is a distinct difference from electric field lines, which begin and end on the positive and negative charges. If magnetic monopoles existed, then magnetic field lines would begin and end on them.
### Section Summary
1. Magnetic fields can be pictorially represented by magnetic field lines, the properties of which are as follows:
1. The field is tangent to the magnetic field line.
2. Field strength is proportional to the line density.
3. Field lines cannot cross.
4. Field lines are continuous loops.
### Conceptual Questions
# Magnetism
## Magnetic Field Strength: Force on a Moving Charge in a Magnetic Field
### Learning Objectives
By the end of this section, you will be able to:
1. Describe the effects of magnetic fields on moving charges.
2. Use the right hand rule 1 to determine the velocity of a charge, the direction of the magnetic field, and the direction of the magnetic force on a moving charge.
3. Calculate the magnetic force on a moving charge.
What is the mechanism by which one magnet exerts a force on another? The answer is related to the fact that all magnetism is caused by current, the flow of charge. Magnetic fields exert forces on moving charges, and so they exert forces on other magnets, all of which have moving charges.
### Right Hand Rule 1
The magnetic force on a moving charge is one of the most fundamental known. Magnetic force is as important as the electrostatic or Coulomb force. Yet the magnetic force is more complex, in both the number of factors that affect it and in its direction, than the relatively simple Coulomb force. The magnitude of the magnetic force $F$ on a charge $q$ moving at a speed $v$ in a magnetic field of strength $B$ is given by
$F = qvB\sin\theta,$
where $\theta$ is the angle between the directions of $\mathbf{v}$ and $\mathbf{B}$. This force is often called the Lorentz force. In fact, this is how we define the magnetic field strength $B$—in terms of the force on a charged particle moving in a magnetic field. The SI unit for magnetic field strength is called the tesla (T) after the eccentric but brilliant inventor Nikola Tesla (1856–1943). To determine how the tesla relates to other SI units, we solve $F = qvB\sin\theta$ for $B$:
$B = \dfrac{F}{qv\sin\theta}.$
Because $\sin\theta$ is unitless, the tesla is
$1\ \text{T} = \dfrac{1\ \text{N}}{\text{C}\cdot\text{m/s}} = \dfrac{1\ \text{N}}{\text{A}\cdot\text{m}}$
(note that C/s = A).
Another smaller unit, called the gauss (G), where $1\ \text{G} = 10^{-4}\ \text{T}$, is sometimes used. The strongest permanent magnets have fields near 2 T; superconducting electromagnets may attain 10 T or more. The Earth’s magnetic field on its surface is only about $5 \times 10^{-5}\ \text{T}$, or 0.5 G.
The direction of the magnetic force $\mathbf{F}$ is perpendicular to the plane formed by $\mathbf{v}$ and $\mathbf{B}$, as determined by the right hand rule 1 (or RHR-1), which is illustrated in . RHR-1 states that, to determine the direction of the magnetic force on a positive moving charge, you point the thumb of the right hand in the direction of $\mathbf{v}$, the fingers in the direction of $\mathbf{B}$, and a perpendicular to the palm points in the direction of $\mathbf{F}$. One way to remember this is that there is one velocity, and so the thumb represents it. There are many field lines, and so the fingers represent them. The force is in the direction you would push with your palm. The force on a negative charge is in exactly the opposite direction to that on a positive charge.
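A short numerical sketch of $F = qvB\sin\theta$, using the elementary charge and a field comparable to Earth’s; the particle speed is an assumed value.

```python
import math

# Magnitude of the magnetic force on a moving charge: F = q v B sin(theta).
q = 1.60e-19        # C, elementary charge
v = 1.0e7           # m/s, assumed particle speed
B = 5.0e-5          # T, roughly the Earth's surface field
for theta_deg in (0.0, 30.0, 90.0):
    F = q * v * B * math.sin(math.radians(theta_deg))
    print(f"theta = {theta_deg:5.1f} deg: F = {F:.2e} N")
```

The force vanishes for motion parallel to the field and peaks when the motion is perpendicular to it.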
### Test Prep for AP Courses
### Section Summary
1. Magnetic fields exert a force on a moving charge $q$, the magnitude of which is
$F = qvB\sin\theta,$
where $\theta$ is the angle between the directions of $\mathbf{v}$ and $\mathbf{B}$.
2. The SI unit for magnetic field strength $B$ is the tesla (T), which is related to other units by
$1\ \text{T} = \dfrac{1\ \text{N}}{\text{C}\cdot\text{m/s}} = \dfrac{1\ \text{N}}{\text{A}\cdot\text{m}}.$
3. The direction of the force on a moving charge is given by right hand rule 1 (RHR-1): Point the thumb of the right hand in the direction of $\mathbf{v}$, the fingers in the direction of $\mathbf{B}$, and a perpendicular to the palm points in the direction of $\mathbf{F}$.
4. The force is perpendicular to the plane formed by $\mathbf{v}$ and $\mathbf{B}$. Since the force is zero if $\mathbf{v}$ is parallel to $\mathbf{B}$, charged particles often follow magnetic field lines rather than cross them.
### Conceptual Questions
### Problems & Exercises
# Magnetism
## Force on a Moving Charge in a Magnetic Field: Examples and Applications
### Learning Objectives
By the end of this section, you will be able to:
1. Describe the effects of a magnetic field on a moving charge.
2. Calculate the radius of curvature of the path of a charge that is moving in a magnetic field.
Magnetic force can cause a charged particle to move in a circular or spiral path. Cosmic rays are energetic charged particles in outer space, some of which approach the Earth. They can be forced into spiral paths by the Earth’s magnetic field. Protons in giant accelerators are kept in a circular path by magnetic force. The bubble chamber photograph in shows charged particles moving in such curved paths. The curved paths of charged particles in magnetic fields are the basis of a number of phenomena and can even be used analytically, such as in a mass spectrometer.
So does the magnetic force cause circular motion? Magnetic force is always perpendicular to velocity, so that it does no work on the charged particle. The particle’s kinetic energy and speed thus remain constant. The direction of motion is affected, but not the speed. This is typical of uniform circular motion. The simplest case occurs when a charged particle moves perpendicular to a uniform $B$-field, such as shown in . (If this takes place in a vacuum, the magnetic field is the dominant factor determining the motion.) Here, the magnetic force supplies the centripetal force $F_c = \dfrac{mv^2}{r}$. Noting that $\sin\theta = 1$, we see that $F = qvB$.
Because the magnetic force $F$ supplies the centripetal force $F_c$, we have
$qvB = \dfrac{mv^2}{r}.$
Solving for $r$ yields
$r = \dfrac{mv}{qB}.$
Here, $r$ is the radius of curvature of the path of a charged particle with mass $m$ and charge $q$, moving at a speed $v$ perpendicular to a magnetic field of strength $B$. If the velocity is not perpendicular to the magnetic field, then $v$ is the component of the velocity perpendicular to the field. The component of the velocity parallel to the field is unaffected, since the magnetic force is zero for motion parallel to the field. This produces a spiral motion rather than a circular one.
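The radius $r = mv/(qB)$ is simple to evaluate. The sketch below does so for a proton moving perpendicular to a field comparable to Earth’s, with an assumed speed.

```python
# Radius of the circular path of a charged particle: r = m v / (q B).
m_p = 1.67e-27      # kg, proton mass
q = 1.60e-19        # C, proton charge
v = 5.0e6           # m/s, assumed speed (perpendicular component)
B = 5.0e-5          # T, roughly the Earth's surface field

r = m_p * v / (q * B)
print(f"r = {r:.2e} m  (about {r/1000:.0f} km)")
```

Stronger fields or slower particles tighten the circle, which is why laboratory magnets can confine particles in paths of centimeters rather than kilometers.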
shows how electrons not moving perpendicular to magnetic field lines follow the field lines. The component of velocity parallel to the lines is unaffected, and so the charges spiral along the field lines. If field strength increases in the direction of motion, the field will exert a force to slow the charges, forming a kind of magnetic mirror, as shown below.
The properties of charged particles in magnetic fields are related to such different things as the Aurora Australis or Aurora Borealis and particle accelerators. Charged particles approaching magnetic field lines may get trapped in spiral orbits about the lines rather than crossing them, as seen above. Some cosmic rays, for example, follow the Earth’s magnetic field lines, entering the atmosphere near the magnetic poles and causing the southern or northern lights through their ionization of molecules in the atmosphere. This glow of energized atoms and molecules is seen in Introduction to Magnetism. Those particles that approach middle latitudes must cross magnetic field lines, and many are prevented from penetrating the atmosphere. Cosmic rays are a component of background radiation; consequently, they give a higher radiation dose at the poles than at the equator.
Some incoming charged particles become trapped in the Earth’s magnetic field, forming two belts above the atmosphere known as the Van Allen radiation belts after the discoverer James A. Van Allen, an American astrophysicist. (See .) Particles trapped in these belts form radiation fields (similar to nuclear radiation) so intense that piloted space flights avoid them and satellites with sensitive electronics are kept out of them. In the few minutes it took lunar missions to cross the Van Allen radiation belts, astronauts received radiation doses more than twice the allowed annual exposure for radiation workers. Other planets have similar belts, especially those having strong magnetic fields like Jupiter.
Back on Earth, we have devices that employ magnetic fields to contain charged particles. Among them are the giant particle accelerators that have been used to explore the substructure of matter. (See .) Magnetic fields not only control the direction of the charged particles, they also are used to focus particles into beams and overcome the repulsion of like charges in these beams.
Thermonuclear fusion (like that occurring in the Sun) is a hope for a future clean energy source. One of the most promising devices is the tokamak, which uses magnetic fields to contain (or trap) and direct the reactive charged particles. (See .) Less exotic, but more immediately practical, amplifiers in microwave ovens use a magnetic field to contain oscillating electrons. These oscillating electrons generate the microwaves sent into the oven.
Mass spectrometers have a variety of designs, and many use magnetic fields to measure mass. The curvature of a charged particle’s path in the field is related to its mass and is measured to obtain mass information. (See More Applications of Magnetism.) Historically, such techniques were employed in the first direct observations of electron charge and mass. Today, mass spectrometers (sometimes coupled with gas chromatographs) are used to determine the make-up and sequencing of large biological molecules.
### Test Prep for AP Courses
### Section Summary
1. Magnetic force can supply centripetal force and cause a charged particle to move in a circular path of radius
$r = \dfrac{mv}{qB},$
where $v$ is the component of the velocity perpendicular to $B$ for a charged particle with mass $m$ and charge $q$.
### Conceptual Questions
### Problems & Exercises
If you need additional support for these problems, see More Applications of Magnetism.
# Magnetism
## The Hall Effect
### Learning Objectives
By the end of this section, you will be able to:
1. Describe the Hall effect.
2. Calculate the Hall emf across a current-carrying conductor.
We have seen effects of a magnetic field on free-moving charges. The magnetic field also affects charges moving in a conductor. One result is the Hall effect, which has important implications and applications.
shows what happens to charges moving through a conductor in a magnetic field. The field is perpendicular to the electron drift velocity and to the width of the conductor. Note that conventional current is to the right in both parts of the figure. In part (a), electrons carry the current and move to the left. In part (b), positive charges carry the current and move to the right. Moving electrons feel a magnetic force toward one side of the conductor, leaving a net positive charge on the other side. This separation of charge creates a voltage $\varepsilon$, known as the Hall emf, across the conductor. The creation of a voltage across a current-carrying conductor by a magnetic field is known as the Hall effect, after Edwin Hall, the American physicist who discovered it in 1879.
One very important use of the Hall effect is to determine whether positive or negative charges carry the current. Note that in (b), where positive charges carry the current, the Hall emf has the sign opposite to when negative charges carry the current. Historically, the Hall effect was used to show that electrons carry current in metals, and it also shows that positive charges carry current in some semiconductors. The Hall effect is used today as a research tool to probe the movement of charges, their drift velocities and densities, and so on, in materials. In 1980, it was discovered that the Hall effect is quantized, an example of quantum behavior in a macroscopic object.
The Hall effect has other uses that range from the determination of blood flow rate to precision measurement of magnetic field strength. To examine these quantitatively, we need an expression for the Hall emf, $\varepsilon$, across a conductor. Consider the balance of forces on a moving charge in a situation where $B$, $E$, and $v$ are mutually perpendicular, such as shown in . Although the magnetic force moves negative charges to one side, they cannot build up without limit. The electric field caused by their separation opposes the magnetic force, $F = qvB$, and the electric force, $F = qE$, eventually grows to equal it. That is,
$qE = qvB$
or
$E = vB.$
Note that the electric field $E$ is uniform across the conductor because the magnetic field $B$ is uniform, as is the conductor. For a uniform electric field, the relationship between electric field and voltage is $E = \varepsilon / l$, where $l$ is the width of the conductor and $\varepsilon$ is the Hall emf. Entering this into the last expression gives
$\dfrac{\varepsilon}{l} = vB.$
Solving this for the Hall emf yields
$\varepsilon = Blv \quad (B,\ v,\ \text{and}\ l\ \text{mutually perpendicular}),$
where $\varepsilon$ is the Hall effect voltage across a conductor of width $l$ through which charges move at a speed $v$.
One of the most common uses of the Hall effect is in the measurement of magnetic field strength $B$. Such devices, called Hall probes, can be made very small, allowing fine position mapping. Hall probes can also be made very accurate, usually accomplished by careful calibration. Another application of the Hall effect is to measure fluid flow in any fluid that has free charges (most do). (See .) A magnetic field applied perpendicular to the flow direction produces a Hall emf $\varepsilon$ as shown. Note that the sign of $\varepsilon$ depends not on the sign of the charges, but only on the directions of $B$ and $v$. The magnitude of the Hall emf is $\varepsilon = Bl\bar{v}$, where $l$ is the pipe diameter, so that the average velocity $\bar{v}$ can be determined from $\varepsilon$, provided the other factors are known.
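A quick sketch of $\varepsilon = Bl\bar{v}$ for the flow-measurement case; the pipe diameter, applied field, and flow speed are assumed values chosen only to show the order of magnitude of the emf.

```python
# Hall emf across a pipe of conducting fluid: emf = B * l * v_avg.
B = 0.10        # T, applied field (assumed)
l = 4.0e-3      # m, pipe diameter (assumed)
v_avg = 0.20    # m/s, average flow speed (assumed)

emf = B * l * v_avg
print(f"Hall emf = {emf*1e6:.1f} microvolts")

# Inverting the relation recovers the flow speed from a measured emf:
v_from_emf = emf / (B * l)
print(f"Recovered v = {v_from_emf:.2f} m/s")
```

The emf is only tens of microvolts here, which is why practical Hall flowmeters rely on sensitive amplification and careful calibration.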
### Test Prep for AP Courses
### Section Summary
1. The Hall effect is the creation of a voltage $\varepsilon$, known as the Hall emf, across a current-carrying conductor by a magnetic field.
2. The Hall emf is given by
$$\varepsilon = Blv \quad (B,\ v,\ \text{and}\ l\ \text{mutually perpendicular})$$
for a conductor of width $l$ through which charges move at a speed $v$.
### Conceptual Questions
### Problems & Exercises
|
# Magnetism
## Magnetic Force on a Current-Carrying Conductor
### Learning Objectives
By the end of this section, you will be able to:
1. Describe the effects of a magnetic force on a current-carrying conductor.
2. Calculate the magnetic force on a current-carrying conductor.
Because charges ordinarily cannot escape a conductor, the magnetic force on charges moving in a conductor is transmitted to the conductor itself.
We can derive an expression for the magnetic force on a current by taking a sum of the magnetic forces on individual charges. (The forces add because they are in the same direction.) The force on an individual charge moving at the drift velocity $v_d$ is given by $F = qv_d B\sin\theta$. Taking $B$ to be uniform over a length of wire $l$ and zero elsewhere, the total magnetic force on the wire is then $F = (qv_d B\sin\theta)N$, where $N$ is the number of charge carriers in the section of wire of length $l$. Now, $N = nV$, where $n$ is the number of charge carriers per unit volume and $V$ is the volume of wire in the field. Noting that $V = Al$, where $A$ is the cross-sectional area of the wire, then the force on the wire is $F = (qv_d B\sin\theta)(nAl)$. Gathering terms,
$$F = (nqAv_d)\, lB\sin\theta.$$
Because $nqAv_d = I$ (see Current),
$$F = IlB\sin\theta$$
is the equation for magnetic force on a length $l$ of wire carrying current $I$ in a uniform magnetic field $B$, as shown in . If we divide both sides of this expression by $l$, we find that the magnetic force per unit length of wire in a uniform field is $F/l = IB\sin\theta$. The direction of this force is given by RHR-1, with the thumb in the direction of the current $I$. Then, with the fingers in the direction of $B$, a perpendicular to the palm points in the direction of $F$, as in . A short numerical sketch follows.
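The sketch below evaluates $F = IlB\sin\theta$ for a few angles. The current, length, and field values are illustrative assumptions, not values from the text.

```python
import math

# Magnetic force on a current-carrying wire: F = I * l * B * sin(theta).
# All numbers below are illustrative assumptions.
I = 20.0                      # current, A
l = 0.50                      # length of wire in the field, m
B = 1.5                       # magnetic field strength, T

for deg in (90, 30, 0):       # angle between the current direction and B
    F = I * l * B * math.sin(math.radians(deg))
    print(f"theta = {deg:2d} deg -> F = {F:5.2f} N")
```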
Magnetic force on current-carrying conductors is used to convert electric energy to work. (Motors are a prime example—they employ loops of wire and are considered in the next section.) Magnetohydrodynamics (MHD) is the technical name given to a clever application where magnetic force pumps fluids without moving mechanical parts. (See .)
A strong magnetic field is applied across a tube and a current is passed through the fluid at right angles to the field, resulting in a force on the fluid parallel to the tube axis as shown. The absence of moving parts makes this attractive for moving a hot, chemically active substance, such as the liquid sodium employed in some nuclear reactors. Experimental artificial hearts are being tested with this technique for pumping blood, perhaps circumventing the adverse effects of mechanical pumps. (Cell membranes, however, are affected by the large fields needed in MHD, delaying its practical application in humans.) MHD propulsion for nuclear submarines has been proposed, because it could be considerably quieter than conventional propeller drives. The deterrent value of nuclear submarines is based on their ability to hide and survive a first or second nuclear strike. As we slowly disassemble our nuclear weapons arsenals, the submarine branch will be the last to be decommissioned because of this ability. (See .) Existing MHD drives are heavy and inefficient—much development work is needed.
### Section Summary
1. The magnetic force on current-carrying conductors is given by
$$F = IlB\sin\theta,$$
where $I$ is the current, $l$ is the length of a straight conductor in a uniform magnetic field $B$, and $\theta$ is the angle between $I$ and $B$. The force follows RHR-1 with the thumb in the direction of $I$.
### Conceptual Questions
### Problems & Exercises
|
# Magnetism
## Torque on a Current Loop: Motors and Meters
### Learning Objectives
By the end of this section, you will be able to:
1. Describe how motors and meters work in terms of torque on a current loop.
2. Calculate the torque on a current-carrying loop in a magnetic field.
Motors are the most common application of magnetic force on current-carrying wires. Motors have loops of wire in a magnetic field. When current is passed through the loops, the magnetic field exerts torque on the loops, which rotates a shaft. Electrical energy is converted to mechanical work in the process. (See .)
Let us examine the force on each segment of the loop in to find the torques produced about the axis of the vertical shaft. (This will lead to a useful equation for the torque on the loop.) We take the magnetic field to be uniform over the rectangular loop, which has width $w$ and height $l$. First, we note that the forces on the top and bottom segments are vertical and, therefore, parallel to the shaft, producing no torque. Those vertical forces are equal in magnitude and opposite in direction, so that they also produce no net force on the loop. shows views of the loop from above. Torque is defined as $\tau = rF\sin\theta$, where $F$ is the force, $r$ is the distance from the pivot that the force is applied, and $\theta$ is the angle between $r$ and $F$. As seen in (a), right hand rule 1 gives the forces on the sides to be equal in magnitude and opposite in direction, so that the net force is again zero. However, each force produces a clockwise torque. Since $r = w/2$, the torque on each vertical segment is $(w/2)F\sin\theta$, and the two add to give a total torque
$$\tau = \frac{w}{2}F\sin\theta + \frac{w}{2}F\sin\theta = wF\sin\theta.$$
Now, each vertical segment has a length $l$ that is perpendicular to $B$, so that the force on each is $F = IlB$. Entering $F$ into the expression for torque yields
$$\tau = wIlB\sin\theta.$$
If we have a multiple loop of $N$ turns, we get $N$ times the torque of one loop. Finally, note that the area of the loop is $A = wl$; the expression for the torque becomes
$$\tau = NIAB\sin\theta.$$
This is the torque on a current-carrying loop in a uniform magnetic field. This equation can be shown to be valid for a loop of any shape. The loop carries a current $I$, has $N$ turns, each of area $A$, and the perpendicular to the loop makes an angle $\theta$ with the field $B$. The net force on the loop is zero. A brief numerical sketch follows.
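The sketch below evaluates $\tau = NIAB\sin\theta$ for several angles, showing the maximum at $\theta = 90^\circ$ and zero at $\theta = 0$. The loop parameters are illustrative assumptions, not values from the text.

```python
import math

# Torque on a current loop: tau = N * I * A * B * sin(theta).
# All numbers below are illustrative assumptions.
N = 100        # turns
I = 15.0       # current, A
A = 0.10       # loop area, m^2
B = 2.0        # field strength, T

for deg in (90, 45, 10, 0):
    tau = N * I * A * B * math.sin(math.radians(deg))
    print(f"theta = {deg:2d} deg -> torque = {tau:8.1f} N*m")
```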
The torque found in the preceding example is the maximum. As the coil rotates, the torque decreases to zero at $\theta = 0$. The torque then reverses its direction once the coil rotates past $\theta = 0$. (See (d).) This means that, unless we do something, the coil will oscillate back and forth about equilibrium at $\theta = 0$. To get the coil to continue rotating in the same direction, we can reverse the current as it passes through $\theta = 0$ with automatic switches called brushes. (See .)
Meters, such as those in analog fuel gauges on a car, are another common application of magnetic torque on a current-carrying loop. shows that a meter is very similar in construction to a motor. The meter in the figure has its magnets shaped to limit the effect of $\theta$ by making $B$ perpendicular to the loop over a large angular range. Thus the torque is proportional to $I$ and not $\theta$. A linear spring exerts a counter-torque that balances the current-produced torque. This makes the needle deflection proportional to $I$. If an exact proportionality cannot be achieved, the gauge reading can be calibrated. To produce a galvanometer for use in analog voltmeters and ammeters that have a low resistance and respond to small currents, we use a large loop area $A$, a high magnetic field $B$, and low-resistance coils.
### Section Summary
1. The torque on a current-carrying loop of any shape in a uniform magnetic field is
$$\tau = NIAB\sin\theta,$$
where $N$ is the number of turns, $I$ is the current, $A$ is the area of the loop, $B$ is the magnetic field strength, and $\theta$ is the angle between the perpendicular to the loop and the magnetic field.
### Conceptual Questions
### Problems & Exercises
|
# Magnetism
## Magnetic Fields Produced by Currents: Ampere’s Law
### Learning Objectives
By the end of this section, you will be able to:
1. Calculate current that produces a magnetic field.
2. Use the right hand rule 2 to determine the direction of current or the direction of magnetic field loops.
How much current is needed to produce a significant magnetic field, perhaps as strong as the Earth’s field? Surveyors will tell you that overhead electric power lines create magnetic fields that interfere with their compass readings. Indeed, when Oersted discovered in 1820 that a current in a wire affected a compass needle, he was not dealing with extremely large currents. How does the shape of wires carrying current affect the shape of the magnetic field created? We noted earlier that a current loop created a magnetic field similar to that of a bar magnet, but what about a straight wire or a toroid (doughnut)? How is the direction of a current-created field related to the direction of the current? Answers to these questions are explored in this section, together with a brief discussion of the law governing the fields created by currents.
### Magnetic Field Created by a Long Straight Current-Carrying Wire: Right Hand Rule 2
Magnetic fields have both direction and magnitude. As noted before, one way to explore the direction of a magnetic field is with compasses, as shown for a long straight current-carrying wire in . Hall probes can determine the magnitude of the field. The field around a long straight wire is found to be in circular loops. The right hand rule 2 (RHR-2) emerges from this exploration and is valid for any current segment—point the thumb in the direction of the current, and the fingers curl in the direction of the magnetic field loops created by it.
The magnetic field strength (magnitude) produced by a long straight current-carrying wire is found by experiment to be
$$B = \frac{\mu_0 I}{2\pi r}\quad \text{(long straight wire)},$$
where $I$ is the current, $r$ is the shortest distance to the wire, and the constant $\mu_0 = 4\pi \times 10^{-7}\ \mathrm{T\cdot m/A}$ is the permeability of free space.
$\mu_0$ is one of the basic constants in nature. (We will see later that $\mu_0$ is related to the speed of light.) Since the wire is very long, the magnitude of the field depends only on distance from the wire $r$, not on position along the wire. A short numerical sketch follows.
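The sketch below uses $B = \mu_0 I/(2\pi r)$ to estimate the current needed to match the Earth's field (roughly $5\times 10^{-5}$ T) at a nearby compass, tying back to the question that opened this section. The distance chosen is an illustrative assumption.

```python
import math

mu_0 = 4 * math.pi * 1e-7     # permeability of free space, T*m/A

# B = mu_0 * I / (2*pi*r) for a long straight wire.  The target field and
# distance below are assumptions (Earth's field is roughly 5e-5 T).
B_target = 5.0e-5             # T
r = 0.05                      # distance from the wire, m

I = B_target * 2 * math.pi * r / mu_0
print(f"Current giving {B_target} T at {r*100:.0f} cm: {I:.1f} A")

# Field 1 m away from the same wire, for comparison
print(f"Field 1 m away at that current: {mu_0 * I / (2 * math.pi * 1.0):.2e} T")
```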
### Ampere’s Law and Others
The magnetic field of a long straight wire has more implications than you might at first suspect. Each segment of current produces a magnetic field like that of a long straight wire, and the total field of any shape current is the vector sum of the fields due to each segment. The formal statement of the direction and magnitude of the field due to each segment is called the Biot-Savart law. Integral calculus is needed to sum the field for an arbitrary shape current. This results in a more complete law, called Ampere’s law, which relates magnetic field and current in a general way. Ampere’s law in turn is a part of Maxwell’s equations, which give a complete theory of all electromagnetic phenomena. Considerations of how Maxwell’s equations appear to different observers led to the modern theory of relativity, and the realization that electric and magnetic fields are different manifestations of the same thing. Most of this is beyond the scope of this text in both mathematical level, requiring calculus, and in the amount of space that can be devoted to it. But for the interested student, and particularly for those who continue in physics, engineering, or similar pursuits, delving into these matters further will reveal descriptions of nature that are elegant as well as profound. In this text, we shall keep the general features in mind, such as RHR-2 and the rules for magnetic field lines listed in Magnetic Fields and Magnetic Field Lines, while concentrating on the fields created in certain important situations.
### Magnetic Field Produced by a Current-Carrying Circular Loop
The magnetic field near a current-carrying loop of wire is shown in . Both the direction and the magnitude of the magnetic field produced by a current-carrying loop are complex. RHR-2 can be used to give the direction of the field near the loop, but mapping with compasses and the rules about field lines given in Magnetic Fields and Magnetic Field Lines are needed for more detail. There is a simple formula for the magnetic field strength at the center of a circular loop. It is
$$B = \frac{\mu_0 I}{2R}\quad \text{(at center of loop)},$$
where $R$ is the radius of the loop. This equation is very similar to that for a straight wire, but it is valid only at the center of a circular loop of wire. The similarity of the equations does indicate that similar field strength can be obtained at the center of a loop. One way to get a larger field is to have $N$ loops; then, the field is $B = N\mu_0 I/(2R)$. Note that the larger the loop, the smaller the field at its center, because the current is farther away.
### Magnetic Field Produced by a Current-Carrying Solenoid
A solenoid is a long coil of wire (with many turns or loops, as opposed to a flat loop). Because of its shape, the field inside a solenoid can be very uniform, and also very strong. The field just outside the coils is nearly zero. shows how the field looks and how its direction is given by RHR-2.
The magnetic field inside of a current-carrying solenoid is very uniform in direction and magnitude. Only near the ends does it begin to weaken and change direction. The field outside has similar complexities to flat loops and bar magnets, but the magnetic field strength inside a solenoid is simply
$$B = \mu_0 n I\quad \text{(inside a solenoid)},$$
where $n$ is the number of loops per unit length of the solenoid ($n = N/l$, with $N$ being the number of loops and $l$ the length). Note that $B$ is the field strength anywhere in the uniform region of the interior and not just at the center. Large uniform fields spread over a large volume are possible with solenoids, as the equation implies. A short comparative sketch of the loop and solenoid fields follows.
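The sketch below compares the field at the center of a small flat coil with the field inside a solenoid carrying the same current. All coil dimensions and turn counts are illustrative assumptions, not values from the text.

```python
import math

mu_0 = 4 * math.pi * 1e-7       # T*m/A

# Center of a flat coil of N turns: B = N * mu_0 * I / (2 * R)
N_coil, I, R = 10, 2.0, 0.05    # illustrative assumptions
B_coil = N_coil * mu_0 * I / (2 * R)

# Inside a solenoid: B = mu_0 * n * I, with n = turns per unit length
turns, length = 2000, 0.40      # illustrative assumptions
B_solenoid = mu_0 * (turns / length) * I

print(f"Center of flat coil: {B_coil * 1e3:.3f} mT")
print(f"Inside solenoid:     {B_solenoid * 1e3:.3f} mT")
```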
There are interesting variations of the flat coil and solenoid. For example, the toroidal coil used to confine the reactive particles in tokamaks is much like a solenoid bent into a circle. The field inside a toroid is very strong but circular. Charged particles travel in circles, following the field lines, and collide with one another, perhaps inducing fusion. But the charged particles do not cross field lines and escape the toroid. A whole range of coil shapes are used to produce all sorts of magnetic field shapes. Adding ferromagnetic materials produces greater field strengths and can have a significant effect on the shape of the field. Ferromagnetic materials tend to trap magnetic fields (the field lines bend into the ferromagnetic material, leaving weaker fields outside it) and are used as shields for devices that are adversely affected by magnetic fields, including the Earth’s magnetic field.
### Test Prep for AP Courses
### Section Summary
1. The strength of the magnetic field created by current in a long straight wire is given by
$$B = \frac{\mu_0 I}{2\pi r}\quad \text{(long straight wire)},$$
where $I$ is the current, $r$ is the shortest distance to the wire, and the constant $\mu_0 = 4\pi \times 10^{-7}\ \mathrm{T\cdot m/A}$ is the permeability of free space.
2. The direction of the magnetic field created by a long straight wire is given by right hand rule 2 (RHR-2): Point the thumb of the right hand in the direction of current, and the fingers curl in the direction of the magnetic field loops created by it.
3. The magnetic field created by current following any path is the sum (or integral) of the fields due to segments along the path (magnitude and direction as for a straight wire), resulting in a general relationship between current and field known as Ampere’s law.
4. The magnetic field strength at the center of a circular loop is given by
$$B = \frac{\mu_0 I}{2R}\quad \text{(at center of loop)},$$
where $R$ is the radius of the loop. This equation becomes $B = N\mu_0 I/(2R)$ for a flat coil of $N$ loops. RHR-2 gives the direction of the field about the loop. A long coil is called a solenoid.
5. The magnetic field strength inside a solenoid is
$$B = \mu_0 n I\quad \text{(inside a solenoid)},$$
where $n$ is the number of loops per unit length of the solenoid. The field inside is very uniform in magnitude and direction.
### Conceptual Questions
|
# Magnetism
## Magnetic Force between Two Parallel Conductors
### Learning Objectives
By the end of this section, you will be able to:
1. Describe the effects of the magnetic force between two conductors.
2. Calculate the force between two parallel conductors.
You might expect that there are significant forces between current-carrying wires, since ordinary currents produce significant magnetic fields and these fields exert significant forces on ordinary currents. But you might not expect that the force between wires is used to define the ampere. It might also surprise you to learn that this force has something to do with why large circuit breakers burn up when they attempt to interrupt large currents.
The force between two long straight and parallel conductors separated by a distance $r$ can be found by applying what we have developed in preceding sections. shows the wires, their currents, the fields they create, and the subsequent forces they exert on one another. Let us consider the field produced by wire 1 and the force it exerts on wire 2 (call the force $F_2$). The field due to $I_1$ at a distance $r$ is given to be
$$B_1 = \frac{\mu_0 I_1}{2\pi r}.$$
This field is uniform along wire 2 and perpendicular to it, and so the force $F_2$ it exerts on a length $l$ of wire 2 is given by $F = IlB\sin\theta$ with $\sin\theta = 1$:
$$F_2 = I_2 l B_1.$$
By Newton's third law, the forces on the wires are equal in magnitude, and so we just write $F$ for the magnitude of $F_2$. (Note that $F_1 = -F_2$.) Since the wires are very long, it is convenient to think in terms of $F/l$, the force per unit length. Substituting the expression for $B_1$ into the last equation and rearranging terms gives
$$\frac{F}{l} = \frac{\mu_0 I_1 I_2}{2\pi r}.$$
Here $F/l$ is the force per unit length between two parallel currents $I_1$ and $I_2$ separated by a distance $r$. The force is attractive if the currents are in the same direction and repulsive if they are in opposite directions. A minimal numerical sketch follows.
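The sketch below evaluates $F/l = \mu_0 I_1 I_2/(2\pi r)$, first for the 1 A, 1 m case used in the operational definition of the ampere, then for an everyday-scale pair of wires whose values are illustrative assumptions.

```python
import math

mu_0 = 4 * math.pi * 1e-7       # T*m/A

def force_per_length(I1, I2, r):
    """F/l = mu_0 * I1 * I2 / (2 * pi * r) for long parallel wires, in N/m."""
    return mu_0 * I1 * I2 / (2 * math.pi * r)

# The case behind the operational definition of the ampere: 1 A each, 1 m apart
print(f"1 A, 1 m apart:   {force_per_length(1.0, 1.0, 1.0):.1e} N/m")   # 2.0e-07

# Illustrative assumption: two 15 A wires 1 cm apart
print(f"15 A, 1 cm apart: {force_per_length(15.0, 15.0, 0.01):.2e} N/m")
```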
This force is responsible for the pinch effect in electric arcs and plasmas. The force exists whether the currents are in wires or not. In an electric arc, where currents are moving parallel to one another, there is an attraction that squeezes currents into a smaller tube. In large circuit breakers, like those used in neighborhood power distribution systems, the pinch effect can concentrate an arc between plates of a switch trying to break a large current, burn holes, and even ignite the equipment. Another example of the pinch effect is found in the solar plasma, where jets of ionized material, such as solar flares, are shaped by magnetic forces.
The operational definition of the ampere is based on the force between current-carrying wires. Note that for parallel wires separated by 1 meter with each carrying 1 ampere, the force per meter is
$$\frac{F}{l} = \frac{(4\pi \times 10^{-7}\ \mathrm{T\cdot m/A})(1\ \mathrm{A})^2}{(2\pi)(1\ \mathrm{m})} = 2\times 10^{-7}\ \mathrm{N/m}.$$
Since $\mu_0$ is exactly $4\pi \times 10^{-7}\ \mathrm{T\cdot m/A}$ by definition, and because $1\ \mathrm{T} = 1\ \mathrm{N/(A\cdot m)}$, the force per meter is exactly $2\times 10^{-7}\ \mathrm{N/m}$. This is the basis of the operational definition of the ampere.
Infinite-length straight wires are impractical and so, in practice, a current balance is constructed with coils of wire separated by a few centimeters. Force is measured to determine current. This also provides us with a method for measuring the coulomb. We measure the charge that flows for a current of one ampere in one second. That is, $1\ \mathrm{C} = 1\ \mathrm{A\cdot s}$. For both the ampere and the coulomb, the method of measuring force between conductors is the most accurate in practice.
### Test Prep for AP Courses
### Section Summary
1. The force between two parallel currents $I_1$ and $I_2$, separated by a distance $r$, has a magnitude per unit length given by
$$\frac{F}{l} = \frac{\mu_0 I_1 I_2}{2\pi r}.$$
2. The force is attractive if the currents are in the same direction, repulsive if they are in opposite directions.
### Conceptual Questions
### Problems & Exercises
|
# Magnetism
## More Applications of Magnetism
### Learning Objectives
By the end of this section, you will be able to:
1. Describe some applications of magnetism.
### Mass Spectrometry
The curved paths followed by charged particles in magnetic fields can be put to use. A charged particle moving perpendicular to a magnetic field travels in a circular path having a radius $r = mv/(qB)$.
It was noted that this relationship could be used to measure the mass of charged particles such as ions. A mass spectrometer is a device that measures such masses. Most mass spectrometers use magnetic fields for this purpose, although some of them have extremely sophisticated designs. Since there are five variables in the relationship, there are many possibilities. However, if $v$, $q$, and $B$ can be fixed, then the radius of the path $r$ is simply proportional to the mass $m$ of the charged particle. Let us examine one such mass spectrometer that has a relatively simple design. (See .) The process begins with an ion source, a device like an electron gun. The ion source gives ions their charge, accelerates them to some velocity $v$, and directs a beam of them into the next stage of the spectrometer. This next region is a velocity selector that only allows particles with a particular value of $v$ to get through.
The velocity selector has both an electric field and a magnetic field, perpendicular to one another, producing forces in opposite directions on the ions. Only those ions for which the forces balance travel in a straight line into the next region. If the forces balance, then the electric force $qE$ equals the magnetic force $qvB$, so that $qE = qvB$. Noting that $q$ cancels, we see that
$$v = \frac{E}{B}$$
is the velocity particles must have to make it through the velocity selector, and further, that $v$ can be selected by varying $E$ and $B$. In the final region, there is only a uniform magnetic field, and so the charged particles move in circular arcs with radii proportional to particle mass. The paths also depend on charge $q$, but since $q$ is in multiples of electron charges, it is easy to determine and to discriminate between ions in different charge states. A minimal numerical sketch follows.
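The sketch below chains the two relations for a simple spectrometer: the selected speed $v = E/B$ and the analyzer radius $r = mv/(qB)$. The field strengths and ion mass are illustrative assumptions, not values from the text.

```python
# Velocity selector followed by a magnetic analyzer.
# All numbers below are illustrative assumptions.
E = 1.0e5          # electric field in the selector, V/m
B = 0.200          # magnetic field in selector and analyzer, T
q = 1.60e-19       # charge of a singly ionized atom, C
m = 2.66e-26       # approximate mass of an O-16 ion, kg

v = E / B                      # only this speed passes the selector undeflected
r = m * v / (q * B)            # radius of the circular path in the analyzer
print(f"Selected speed: {v:.2e} m/s")
print(f"Path radius:    {r * 100:.1f} cm")
```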
Mass spectrometry today is used extensively in chemistry and biology laboratories to identify chemical and biological substances according to their mass-to-charge ratios. In medicine, mass spectrometers are used to measure the concentration of isotopes used as tracers. Usually, biological molecules such as proteins are very large, so they are broken down into smaller fragments before analyzing. Recently, large virus particles have been analyzed as a whole on mass spectrometers. Sometimes a gas chromatograph or high-performance liquid chromatograph provides an initial separation of the large molecules, which are then input into the mass spectrometer.
### Cathode Ray Tubes—CRTs—and the Like
What do non-flat-screen TVs, old computer monitors, x-ray machines, and the 2-mile-long Stanford Linear Accelerator have in common? All of them accelerate electrons, making them different versions of the electron gun. Many of these devices use magnetic fields to steer the accelerated electrons. shows the construction of the type of cathode ray tube (CRT) found in some TVs, oscilloscopes, and old computer monitors. Two pairs of coils are used to steer the electrons, one vertically and the other horizontally, to their desired destination.
### Magnetic Resonance Imaging
Magnetic resonance imaging (MRI) is one of the most useful and rapidly growing medical imaging tools. It non-invasively produces two-dimensional and three-dimensional images of the body that provide important medical information with none of the hazards of x-rays. MRI is based on an effect called nuclear magnetic resonance (NMR) in which an externally applied magnetic field interacts with the nuclei of certain atoms, particularly those of hydrogen (protons). These nuclei possess their own small magnetic fields, similar to those of electrons and the current loops discussed earlier in this chapter.
When placed in an external magnetic field, such nuclei experience a torque that pushes or aligns the nuclei into one of two new energy states—depending on the orientation of its spin (analogous to the N pole and S pole in a bar magnet). Transitions from the lower to higher energy state can be achieved by using an external radio frequency signal to “flip” the orientation of the small magnets. (This is actually a quantum mechanical process. The direction of the nuclear magnetic field is quantized as is energy in the radio waves. We will return to these topics in later chapters.) The specific frequency of the radio waves that are absorbed and reemitted depends sensitively on the type of nucleus, the chemical environment, and the external magnetic field strength. Therefore, this is a resonance phenomenon in which nuclei in a magnetic field act like resonators (analogous to those discussed in the treatment of sound in Oscillatory Motion and Waves) that absorb and reemit only certain frequencies. Hence, the phenomenon is named nuclear magnetic resonance (NMR).
NMR has been used for more than 50 years as an analytical tool. It was formulated in 1946 by F. Bloch and E. Purcell, with the 1952 Nobel Prize in Physics going to them for their work. Over the past two decades, NMR has been developed to produce detailed images in a process now called magnetic resonance imaging (MRI), a name coined to avoid the use of the word “nuclear” and the concomitant implication that nuclear radiation is involved. (It is not.) The 2003 Nobel Prize in Medicine went to P. Lauterbur and P. Mansfield for their work with MRI applications.
The largest part of the MRI unit is a superconducting magnet that creates a magnetic field, typically between 1 and 2 T in strength, over a relatively large volume. MRI images can be both highly detailed and informative about structures and organ functions. It is helpful that normal and non-normal tissues respond differently for slight changes in the magnetic field. In most medical images, the protons that are hydrogen nuclei are imaged. (About 2/3 of the atoms in the body are hydrogen.) Their location and density give a variety of medically useful information, such as organ function, the condition of tissue (as in the brain), and the shape of structures, such as vertebral disks and knee-joint surfaces. MRI can also be used to follow the movement of certain ions across membranes, yielding information on active transport, osmosis, dialysis, and other phenomena. With excellent spatial resolution, MRI can provide information about tumors, strokes, shoulder injuries, infections, etc.
An image requires position information as well as the density of a nuclear type (usually protons). By varying the magnetic field slightly over the volume to be imaged, the resonant frequency of the protons is made to vary with position. Broadcast radio frequencies are swept over an appropriate range and nuclei absorb and reemit them only if the nuclei are in a magnetic field with the correct strength. The imaging receiver gathers information through the body almost point by point, building up a tissue map. The reception of reemitted radio waves as a function of frequency thus gives position information. These “slices” or cross sections through the body are only several mm thick. The intensity of the reemitted radio waves is proportional to the concentration of the nuclear type being flipped, as well as information on the chemical environment in that area of the body. Various techniques are available for enhancing contrast in images and for obtaining more information. Scans called T1, T2, or proton density scans rely on different relaxation mechanisms of nuclei. Relaxation refers to the time it takes for the protons to return to equilibrium after the external field is turned off. This time depends upon tissue type and status (such as inflammation).
While MRI images are superior to x rays for certain types of tissue and have none of the hazards of x rays, they do not completely supplant x-ray images. MRI is less effective than x rays for detecting breaks in bone, for example, and in imaging breast tissue, so the two diagnostic tools complement each other. MRI images are also expensive compared to simple x-ray images and tend to be used most often where they supply information not readily obtained from x rays. Another disadvantage of MRI is that the patient is totally enclosed with detectors close to the body for about 30 minutes or more, leading to claustrophobia. It is also difficult for the obese patient to be in the magnet tunnel. New “open-MRI” machines are now available in which the magnet does not completely surround the patient.
Over the last decade, the development of much faster scans, called “functional MRI” (fMRI), has allowed us to map the functioning of various regions in the brain responsible for thought and motor control. This technique measures the change in blood flow for activities (thought, experiences, action) in the brain. The nerve cells increase their consumption of oxygen when active. Blood hemoglobin releases oxygen to active nerve cells and has somewhat different magnetic properties when oxygenated than when deoxygenated. With MRI, we can measure this and detect a blood oxygen-dependent signal. Most of the brain scans today use fMRI.
### Other Medical Uses of Magnetic Fields
Currents in nerve cells and the heart create magnetic fields like any other currents. These can be measured, but with some difficulty, since their strengths are far smaller than the Earth’s magnetic field. Recording of the heart’s magnetic field as it beats is called a magnetocardiogram (MCG), while measurement of the brain’s magnetic field is called a magnetoencephalogram (MEG). Both give information that differs from that obtained by measuring the electric fields of these organs (ECGs and EEGs), but they are not yet of sufficient importance to make these difficult measurements common.
In both of these techniques, the sensors do not touch the body. MCG can be used in fetal studies, and is probably more sensitive than echocardiography. MCG also looks at the heart’s electrical activity whose voltage output is too small to be recorded by surface electrodes as in EKG. It has the potential of being a rapid scan for early diagnosis of cardiac ischemia (obstruction of blood flow to the heart) or problems with the fetus.
MEG can be used to identify abnormal electrical discharges in the brain that produce weak magnetic signals. Therefore, it looks at brain activity, not just brain structure. It has been used for studies of Alzheimer’s disease and epilepsy. Advances in instrumentation to measure very small magnetic fields have allowed these two techniques to be used more in recent years. What is used is a sensor called a SQUID, for superconducting quantum interference device. This operates at liquid helium temperatures and can measure magnetic fields thousands of times smaller than the Earth’s.
Finally, there is a burgeoning market for magnetic cures in which magnets are applied in a variety of ways to the body, from magnetic bracelets to magnetic mattresses. The best that can be said for such practices is that they are apparently harmless, unless the magnets get close to the patient’s computer or magnetic storage disks. Claims are made for a broad spectrum of benefits from cleansing the blood to giving the patient more energy, but clinical studies have not verified these claims, nor is there an identifiable mechanism by which such benefits might occur.
### Section Summary
1. Crossed (perpendicular) electric and magnetic fields act as a velocity filter, giving equal and opposite forces on any charge with velocity perpendicular to the fields and of magnitude
$$v = \frac{E}{B}.$$
### Conceptual Questions
### Problems & Exercises
|
# Electromagnetic Induction, AC Circuits, and Electrical Technologies
## Connection for AP® Courses
Nature’s displays of symmetry are beautiful and alluring. As shown in Figure 23.2, a butterfly’s wings exhibit an appealing symmetry in a complex system. The laws of physics display symmetries at the most basic level – these symmetries are a source of wonder and imply deeper meaning. Since we place a high value on symmetry, we look for it when we explore nature. The remarkable thing is that we find it.
This chapter supports Big Idea 4, illustrating how electric and magnetic changes can take place in a system due to interactions with other systems. The hint of symmetry between electricity and magnetism found in the preceding chapter will be elaborated upon in this chapter. Specifically, we know that a current creates a magnetic field. If nature is symmetric in this case, then perhaps a magnetic field can create a current. Historically, it was very shortly after Oersted discovered that currents cause magnetic fields that other scientists asked the following question: Can magnetic fields cause currents? The answer was soon found by experiment to be yes. In 1831, some 12 years after Oersted’s discovery, the English scientist Michael Faraday (1791–1862) and the American scientist Joseph Henry (1797–1878) independently demonstrated that magnetic fields can produce currents. The basic process of generating emfs (electromotive forces), and hence currents, with magnetic fields is known as induction; this process is also called “magnetic induction” to distinguish it from charging by induction, which utilizes the Coulomb force.
Today, currents induced by magnetic fields are essential to our technological society. The ubiquitous generator – found in automobiles, on bicycles, in nuclear power plants, and so on – uses magnetism to generate current. Other devices that use magnetism to induce currents include pickup coils in electric guitars, transformers of every size, certain microphones, airport security gates, and damping mechanisms on sensitive chemical balances. Explanations and examples in this chapter will help you understand current induction via magnetic interactions in mechanical systems (Enduring Understanding 4.E, Essential Knowledge 4.E.2). You will also learn how the behavior of AC circuits depends strongly on the effect of magnetic fields on currents.
The content of this chapter supports:
Big Idea 4 Interactions between systems can result in changes in those systems.
Enduring Understanding 4.E The electric and magnetic properties of a system can change in response to the presence of, or changes in, other objects or systems.
Essential Knowledge 4.E.2 Changing magnetic flux induces an electric field that can establish an induced emf in a system. |
# Electromagnetic Induction, AC Circuits, and Electrical Technologies
## Induced Emf and Magnetic Flux
### Learning Objectives
By the end of this section, you will be able to:
1. Calculate the flux of a uniform magnetic field through a loop of arbitrary orientation.
2. Describe methods to produce an electromotive force (emf) with a magnetic field or magnet and a loop of wire.
The apparatus used by Faraday to demonstrate that magnetic fields can create currents is illustrated in . When the switch is closed, a magnetic field is produced in the coil on the top part of the iron ring and transmitted to the coil on the bottom part of the ring. The galvanometer is used to detect any current induced in the coil on the bottom. It was found that each time the switch is closed, the galvanometer detects a current in one direction in the coil on the bottom. (You can also observe this in a physics lab.) Each time the switch is opened, the galvanometer detects a current in the opposite direction. Interestingly, if the switch remains closed or open for any length of time, there is no current through the galvanometer. Closing and opening the switch induces the current. It is the change in magnetic field that creates the current. More basic than the current that flows is the emf that causes it. The current is a result of an emf induced by a changing magnetic field, whether or not there is a path for current to flow.
An experiment easily performed and often done in physics labs is illustrated in . An emf is induced in the coil when a bar magnet is pushed in and out of it. Emfs of opposite signs are produced by motion in opposite directions, and the emfs are also reversed by reversing poles. The same results are produced if the coil is moved rather than the magnet—it is the relative motion that is important. The faster the motion, the greater the emf, and there is no emf when the magnet is stationary relative to the coil.
The method of inducing an emf used in most electric generators is shown in . A coil is rotated in a magnetic field, producing an alternating current emf, which depends on rotation rate and other factors that will be explored in later sections. Note that the generator is remarkably similar in construction to a motor (another symmetry).
So we see that changing the magnitude or direction of a magnetic field produces an emf. Experiments revealed that there is a crucial quantity called the magnetic flux, $\Phi$, given by
$$\Phi = BA\cos\theta,$$
where $B$ is the magnetic field strength over an area $A$, at an angle $\theta$ with the perpendicular to the area as shown in . Any change in magnetic flux $\Phi$ induces an emf. This process is defined to be electromagnetic induction. Units of magnetic flux $\Phi$ are $\mathrm{T\cdot m^2}$. As seen in , $B\cos\theta = B_\perp$, which is the component of $B$ perpendicular to the area $A$. Thus magnetic flux is $\Phi = B_\perp A$, the product of the area and the component of the magnetic field perpendicular to it.
All induction, including the examples given so far, arises from some change in magnetic flux $\Phi$. For example, Faraday changed $B$ and hence $\Phi$ when opening and closing the switch in his apparatus (shown in ). This is also true for the bar magnet and coil shown in . When rotating the coil of a generator, the angle $\theta$ and, hence, $\Phi$ is changed. Just how great an emf and what direction it takes depend on the change in $\Phi$ and how rapidly the change is made, as examined in the next section. A minimal numerical sketch of the flux definition follows.
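The sketch below evaluates $\Phi = BA\cos\theta$ for a few orientations, showing that the flux is greatest when the field is perpendicular to the area ($\theta = 0$) and zero when the field lies in the plane of the loop ($\theta = 90^\circ$). The field and area values are illustrative assumptions.

```python
import math

def magnetic_flux(B, A, theta_deg):
    """Phi = B * A * cos(theta); theta is measured from the perpendicular to the area."""
    return B * A * math.cos(math.radians(theta_deg))

B, A = 0.50, 0.030             # illustrative assumptions: T and m^2
for theta in (0, 30, 60, 90):
    print(f"theta = {theta:2d} deg -> flux = {magnetic_flux(B, A, theta):.4f} T*m^2")
```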
### Test Prep for AP Courses
### Section Summary
1. The crucial quantity in induction is magnetic flux $\Phi$, defined to be $\Phi = BA\cos\theta$, where $B$ is the magnetic field strength over an area $A$ at an angle $\theta$ with the perpendicular to the area.
2. Units of magnetic flux $\Phi$ are $\mathrm{T\cdot m^2}$.
3. Any change in magnetic flux induces an emf—the process is defined to be electromagnetic induction.
### Conceptual Questions
### Problems & Exercises
|
# Electromagnetic Induction, AC Circuits, and Electrical Technologies
## Faraday’s Law of Induction: Lenz’s Law
### Learning Objectives
By the end of this section, you will be able to:
1. Calculate emf, current, and magnetic fields using Faraday’s Law.
2. Explain the physical results of Lenz’s Law
### Faraday’s and Lenz’s Law
Faraday’s experiments showed that the emf induced by a change in magnetic flux depends on only a few factors. First, emf is directly proportional to the change in flux $\Delta\Phi$. Second, emf is greatest when the change in time $\Delta t$ is smallest—that is, emf is inversely proportional to $\Delta t$. Finally, if a coil has $N$ turns, an emf will be produced that is $N$ times greater than for a single coil, so that emf is directly proportional to $N$. The equation for the emf induced by a change in magnetic flux is
$$\mathrm{emf} = -N\frac{\Delta\Phi}{\Delta t}.$$
This relationship is known as Faraday’s law of induction. The units for emf are volts, as is usual.
The minus sign in Faraday’s law of induction is very important. The minus means that the emf creates a current $I$ and magnetic field $B$ that oppose the change in flux $\Delta\Phi$. The direction (given by the minus sign) of the emf is so important that it is called Lenz’s law after the Russian Heinrich Lenz (1804–1865), who, like Faraday and Henry, independently investigated aspects of induction. Faraday was aware of the direction, but Lenz stated it so clearly that he is credited for its discovery. (See .) A minimal numerical sketch of Faraday’s law follows.
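The sketch below computes an average induced emf from a flux change over a short time. The coil size, flux values, and interval are illustrative assumptions, not values from the text.

```python
# Average induced emf from Faraday's law: emf = -N * (delta Phi) / (delta t).
# All numbers below are illustrative assumptions.
N = 200                 # turns in the coil
phi_initial = 0.0       # initial flux, T*m^2
phi_final = 1.5e-3      # final flux, T*m^2
dt = 0.040              # time over which the flux changes, s

emf = -N * (phi_final - phi_initial) / dt
print(f"Average induced emf: {emf:.2f} V")
# The negative sign is Lenz's law: the induced current opposes the increase in flux.
```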
For practice, apply these steps to the situations shown in and to others that are part of the following text material.
### Applications of Electromagnetic Induction
There are many applications of Faraday’s Law of induction, as we will explore in this chapter and others. At this juncture, let us mention several that have to do with data storage and magnetic fields. A very important application has to do with audio and video recording tapes. A plastic tape, coated with iron oxide, moves past a recording head. This recording head is basically a round iron ring about which is wrapped a coil of wire—an electromagnet (). A signal in the form of a varying input current from a microphone or camera goes to the recording head. These signals (which are a function of the signal amplitude and frequency) produce varying magnetic fields at the recording head. As the tape moves past the recording head, the magnetic field orientations of the iron oxide molecules on the tape are changed, thus recording the signal. In the playback mode, the magnetized tape is run past another head, similar in structure to the recording head. The different magnetic field orientations of the iron oxide molecules on the tape induce an emf in the coil of wire in the playback head. This signal then is sent to a loudspeaker or video player.
Similar principles apply to computer hard drives, except at a much faster rate. Here recordings are on a coated, spinning disk. Read heads historically were made to work on the principle of induction. However, the input information is carried in digital rather than analog form – a series of 0’s or 1’s are written upon the spinning hard drive. Today, most hard drive readout devices do not work on the principle of induction, but use a technique known as giant magnetoresistance. (The discovery that weak changes in a magnetic field in a thin film of iron and chromium could bring about much larger changes in electrical resistance was one of the first large successes of nanotechnology.) Another application of induction is found on the magnetic stripe on the back of your personal credit card as used at the grocery store or the ATM machine. This works on the same principle as the audio or video tape mentioned in the last paragraph in which a head reads personal information from your card.
Another application of electromagnetic induction is when electrical signals need to be transmitted across a barrier. Consider the cochlear implant shown below. Sound is picked up by a microphone on the outside of the skull and is used to set up a varying magnetic field. A current is induced in a receiver secured in the bone beneath the skin and transmitted to electrodes in the inner ear. Electromagnetic induction can be used in other instances where electric signals need to be conveyed across various media.
Another contemporary area of research in which electromagnetic induction is being successfully implemented (and with substantial potential) is transcranial magnetic stimulation. A host of disorders, including depression and hallucinations, can be traced to irregular localized electrical activity in the brain. In transcranial magnetic stimulation, a rapidly varying and very localized magnetic field is placed close to certain sites identified in the brain. Weak electric currents are induced in the identified sites and can result in recovery of electrical functioning in the brain tissue.
Sleep apnea (“the cessation of breath”) affects both adults and infants (especially premature babies), and it may be a cause of sudden infant death syndrome (SIDS). In such individuals, breathing can stop repeatedly during sleep. A cessation of more than 20 seconds can be very dangerous. Stroke, heart failure, and tiredness are just some of the possible consequences for a person having sleep apnea. The concern in infants is the stopping of breath for these longer times. One type of monitor to alert parents when a child is not breathing uses electromagnetic induction. A wire wrapped around the infant’s chest has an alternating current running through it. The expansion and contraction of the infant’s chest as the infant breathes changes the area through the coil. A pickup coil located nearby has an alternating current induced in it due to the changing magnetic field of the initial wire. If the child stops breathing, there will be a change in the induced current, and so a parent can be alerted.
### Section Summary
1. Faraday’s law of induction states that the emf induced by a change in magnetic flux is
$$\mathrm{emf} = -N\frac{\Delta\Phi}{\Delta t}$$
when flux changes by $\Delta\Phi$ in a time $\Delta t$.
2. If emf is induced in a coil, $N$ is its number of turns.
3. The minus sign means that the emf creates a current $I$ and magnetic field $B$ that oppose the change in flux $\Delta\Phi$—this opposition is known as Lenz’s law.
### Conceptual Questions
### Problems & Exercises
|
# Electromagnetic Induction, AC Circuits, and Electrical Technologies
## Motional Emf
### Learning Objectives
By the end of this section, you will be able to:
1. Calculate emf, force, magnetic field, and work due to the motion of an object in a magnetic field.
As we have seen, any change in magnetic flux induces an emf opposing that change—a process known as induction. Motion is one of the major causes of induction. For example, a magnet moved toward a coil induces an emf, and a coil moved toward a magnet produces a similar emf. In this section, we concentrate on motion in a magnetic field that is stationary relative to the Earth, producing what is loosely called motional emf.
One situation where motional emf occurs is known as the Hall effect and has already been examined. Charges moving in a magnetic field experience the magnetic force $F = qvB\sin\theta$, which moves opposite charges in opposite directions and produces an emf $\varepsilon = Blv$. We saw that the Hall effect has applications, including measurements of $B$ and $v$. We will now see that the Hall effect is one aspect of the broader phenomenon of induction, and we will find that motional emf can be used as a power source.
Consider the situation shown in . A rod is moved at a speed $v$ along a pair of conducting rails separated by a distance $l$ in a uniform magnetic field $B$. The rails are stationary relative to $B$ and are connected to a stationary resistor $R$. The resistor could be anything from a light bulb to a voltmeter. Consider the area enclosed by the moving rod, rails, and resistor. $B$ is perpendicular to this area, and the area is increasing as the rod moves. Thus the magnetic flux enclosed by the rails, rod, and resistor is increasing. When flux changes, an emf is induced according to Faraday’s law of induction.
To find the magnitude of emf induced along the moving rod, we use Faraday’s law of induction without the sign:
$$\mathrm{emf} = N\frac{\Delta\Phi}{\Delta t}.$$
Here and below, “emf” implies the magnitude of the emf. In this equation, $N = 1$ and the flux $\Phi = BA\cos\theta$. We have $\theta = 0^\circ$ and $\cos\theta = 1$, since $B$ is perpendicular to the area $A$.
Now $\Delta\Phi = \Delta(BA) = B\,\Delta A$, since $B$ is uniform. Note that the area swept out by the rod is $\Delta A = l\,\Delta x$. Entering these quantities into the expression for emf yields
$$\mathrm{emf} = \frac{B\,\Delta A}{\Delta t} = B\frac{l\,\Delta x}{\Delta t}.$$
Finally, note that $\Delta x / \Delta t = v$, the velocity of the rod. Entering this into the last expression shows that
$$\mathrm{emf} = Blv \quad (B,\ l,\ \text{and}\ v\ \text{perpendicular})$$
is the motional emf. This is the same expression given for the Hall effect previously.
To find the direction of the induced field, the direction of the current, and the polarity of the induced emf, we apply Lenz’s law as explained in Faraday's Law of Induction: Lenz's Law. (See (b).) Flux is increasing, since the area enclosed is increasing. Thus the induced field must oppose the existing one and be out of the page. And so the RHR-2 requires that I be counterclockwise, which in turn means the top of the rod is positive as shown.
Motional emf also occurs if the magnetic field moves and the rod (or other object) is stationary relative to the Earth (or some observer). We have seen an example of this in the situation where a moving magnet induces an emf in a stationary coil. It is the relative motion that is important. What is emerging in these observations is a connection between magnetic and electric fields. A moving magnetic field produces an electric field through its induced emf. We already have seen that a moving electric field produces a magnetic field—moving charge implies moving electric field and moving charge produces a magnetic field.
Motional emfs in the Earth’s weak magnetic field are not ordinarily very large, or we would notice voltage along metal rods, such as a screwdriver, during ordinary motions. For example, a simple calculation of the motional emf of a 1 m rod moving at 3.0 m/s perpendicular to the Earth’s field gives $\mathrm{emf} = Blv = (5.0\times 10^{-5}\ \mathrm{T})(1.0\ \mathrm{m})(3.0\ \mathrm{m/s}) = 150\ \mathrm{\mu V}$. This small value is consistent with experience. There is a spectacular exception, however. In 1992 and 1996, attempts were made with the space shuttle to create large motional emfs. The Tethered Satellite was to be let out on a 20 km length of wire as shown in , to create a 5 kV emf by moving at orbital speed through the Earth’s field. This emf could be used to convert some of the shuttle’s kinetic and potential energy into electrical energy if a complete circuit could be made. To complete the circuit, the stationary ionosphere was to supply a return path for the current to flow. (The ionosphere is the rarefied and partially ionized atmosphere at orbital altitudes. It conducts because of the ionization. The ionosphere serves the same function as the stationary rails and connecting resistor in , without which there would not be a complete circuit.) Drag on the current in the cable due to the magnetic force does the work that reduces the shuttle’s kinetic and potential energy and allows it to be converted to electrical energy. The tests were both unsuccessful. In the first, the cable hung up and could only be extended a couple of hundred meters; in the second, the cable broke when almost fully extended. The estimate nevertheless indicates feasibility in principle, as the short sketch below suggests.
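The sketch below reproduces these two order-of-magnitude estimates from $\mathrm{emf} = Blv$. The Earth-field magnitude and orbital speed are assumptions; the actual tether value depends on the local field component perpendicular to the motion, which is why the result only roughly matches the quoted 5 kV design figure.

```python
# Order-of-magnitude estimates of motional emf, emf = B * l * v.
# Earth's field magnitude (~5e-5 T) and the orbital speed are assumptions.
B_earth = 5.0e-5        # T

emf_rod = B_earth * 1.0 * 3.0            # 1 m rod moving at 3.0 m/s
print(f"Rod:    {emf_rod * 1e6:.0f} microvolts")

emf_tether = B_earth * 20e3 * 7.8e3      # 20 km tether at ~7.8 km/s orbital speed
print(f"Tether: {emf_tether / 1e3:.1f} kV (same order as the 5 kV design figure)")
```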
### Section Summary
1. An emf induced by motion relative to a magnetic field $B$ is called a motional emf and is given by
$$\mathrm{emf} = Blv \quad (B,\ l,\ \text{and}\ v\ \text{perpendicular}),$$
where $l$ is the length of the object moving at speed $v$ relative to the field.
### Conceptual Questions
### Problems & Exercises
|
# Electromagnetic Induction, AC Circuits, and Electrical Technologies
## Eddy Currents and Magnetic Damping
### Learning Objectives
By the end of this section, you will be able to:
1. Explain the magnitude and direction of an induced eddy current, and the effect this will have on the object it is induced in.
2. Describe several applications of magnetic damping.
### Eddy Currents and Magnetic Damping
As discussed in Motional Emf, motional emf is induced when a conductor moves in a magnetic field or when a magnetic field moves relative to a conductor. If motional emf can cause a current loop in the conductor, we refer to that current as an eddy current. Eddy currents can produce significant drag, called magnetic damping, on the motion involved. Consider the apparatus shown in , which swings a pendulum bob between the poles of a strong magnet. (This is another favorite physics lab activity.) If the bob is metal, there is significant drag on the bob as it enters and leaves the field, quickly damping the motion. If, however, the bob is a slotted metal plate, as shown in (b), there is a much smaller effect due to the magnet. There is no discernible effect on a bob made of an insulator. Why is there drag in both directions, and are there any uses for magnetic drag?
shows what happens to the metal plate as it enters and leaves the magnetic field. In both cases, it experiences a force opposing its motion. As it enters from the left, flux increases, and so an eddy current is set up (Faraday’s law) in the counterclockwise direction (Lenz’s law), as shown. Only the right-hand side of the current loop is in the field, so that there is an unopposed force on it to the left (RHR-1). When the metal plate is completely inside the field, there is no eddy current if the field is uniform, since the flux remains constant in this region. But when the plate leaves the field on the right, flux decreases, causing an eddy current in the clockwise direction that, again, experiences a force to the left, further slowing the motion. A similar analysis of what happens when the plate swings from the right toward the left shows that its motion is also damped when entering and leaving the field.
When a slotted metal plate enters the field, as shown in , an emf is induced by the change in flux, but it is less effective because the slots limit the size of the current loops. Moreover, adjacent loops have currents in opposite directions, and their effects cancel. When an insulating material is used, the eddy current is extremely small, and so magnetic damping on insulators is negligible. If eddy currents are to be avoided in conductors, then they can be slotted or constructed of thin layers of conducting material separated by insulating sheets.
### Applications of Magnetic Damping
One use of magnetic damping is found in sensitive laboratory balances. To have maximum sensitivity and accuracy, the balance must be as friction-free as possible. But if it is friction-free, then it will oscillate for a very long time. Magnetic damping is a simple and ideal solution. With magnetic damping, drag is proportional to speed and becomes zero at zero velocity. Thus the oscillations are quickly damped, after which the damping force disappears, allowing the balance to be very sensitive. (See .) In most balances, magnetic damping is accomplished with a conducting disc that rotates in a fixed field.
Since eddy currents and magnetic damping occur only in conductors, recycling centers can use magnets to separate metals from other materials. Trash is dumped in batches down a ramp, beneath which lies a powerful magnet. Conductors in the trash are slowed by magnetic damping while nonmetals in the trash move on, separating from the metals. (See .) This works for all metals, not just ferromagnetic ones. A magnet can separate out the ferromagnetic materials alone by acting on stationary trash.
Other major applications of eddy currents are in metal detectors and braking systems in trains and roller coasters. Portable metal detectors () consist of a primary coil carrying an alternating current and a secondary coil in which a current is induced. An eddy current will be induced in a piece of metal close to the detector which will cause a change in the induced current within the secondary coil, leading to some sort of signal like a shrill noise. Braking using eddy currents is safer because factors such as rain do not affect the braking and the braking is smoother. However, eddy currents cannot bring the motion to a complete stop, since the force produced decreases with speed. Thus, speed can be reduced from say 20 m/s to 5 m/s, but another form of braking is needed to completely stop the vehicle. Generally, powerful rare earth magnets such as neodymium magnets are used in roller coasters. shows rows of magnets in such an application. The vehicle has metal fins (normally containing copper) which pass through the magnetic field slowing the vehicle down in much the same way as with the pendulum bob shown in .
Induction cooktops have electromagnets under their surface. The magnetic field is varied rapidly producing eddy currents in the base of the pot, causing the pot and its contents to increase in temperature. Induction cooktops have high efficiencies and good response times but the base of the pot needs to be ferromagnetic, iron or steel for induction to work.
### Section Summary
1. Current loops induced in moving conductors are called eddy currents.
2. They can create significant drag, called magnetic damping.
### Conceptual Questions
### Problems & Exercises
|
# Electromagnetic Induction, AC Circuits, and Electrical Technologies
## Electric Generators
### Learning Objectives
By the end of this section, you will be able to:
1. Calculate the emf induced in a generator.
2. Calculate the peak emf which can be induced in a particular generator system.
Electric generators induce an emf by rotating a coil in a magnetic field, as briefly discussed in Induced Emf and Magnetic Flux. We will now explore generators in more detail. Consider the following example.
The emf calculated in is the average over one-fourth of a revolution. What is the emf at any given instant? It varies with the angle between the magnetic field and a perpendicular to the coil. We can get an expression for emf as a function of time by considering the motional emf on a rotating rectangular coil of width and height in a uniform magnetic field, as illustrated in .
Charges in the wires of the loop experience the magnetic force, because they are moving in a magnetic field. Charges in the vertical wires experience forces parallel to the wire, causing currents. But those in the top and bottom segments feel a force perpendicular to the wire, which does not cause a current. We can thus find the induced emf by considering only the side wires. Motional emf is given to be $\mathrm{emf} = Blv$, where the velocity $v$ is perpendicular to the magnetic field $B$. Here the velocity is at an angle $\theta$ with $B$, so that its component perpendicular to $B$ is $v\sin\theta$ (see ). Thus in this case the emf induced on each side is $\mathrm{emf} = Blv\sin\theta$, and they are in the same direction. The total emf around the loop is then
$$\mathrm{emf} = 2Blv\sin\theta.$$
This expression is valid, but it does not give emf as a function of time. To find the time dependence of emf, we assume the coil rotates at a constant angular velocity $\omega$. The angle $\theta$ is related to angular velocity by $\theta = \omega t$, so that
$$\mathrm{emf} = 2Blv\sin\omega t.$$
Now, linear velocity $v$ is related to angular velocity $\omega$ by $v = r\omega$. Here $r = w/2$, so that $v = (w/2)\omega$, and
$$\mathrm{emf} = 2Bl\frac{w}{2}\omega\sin\omega t = (lw)B\omega\sin\omega t.$$
Noting that the area of the loop is $A = lw$, and allowing for $N$ loops, we find that
$$\mathrm{emf} = NAB\omega\sin\omega t$$
is the emf induced in a generator coil of $N$ turns and area $A$ rotating at a constant angular velocity $\omega$ in a uniform magnetic field $B$. This can also be expressed as
$$\mathrm{emf} = \mathrm{emf}_0\sin\omega t,$$
where $\mathrm{emf}_0 = NAB\omega$ is the maximum (peak) emf. Note that the frequency of the oscillation is $f = \omega/2\pi$, and the period is $T = 1/f = 2\pi/\omega$. shows a graph of emf as a function of time, and it now seems reasonable that AC voltage is sinusoidal. A short numerical sketch follows.
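The sketch below computes the peak emf and samples the sinusoidal output over part of a cycle. The coil parameters and rotation rate are illustrative assumptions, not values from the text.

```python
import math

# Generator emf: emf0 = N*A*B*omega, emf(t) = emf0 * sin(omega * t).
# All numbers below are illustrative assumptions.
N = 200            # turns
A = 0.030          # coil area, m^2
B = 0.80           # field strength, T
f = 60.0           # rotation frequency, Hz
omega = 2 * math.pi * f

emf0 = N * A * B * omega
print(f"Peak emf: {emf0:.0f} V")

for t in (0.0, 1 / (4 * f), 1 / (2 * f)):   # start, quarter period, half period
    print(f"t = {t * 1e3:5.2f} ms -> emf = {emf0 * math.sin(omega * t):8.1f} V")
```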
The fact that the peak emf is $\mathrm{emf}_0 = NAB\omega$ makes good sense. The greater the number of coils, the larger their area, and the stronger the field, the greater the output voltage. It is interesting that the faster the generator is spun (greater $\omega$), the greater the emf. This is noticeable on bicycle generators—at least the cheaper varieties. One of the authors as a juvenile found it amusing to ride his bicycle fast enough to burn out his lights, until he had to ride home lightless one dark night.
shows a scheme by which a generator can be made to produce pulsed DC. More elaborate arrangements of multiple coils and split rings can produce smoother DC, although electronic rather than mechanical means are usually used to make ripple-free DC.
In real life, electric generators look a lot different than the figures in this section, but the principles are the same. The source of mechanical energy that turns the coil can be falling water (hydropower), steam produced by the burning of fossil fuels, or the kinetic energy of wind. shows a cutaway view of a steam turbine; steam moves over the blades connected to the shaft, which rotates the coil within the generator.
Generators illustrated in this section look very much like the motors illustrated previously. This is not coincidental. In fact, a motor becomes a generator when its shaft rotates. Certain early automobiles used their starter motor as a generator. In Back Emf, we shall further explore the action of a motor as a generator.
### Test Prep for AP Courses
### Section Summary
1. An electric generator rotates a coil in a magnetic field, inducing an emf given as a function of time by
$$\text{emf} = NAB\omega \sin\omega t,$$
where $A$ is the area of an $N$-turn coil rotated at a constant angular velocity $\omega$ in a uniform magnetic field $B$.
2. The peak emf of a generator is
$$\text{emf}_0 = NAB\omega.$$
### Conceptual Questions
### Problems & Exercises
|
# Electromagnetic Induction, AC Circuits, and Electrical Technologies
## Back Emf
### Learning Objectives
By the end of this section, you will be able to:
1. Explain what back emf is and how it is induced.
It has been noted that motors and generators are very similar. Generators convert mechanical energy into electrical energy, whereas motors convert electrical energy into mechanical energy. Furthermore, motors and generators have the same construction. When the coil of a motor is turned, magnetic flux changes, and an emf (consistent with Faraday’s law of induction) is induced. The motor thus acts as a generator whenever its coil rotates. This will happen whether the shaft is turned by an external input, like a belt drive, or by the action of the motor itself. That is, when a motor is doing work and its shaft is turning, an emf is generated. Lenz’s law tells us the emf opposes any change, so that the input emf that powers the motor will be opposed by the motor’s self-generated emf, called the back emf of the motor. (See .)
Back emf is the generator output of a motor, and so it is proportional to the motor’s angular velocity $\omega$. It is zero when the motor is first turned on, meaning that the coil receives the full driving voltage and the motor draws maximum current when it is on but not turning. As the motor turns faster and faster, the back emf grows, always opposing the driving emf, and reduces the voltage across the coil and the amount of current it draws. This effect is noticeable in a number of situations. When a vacuum cleaner, refrigerator, or washing machine is first turned on, lights in the same circuit dim briefly due to the $IR$ drop produced in feeder lines by the large current drawn by the motor. When a motor first comes on, it draws more current than when it runs at its normal operating speed. When a mechanical load is placed on the motor, like an electric wheelchair going up a hill, the motor slows, the back emf drops, more current flows, and more work can be done. If the motor runs at too low a speed, the larger current can overheat it (via resistive power in the coil, $P = I^2R$), perhaps even burning it out. On the other hand, if there is no mechanical load on the motor, it will increase its angular velocity $\omega$ until the back emf is nearly equal to the driving emf. Then the motor uses only enough energy to overcome friction.
Consider, for example, the motor coils represented in . The coils have a $0.400\ \Omega$ equivalent resistance and are driven by a 48.0 V emf. Shortly after being turned on, they draw a current $I = V/R = (48.0\ \text{V})/(0.400\ \Omega) = 120\ \text{A}$ and, thus, dissipate $P = I^2R = 5.76\ \text{kW}$ of energy as heat transfer. Under normal operating conditions for this motor, suppose the back emf is 40.0 V. Then at operating speed, the total voltage across the coils is 8.0 V (48.0 V minus the 40.0 V back emf), and the current drawn is $I = V/R = (8.0\ \text{V})/(0.400\ \Omega) = 20\ \text{A}$. Under normal load, then, the power dissipated is $P = IV = (20\ \text{A})(8.0\ \text{V}) = 160\ \text{W}$. The latter will not cause a problem for this motor, whereas the former 5.76 kW would burn out the coils if sustained.
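The arithmetic in this example is easy to reproduce. The sketch below is a minimal illustration rather than a general motor model; it computes the start-up and operating currents and dissipated powers from the supply voltage, coil resistance, and back emf quoted above.

```python
def motor_dissipation(supply_V, coil_R, back_emf):
    """Return (current, power dissipated in the coil resistance)."""
    current = (supply_V - back_emf) / coil_R   # net voltage across the coils
    power = current**2 * coil_R                # resistive heating, I^2 * R
    return current, power

R = 0.400      # ohms, coil resistance from the example
V = 48.0       # volts, driving emf

I_start, P_start = motor_dissipation(V, R, back_emf=0.0)   # motor not yet turning
I_run, P_run = motor_dissipation(V, R, back_emf=40.0)      # at operating speed
print(f"start-up: {I_start:.0f} A, {P_start/1000:.2f} kW")  # 120 A, 5.76 kW
print(f"running:  {I_run:.0f} A, {P_run:.0f} W")            # 20 A, 160 W
```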
### Section Summary
1. Any rotating coil will have an induced emf—in motors, this is called back emf, since it opposes the emf input to the motor.
### Conceptual Questions
### Problems & Exercises
|
# Electromagnetic Induction, AC Circuits, and Electrical Technologies
## Transformers
### Learning Objectives
By the end of this section, you will be able to:
1. Explain how a transformer works.
2. Calculate voltage, current, and/or number of turns given the other quantities.
Transformers do what their name implies—they transform voltages from one value to another (The term voltage is used rather than emf, because transformers have internal resistance). For example, many cell phones, laptops, video games, and power tools and small appliances have a transformer built into their plug-in unit (like that in ) that changes 120 V or 240 V AC into whatever voltage the device uses. Transformers are also used at several points in the power distribution systems, such as illustrated in . Power is sent long distances at high voltages, because less current is required for a given amount of power, and this means less line loss, as was discussed previously. But high voltages pose greater hazards, so that transformers are employed to produce lower voltage at the user’s location.
The type of transformer considered in this text—see —is based on Faraday’s law of induction and is very similar in construction to the apparatus Faraday used to demonstrate magnetic fields could cause currents. The two coils are called the primary and secondary coils. In normal use, the input voltage is placed on the primary, and the secondary produces the transformed output voltage. Not only does the iron core trap the magnetic field created by the primary coil, its magnetization increases the field strength. Since the input voltage is AC, a time-varying magnetic flux is sent to the secondary, inducing its AC output voltage.
For the simple transformer shown in , the output voltage $V_s$ depends almost entirely on the input voltage $V_p$ and the ratio of the number of loops in the primary and secondary coils. Faraday’s law of induction for the secondary coil gives its induced output voltage to be
$$V_s = -N_s \frac{\Delta\Phi}{\Delta t},$$
where $N_s$ is the number of loops in the secondary coil and $\Delta\Phi/\Delta t$ is the rate of change of magnetic flux. Note that the output voltage equals the induced emf ($V_s = \text{emf}_s$), provided coil resistance is small (a reasonable assumption for transformers). The cross-sectional area of the coils is the same on either side, as is the magnetic field strength, and so $\Delta\Phi/\Delta t$ is the same on either side. The input primary voltage $V_p$ is also related to changing flux by
$$V_p = -N_p \frac{\Delta\Phi}{\Delta t}.$$
The reason for this is a little more subtle. Lenz’s law tells us that the primary coil opposes the change in flux caused by the input voltage $V_p$, hence the minus sign (this is an example of self-inductance, a topic to be explored in some detail in later sections). Assuming negligible coil resistance, Kirchhoff’s loop rule tells us that the induced emf exactly equals the input voltage. Taking the ratio of these last two equations yields a useful relationship:
$$\frac{V_s}{V_p} = \frac{N_s}{N_p}.$$
This is known as the transformer equation, and it simply states that the ratio of the secondary to primary voltages in a transformer equals the ratio of the number of loops in their coils.
The output voltage of a transformer can be less than, greater than, or equal to the input voltage, depending on the ratio of the number of loops in their coils. Some transformers even provide a variable output by allowing connection to be made at different points on the secondary coil. A step-up transformer is one that increases voltage, whereas a step-down transformer decreases voltage. Assuming, as we have, that resistance is negligible, the electrical power output of a transformer equals its input. This is nearly true in practice—transformer efficiency often exceeds 99%. Equating the power input and output,
$$P_p = I_p V_p = I_s V_s = P_s.$$
Rearranging terms gives
$$\frac{V_s}{V_p} = \frac{I_p}{I_s}.$$
Combining this with $\frac{V_s}{V_p} = \frac{N_s}{N_p}$, we find that
$$\frac{I_s}{I_p} = \frac{N_p}{N_s}$$
is the relationship between the output and input currents of a transformer. So if voltage increases, current decreases. Conversely, if voltage decreases, current increases.
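Because the transformer equation and the current relationship are simple ratios, they are easy to evaluate numerically. The sketch below assumes an ideal (lossless) transformer and uses hypothetical turn counts and input values chosen only for illustration.

```python
def ideal_transformer(V_p, I_p, N_p, N_s):
    """Secondary voltage and current of an ideal transformer.
    V_s/V_p = N_s/N_p and I_s/I_p = N_p/N_s, so power is conserved."""
    V_s = V_p * N_s / N_p
    I_s = I_p * N_p / N_s
    return V_s, I_s

# Hypothetical step-down transformer: 120 V, 0.10 A input, 50:5 turns
V_s, I_s = ideal_transformer(V_p=120.0, I_p=0.10, N_p=50, N_s=5)
print(f"V_s = {V_s:.1f} V, I_s = {I_s:.2f} A")                 # 12.0 V, 1.00 A
print(f"P_in = {120.0 * 0.10:.1f} W, P_out = {V_s * I_s:.1f} W")  # equal for an ideal transformer
```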
The fact that transformers are based on Faraday’s law of induction makes it clear why we cannot use transformers to change DC voltages. If there is no change in primary voltage, there is no voltage induced in the secondary. One possibility is to connect DC to the primary coil through a switch. As the switch is opened and closed, the secondary produces a voltage like that in . This is not really a practical alternative, and AC is in common use wherever it is necessary to increase or decrease voltages.
Transformers have many applications in electrical safety systems, which are discussed in Electrical Safety: Systems and Devices.
### Test Prep for AP Courses
### Section Summary
1. Transformers use induction to transform voltages from one value to another.
2. For a transformer, the voltages across the primary and secondary coils are related by
$$\frac{V_s}{V_p} = \frac{N_s}{N_p},$$
where $V_p$ and $V_s$ are the voltages across primary and secondary coils having $N_p$ and $N_s$ turns.
3. The currents $I_p$ and $I_s$ in the primary and secondary coils are related by $\frac{I_s}{I_p} = \frac{N_p}{N_s}$.
4. A step-up transformer increases voltage and decreases current, whereas a step-down transformer decreases voltage and increases current.
### Conceptual Questions
### Problems & Exercises
|
# Electromagnetic Induction, AC Circuits, and Electrical Technologies
## Electrical Safety: Systems and Devices
### Learning Objectives
By the end of this section, you will be able to:
1. Explain how various modern safety features in electric circuits work, with an emphasis on how induction is employed.
Electricity has two hazards. A thermal hazard occurs when there is electrical overheating. A shock hazard occurs when electric current passes through a person. Both hazards have already been discussed. Here we will concentrate on systems and devices that prevent electrical hazards.
shows the schematic for a simple AC circuit with no safety features. This is not how power is distributed in practice. Modern household and industrial wiring requires the three-wire system, shown schematically in , which has several safety features. First is the familiar circuit breaker (or fuse) to prevent thermal overload. Second, there is a protective case around the appliance, such as a toaster or refrigerator. The case’s safety feature is that it prevents a person from touching exposed wires and coming into electrical contact with the circuit, helping prevent shocks.
There are three connections to earth or ground (hereafter referred to as “earth/ground”) shown in . Recall that an earth/ground connection is a low-resistance path directly to the earth. The two earth/ground connections on the neutral wire force it to be at zero volts relative to the earth, giving the wire its name. This wire is therefore safe to touch even if its insulation, usually white, is missing. The neutral wire is the return path for the current to follow to complete the circuit. Furthermore, the two earth/ground connections supply an alternative path through the earth, a good conductor, to complete the circuit. The earth/ground connection closest to the power source could be at the generating plant, while the other is at the user’s location. The third earth/ground is to the case of the appliance, through the green earth/ground wire, forcing the case, too, to be at zero volts. The live or hot wire (hereafter referred to as “live/hot”) supplies voltage and current to operate the appliance. shows a more pictorial version of how the three-wire system is connected through a three-prong plug to an appliance.
A note on insulation color-coding: Insulating plastic is color-coded to identify live/hot, neutral, and earth/ground wires, but these codes vary around the world. Live/hot wires may be brown, red, black, blue, or grey. The neutral wire may be blue, black, or white. Since the same color may be used for live/hot or neutral in different parts of the world, it is essential to determine the color code in your region. The only exception is the earth/ground wire, which is often green but may be yellow or just bare wire. Striped coatings are sometimes used for the benefit of those who are colorblind.
The three-wire system replaced the older two-wire system, which lacks an earth/ground wire. Under ordinary circumstances, insulation on the live/hot and neutral wires prevents the case from being directly in the circuit, so that the earth/ground wire may seem like double protection. Grounding the case solves more than one problem, however. The simplest problem is worn insulation on the live/hot wire that allows it to contact the case, as shown in . Lacking an earth/ground connection (some people cut the third prong off the plug because they only have outdated two-hole receptacles), a severe shock is possible. This is particularly dangerous in the kitchen, where a good connection to earth/ground is available through water on the floor or a water faucet. With the earth/ground connection intact, the circuit breaker will trip, forcing repair of the appliance. Why are some appliances still sold with two-prong plugs? These have nonconducting cases, such as power tools with impact-resistant plastic cases, and are called doubly insulated. Modern two-prong plugs can be inserted into the asymmetric standard outlet in only one way, to ensure proper connection of live/hot and neutral wires.
Electromagnetic induction causes a more subtle problem that is solved by grounding the case. The AC current in appliances can induce an emf on the case. If grounded, the case voltage is kept near zero, but if the case is not grounded, a shock can occur as pictured in . Current driven by the induced case emf is called a leakage current, although current does not necessarily pass from the resistor to the case.
A ground fault interrupter (GFI) is a safety device found in updated kitchen and bathroom wiring that works based on electromagnetic induction. GFIs compare the currents in the live/hot and neutral wires. When live/hot and neutral currents are not equal, it is almost always because current in the neutral is less than in the live/hot wire. Then some of the current, again called a leakage current, is returning to the voltage source by a path other than through the neutral wire. It is assumed that this path presents a hazard, such as shown in . GFIs are usually set to interrupt the circuit if the leakage current is greater than 5 mA, the accepted maximum harmless shock. Even if the leakage current goes safely to earth/ground through an intact earth/ground wire, the GFI will trip, forcing repair of the leakage.
shows how a GFI works. If the currents in the live/hot and neutral wires are equal, then they induce equal and opposite emfs in the coil. If not, then the circuit breaker will trip.
Another induction-based safety device is the isolation transformer, shown in . Most isolation transformers have equal input and output voltages. Their function is to put a large resistance between the original voltage source and the device being operated. This prevents a complete circuit between them, even in the circumstance shown. There is a complete circuit through the appliance. But there is not a complete circuit for current to flow through the person in the figure, who is touching only one of the transformer’s output wires, and neither output wire is grounded. The appliance is isolated from the original voltage source by the high resistance of the material between the transformer coils, hence the name isolation transformer. For current to flow through the person, it must pass through the high-resistance material between the coils, through the wire, the person, and back through the earth—a path with such a large resistance that the current is negligible.
The basics of electrical safety presented here help prevent many electrical hazards. Electrical safety can be pursued to greater depths. There are, for example, problems related to different earth/ground connections for appliances in close proximity. Many other examples are found in hospitals. Microshock-sensitive patients, for instance, require special protection. For these people, currents as low as 0.1 mA may cause ventricular fibrillation. The interested reader can use the material presented here as a basis for further study.
### Test Prep for AP Courses
### Section Summary
1. Electrical safety systems and devices are employed to prevent thermal and shock hazards.
2. Circuit breakers and fuses interrupt excessive currents to prevent thermal hazards.
3. The three-wire system guards against thermal and shock hazards, utilizing live/hot, neutral, and earth/ground wires, and grounding the neutral wire and case of the appliance.
4. A ground fault interrupter (GFI) prevents shock by detecting the loss of current to unintentional paths.
5. An isolation transformer insulates the device being powered from the original source, also to prevent shock.
6. Many of these devices use induction to perform their basic function.
### Conceptual Questions
### Problems & Exercises
|
# Electromagnetic Induction, AC Circuits, and Electrical Technologies
## Inductance
### Learning Objectives
By the end of this section, you will be able to:
1. Calculate the inductance of an inductor.
2. Calculate the energy stored in an inductor.
3. Calculate the emf generated in an inductor.
### Inductors
Induction is the process in which an emf is induced by changing magnetic flux. Many examples have been discussed so far, some more effective than others. Transformers, for example, are designed to be particularly effective at inducing a desired voltage and current with very little loss of energy to other forms. Is there a useful physical quantity related to how “effective” a given device is? The answer is yes, and that physical quantity is called inductance.
Mutual inductance is the effect of Faraday’s law of induction for one device upon another, such as the primary coil in transmitting energy to the secondary in a transformer. See , where simple coils induce emfs in one another.
In the many cases where the geometry of the devices is fixed, flux is changed by varying current. We therefore concentrate on the rate of change of current, $\Delta I/\Delta t$, as the cause of induction. A change in the current $I_1$ in one device, coil 1 in the figure, induces an $\text{emf}_2$ in the other. We express this in equation form as
$$\text{emf}_2 = -M \frac{\Delta I_1}{\Delta t},$$
where $M$ is defined to be the mutual inductance between the two devices. The minus sign is an expression of Lenz’s law. The larger the mutual inductance $M$, the more effective the coupling. For example, the coils in have a small $M$ compared with the transformer coils in . Units for $M$ are $(\text{V}\cdot\text{s})/\text{A} = \Omega\cdot\text{s}$, which is named a henry (H), after Joseph Henry. That is, $1\ \text{H} = 1\ \Omega\cdot\text{s}$.
Nature is symmetric here. If we change the current $I_2$ in coil 2, we induce an $\text{emf}_1$ in coil 1, which is given by
$$\text{emf}_1 = -M \frac{\Delta I_2}{\Delta t},$$
where $M$ is the same as for the reverse process. Transformers run backward with the same effectiveness, or mutual inductance $M$.
A large mutual inductance may or may not be desirable. We want a transformer to have a large mutual inductance. But an appliance, such as an electric clothes dryer, can induce a dangerous emf on its case if the mutual inductance between its coils and the case is large. One way to reduce mutual inductance is to counterwind coils to cancel the magnetic field produced. (See .)
Self-inductance, the effect of Faraday’s law of induction of a device on itself, also exists. When, for example, current through a coil is increased, the magnetic field and flux also increase, inducing a counter emf, as required by Lenz’s law. Conversely, if the current is decreased, an emf is induced that opposes the decrease. Most devices have a fixed geometry, and so the change in flux is due entirely to the change in current through the device. The induced emf is related to the physical geometry of the device and the rate of change of current. It is given by
$$\text{emf} = -L \frac{\Delta I}{\Delta t},$$
where $L$ is the self-inductance of the device. A device that exhibits significant self-inductance is called an inductor, and given the symbol shown in . The minus sign is an expression of Lenz’s law, indicating that the emf opposes the change in current. Units of self-inductance are henries (H), just as for mutual inductance. The larger the self-inductance $L$ of a device, the greater its opposition to any change in current through it. For example, a large coil with many turns and an iron core has a large $L$ and will not allow current to change quickly. To avoid this effect, a small $L$ must be achieved, such as by counterwinding coils as in .
A 1 H inductor is a large inductor. To illustrate this, consider a device with $L = 1.0\ \text{H}$ that has a 10 A current flowing through it. What happens if we try to shut off the current rapidly, perhaps in only 1.0 ms? An emf, given by $\text{emf} = -L(\Delta I/\Delta t)$, will oppose the change. Thus an emf will be induced given by $\text{emf} = -L(\Delta I/\Delta t) = -(1.0\ \text{H})\left[\frac{(0 - 10\ \text{A})}{1.0\ \text{ms}}\right] = 10{,}000\ \text{V}$. The positive sign means this large voltage is in the same direction as the current, opposing its decrease. Such large emfs can cause arcs, damaging switching equipment, and so it may be necessary to change current more slowly.
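A minimal sketch of that estimate: the average induced emf is $L$ times the rate of change of current, and the 1.0 H, 10 A, and 1.0 ms figures are the ones quoted in the paragraph above.

```python
def induced_emf(L, delta_I, delta_t):
    """Magnitude of the average emf induced across an inductor,
    |emf| = L * |dI/dt|, approximated with finite differences."""
    return L * abs(delta_I) / delta_t

# Shutting a 10 A current through a 1.0 H inductor off in 1.0 ms
print(f"{induced_emf(L=1.0, delta_I=10.0, delta_t=1.0e-3):.0f} V")  # 10000 V
```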
There are uses for such a large induced voltage. Camera flashes use a battery, two inductors that function as a transformer, and a switching system or oscillator to induce large voltages. (Remember that we need a changing magnetic field, brought about by a changing current, to induce a voltage in another coil.) The oscillator system will do this many times as the battery voltage is boosted to over one thousand volts. (You may hear the high pitched whine from the transformer as the capacitor is being charged.) A capacitor stores the high voltage for later use in powering the flash. (See .)
It is possible to calculate $L$ for an inductor given its geometry (size and shape) and knowing the magnetic field that it produces. This is difficult in most cases, because of the complexity of the field created. So in this text the inductance $L$ is usually a given quantity. One exception is the solenoid, because it has a very uniform field inside, a nearly zero field outside, and a simple shape. It is instructive to derive an equation for its inductance. We start by noting that the induced emf is given by Faraday’s law of induction as $\text{emf} = -N\frac{\Delta\Phi}{\Delta t}$ and, by the definition of self-inductance, as $\text{emf} = -L\frac{\Delta I}{\Delta t}$. Equating these yields
$$\text{emf} = -N\frac{\Delta\Phi}{\Delta t} = -L\frac{\Delta I}{\Delta t}.$$
Solving for $L$ gives
$$L = N\frac{\Delta\Phi}{\Delta I}.$$
This equation for the self-inductance $L$ of a device is always valid. It means that self-inductance depends on how effective the current is in creating flux; the more effective, the greater $\Delta\Phi/\Delta I$ is.
Let us use this last equation to find an expression for the inductance of a solenoid. Since the area $A$ of a solenoid is fixed, the change in flux is $\Delta\Phi = \Delta(BA) = A\Delta B$. To find $\Delta B$, we note that the magnetic field of a solenoid is given by $B = \mu_0 nI = \mu_0\frac{NI}{\ell}$. (Here $n = N/\ell$, where $N$ is the number of coils and $\ell$ is the solenoid’s length.) Only the current changes, so that $\Delta\Phi = A\Delta B = \mu_0 NA\frac{\Delta I}{\ell}$. Substituting $\Delta\Phi$ into $L = N\frac{\Delta\Phi}{\Delta I}$ gives
$$L = N\frac{\Delta\Phi}{\Delta I} = N\frac{\mu_0 NA\frac{\Delta I}{\ell}}{\Delta I}.$$
This simplifies to
$$L = \frac{\mu_0 N^2 A}{\ell}.$$
This is the self-inductance of a solenoid of cross-sectional area $A$ and length $\ell$. Note that the inductance depends only on the physical characteristics of the solenoid, consistent with its definition.
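The solenoid result is straightforward to evaluate numerically. The sketch below uses made-up dimensions, chosen only for illustration, to show the size of a typical laboratory-scale inductance.

```python
import math

MU_0 = 4 * math.pi * 1e-7   # T*m/A, permeability of free space

def solenoid_inductance(N, radius, length):
    """Self-inductance of a solenoid: L = mu_0 * N^2 * A / length."""
    area = math.pi * radius**2
    return MU_0 * N**2 * area / length

# Hypothetical solenoid: 200 turns, 2.0 cm radius, 10 cm long
L = solenoid_inductance(N=200, radius=0.020, length=0.10)
print(f"L = {L * 1000:.2f} mH")   # about 0.63 mH for these assumed dimensions
```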
One common application of inductance is in traffic lights that can tell when vehicles are waiting at the intersection. An electrical circuit with an inductor is placed in the road where a waiting car will stop. The body of the car increases the inductance, and the change in the circuit sends a signal to the traffic lights to change colors. Similarly, metal detectors used for airport security employ the same technique. A coil or inductor in the metal detector frame acts as both a transmitter and a receiver. The pulsed signal in the transmitter coil induces a signal in the receiver. The self-inductance of the circuit is affected by any metal object in the path. Such detectors can be adjusted for sensitivity and also can indicate the approximate location of metal found on a person. See .
### Energy Stored in an Inductor
We know from Lenz’s law that inductances oppose changes in current. There is an alternative way to look at this opposition that is based on energy. Energy is stored in a magnetic field. It takes time to build up energy, and it also takes time to deplete energy; hence, there is an opposition to rapid change. In an inductor, the magnetic field is directly proportional to current and to the inductance of the device. It can be shown that the energy stored in an inductor $E_{\text{ind}}$ is given by
$$E_{\text{ind}} = \frac{1}{2}LI^2.$$
This expression is similar to that for the energy stored in a capacitor.
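A one-line check of the stored energy, using assumed values (the 0.63 mH figure is simply the hypothetical solenoid from the earlier sketch).

```python
def inductor_energy(L, I):
    """Energy stored in an inductor: E = 0.5 * L * I**2 (joules)."""
    return 0.5 * L * I**2

print(f"{inductor_energy(L=0.63e-3, I=30.0):.2f} J")   # ~0.28 J at an assumed 30 A
```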
### Section Summary
1. Inductance is the property of a device that tells how effectively it induces an emf in another device.
2. Mutual inductance is the effect of two devices in inducing emfs in each other.
3. A change in current $\Delta I_1/\Delta t$ in one induces an emf $\text{emf}_2$ in the second:
$$\text{emf}_2 = -M\frac{\Delta I_1}{\Delta t},$$
where $M$ is defined to be the mutual inductance between the two devices, and the minus sign is due to Lenz’s law.
4. Symmetrically, a change in current $\Delta I_2/\Delta t$ through the second device induces an emf $\text{emf}_1$ in the first:
$$\text{emf}_1 = -M\frac{\Delta I_2}{\Delta t},$$
where $M$ is the same mutual inductance as in the reverse process.
5. Current changes in a device induce an emf in the device itself.
6. Self-inductance is the effect of the device inducing emf in itself.
7. The device is called an inductor, and the emf induced in it by a change in current through it is
$$\text{emf} = -L\frac{\Delta I}{\Delta t},$$
where $L$ is the self-inductance of the inductor, and $\Delta I/\Delta t$ is the rate of change of current through it. The minus sign indicates that emf opposes the change in current, as required by Lenz’s law.
8. The unit of self- and mutual inductance is the henry (H), where $1\ \text{H} = 1\ \Omega\cdot\text{s}$.
9. The self-inductance $L$ of an inductor is proportional to how much flux changes with current. For an $N$-turn inductor,
$$L = N\frac{\Delta\Phi}{\Delta I}.$$
10. The self-inductance of a solenoid is
$$L = \frac{\mu_0 N^2 A}{\ell},$$
where $N$ is its number of turns, $A$ is its cross-sectional area, $\ell$ is its length, and $\mu_0$ is the permeability of free space.
11. The energy stored in an inductor $E_{\text{ind}}$ is
$$E_{\text{ind}} = \frac{1}{2}LI^2.$$
### Conceptual Questions
### Problems & Exercises
|
# Electromagnetic Induction, AC Circuits, and Electrical Technologies
## RL Circuits
### Learning Objectives
By the end of this section, you will be able to:
1. Calculate the current in an RL circuit after a specified number of characteristic time steps.
2. Calculate the characteristic time of an RL circuit.
3. Sketch the current in an RL circuit over time.
We know that the current through an inductor cannot be turned on or off instantaneously. The change in current changes flux, inducing an emf opposing the change (Lenz’s law). How long does the opposition last? Current will flow and can be turned off, but how long does it take? shows a switching circuit that can be used to examine current through an inductor as a function of time.
When the switch is first moved to position 1 (at $t = 0$), the current is zero and it eventually rises to $I_0 = V/R$, where $R$ is the total resistance of the circuit. The opposition of the inductor is greatest at the beginning, because the amount of change is greatest. The opposition it poses is in the form of an induced emf, which decreases to zero as the current approaches its final value. The opposing emf is proportional to the amount of change left. This is the hallmark of an exponential behavior, and it can be shown with calculus that
$$I = I_0(1 - e^{-t/\tau}) \quad \text{(turning on)}$$
is the current in an RL circuit when switched on (note the similarity to the exponential behavior of the voltage on a charging capacitor). The initial current is zero and approaches $I_0 = V/R$ with a characteristic time constant $\tau$ for an RL circuit, given by
$$\tau = \frac{L}{R},$$
where $\tau$ has units of seconds, since $1\ \text{H} = 1\ \Omega\cdot\text{s}$.
In the first period of time $\tau$, the current rises from zero to $0.632\,I_0$, since $I = I_0(1 - e^{-1}) = 0.632\,I_0$. The current will go 0.632 of the remainder in the next time $\tau$. A well-known property of the exponential is that the final value is never exactly reached, but 0.632 of the remainder to that value is achieved in every characteristic time $\tau$. In just a few multiples of the time $\tau$, the final value is very nearly achieved, as the graph in (b) illustrates.
The characteristic time depends on only two factors, the inductance and the resistance . The greater the inductance , the greater is, which makes sense since a large inductance is very effective in opposing change. The smaller the resistance , the greater is. Again this makes sense, since a small resistance means a large final current and a greater change to get there. In both cases—large and small —more energy is stored in the inductor and more time is required to get it in and out.
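The exponential rise can be tabulated directly. Below is a minimal sketch with assumed component values (a 12 V source, 3.00 Ω, and 7.50 H are illustrative, not taken from the text), showing the current after one through five time constants.

```python
import math

def rl_current_on(t, V, R, L):
    """Current in an RL circuit a time t after switching on:
    I = I0 * (1 - exp(-t/tau)), with I0 = V/R and tau = L/R."""
    I0 = V / R
    tau = L / R
    return I0 * (1.0 - math.exp(-t / tau))

V, R, L = 12.0, 3.00, 7.50    # assumed values
tau = L / R                   # 2.5 s characteristic time
for n in range(1, 6):
    print(f"t = {n}*tau: I = {rl_current_on(n * tau, V, R, L):.3f} A")
# After 1 tau the current is 0.632 * I0 = 2.53 A; after 5 tau it is within 1% of I0 = 4 A.
```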
When the switch in (a) is moved to position 2 and cuts the battery out of the circuit, the current drops because of energy dissipation by the resistor. But this is also not instantaneous, since the inductor opposes the decrease in current by inducing an emf in the same direction as the battery that drove the current. Furthermore, there is a certain amount of energy, , stored in the inductor, and it is dissipated at a finite rate. As the current approaches zero, the rate of decrease slows, since the energy dissipation rate is . Once again the behavior is exponential, and
$I$ is found to be
$$I = I_0 e^{-t/\tau} \quad \text{(turning off)}.$$
(See (c).) In the first period of time $\tau = L/R$ after the switch is closed, the current falls to 0.368 of its initial value, since $I = I_0 e^{-1} = 0.368\,I_0$. In each successive time $\tau$, the current falls to 0.368 of the preceding value, and in a few multiples of $\tau$, the current becomes very close to zero, as seen in the graph in (c).
In summary, when the voltage applied to an inductor is changed, the current also changes, but the change in current lags the change in voltage in an RL circuit. In Reactance, Inductive and Capacitive, we explore how an RL circuit behaves when a sinusoidal AC voltage is applied.
### Section Summary
1. When a series connection of a resistor and an inductor—an RL circuit—is connected to a voltage source, the time variation of the current is
$$I = I_0(1 - e^{-t/\tau}) \quad \text{(turning on)},$$
where $I_0 = V/R$ is the final current.
2. The characteristic time constant is $\tau = L/R$, where $L$ is the inductance and $R$ is the resistance.
3. In the first time constant $\tau$, the current rises from zero to $0.632\,I_0$, and 0.632 of the remainder in every subsequent time interval $\tau$.
4. When the inductor is shorted through a resistor, current decreases as
$$I = I_0 e^{-t/\tau} \quad \text{(turning off)}.$$
Here $I_0$ is the initial current.
5. Current falls to $0.368\,I_0$ in the first time interval $\tau$, and 0.368 of the remainder toward zero in each subsequent time $\tau$.
### Problem Exercises
|
# Electromagnetic Induction, AC Circuits, and Electrical Technologies
## Reactance, Inductive and Capacitive
### Learning Objectives
By the end of this section, you will be able to:
1. Sketch voltage and current versus time in simple inductive, capacitive, and resistive circuits.
2. Calculate inductive and capacitive reactance.
3. Calculate current and/or voltage in simple inductive, capacitive, and resistive circuits.
Many circuits also contain capacitors and inductors, in addition to resistors and an AC voltage source. We have seen how capacitors and inductors respond to DC voltage when it is switched on and off. We will now explore how inductors and capacitors react to sinusoidal AC voltage.
### Inductors and Inductive Reactance
Suppose an inductor is connected directly to an AC voltage source, as shown in . It is reasonable to assume negligible resistance, since in practice we can make the resistance of an inductor so small that it has a negligible effect on the circuit. Also shown is a graph of voltage and current as functions of time.
The graph in (b) starts with voltage at a maximum. Note that the current starts at zero and rises to its peak after the voltage that drives it, just as was the case when DC voltage was switched on in the preceding section. When the voltage becomes negative at point a, the current begins to decrease; it becomes zero at point b, where voltage is its most negative. The current then becomes negative, again following the voltage. The voltage becomes positive at point c and begins to make the current less negative. At point d, the current goes through zero just as the voltage reaches its positive peak to start another cycle. This behavior is summarized as follows:
Current lags behind voltage, since inductors oppose change in current. Changing current induces a back emf $V = -L(\Delta I/\Delta t)$. This is considered to be an effective resistance of the inductor to AC. The rms current $I$ through an inductor $L$ is given by a version of Ohm’s law:
$$I = \frac{V}{X_L},$$
where $V$ is the rms voltage across the inductor and $X_L$ is defined to be
$$X_L = 2\pi f L,$$
with $f$ the frequency of the AC voltage source in hertz (an analysis of the circuit using Kirchhoff’s loop rule and calculus actually produces this expression). $X_L$ is called the inductive reactance, because the inductor reacts to impede the current. $X_L$ has units of ohms ($1\ \text{H} = 1\ \Omega\cdot\text{s}$, so that frequency times inductance has units of $(\text{cycles/s})(\Omega\cdot\text{s}) = \Omega$), consistent with its role as an effective resistance. It makes sense that $X_L$ is proportional to $L$, since the greater the induction the greater its resistance to change. It is also reasonable that $X_L$ is proportional to frequency $f$, since greater frequency means greater change in current. That is, $\Delta I/\Delta t$ is large for large frequencies (large $f$, small $\Delta t$). The greater the change, the greater the opposition of an inductor.
Note that although the resistance in the circuit considered is negligible, the AC current is not extremely large because inductive reactance impedes its flow. With AC, there is no time for the current to become extremely large.
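To see how strongly frequency matters, the sketch below evaluates $X_L = 2\pi f L$ and the resulting rms current for an assumed 3.00 mH inductor across an assumed 120 V rms source at two frequencies; the values are illustrative only.

```python
import math

def inductive_reactance(f, L):
    """X_L = 2 * pi * f * L, in ohms."""
    return 2 * math.pi * f * L

V_rms, L = 120.0, 3.00e-3          # assumed values
for f in (60.0, 10.0e3):           # 60 Hz mains vs 10 kHz
    XL = inductive_reactance(f, L)
    print(f"f = {f:>8.0f} Hz: X_L = {XL:7.2f} ohms, I_rms = {V_rms / XL:7.2f} A")
# The same inductor passes far less current at the higher frequency.
```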
### Capacitors and Capacitive Reactance
Consider the capacitor connected directly to an AC voltage source as shown in . The resistance of a circuit like this can be made so small that it has a negligible effect compared with the capacitor, and so we can assume negligible resistance. Voltage across the capacitor and current are graphed as functions of time in the figure.
The graph in starts with voltage across the capacitor at a maximum. The current is zero at this point, because the capacitor is fully charged and halts the flow. Then voltage drops and the current becomes negative as the capacitor discharges. At point a, the capacitor has fully discharged ($Q = 0$ on it) and the voltage across it is zero. The current remains negative between points a and b, causing the voltage on the capacitor to reverse. This is complete at point b, where the current is zero and the voltage has its most negative value. The current becomes positive after point b, neutralizing the charge on the capacitor and bringing the voltage to zero at point c, which allows the current to reach its maximum. Between points c and d, the current drops to zero as the voltage rises to its peak, and the process starts to repeat. Throughout the cycle, the voltage follows what the current is doing by one-fourth of a cycle:
The capacitor is affecting the current, having the ability to stop it altogether when fully charged. Since an AC voltage is applied, there is an rms current, but it is limited by the capacitor. This is considered to be an effective resistance of the capacitor to AC, and so the rms current $I$ in the circuit containing only a capacitor $C$ is given by another version of Ohm’s law to be
$$I = \frac{V}{X_C},$$
where $V$ is the rms voltage and $X_C$ is defined (as with $X_L$, this expression for $X_C$ results from an analysis of the circuit using Kirchhoff’s rules and calculus) to be
$$X_C = \frac{1}{2\pi f C},$$
where $X_C$ is called the capacitive reactance, because the capacitor reacts to impede the current. $X_C$ has units of ohms (verification left as an exercise for the reader). $X_C$ is inversely proportional to the capacitance $C$; the larger the capacitor, the greater the charge it can store and the greater the current that can flow. It is also inversely proportional to the frequency $f$; the greater the frequency, the less time there is to fully charge the capacitor, and so it impedes current less.
Although a capacitor is basically an open circuit, there is an rms current in a circuit with an AC voltage applied to a capacitor. This is because the voltage is continually reversing, charging and discharging the capacitor. If the frequency goes to zero (DC), tends to infinity, and the current is zero once the capacitor is charged. At very high frequencies, the capacitor’s reactance tends to zero—it has a negligible reactance and does not impede the current (it acts like a simple wire). Capacitors have the opposite effect on AC circuits that inductors have.
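The mirror-image calculation for a capacitor: $X_C = 1/(2\pi f C)$ falls as frequency rises, so the current grows. The 5.00 µF capacitor and 120 V rms source below are assumed values for illustration.

```python
import math

def capacitive_reactance(f, C):
    """X_C = 1 / (2 * pi * f * C), in ohms."""
    return 1.0 / (2 * math.pi * f * C)

V_rms, C = 120.0, 5.00e-6          # assumed values
for f in (60.0, 10.0e3):
    XC = capacitive_reactance(f, C)
    print(f"f = {f:>8.0f} Hz: X_C = {XC:8.2f} ohms, I_rms = {V_rms / XC:7.3f} A")
# The capacitor impedes low-frequency current far more than high-frequency current.
```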
### Resistors in an AC Circuit
Just as a reminder, consider , which shows an AC voltage applied to a resistor and a graph of voltage and current versus time. The voltage and current are exactly in phase in a resistor. There is no frequency dependence to the behavior of plain resistance in a circuit:
### Section Summary
1. For inductors in AC circuits, we find that when a sinusoidal voltage is applied to an inductor, the voltage leads the current by one-fourth of a cycle, or by a $90^\circ$ phase angle.
2. The opposition of an inductor to a change in current is expressed as a type of AC resistance.
3. Ohm’s law for an inductor is
$$I = \frac{V}{X_L},$$
where $V$ is the rms voltage across the inductor.
4. $X_L$ is defined to be the inductive reactance, given by
$$X_L = 2\pi f L,$$
with $f$ the frequency of the AC voltage source in hertz.
5. Inductive reactance $X_L$ has units of ohms and is greatest at high frequencies.
6. For capacitors, we find that when a sinusoidal voltage is applied to a capacitor, the voltage follows the current by one-fourth of a cycle, or by a $90^\circ$ phase angle.
7. Since a capacitor can stop current when fully charged, it limits current and offers another form of AC resistance; Ohm’s law for a capacitor is
$$I = \frac{V}{X_C},$$
where $V$ is the rms voltage across the capacitor.
8. $X_C$ is defined to be the capacitive reactance, given by
$$X_C = \frac{1}{2\pi f C}.$$
9. $X_C$ has units of ohms and is greatest at low frequencies.
### Conceptual Questions
### Problems & Exercises
|
# Electromagnetic Induction, AC Circuits, and Electrical Technologies
## RLC Series AC Circuits
### Learning Objectives
By the end of this section, you will be able to:
1. Calculate the impedance, phase angle, resonant frequency, power, power factor, voltage, and/or current in a RLC series circuit.
2. Draw the circuit diagram for an RLC series circuit.
3. Explain the significance of the resonant frequency.
### Impedance
When alone in an AC circuit, inductors, capacitors, and resistors all impede current. How do they behave when all three occur together? Interestingly, their individual resistances in ohms do not simply add. Because inductors and capacitors behave in opposite ways, they partially to totally cancel each other’s effect. shows an RLC series circuit with an AC voltage source, the behavior of which is the subject of this section. The crux of the analysis of an RLC circuit is the frequency dependence of and , and the effect they have on the phase of voltage versus current (established in the preceding section). These give rise to the frequency dependence of the circuit, with important “resonance” features that are the basis of many applications, such as radio tuners.
The combined effect of resistance $R$, inductive reactance $X_L$, and capacitive reactance $X_C$ is defined to be impedance, an AC analogue to resistance in a DC circuit. Current, voltage, and impedance in an RLC circuit are related by an AC version of Ohm’s law:
$$I_0 = \frac{V_0}{Z} \quad \text{or} \quad I_{\text{rms}} = \frac{V_{\text{rms}}}{Z}.$$
Here $I_0$ is the peak current, $V_0$ the peak source voltage, and $Z$ is the impedance of the circuit. The units of impedance are ohms, and its effect on the circuit is as you might expect: the greater the impedance, the smaller the current. To get an expression for $Z$ in terms of $R$, $X_L$, and $X_C$, we will now examine how the voltages across the various components are related to the source voltage. Those voltages are labeled $V_R$, $V_L$, and $V_C$ in .
Conservation of charge requires current to be the same in each part of the circuit at all times, so that we can say the currents in , , and are equal and in phase. But we know from the preceding section that the voltage across the inductor leads the current by one-fourth of a cycle, the voltage across the capacitor follows the current by one-fourth of a cycle, and the voltage across the resistor is exactly in phase with the current. shows these relationships in one graph, as well as showing the total voltage around the circuit , where all four voltages are the instantaneous values. According to Kirchhoff’s loop rule, the total voltage around the circuit
is also the voltage of the source.
You can see from that while $V_R$ is in phase with the current, $V_L$ leads by $90^\circ$, and $V_C$ follows by $90^\circ$. Thus $V_L$ and $V_C$ are $180^\circ$ out of phase (crest to trough) and tend to cancel, although not completely unless they have the same magnitude. Since the peak voltages are not aligned (not in phase), the peak voltage $V_0$ of the source does not equal the sum of the peak voltages across $R$, $L$, and $C$. The actual relationship is
$$V_0 = \sqrt{V_{0R}^2 + (V_{0L} - V_{0C})^2},$$
where $V_{0R}$, $V_{0L}$, and $V_{0C}$ are the peak voltages across $R$, $L$, and $C$, respectively. Now, using Ohm’s law and definitions from Reactance, Inductive and Capacitive, we substitute $V_0 = I_0 Z$ into the above, as well as $V_{0R} = I_0 R$, $V_{0L} = I_0 X_L$, and $V_{0C} = I_0 X_C$, yielding
$$I_0 Z = \sqrt{I_0^2 R^2 + (I_0 X_L - I_0 X_C)^2}.$$
$I_0$ cancels to yield an expression for $Z$:
$$Z = \sqrt{R^2 + (X_L - X_C)^2},$$
which is the impedance of an RLC series AC circuit. For circuits without a resistor, take $R = 0$; for those without an inductor, take $X_L = 0$; and for those without a capacitor, take $X_C = 0$.
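Putting the pieces together, the sketch below computes the impedance and peak current of an assumed RLC series circuit (40.0 Ω, 3.00 mH, 5.00 µF, 120 V peak, all illustrative values) at three driving frequencies, showing how $Z$ passes through a minimum near resonance.

```python
import math

def impedance(R, L, C, f):
    """Impedance of a series RLC circuit: Z = sqrt(R^2 + (X_L - X_C)^2)."""
    XL = 2 * math.pi * f * L
    XC = 1.0 / (2 * math.pi * f * C)
    return math.sqrt(R**2 + (XL - XC)**2)

R, L, C = 40.0, 3.00e-3, 5.00e-6   # assumed component values
V0 = 120.0                          # assumed peak source voltage
for f in (60.0, 1.30e3, 10.0e3):
    Z = impedance(R, L, C, f)
    print(f"f = {f:>7.0f} Hz: Z = {Z:8.2f} ohms, I0 = {V0 / Z:6.3f} A")
# Z is smallest (about R) near 1.3 kHz, where X_L and X_C cancel.
```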
### Resonance in RLC Series AC Circuits
How does an RLC circuit behave as a function of the frequency of the driving voltage source? Combining Ohm’s law, $I_{\text{rms}} = V_{\text{rms}}/Z$, and the expression for impedance $Z$ from gives
$$I_{\text{rms}} = \frac{V_{\text{rms}}}{\sqrt{R^2 + (X_L - X_C)^2}}.$$
The reactances vary with frequency, with $X_L$ large at high frequencies and $X_C$ large at low frequencies, as we have seen in three previous examples. At some intermediate frequency $f_0$, the reactances will be equal and cancel, giving $Z = R$—this is a minimum value for impedance, and a maximum value for $I_{\text{rms}}$ results. We can get an expression for $f_0$ by taking
$$X_L = X_C.$$
Substituting the definitions of $X_L$ and $X_C$,
$$2\pi f_0 L = \frac{1}{2\pi f_0 C}.$$
Solving this expression for $f_0$ yields
$$f_0 = \frac{1}{2\pi\sqrt{LC}},$$
where $f_0$ is the resonant frequency of an RLC series circuit. This is also the natural frequency at which the circuit would oscillate if not driven by the voltage source. At $f_0$, the effects of the inductor and capacitor cancel, so that $Z = R$, and $I_{\text{rms}}$ is a maximum.
Resonance in AC circuits is analogous to mechanical resonance, where resonance is defined to be a forced oscillation—in this case, forced by the voltage source—at the natural frequency of the system. The receiver in a radio is an RLC circuit that oscillates best at its . A variable capacitor is often used to adjust to receive a desired frequency and to reject others. is a graph of current as a function of frequency, illustrating a resonant peak in at . The two curves are for two different circuits, which differ only in the amount of resistance in them. The peak is lower and broader for the higher-resistance circuit. Thus the higher-resistance circuit does not resonate as strongly and would not be as selective in a radio receiver, for example.
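The resonant frequency itself follows directly from $L$ and $C$. A minimal sketch, using the same assumed 3.00 mH and 5.00 µF values as in the impedance example above:

```python
import math

def resonant_frequency(L, C):
    """f0 = 1 / (2 * pi * sqrt(L * C)) for a series RLC (or pure LC) circuit."""
    return 1.0 / (2 * math.pi * math.sqrt(L * C))

print(f"{resonant_frequency(L=3.00e-3, C=5.00e-6):.0f} Hz")   # about 1.3 kHz
```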
### Power in RLC Series AC Circuits
If current varies with frequency in an RLC circuit, then the power delivered to it also varies with frequency. But the average power is not simply current times voltage, as it is in purely resistive circuits. As was seen in , voltage and current are out of phase in an RLC circuit. There is a phase angle $\phi$ between the source voltage $V$ and the current $I$, which can be found from
$$\cos\phi = \frac{R}{Z}.$$
For example, at the resonant frequency or in a purely resistive circuit $Z = R$, so that $\cos\phi = 1$. This implies that $\phi = 0^\circ$ and that voltage and current are in phase, as expected for resistors. At other frequencies, average power is less than at resonance. This is both because voltage and current are out of phase and because $I_{\text{rms}}$ is lower. The fact that source voltage and current are out of phase affects the power delivered to the circuit. It can be shown that the average power is
$$P_{\text{ave}} = I_{\text{rms}} V_{\text{rms}} \cos\phi.$$
Thus $\cos\phi$ is called the power factor, which can range from 0 to 1. Power factors near 1 are desirable when designing an efficient motor, for example. At the resonant frequency, $\cos\phi = 1$.
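Finally, the average power follows from the phase angle. The sketch below computes $\cos\phi = R/Z$ and $P_{\text{ave}} = I_{\text{rms}} V_{\text{rms}} \cos\phi$ for the same assumed components at an off-resonance frequency; the 85 V rms source is another illustrative value.

```python
import math

def average_power(V_rms, R, L, C, f):
    """P_ave = I_rms * V_rms * cos(phi), with cos(phi) = R/Z and I_rms = V_rms/Z."""
    XL = 2 * math.pi * f * L
    XC = 1.0 / (2 * math.pi * f * C)
    Z = math.sqrt(R**2 + (XL - XC)**2)
    power_factor = R / Z
    I_rms = V_rms / Z
    return I_rms * V_rms * power_factor, power_factor

P, pf = average_power(V_rms=85.0, R=40.0, L=3.00e-3, C=5.00e-6, f=60.0)
print(f"power factor = {pf:.4f}, P_ave = {P:.1f} W")   # small at 60 Hz, far below resonance
```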
Power delivered to an RLC series AC circuit is dissipated by the resistance alone. The inductor and capacitor have energy input and output but do not dissipate it out of the circuit. Rather they transfer energy back and forth to one another, with the resistor dissipating exactly what the voltage source puts into the circuit. This assumes no significant electromagnetic radiation from the inductor and capacitor, such as radio waves. Such radiation can happen and may even be desired, as we will see in the next chapter on electromagnetic radiation, but it can also be suppressed as is the case in this chapter. The circuit is analogous to the wheel of a car driven over a corrugated road as shown in . The regularly spaced bumps in the road are analogous to the voltage source, driving the wheel up and down. The shock absorber is analogous to the resistance damping and limiting the amplitude of the oscillation. Energy within the system goes back and forth between kinetic (analogous to maximum current, and energy stored in an inductor) and potential energy stored in the car spring (analogous to no current, and energy stored in the electric field of a capacitor). The amplitude of the wheels’ motion is a maximum if the bumps in the road are hit at the resonant frequency.
A pure LC circuit with negligible resistance oscillates at , the same resonant frequency as an RLC circuit. It can serve as a frequency standard or clock circuit—for example, in a digital wristwatch. With a very small resistance, only a very small energy input is necessary to maintain the oscillations. The circuit is analogous to a car with no shock absorbers. Once it starts oscillating, it continues at its natural frequency for some time. shows the analogy between an LC circuit and a mass on a spring.
### Section Summary
1. The AC analogy to resistance is impedance
, the combined effect of resistors, inductors, and capacitors, defined by the AC version of Ohm’s law:
where is the peak current and is the peak source voltage.
2. Impedance has units of ohms and is given by .
3. The resonant frequency , at which , is
4. In an AC circuit, there is a phase angle between source voltage and the current , which can be found from
5. for a purely resistive circuit or an RLC circuit at resonance.
6. The average power delivered to an RLC circuit is affected by the phase angle and is given by
is called the power factor, which ranges from 0 to 1.
### Conceptual Questions
### Problems & Exercises
|
# Electromagnetic Waves
## Connection for AP® Courses
Electromagnetic waves are all around us. The beauty of a coral reef, the warmth of sunshine, sunburn, an X-ray image revealing a broken bone, even microwave popcorn—all involve electromagnetic waves. The list of the various types of electromagnetic waves, ranging from radio transmission waves to nuclear gamma-rays (γ-rays), is interesting in itself. Even more intriguing is that all of these widely varied phenomena are different manifestations of the same thing—electromagnetic waves. (See .)
What are electromagnetic waves? How are they created, and how do they travel? How can we understand and conceptualize their widely varying properties? What is their relationship to electric and magnetic effects? These and other questions will be explored in this chapter.
Electromagnetic waves support Big Idea 6 that waves can transport energy and momentum. In general, electromagnetic waves behave like any other wave, as they are traveling disturbances (Enduring Understanding 6.A). They consist of oscillating electric and magnetic fields, which can be conceived of as transverse waves (Essential Knowledge 6.A.1). They are periodic and can be described by their amplitude, frequency, wavelength, speed, and energy (Enduring Understanding 6.B).
Simple waves can be modeled mathematically using sine or cosine functions involving the wavelength, amplitude, and frequency of the wave. (Essential Knowledge 6.B.3). However, electromagnetic waves also have some unique properties compared to other waves. They can travel through both matter and a vacuum (Essential Knowledge 6.F.2), unlike mechanical waves, including sound, that require a medium (Essential Knowledge 6.A.2).
Maxwell’s equations define the relationship between electric permittivity, the magnetic permeability of free space (vacuum), and the speed of light, which is the speed of propagation of all electromagnetic waves in a vacuum. This chapter uses the properties electric permittivity (Essential Knowledge 1.E.4) and magnetic permeability (Essential Knowledge 1.E.5) to support Big Idea 1 that objects and systems have certain properties and may have internal structure.
The particular properties mentioned are the macroscopic results of the atomic and molecular structure of materials (Enduring Understanding 1.E). Electromagnetic radiation can be modeled as a wave or as fundamental particles (Enduring Understanding 6.F). This chapter also introduces different types of electromagnetic radiation that are characterized by their wavelengths (Essential Knowledge 6.F.1) and have been given specific names (see ).
Big Idea 1 Objects and systems have properties such as mass and charge. Systems may have internal structure.
Enduring Understanding 1.E Materials have many macroscopic properties that result from the arrangement and interactions of the atoms and molecules that make up the material.
Essential Knowledge 1.E.4 Matter has a property called electric permittivity.
Essential Knowledge 1.E.5 Matter has a property called magnetic permeability.
Big Idea 6. Waves can transfer energy and momentum from one location to another without the permanent transfer of mass and serve as a mathematical model for the description of other phenomena.
Enduring Understanding 6.A A wave is a traveling disturbance that transfers energy and momentum.
Essential Knowledge 6.A.1 Waves can propagate via different oscillation modes such as transverse and longitudinal.
Essential Knowledge 6.A.2 For propagation, mechanical waves require a medium, while electromagnetic waves do not require a physical medium. Examples include light traveling through a vacuum and sound not traveling through a vacuum.
Enduring Understanding 6.B A periodic wave is one that repeats as a function of both time and position and can be described by its amplitude, frequency, wavelength, speed, and energy.
Essential Knowledge 6.B.3 A simple wave can be described by an equation involving one sine or cosine function involving the wavelength, amplitude, and frequency of the wave.
Enduring Understanding 6.F Electromagnetic radiation can be modeled as waves or as fundamental particles.
Essential Knowledge 6.F.1 Types of electromagnetic radiation are characterized by their wavelengths, and certain ranges of wavelength have been given specific names. These include (in order of increasing wavelength spanning a range from picometers to kilometers) gamma rays, x-rays, ultraviolet, visible light, infrared, microwaves, and radio waves.
Essential Knowledge 6.F.2 Electromagnetic waves can transmit energy through a medium and through a vacuum.
### Discovering a New Phenomenon
It is worth noting at the outset that the general phenomenon of electromagnetic waves was predicted by theory before it was realized that light is a form of electromagnetic wave. The prediction was made by James Clerk Maxwell in the mid-19th century when he formulated a single theory combining all the electric and magnetic effects known by scientists at that time. “Electromagnetic waves” was the name he gave to the phenomena his theory predicted.
Such a theoretical prediction followed by experimental verification is an indication of the power of science in general, and physics in particular. The underlying connections and unity of physics allow certain great minds to solve puzzles without having all the pieces. The prediction of electromagnetic waves is one of the most spectacular examples of this power. Certain others, such as the prediction of antimatter, will be discussed in later modules. |
# Electromagnetic Waves
## Maxwell’s Equations: Electromagnetic Waves Predicted and Observed
### Learning Objectives
By the end of this section, you will be able to:
1. Restate Maxwell’s equations.
The Scotsman James Clerk Maxwell (1831–1879) is regarded as the greatest theoretical physicist of the 19th century. (See .) Although he died young, Maxwell not only formulated a complete electromagnetic theory, represented by Maxwell’s equations, he also developed the kinetic theory of gases and made significant contributions to the understanding of color vision and the nature of Saturn’s rings.
Maxwell brought together all the work that had been done by brilliant physicists such as Oersted, Coulomb, Gauss, and Faraday, and added his own insights to develop the overarching theory of electromagnetism. Maxwell’s equations are paraphrased here in words because their mathematical statement is beyond the level of this text. However, the equations illustrate how apparently simple mathematical statements can elegantly unite and express a multitude of concepts—why mathematics is the language of science.
Maxwell’s equations encompass the major laws of electricity and magnetism. What is not so apparent is the symmetry that Maxwell introduced in his mathematical framework. Especially important is his addition of the hypothesis that changing electric fields create magnetic fields. This is exactly analogous (and symmetric) to Faraday’s law of induction and had been suspected for some time, but fits beautifully into Maxwell’s equations.
Symmetry is apparent in nature in a wide range of situations. In contemporary research, symmetry plays a major part in the search for sub-atomic particles using massive multinational particle accelerators such as the new Large Hadron Collider at CERN.
Since changing electric fields create relatively weak magnetic fields, they could not be easily detected at the time of Maxwell’s hypothesis. Maxwell realized, however, that oscillating charges, like those in AC circuits, produce changing electric fields. He predicted that these changing fields would propagate from the source like waves generated on a lake by a jumping fish.
The waves predicted by Maxwell would consist of oscillating electric and magnetic fields—defined to be an electromagnetic wave (EM wave). Electromagnetic waves would be capable of exerting forces on charges great distances from their source, and they might thus be detectable. Maxwell calculated that electromagnetic waves would propagate at a speed given by the equation
$$c = \frac{1}{\sqrt{\mu_0 \epsilon_0}}.$$
When the values for $\mu_0$ and $\epsilon_0$ are entered into the equation for $c$, we find that
$$c = \frac{1}{\sqrt{(8.85 \times 10^{-12}\ \text{F/m})(4\pi \times 10^{-7}\ \text{T}\cdot\text{m/A})}} = 3.00 \times 10^{8}\ \text{m/s},$$
which is the speed of light. In fact, Maxwell concluded that light is an electromagnetic wave having such wavelengths that it can be detected by the eye.
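Maxwell's numerical conclusion is easy to reproduce: plugging the measured permittivity and permeability of free space into the formula gives the speed of light. A minimal sketch:

```python
import math

EPSILON_0 = 8.854e-12        # F/m, permittivity of free space
MU_0 = 4 * math.pi * 1e-7    # T*m/A, permeability of free space

c = 1.0 / math.sqrt(MU_0 * EPSILON_0)
print(f"c = {c:.3e} m/s")    # about 2.998e8 m/s, the measured speed of light
```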
Other wavelengths should exist—it remained to be seen if they did. If so, Maxwell’s theory and remarkable predictions would be verified, the greatest triumph of physics since Newton. Experimental verification came within a few years, but not before Maxwell’s death.
### Hertz’s Observations
The German physicist Heinrich Hertz (1857–1894) was the first to generate and detect certain types of electromagnetic waves in the laboratory. Starting in 1887, he performed a series of experiments that not only confirmed the existence of electromagnetic waves, but also verified that they travel at the speed of light.
Hertz used an AC RLC (resistor-inductor-capacitor) circuit that resonates at a known frequency and connected it to a loop of wire as shown in . High voltages induced across the gap in the loop produced sparks that were visible evidence of the current in the circuit and that helped generate electromagnetic waves.
Across the laboratory, Hertz had another loop attached to another circuit, which could be tuned (as the dial on a radio) to the same resonant frequency as the first and could, thus, be made to receive electromagnetic waves. This loop also had a gap across which sparks were generated, giving solid evidence that electromagnetic waves had been received.
Hertz also studied the reflection, refraction, and interference patterns of the electromagnetic waves he generated, verifying their wave character. He was able to determine wavelength from the interference patterns, and knowing their frequency, he could calculate the propagation speed using the equation $v = f\lambda$ (velocity—or speed—equals frequency times wavelength). Hertz was thus able to prove that electromagnetic waves travel at the speed of light. The SI unit for frequency, the hertz ($1\ \text{Hz} = 1\ \text{cycle/s}$), is named in his honor.
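Hertz's propagation-speed check amounts to multiplying a measured frequency by a measured wavelength. A minimal sketch with illustrative (not historical) numbers:

```python
# v = f * wavelength: e.g. a 100 MHz wave with a measured 3.0 m wavelength
f = 100e6          # Hz, assumed resonant frequency of the circuit
wavelength = 3.0   # m, assumed value read off an interference pattern
print(f"propagation speed = {f * wavelength:.1e} m/s")   # 3.0e8 m/s, the speed of light
```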
### Section Summary
1. Electromagnetic waves consist of oscillating electric and magnetic fields and propagate at the speed of light $c$. They were predicted by Maxwell, who also showed that
$$c = \frac{1}{\sqrt{\mu_0 \epsilon_0}},$$
where $\mu_0$ is the permeability of free space and $\epsilon_0$ is the permittivity of free space.
2. Maxwell’s prediction of electromagnetic waves resulted from his formulation of a complete and symmetric theory of electricity and magnetism, known as Maxwell’s equations.
3. These four equations are paraphrased in this text, rather than presented numerically, and encompass the major laws of electricity and magnetism. First is Gauss’s law for electricity, second is Gauss’s law for magnetism, third is Faraday’s law of induction, including Lenz’s law, and fourth is Ampere’s law in a symmetric formulation that adds another source of magnetism—changing electric fields.
### Problems & Exercises
|
# Electromagnetic Waves
## Production of Electromagnetic Waves
### Learning Objectives
By the end of this section, you will be able to:
1. Describe the electric and magnetic waves as they move out from a source, such as an AC generator.
2. Explain the mathematical relationship between the magnetic field strength and the electrical field strength.
3. Calculate the maximum strength of the magnetic field in an electromagnetic wave, given the maximum electric field strength.
We can get a good understanding of electromagnetic waves (EM) by considering how they are produced. Whenever a current varies, associated electric and magnetic fields vary, moving out from the source like waves. Perhaps the easiest situation to visualize is a varying current in a long straight wire, produced by an AC generator at its center, as illustrated in .
The electric field (E) shown surrounding the wire is produced by the charge distribution on the wire. Both the E-field and the charge distribution vary as the current changes. The changing field propagates outward at the speed of light.
There is an associated magnetic field (B) which propagates outward as well (see ). The electric and magnetic fields are closely related and propagate as an electromagnetic wave. This is what happens in broadcast antennae such as those in radio and TV stations.
Closer examination of the one complete cycle shown in reveals the periodic nature of the generator-driven charges oscillating up and down in the antenna and the electric field produced. At time t = 0, there is the maximum separation of charge, with negative charges at the top and positive charges at the bottom, producing the maximum magnitude of the electric field (or E-field) in the upward direction. One-fourth of a cycle later, there is no charge separation and the field next to the antenna is zero, while the maximum E-field has moved away at speed c.
As the process continues, the charge separation reverses and the field reaches its maximum downward value, returns to zero, and rises to its maximum upward value at the end of one complete cycle. The outgoing wave has an amplitude proportional to the maximum separation of charge. Its wavelength is proportional to the period of the oscillation and, hence, is smaller for short periods or high frequencies. (As usual, wavelength and frequency are inversely proportional.)
### Electric and Magnetic Waves: Moving Together
Following Ampere’s law, current in the antenna produces a magnetic field, as shown in . The relationship between E and B is shown at one instant in (a). As the current varies, the magnetic field varies in magnitude and direction.
The magnetic field lines also propagate away from the antenna at the speed of light, forming the other part of the electromagnetic wave, as seen in (b). The magnetic part of the wave has the same period and wavelength as the electric part, since they are both produced by the same movement and separation of charges in the antenna.
The electric and magnetic waves are shown together at one instant in time in . The electric and magnetic fields produced by a long straight wire antenna are exactly in phase. Note that they are perpendicular to one another and to the direction of propagation, making this a transverse wave.
Electromagnetic waves generally propagate out from a source in all directions, sometimes forming a complex radiation pattern. A linear antenna like this one will not radiate parallel to its length, for example. The wave is shown in one direction from the antenna in to illustrate its basic characteristics.
Instead of the AC generator, the antenna can also be driven by an AC circuit. In fact, charges radiate whenever they are accelerated. But while a current in a circuit needs a complete path, an antenna has a varying charge distribution forming a standing wave, driven by the AC. The dimensions of the antenna are critical for determining the frequency of the radiated electromagnetic waves. This is a resonant phenomenon and when we tune radios or TV, we vary electrical properties to achieve appropriate resonant conditions in the antenna.
### Receiving Electromagnetic Waves
Electromagnetic waves carry energy away from their source, similar to a sound wave carrying energy away from a standing wave on a guitar string. An antenna for receiving EM signals works in reverse. And like antennas that produce EM waves, receiver antennas are specially designed to resonate at particular frequencies.
An incoming electromagnetic wave accelerates electrons in the antenna, setting up a standing wave. If the radio or TV is switched on, electrical components pick up and amplify the signal formed by the accelerating electrons. The signal is then converted to audio and/or video format. Sometimes big receiver dishes are used to focus the signal onto an antenna.
In fact, charges radiate whenever they are accelerated. When designing circuits, we often assume that energy does not quickly escape AC circuits, and mostly this is true. A broadcast antenna is specially designed to enhance the rate of electromagnetic radiation, and shielding is necessary to keep the radiation close to zero. Some familiar phenomena are based on the production of electromagnetic waves by varying currents. Your microwave oven, for example, sends electromagnetic waves, called microwaves, from a concealed antenna that has an oscillating current imposed on it.
### Relating E-Field and B-Field Strengths
There is a relationship between the E-field and B-field strengths in an electromagnetic wave. This can be understood by again considering the antenna just described. The stronger the E-field created by a separation of charge, the greater the current and, hence, the greater the B-field created.
Since current is directly proportional to voltage (Ohm’s law) and voltage is directly proportional to E-field strength, the two should be directly proportional. It can be shown that the magnitudes of the fields do have a constant ratio, equal to the speed of light. That is,
$$\frac{E}{B} = c$$
is the ratio of E-field strength to B-field strength in any electromagnetic wave. This is true at all times and at all locations in space. A simple and elegant result.
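To get a feel for how weak the magnetic part of a wave is in everyday units, one can simply divide an electric field strength by c. A minimal sketch, assuming an illustrative maximum E-field of 1000 V/m (a value chosen only for demonstration, not taken from this text):

```python
c = 3.00e8        # speed of light, m/s

E_max = 1000.0    # assumed maximum electric field strength, V/m (illustrative)
B_max = E_max / c # from E/B = c
print(f"B_max = {B_max:.2e} T")  # about 3.3e-6 T, much weaker than Earth's ~5e-5 T field
```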
The result of this example is consistent with the statement made in the module Maxwell’s Equations: Electromagnetic Waves Predicted and Observed that changing electric fields create relatively weak magnetic fields. They can be detected in electromagnetic waves, however, by taking advantage of the phenomenon of resonance, as Hertz did. A system with the same natural frequency as the electromagnetic wave can be made to oscillate. All radio and TV receivers use this principle to pick up and then amplify weak electromagnetic waves, while rejecting all others not at their resonant frequency.
### Test Prep for AP Courses
### Section Summary
1. Electromagnetic waves are created by oscillating charges (which radiate whenever accelerated) and have the same frequency as the oscillation.
2. Since the electric and magnetic fields in most electromagnetic waves are perpendicular to the direction in which the wave moves, it is ordinarily a transverse wave.
3. The strengths of the electric and magnetic parts of the wave are related by
$$\frac{E}{B} = c,$$
which implies that the magnetic field B is very weak relative to the electric field E.
### Conceptual Questions
### Problems & Exercises
# Electromagnetic Waves
## The Electromagnetic Spectrum
### Learning Objectives
By the end of this section, you will be able to:
1. List three “rules of thumb” that apply to the different frequencies along the electromagnetic spectrum.
2. Explain why the higher the frequency, the shorter the wavelength of an electromagnetic wave.
3. Draw a simplified electromagnetic spectrum, indicating the relative positions, frequencies, and spacing of the different types of radiation bands.
4. List and explain the different methods by which electromagnetic waves are produced across the spectrum.
In this module we examine how electromagnetic waves are classified into categories such as radio, infrared, ultraviolet, and so on, so that we can understand some of their similarities as well as some of their differences. We will also find that there are many connections with previously discussed topics, such as wavelength and resonance. A brief overview of the production and utilization of electromagnetic waves is found in .
As noted before, an electromagnetic wave has a frequency and a wavelength associated with it and travels at the speed of light, or c. The relationship among these wave characteristics can be described by $v = f\lambda$, where $v$ is the propagation speed of the wave, $f$ is the frequency, and $\lambda$ is the wavelength. Here $v = c$, so that for all electromagnetic waves,
$$c = f\lambda.$$
Thus, for all electromagnetic waves, the greater the frequency, the smaller the wavelength.
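Because c is fixed, a single line of arithmetic converts any frequency into its wavelength. The short sketch below tabulates a few illustrative frequencies (the specific values are examples chosen for this sketch, not data from the text):

```python
c = 3.00e8  # speed of light, m/s

# Illustrative frequencies (Hz) spanning part of the spectrum
examples = {
    "60 Hz power line":          60.0,
    "AM radio (1 MHz)":          1.0e6,
    "FM radio (100 MHz)":        1.0e8,
    "Microwave oven (2.45 GHz)": 2.45e9,
    "Green light (~550 THz)":    5.5e14,
}

for name, f in examples.items():
    wavelength = c / f          # lambda = c / f
    print(f"{name:27s} lambda = {wavelength:.3g} m")
```

The 60-Hz entry, for example, reproduces the thousands-of-kilometers wavelengths mentioned below for power-transmission lines.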
shows how the various types of electromagnetic waves are categorized according to their wavelengths and frequencies—that is, it shows the electromagnetic spectrum. Many of the characteristics of the various types of electromagnetic waves are related to their frequencies and wavelengths, as we shall see.
### Transmission, Reflection, and Absorption
What happens when an electromagnetic wave impinges on a material? If the material is transparent to the particular frequency, then the wave can largely be transmitted. If the material is opaque to the frequency, then the wave can be totally reflected. The wave can also be absorbed by the material, indicating that there is some interaction between the wave and the material, such as the thermal agitation of molecules.
Of course it is possible to have partial transmission, reflection, and absorption. We normally associate these properties with visible light, but they do apply to all electromagnetic waves. What is not obvious is that something that is transparent to light may be opaque at other frequencies. For example, ordinary glass is transparent to visible light but largely opaque to ultraviolet radiation. Human skin is opaque to visible light—we cannot see through people—but transparent to X-rays.
### Radio and TV Waves
The broad category of radio waves is defined to contain any electromagnetic wave produced by currents in wires and circuits. Its name derives from their most common use as a carrier of audio information (i.e., radio). The name is applied to electromagnetic waves of similar frequencies regardless of source. Radio waves from outer space, for example, do not come from alien radio stations. They are created by many astronomical phenomena, and their study has revealed much about nature on the largest scales.
There are many uses for radio waves, and so the category is divided into many subcategories, including microwaves and those electromagnetic waves used for AM and FM radio, cellular telephones, and TV.
The lowest commonly encountered radio frequencies are produced by high-voltage AC power transmission lines at frequencies of 50 or 60 Hz. (See .) These extremely long wavelength electromagnetic waves (about 6000 km!) are one means of energy loss in long-distance power transmission.
There was a concern regarding potential health hazards associated with exposure to these electromagnetic fields (E-fields). Some people suspect that living near such transmission lines may cause a variety of illnesses, including cancer. But these power lines produce non-ionizing radiation, which government environmental organizations, medical researchers, and cancer organizations indicate are not risk factors for illness. Recent reports that have looked at many European and American epidemiological studies have found no increase in risk for cancer due to exposure to E-fields.
Extremely low frequency (ELF) radio waves of about 1 kHz are used to communicate with submerged submarines. The ability of radio waves to penetrate salt water is related to their wavelength (much like ultrasound penetrating tissue)—the longer the wavelength, the farther they penetrate. Since salt water is a good conductor, radio waves are strongly absorbed by it, and very long wavelengths are needed to reach a submarine under the surface. (See .)
AM radio waves are used to carry commercial radio signals in the frequency range from 540 to 1600 kHz. The abbreviation AM stands for amplitude modulation, which is the method for placing information on these waves. (See .) A carrier wave having the basic frequency of the radio station, say 1530 kHz, is varied or modulated in amplitude by an audio signal. The resulting wave has a constant frequency, but a varying amplitude.
A radio receiver tuned to have the same resonant frequency as the carrier wave can pick up the signal, while rejecting the many other frequencies impinging on its antenna. The receiver’s circuitry is designed to respond to variations in amplitude of the carrier wave to replicate the original audio signal. That audio signal is amplified to drive a speaker or perhaps to be recorded.
### FM Radio Waves
FM radio waves are also used for commercial radio transmission, but in the frequency range of 88 to 108 MHz. FM stands for frequency modulation, another method of carrying information. (See .) Here a carrier wave having the basic frequency of the radio station, perhaps 105.1 MHz, is modulated in frequency by the audio signal, producing a wave of constant amplitude but varying frequency.
Since audible frequencies range up to 20 kHz (or 0.020 MHz) at most, the frequency of the FM radio wave can vary from the carrier by as much as 0.020 MHz. Thus the carrier frequencies of two different radio stations cannot be closer than 0.020 MHz. An FM receiver is tuned to resonate at the carrier frequency and has circuitry that responds to variations in frequency, reproducing the audio information.
FM radio is inherently less subject to noise from stray radio sources than AM radio. The reason is that amplitudes of waves add. So an AM receiver would interpret noise added onto the amplitude of its carrier wave as part of the information. An FM receiver can be made to reject amplitudes other than that of the basic carrier wave and only look for variations in frequency. It is thus easier to reject noise from FM, since noise produces a variation in amplitude.
Television is also broadcast on electromagnetic waves. Since the waves must carry a great deal of visual as well as audio information, each channel requires a larger range of frequencies than simple radio transmission. TV channels utilize frequencies in the range of 54 to 88 MHz and 174 to 222 MHz. (The entire FM radio band lies in the gap between these two ranges, from 88 MHz to 174 MHz.) These TV channels are called VHF (for very high frequency). Other channels called UHF (for ultra high frequency) utilize an even higher frequency range of 470 to 1000 MHz.
The TV video signal is AM, while the TV audio is FM. Note that these frequencies are those of free transmission with the user utilizing an old-fashioned roof antenna. Satellite dishes and cable transmission of TV occur at significantly higher frequencies and are rapidly evolving with the use of the high-definition or HD format.
The wavelengths found in the preceding example are representative of AM, FM, and cell phones, and account for some of the differences in how they are broadcast and how well they travel. The most efficient length for a linear antenna, such as discussed in Production of Electromagnetic Waves, is $\lambda/2$, half the wavelength of the electromagnetic wave. Thus a very large antenna is needed to efficiently broadcast typical AM radio with its carrier wavelengths on the order of hundreds of meters.
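As a rough numerical sketch of this half-wavelength rule, the snippet below estimates antenna lengths for the carrier frequencies mentioned in this section (an AM station at 1530 kHz, an FM station at 105.1 MHz, and a 1.9-GHz mobile-phone band):

```python
c = 3.00e8  # speed of light, m/s

carriers = {
    "AM (1530 kHz)":        1.53e6,
    "FM (105.1 MHz)":       105.1e6,
    "Cell phone (1.9 GHz)": 1.9e9,
}

for name, f in carriers.items():
    half_wave = (c / f) / 2.0   # most efficient linear antenna length is lambda/2
    print(f"{name:22s} lambda/2 = {half_wave:.3g} m")
```

The result of roughly 100 m for AM explains why AM broadcast towers are so tall, while the FM result is about a meter and a half and the cell-phone result only a few centimeters.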
One benefit to these long AM wavelengths is that they can go over and around rather large obstacles (like buildings and hills), just as ocean waves can go around large rocks. FM and TV are best received when there is a line of sight between the broadcast antenna and receiver, and they are often sent from very tall structures. FM, TV, and mobile phone antennas themselves are much smaller than those used for AM, but they are elevated to achieve an unobstructed line of sight. (See .)
### Radio Wave Interference
Astronomers and astrophysicists collect signals from outer space using electromagnetic waves. A common problem for astrophysicists is the “pollution” from electromagnetic radiation pervading our surroundings from communication systems in general. Even everyday gadgets such as car keyless-entry devices, remote starters, and TV remote controls operate at radio-wave frequencies. In order to prevent interference between all these electromagnetic signals, strict regulations are drawn up for different organizations to utilize different radio frequency bands.
One reason why we are sometimes asked to switch off our mobile phones (operating in the range of 1.9 GHz) or put them into a noncommunicative mode on airplanes and in hospitals is that important communications or medical equipment often uses similar radio frequencies and their operation can be affected by frequencies used in the communication devices.
For example, radio waves used in magnetic resonance imaging (MRI) have frequencies on the order of 100 MHz, although this varies significantly depending on the strength of the magnetic field used and the nuclear type being scanned. MRI is an important medical imaging and research tool, producing highly detailed two- and three-dimensional images. Radio waves are broadcast, absorbed, and reemitted in a resonance process that is sensitive to the density of nuclei (usually protons or hydrogen nuclei).
The wavelength of 100-MHz radio waves is 3 m, yet using the sensitivity of the resonant frequency to the magnetic field strength, details smaller than a millimeter can be imaged. This is a good example of an exception to a rule of thumb (in this case, the rubric that details much smaller than the probe’s wavelength cannot be detected). The intensity of the radio waves used in MRI presents little or no hazard to human health.
### Microwaves
Microwaves are the highest-frequency electromagnetic waves that can be produced by currents in macroscopic circuits and devices. Microwave frequencies range from about $10^9$ Hz to the highest practical resonance at nearly $10^{12}$ Hz. Since they have high frequencies, their wavelengths are short compared with those of other radio waves—hence the name “microwave.”
Microwaves can also be produced by atoms and molecules. They are, for example, a component of electromagnetic radiation generated by thermal agitation. The thermal motion of atoms and molecules in any object at a temperature above absolute zero causes them to emit and absorb radiation.
Since it is possible to carry more information per unit time on high frequencies, microwaves are quite suitable for communications. Most satellite-transmitted information is carried on microwaves, as are land-based long-distance transmissions. A clear line of sight between transmitter and receiver is needed because of the short wavelengths involved.
Radar is a common application of microwaves that was first developed in World War II. By detecting and timing microwave echoes, radar systems can determine the distance to objects as diverse as clouds and aircraft. A Doppler shift in the radar echo can be used to determine the speed of a car or the intensity of a rainstorm. Sophisticated radar systems are used to map the Earth and other planets, with a resolution limited by wavelength. (See .) The shorter the wavelength of any probe, the smaller the detail it is possible to observe.
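Radar ranging is round-trip timing at the speed of light: the distance to the target is $d = ct/2$, with the factor of 2 accounting for the out-and-back path. A minimal sketch, assuming an illustrative echo delay of 1.0 ms (not a value from this text):

```python
c = 3.00e8           # speed of light, m/s

echo_time = 1.0e-3   # assumed round-trip time of the microwave echo, s (illustrative)
distance = c * echo_time / 2.0   # divide by 2: the pulse travels out and back
print(f"Target distance = {distance / 1000:.0f} km")   # 150 km
```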
### Heating with Microwaves
How does the ubiquitous microwave oven produce microwaves electronically, and why does food absorb them preferentially? Microwaves at a frequency of 2.45 GHz are produced by accelerating electrons. The microwaves are then used to induce an alternating electric field in the oven.
Water and some other constituents of food have a slightly negative charge at one end and a slightly positive charge at the other end (called polar molecules). The range of microwave frequencies is specially selected so that the polar molecules, in trying to keep orienting themselves with the electric field, absorb these energies and increase their temperatures—called dielectric heating.
The energy thereby absorbed results in thermal agitation heating food and not the plate, which does not contain water. Hot spots in the food are related to constructive and destructive interference patterns. Rotating antennas and food turntables help spread out the hot spots.
Another use of microwaves for heating is within the human body. Microwaves will penetrate more than shorter wavelengths into tissue and so can accomplish “deep heating” (called microwave diathermy). This is used for treating muscular pains, spasms, tendonitis, and rheumatoid arthritis.
Microwaves generated by atoms and molecules far away in time and space can be received and detected by electronic circuits. Deep space acts like a blackbody with a 2.7 K temperature, radiating most of its energy in the microwave frequency range. In 1964, Penzias and Wilson detected this radiation and eventually recognized that it was the radiation of the Big Bang’s cooled remnants.
### Infrared Radiation
The microwave and infrared regions of the electromagnetic spectrum overlap (see ). Infrared radiation is generally produced by thermal motion and the vibration and rotation of atoms and molecules. Electronic transitions in atoms and molecules can also produce infrared radiation.
The range of infrared frequencies extends up to the lower limit of visible light, just below red. In fact, infrared means “below red.” Frequencies at its upper limit are too high to be produced by accelerating electrons in circuits, but small systems, such as atoms and molecules, can vibrate fast enough to produce these waves.
Water molecules rotate and vibrate particularly well at infrared frequencies, emitting and absorbing them so efficiently that the emissivity of skin is nearly 1 (about 0.97) in the infrared. Night-vision scopes can detect the infrared emitted by various warm objects, including humans, and convert it to visible light.
We can examine radiant heat transfer from a house by using a camera capable of detecting infrared radiation. Reconnaissance satellites can detect buildings, vehicles, and even individual humans by their infrared emissions, whose power radiation is proportional to the fourth power of the absolute temperature. More mundanely, we use infrared lamps, some of which are called quartz heaters, to preferentially warm us because we absorb infrared better than our surroundings.
The Sun radiates like a nearly perfect blackbody (that is, it has an emissivity close to 1), with a 6000 K surface temperature. About half of the solar energy arriving at the Earth is in the infrared region, with most of the rest in the visible part of the spectrum, and a relatively small amount in the ultraviolet. On average, 50 percent of the incident solar energy is absorbed by the Earth.
The relatively constant temperature of the Earth is a result of the energy balance between the incoming solar radiation and the energy radiated from the Earth. Most of the infrared radiation emitted from the Earth is absorbed by CO₂ and H₂O in the atmosphere and then radiated back to Earth or into outer space. This radiation back to Earth is known as the greenhouse effect, and it maintains the surface temperature of the Earth about 30–40°C higher than it would be if there were no absorption. Some scientists think that the increased concentration of CO₂ and other greenhouse gases in the atmosphere, resulting from increases in fossil fuel burning, has increased global average temperatures.
### Visible Light
Visible light is the narrow segment of the electromagnetic spectrum to which the normal human eye responds. Visible light is produced by vibrations and rotations of atoms and molecules, as well as by electronic transitions within atoms and molecules. The receivers or detectors of light largely utilize electronic transitions. We say the atoms and molecules are excited when they absorb and relax when they emit through electronic transitions.
shows this part of the spectrum, together with the colors associated with particular pure wavelengths. We usually refer to visible light as having wavelengths of between 400 nm and 750 nm. (The retina of the eye actually responds to the lowest ultraviolet frequencies, but these do not normally reach the retina because they are absorbed by the cornea and lens of the eye.)
Red light has the lowest frequencies and longest wavelengths, while violet has the highest frequencies and shortest wavelengths. Blackbody radiation from the Sun peaks in the visible part of the spectrum but is more intense in the red than in the violet, making the Sun yellowish in appearance.
Living things—plants and animals—have evolved to utilize and respond to parts of the electromagnetic spectrum they are embedded in. Visible light is the most predominant and we enjoy the beauty of nature through visible light. Plants are more selective. Photosynthesis makes use of parts of the visible spectrum to make sugars.
Optics is the study of the behavior of visible light and other forms of electromagnetic waves. Optics falls into two distinct categories. When electromagnetic radiation, such as visible light, interacts with objects that are large compared with its wavelength, its motion can be represented by straight lines like rays. Ray optics is the study of such situations and includes lenses and mirrors.
When electromagnetic radiation interacts with objects about the same size as the wavelength or smaller, its wave nature becomes apparent. For example, observable detail is limited by the wavelength, and so visible light can never detect individual atoms, because they are so much smaller than its wavelength. Physical or wave optics is the study of such situations and includes all wave characteristics.
### Ultraviolet Radiation
Ultraviolet means “above violet.” The electromagnetic frequencies of ultraviolet radiation (UV) extend upward from violet, the highest-frequency visible light. Ultraviolet is also produced by atomic and molecular motions and electronic transitions. The wavelengths of ultraviolet extend from 400 nm down to about 10 nm at its highest frequencies, which overlap with the lowest X-ray frequencies. It was recognized as early as 1801 by Johann Ritter that the solar spectrum had an invisible component beyond the violet range.
Solar UV radiation is broadly subdivided into three regions: UV-A (320–400 nm), UV-B (290–320 nm), and UV-C (220–290 nm), ranked from longest to shortest wavelength (that is, from smallest to largest photon energy). Most UV-B and all UV-C are absorbed by ozone (O₃) molecules in the upper atmosphere. Consequently, 99% of the solar UV radiation reaching the Earth’s surface is UV-A.
One of the first illustrations of UV light’s impact on Earth occurred during the Apollo 16 mission in 1972. The mission included the first astronomical images taken from the moon, using a compact and resilient Far Ultraviolet Camera/Spectrograph designed for moon use by scientist and inventor George Robert Carruthers. Designed to capture UV images without the obscuring effects of the Earth’s atmosphere, its most famous image was of the planet itself. Carruthers, who also trained the astronauts on the device’s use, mentioned afterward that “the most immediately obvious and spectacular results were really for the Earth observations, because this was the first time that the Earth had been photographed from a distance in ultraviolet (UV) light, so that you could see the full extent of the hydrogen atmosphere, the polar auroras and what we call the tropical airglow belt.”
### Human Exposure to UV Radiation
It is largely exposure to UV-B that causes skin cancer. It is estimated that as many as 20% of adults will develop skin cancer over the course of their lifetime. Again, treatment is often successful if caught early. Despite very little UV-B reaching the Earth’s surface, there are substantial increases in skin-cancer rates in countries such as Australia, indicating how important it is that UV-B and UV-C continue to be absorbed by the upper atmosphere.
All UV radiation can damage collagen fibers, resulting in an acceleration of the aging process of skin and the formation of wrinkles. Because there is so little UV-B and UV-C reaching the Earth’s surface, sunburn is caused by large exposures, and skin cancer from repeated exposure. Some studies indicate a link between overexposure to the Sun when young and melanoma later in life.
The tanning response is a defense mechanism in which the body produces pigments to absorb future exposures in inert skin layers above living cells. Basically UV-B radiation excites DNA molecules, distorting the DNA helix, leading to mutations and the possible formation of cancerous cells.
Repeated exposure to UV-B may also lead to the formation of cataracts in the eyes—a cause of blindness among people living in the equatorial belt where medical treatment is limited. Cataracts, clouding in the eye’s lens and a loss of vision, are age related; 60% of those between the ages of 65 and 74 will develop cataracts. However, treatment is easy and successful, as one replaces the lens of the eye with a plastic lens. Prevention is important. Eye protection from UV is more effective with plastic sunglasses than those made of glass.
A major acute effect of extreme UV exposure is the suppression of the immune system, both locally and throughout the body.
Low-intensity ultraviolet is used to sterilize haircutting implements, implying that the energy associated with ultraviolet is deposited in a manner different from lower-frequency electromagnetic waves. (Actually this is true for all electromagnetic waves with frequencies greater than visible light.)
Flash photography is generally not allowed of precious artworks and colored prints because the UV radiation from the flash can cause photo-degradation in the artworks. Often artworks will have an extra-thick layer of glass in front of them, which is especially designed to absorb UV radiation.
### UV Light and the Ozone Layer
If all of the Sun’s ultraviolet radiation reached the Earth’s surface, there would be extremely grave effects on the biosphere from the severe cell damage it causes. However, the layer of ozone (O₃) in our upper atmosphere (10 to 50 km above the Earth) protects life by absorbing most of the dangerous UV radiation.
Unfortunately, today we are observing a depletion in ozone concentrations in the upper atmosphere. This depletion has led to the formation of an “ozone hole” in the upper atmosphere. The hole is more centered over the southern hemisphere, and changes with the seasons, being largest in the spring. This depletion is attributed to the breakdown of ozone molecules by refrigerant gases called chlorofluorocarbons (CFCs).
The UV radiation helps dissociate the CFCs, releasing highly reactive chlorine (Cl) atoms, which catalyze the destruction of the ozone layer. For example, the reaction of a CFC molecule such as CFCl₃ with a photon of light (hν) can be written as:

$$\text{CFCl}_3 + h\nu \rightarrow \text{CFCl}_2 + \text{Cl}.$$

The Cl atom then catalyzes the breakdown of ozone as follows:

$$\text{Cl} + \text{O}_3 \rightarrow \text{ClO} + \text{O}_2 \qquad \text{and} \qquad \text{ClO} + \text{O} \rightarrow \text{Cl} + \text{O}_2.$$
A single chlorine atom could destroy ozone molecules for up to two years before being transported down to the surface. The CFCs are relatively stable and will contribute to ozone depletion for years to come. CFCs are found in refrigerants, air conditioning systems, foams, and aerosols.
International concern over this problem led to the establishment of the “Montreal Protocol” agreement (1987) to phase out CFC production in most countries. However, developing-country participation is needed if worldwide production and elimination of CFCs is to be achieved. Probably the largest contributor to CFC emissions today is China. And while there are indicators that the Protocol has been a success, there is still substantial risk and variability in the ozone layer. The 2019 Antarctic ozone hole was small and short-lived, continuing the general trend toward recovery. But the 2020 Antarctic ozone hole was the largest and longest-lasting on record, partially due to atmospheric conditions. Furthermore, emissions are not the only concern. Susan Solomon and her colleagues at MIT have uncovered the substantial impact of CFC “banks,” in certain regions, where outdated and deteriorating equipment (such as air conditioners) or materials can release enough CFCs to be detectable in the atmosphere and deplete the ozone layer. (See .)
### Benefits of UV Light
Besides the adverse effects of ultraviolet radiation, there are also benefits of exposure in nature and uses in technology. Vitamin D production in the skin (epidermis) results from exposure to UVB radiation, generally from sunlight. A number of studies indicate lack of vitamin D can result in the development of a range of cancers (prostate, breast, colon), so a certain amount of UV exposure is helpful. Lack of vitamin D is also linked to osteoporosis. Exposures (with no sunscreen) of 10 minutes a day to arms, face, and legs might be sufficient to provide the accepted dietary level. However, in the winter time at higher northern latitudes, most UVB gets blocked by the atmosphere.
UV radiation is used in the treatment of infantile jaundice and in some skin conditions. It is also used in sterilizing workspaces and tools, and killing germs in a wide range of applications. It is also used as an analytical tool to identify substances.
When exposed to ultraviolet, some substances, such as minerals, glow in characteristic visible wavelengths, a process called fluorescence. So-called black lights emit ultraviolet to cause posters and clothing to fluoresce in the visible. Ultraviolet is also used in special microscopes to detect details smaller than those observable with longer-wavelength visible-light microscopes.
### X-Rays
In the 1850s, scientists (such as Faraday) began experimenting with high-voltage electrical discharges in tubes filled with rarefied gases. It was later found that these discharges created an invisible, penetrating form of very high frequency electromagnetic radiation. This radiation was called an X-ray, because its identity and nature were unknown.
As described in Things Great and Small, there are two methods by which X-rays are created—both are submicroscopic processes and can be caused by high-voltage discharges. While the low-frequency end of the X-ray range overlaps with the ultraviolet, X-rays extend to much higher frequencies (and energies).
X-rays have adverse effects on living cells similar to those of ultraviolet radiation, and they have the additional liability of being more penetrating, affecting more than the surface layers of cells. Cancer and genetic defects can be induced by exposure to X-rays. Because of their effect on rapidly dividing cells, X-rays can also be used to treat and even cure cancer.
The widest use of X-rays is for imaging objects that are opaque to visible light, such as the human body or aircraft parts. In humans, the risk of cell damage is weighed carefully against the benefit of the diagnostic information obtained. However, questions have risen in recent years as to accidental overexposure of some people during CT scans—a mistake at least in part due to poor monitoring of radiation dose.
The ability of X-rays to penetrate matter depends on density, and so an X-ray image can reveal very detailed density information. shows an example of the simplest type of X-ray image, an X-ray shadow on film. The amount of information in a simple X-ray image is impressive, but more sophisticated techniques, such as CT scans, can reveal three-dimensional information with details smaller than a millimeter.
The use of X-ray technology in medicine is called radiology—an established and relatively cheap tool in comparison to more sophisticated technologies. Consequently, X-rays are widely available and used extensively in medical diagnostics. During World War I, mobile X-ray units, advocated by Marie Curie, were used to diagnose soldiers.
Because they can have wavelengths less than 0.01 nm, X-rays can be scattered (a process called X-ray diffraction) to detect the shape of molecules and the structure of crystals. X-ray diffraction was crucial to Crick, Watson, and Wilkins in the determination of the shape of the double-helix DNA molecule.
X-rays are also used as a precise tool for trace-metal analysis in X-ray induced fluorescence, in which the energies of the X-ray emissions are related to the specific types of elements and amounts of materials present.
### Gamma Rays
Soon after nuclear radioactivity was first detected in 1896, it was found that at least three distinct types of radiation were being emitted. The most penetrating nuclear radiation was called a gamma ray (γ ray), again a name given because its identity and character were unknown, and it was later found to be an extremely high frequency electromagnetic wave.
In fact, γ rays are any electromagnetic radiation emitted by a nucleus. This can be from natural nuclear decay or induced nuclear processes in nuclear reactors and weapons. The lower end of the γ-ray frequency range overlaps the upper end of the X-ray range, but γ rays can have the highest frequency of any electromagnetic radiation.
Gamma rays have characteristics identical to X-rays of the same frequency—they differ only in source. At higher frequencies, γ rays are more penetrating and more damaging to living tissue. They have many of the same uses as X-rays, including cancer therapy. Gamma radiation from radioactive materials is used in nuclear medicine.
shows a medical image based on γ rays. Food spoilage can be greatly inhibited by exposing it to large doses of γ radiation, thereby obliterating responsible microorganisms. Damage to food cells through irradiation occurs as well, and the long-term hazards of consuming radiation-preserved food are unknown and controversial for some groups. Both X-ray and γ-ray technologies are also used in scanning luggage at airports.
### Detecting Electromagnetic Waves from Space
The entire electromagnetic spectrum is used by researchers for investigating stars, space, and time. Arthur B. C. Walker was a pioneer in X-ray and ultraviolet observations, and designed specialized telescopes and instruments to observe the Sun’s atmosphere and corona. His developments significantly advanced our understanding of stars, and some of his developments are currently in use in space telescopes as well as in microchip manufacturing. As noted earlier, Penzias and Wilson detected microwaves to identify the background radiation originating from the Big Bang. Radio telescopes such as the Arecibo Radio Telescope in Puerto Rico and Parkes Observatory in Australia were designed to detect radio waves.
Infrared telescopes need to have their detectors cooled by liquid nitrogen to be able to gather useful signals. Since infrared radiation is predominantly from thermal agitation, if the detectors were not cooled, the vibrations of the molecules in the antenna would be stronger than the signal being collected.
The most famous of these infrared sensitive telescopes is the James Clerk Maxwell Telescope in Hawaii. The earliest telescopes, developed in the seventeenth century, were optical telescopes, collecting visible light. Telescopes in the ultraviolet, X-ray, and γ-ray regions are placed outside the atmosphere on satellites orbiting the Earth.
The Hubble Space Telescope (launched in 1990) gathers ultraviolet radiation as well as visible light. In the X-ray region, there is the Chandra X-ray Observatory (launched in 1999), and in the γ-ray region, there is the Fermi Gamma-ray Space Telescope (launched in 2008, taking the place of the Compton Gamma Ray Observatory, 1991–2000). The James Webb Space Telescope, launched in late 2021, observes in a lower-frequency portion of the spectrum compared to Hubble. The JWST observes in long-wavelength visible light (red) through infrared, enabling it to detect objects that are farther away, older, and fainter than previous telescopes could detect.
### Test Prep for AP Courses
### Section Summary
1. The relationship among the speed of propagation, wavelength, and frequency for any wave is given by $v = f\lambda$, so that for electromagnetic waves,
$$c = f\lambda,$$
where $f$ is the frequency, $\lambda$ is the wavelength, and $c$ is the speed of light.
2. The electromagnetic spectrum is separated into many categories and subcategories, based on the frequency and wavelength, source, and uses of the electromagnetic waves.
3. Any electromagnetic wave produced by currents in wires is classified as a radio wave, the lowest frequency electromagnetic waves. Radio waves are divided into many types, depending on their applications, ranging up to microwaves at their highest frequencies.
4. Infrared radiation lies below visible light in frequency and is produced by thermal motion and the vibration and rotation of atoms and molecules. Infrared’s lower frequencies overlap with the highest-frequency microwaves.
5. Visible light is largely produced by electronic transitions in atoms and molecules, and is defined as being detectable by the human eye. Its colors vary with frequency, from red at the lowest to violet at the highest.
6. Ultraviolet radiation starts with frequencies just above violet in the visible range and is produced primarily by electronic transitions in atoms and molecules.
7. X-rays are created in high-voltage discharges and by electron bombardment of metal targets. Their lowest frequencies overlap the ultraviolet range but extend to much higher values, overlapping at the high end with gamma rays.
8. Gamma rays are nuclear in origin and are defined to include the highest-frequency electromagnetic radiation of any type.
### Conceptual Questions
### Problems & Exercises
# Electromagnetic Waves
## Energy in Electromagnetic Waves
### Learning Objectives
By the end of this section, you will be able to:
1. Explain how the energy and amplitude of an electromagnetic wave are related.
2. Given its power output and the heating area, calculate the intensity of a microwave oven’s electromagnetic field, as well as its peak electric and magnetic field strengths.
Anyone who has used a microwave oven knows there is energy in electromagnetic waves. Sometimes this energy is obvious, such as in the warmth of the summer sun. Other times it is subtle, such as the unfelt energy of gamma rays, which can destroy living cells.
Electromagnetic waves can bring energy into a system by virtue of their electric and magnetic fields. These fields can exert forces and move charges in the system and, thus, do work on them. If the frequency of the electromagnetic wave is the same as the natural frequencies of the system (such as microwaves at the resonant frequency of water molecules), the transfer of energy is much more efficient.
But there is energy in an electromagnetic wave, whether it is absorbed or not. Once created, the fields carry energy away from a source. If absorbed, the field strengths are diminished and anything left travels on. Clearly, the larger the strength of the electric and magnetic fields, the more work they can do and the greater the energy the electromagnetic wave carries.
A wave’s energy is proportional to its amplitude squared ($E^2$ or $B^2$). This is true for waves on guitar strings, for water waves, and for sound waves, where amplitude is proportional to pressure. In electromagnetic waves, the amplitude is the maximum field strength of the electric and magnetic fields. (See .)
Thus the energy carried and the intensity of an electromagnetic wave is proportional to $E^2$ and $B^2$. In fact, for a continuous sinusoidal electromagnetic wave, the average intensity $I_{\text{ave}}$ is given by
$$I_{\text{ave}} = \frac{c \varepsilon_0 E_0^2}{2},$$
where $c$ is the speed of light, $\varepsilon_0$ is the permittivity of free space, and $E_0$ is the maximum electric field strength; intensity, as always, is power per unit area (here in W/m²).

The average intensity of an electromagnetic wave can also be expressed in terms of the magnetic field strength by using the relationship $B_0 = E_0 / c$, and the fact that $c = 1/\sqrt{\mu_0 \varepsilon_0}$, where $\mu_0$ is the permeability of free space. Algebraic manipulation produces the relationship
$$I_{\text{ave}} = \frac{c B_0^2}{2 \mu_0},$$
where $B_0$ is the maximum magnetic field strength.

One more expression for $I_{\text{ave}}$ in terms of both electric and magnetic field strengths is useful. Substituting the fact that $c B_0 = E_0$, the previous expression becomes
$$I_{\text{ave}} = \frac{E_0 B_0}{2 \mu_0}.$$

Whichever of the three preceding equations is most convenient can be used, since they are really just different versions of the same principle: Energy in a wave is related to amplitude squared. Furthermore, since these equations are based on the assumption that the electromagnetic waves are sinusoidal, peak intensity is twice the average; that is, $I_0 = 2 I_{\text{ave}}$.
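As a numerical sketch of these relationships, consider a hypothetical microwave oven that delivers 1.00 kW of microwave power over a heating area of 0.030 m²; both numbers are assumptions chosen for illustration, not specifications from this text:

```python
import math

c = 3.00e8                 # speed of light, m/s
epsilon_0 = 8.85e-12       # permittivity of free space, F/m
mu_0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

power = 1.00e3   # assumed microwave power, W (illustrative)
area = 0.030     # assumed heating area, m^2 (illustrative)

I_ave = power / area                          # intensity = power per unit area
E0 = math.sqrt(2 * I_ave / (c * epsilon_0))   # from I_ave = c*eps0*E0^2 / 2
B0 = E0 / c                                   # from E0 / B0 = c

print(f"I_ave = {I_ave:.3g} W/m^2")  # about 3.3e4 W/m^2
print(f"E0    = {E0:.3g} V/m")       # a few thousand V/m
print(f"B0    = {B0:.3g} T")         # on the order of 1e-5 T
```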
### Test Prep for AP Courses
### Section Summary
1. The energy carried by any wave is proportional to its amplitude squared. For electromagnetic waves, this means intensity can be expressed as
$$I_{\text{ave}} = \frac{c \varepsilon_0 E_0^2}{2},$$
where $I_{\text{ave}}$ is the average intensity in W/m² and $E_0$ is the maximum electric field strength of a continuous sinusoidal wave.
2. This can also be expressed in terms of the maximum magnetic field strength $B_0$ as
$$I_{\text{ave}} = \frac{c B_0^2}{2 \mu_0}$$
and in terms of both electric and magnetic fields as
$$I_{\text{ave}} = \frac{E_0 B_0}{2 \mu_0}.$$
3. The three expressions for $I_{\text{ave}}$ are all equivalent.
### Problems & Exercises
# Geometric Optics
## Connection for AP® Courses
Many visual aspects of light result from the transfer of energy in the form of electromagnetic waves (Big Idea 6). Light from this page or screen is formed into an image by the lens of your eye, much like the lens of the camera that makes a photograph. Mirrors, like lenses, can also form images that in turn are captured by your eye (Essential Knowledge 6.E.2, Essential Knowledge 6.E.4). In this chapter, you will explore the behavior of light as an electromagnetic wave and learn:
1. what makes a diamond sparkle (Essential Knowledge 6.E.3),
2. how images are formed by lenses for the purposes of magnification or photography (Essential Knowledge 6.E.5),
3. why objects in some mirrors are closer than they appear (Essential Knowledge 6.E.2), and
4. why clear mountain streams are always a little bit deeper than they appear to be.
You will examine different ways of thinking about and modeling light and when each method is most appropriate (Enduring Understanding 6.F, Essential Knowledge 6.F.4). You will also learn how to use simple geometry to predict how light will move when crossing from one medium to another, or when passing through a lens, or when reflecting off a curved surface (Enduring Understanding 6.E, Essential Knowledge 6.E.1). With this knowledge, you will be able to predict what kind of image will form when light interacts with matter.
Big Idea 6 Waves can transfer energy and momentum from one location to another without the permanent transfer of mass and serve as a mathematical model for the description of other phenomena.
Enduring Understanding 6.E The direction of propagation of a wave such as light may be changed when the wave encounters an interface between two media.
Essential Knowledge 6.E.1 When light travels from one medium to another, some of the light is transmitted, some is reflected, and some is absorbed. (Qualitative understanding only.)
Essential Knowledge 6.E.2 When light hits a smooth reflecting surface at an angle, it reflects at the same angle on the other side of the line perpendicular to the surface (specular reflection); and this law of reflection accounts for the size and location of images seen in plane mirrors.
Essential Knowledge 6.E.3 When light travels across a boundary from one transparent material to another, the speed of propagation changes. At a non–normal incident angle, the path of the light ray bends closer to the perpendicular in the optically slower substance. This is called refraction.
Essential Knowledge 6.E.4 The reflection of light from surfaces can be used to form images.
Essential Knowledge 6.E.5 The refraction of light as it travels from one transparent medium to another can be used to form images.
Enduring Understanding 6.F Electromagnetic radiation can be modeled as waves or as fundamental particles.
Essential Knowledge 6.F.4 The nature of light requires that different models of light are most appropriate at different scales.
# Geometric Optics
## The Ray Aspect of Light
### Learning Objectives
By the end of this section, you will be able to:
1. List the ways by which light travels from a source to another location.
There are three ways in which light can travel from a source to another location. (See .) It can come directly from the source through empty space, such as from the Sun to Earth. Or light can travel through various media, such as air and glass, to the person. Light can also arrive after being reflected, such as by a mirror. In all of these cases, light is modeled as traveling in straight lines called rays. Light may change direction when it encounters objects (such as a mirror) or in passing from one material to another (such as in passing from air to glass), but it then continues in a straight line or as a ray. The word ray comes from mathematics and here means a straight line that originates at some point. It is acceptable to visualize light rays as laser rays (or even science fiction depictions of ray guns).
Experiments, as well as our own experiences, show that when light interacts with objects several times as large as its wavelength, it travels in straight lines and acts like a ray. Its wave characteristics are not pronounced in such situations. Since the wavelength of light is less than a micron (a thousandth of a millimeter), it acts like a ray in the many common situations in which it encounters objects larger than a micron. For example, when light encounters anything we can observe with unaided eyes, such as a mirror, it acts like a ray, with only subtle wave characteristics. We will concentrate on the ray characteristics in this chapter.
Since light moves in straight lines, changing directions when it interacts with materials, it is described by geometry and simple trigonometry. This part of optics, where the ray aspect of light dominates, is therefore called geometric optics. There are two laws that govern how light changes direction when it interacts with matter. These are the law of reflection, for situations in which light bounces off matter, and the law of refraction, for situations in which light passes through matter.
### Test Prep for AP Courses
### Section Summary
1. A straight line that originates at some point is called a ray.
2. The part of optics dealing with the ray aspect of light is called geometric optics.
3. Light can travel in three ways from a source to another location: (1) directly from the source through empty space; (2) through various media; (3) after being reflected from a mirror.
### Problems & Exercises
# Geometric Optics
## The Law of Reflection
### Learning Objectives
By the end of this section, you will be able to:
1. Explain reflection of light from polished and rough surfaces.
Whenever we look into a mirror, or squint at sunlight glinting from a lake, we are seeing a reflection. When you look at this page, too, you are seeing light reflected from it. Large telescopes use reflection to form an image of stars and other astronomical objects.
The law of reflection is illustrated in , which also shows how the angles are measured relative to the perpendicular to the surface at the point where the light ray strikes. We expect to see reflections from smooth surfaces, but illustrates how a rough surface reflects light. Since the light strikes different parts of the surface at different angles, it is reflected in many different directions, or diffused. Diffused light is what allows us to see a sheet of paper from any angle, as illustrated in . Many objects, such as people, clothing, leaves, and walls, have rough surfaces and can be seen from all sides. A mirror, on the other hand, has a smooth surface (compared with the wavelength of light) and reflects light at specific angles, as illustrated in . When the moon reflects from a lake, as shown in , a combination of these effects takes place.
The law of reflection is very simple: The angle of reflection equals the angle of incidence.
When we see ourselves in a mirror, it appears that our image is actually behind the mirror. This is illustrated in . We see the light coming from a direction determined by the law of reflection. The angles are such that our image is exactly the same distance behind the mirror as we stand away from the mirror. If the mirror is on the wall of a room, the images in it are all behind the mirror, which can make the room seem bigger. Although these mirror images make objects appear to be where they cannot be (like behind a solid wall), the images are not figments of our imagination. Mirror images can be photographed and videotaped by instruments and look just as they do with our eyes (optical instruments themselves). The precise manner in which images are formed by mirrors and lenses will be treated in later sections of this chapter.
### Test Prep for AP Courses
### Section Summary
1. The angle of reflection equals the angle of incidence.
2. A mirror has a smooth surface and reflects light at specific angles.
3. Light is diffused when it reflects from a rough surface.
4. Mirror images can be photographed and videotaped by instruments.
### Conceptual Questions
### Problems & Exercises
# Geometric Optics
## The Law of Refraction
### Learning Objectives
By the end of this section, you will be able to:
1. Determine the index of refraction, given the speed of light in a medium.
It is easy to notice some odd things when looking into a fish tank. For example, you may see the same fish appearing to be in two different places. (See .) This is because light coming from the fish to us changes direction when it leaves the tank, and in this case, it can travel two different paths to get to our eyes. The changing of a light ray’s direction (loosely called bending) when it passes through variations in matter is called refraction. Refraction is responsible for a tremendous range of optical phenomena, from the action of lenses to voice transmission through optical fibers.
Why does light change direction when passing from one material (medium) to another? It is because light changes speed when going from one material to another. So before we study the law of refraction, it is useful to discuss the speed of light and how it varies in different media.
### The Speed of Light
Early attempts to measure the speed of light, such as those made by Galileo, determined that light moved extremely fast, perhaps instantaneously. The first real evidence that light traveled at a finite speed came from the Danish astronomer Ole Roemer in the late 17th century. Roemer had noted that the average orbital period of one of Jupiter’s moons, as measured from Earth, varied depending on whether Earth was moving toward or away from Jupiter. He correctly concluded that the apparent change in period was due to the change in distance between Earth and Jupiter and the time it took light to travel this distance. From his 1676 data, a value of the speed of light was calculated to be $2.26 \times 10^{8}$ m/s (only 25% different from today’s accepted value). In more recent times, physicists have measured the speed of light in numerous ways and with increasing accuracy. One particularly direct method, used in 1887 by the American physicist Albert Michelson (1852–1931), is illustrated in . Light reflected from a rotating set of mirrors was reflected from a stationary mirror 35 km away and returned to the rotating mirrors. The time for the light to travel can be determined by how fast the mirrors must rotate for the light to be returned to the observer’s eye.
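To appreciate how demanding Michelson’s timing measurement was, a one-line calculation gives the round-trip travel time of light over the 35-km path mentioned above:

```python
c = 3.00e8        # speed of light, m/s (the quantity being measured)

one_way = 35.0e3  # distance to the stationary mirror, m (from the text)
round_trip_time = 2 * one_way / c
print(f"Round-trip time = {round_trip_time * 1e6:.0f} microseconds")  # about 233 us
```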
The speed of light is now known to great precision. In fact, the speed of light in a vacuum $c$ is so important that it is accepted as one of the basic physical quantities and has the fixed value
$$c = 2.99792458 \times 10^{8}\ \text{m/s} \approx 3.00 \times 10^{8}\ \text{m/s},$$
where the approximate value of $3.00 \times 10^{8}$ m/s is used whenever three-digit accuracy is sufficient. The speed of light through matter is less than it is in a vacuum, because light interacts with atoms in a material. The speed of light depends strongly on the type of material, since its interaction with different atoms, crystal lattices, and other substructures varies. We define the index of refraction $n$ of a material to be
$$n = \frac{c}{v},$$
where $v$ is the observed speed of light in the material. Since the speed of light is always less than $c$ in matter and equals $c$ only in a vacuum, the index of refraction is always greater than or equal to one.
That is, $n \ge 1$. gives the indices of refraction for some representative substances. The values are listed for a particular wavelength of light, because they vary slightly with wavelength. (This can have important effects, such as colors produced by a prism.) Note that for gases, $n$ is close to 1.0. This seems reasonable, since atoms in gases are widely separated and light travels at $c$ in the vacuum between atoms. It is common to take $n = 1$ for gases unless great precision is needed. Although the speed of light in a medium varies considerably from its value in a vacuum, it is still a large speed.
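A short sketch of how an index of refraction translates into a speed; the index values used here (water 1.333, crown glass 1.52, diamond 2.419) are commonly tabulated figures quoted for illustration rather than values from this text:

```python
c = 3.00e8   # speed of light in vacuum, m/s

# Commonly tabulated indices of refraction (illustrative values)
indices = {"water": 1.333, "crown glass": 1.52, "diamond": 2.419}

for material, n in indices.items():
    v = c / n    # from n = c / v
    print(f"Light in {material:12s}: v = {v:.3g} m/s")
```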
### Law of Refraction
shows how a ray of light changes direction when it passes from one medium to another. As before, the angles are measured relative to a perpendicular to the surface at the point where the light ray crosses it. (Some of the incident light will be reflected from the surface, but for now we will concentrate on the light that is transmitted.) The change in direction of the light ray depends on how the speed of light changes. The change in the speed of light is related to the indices of refraction of the media involved. In the situations shown in , medium 2 has a greater index of refraction than medium 1. This means that the speed of light is less in medium 2 than in medium 1. Note that as shown in (a), the direction of the ray moves closer to the perpendicular when it slows down. Conversely, as shown in (b), the direction of the ray moves away from the perpendicular when it speeds up. The path is exactly reversible. In both cases, you can imagine what happens by thinking about pushing a lawn mower from a footpath onto grass, and vice versa. Going from the footpath to grass, the front wheels are slowed and pulled to the side as shown. This is the same change in direction as for light when it goes from a fast medium to a slow one. When going from the grass to the footpath, the front wheels can move faster and the mower changes direction as shown. This, too, is the same change in direction as for light going from slow to fast.
The amount that a light ray changes its direction depends both on the incident angle and the amount that the speed changes. For a ray at a given incident angle, a large change in speed causes a large change in direction, and thus a large change in angle. The exact mathematical relationship is the law of refraction, or “Snell’s Law,” which is stated in equation form as
n₁ sin θ₁ = n₂ sin θ₂.
Here n₁ and n₂ are the indices of refraction for media 1 and 2, and θ₁ and θ₂ are the angles between the rays and the perpendicular in media 1 and 2, as shown in . The incoming ray is called the incident ray, the outgoing ray the refracted ray, and the associated angles the incident angle and the refracted angle. The law of refraction is also called Snell’s law after the Dutch mathematician Willebrord Snell (1591–1626). While the law has been named after Snell, the Arabian physicist Ibn Sahl found the law of refraction in 984 and used it in his work On Burning Mirrors and Lenses. Snell’s experiments showed that the law of refraction was obeyed and that a characteristic index of refraction could be assigned to a given medium. Snell was not aware that the speed of light varied in different media, but through experiments he was able to determine indices of refraction from the way light rays changed direction.
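A minimal numeric illustration of Snell’s law: given n₁, n₂, and the incident angle, solve n₁ sin θ₁ = n₂ sin θ₂ for the refracted angle. The sketch below is in Python, and the air and water indices are typical assumed values rather than numbers from this text.

```python
import math

def refracted_angle(n1, n2, theta1_deg):
    """Refraction angle in degrees from Snell's law, or None if the
    ray is totally internally reflected (no real solution)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if s > 1.0:
        return None
    return math.degrees(math.asin(s))

# Air (n ~ 1.00) into water (n ~ 1.33): the ray bends toward the perpendicular.
print(refracted_angle(1.00, 1.33, 30.0))   # ~22.1 degrees
# Water into air: the ray bends away from the perpendicular.
print(refracted_angle(1.33, 1.00, 30.0))   # ~41.7 degrees
```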
### Test Prep for AP Courses
### Section Summary
1. The changing of a light ray’s direction when it passes through variations in matter is called refraction.
2. The speed of light in vacuum is c = 2.998 × 10^8 m/s ≈ 3.00 × 10^8 m/s.
3. The index of refraction is n = c/v, where v is the speed of light in the material, c is the speed of light in vacuum, and n is the index of refraction.
4. Snell’s law, the law of refraction, is stated in equation form as n₁ sin θ₁ = n₂ sin θ₂.
### Conceptual Questions
### Problems & Exercises
# Geometric Optics
## Total Internal Reflection
### Learning Objectives
By the end of this section, you will be able to:
1. Explain the phenomenon of total internal reflection.
2. Describe the workings and uses of fiber optics.
3. Analyze the reason for the sparkle of diamonds.
A good-quality mirror may reflect more than 90% of the light that falls on it, absorbing the rest. But it would be useful to have a mirror that reflects all of the light that falls on it. Interestingly, we can produce total reflection using an aspect of refraction.
Consider what happens when a ray of light strikes the surface between two materials, such as is shown in (a). Part of the light crosses the boundary and is refracted; the rest is reflected. If, as shown in the figure, the index of refraction for the second medium is less than for the first, the ray bends away from the perpendicular. (Since n₁ > n₂, the angle of refraction is greater than the angle of incidence—that is, θ₂ > θ₁.) Now imagine what happens as the incident angle is increased. This causes θ₂ to increase also. The largest the angle of refraction θ₂ can be is 90°, as shown in (b). The critical angle θ_c for a combination of materials is defined to be the incident angle θ₁ that produces an angle of refraction of 90°. That is, θ_c is the incident angle for which θ₂ = 90°. If the incident angle θ₁ is greater than the critical angle, as shown in (c), then all of the light is reflected back into medium 1, a condition called total internal reflection.
Snell’s law states the relationship between angles and indices of refraction. It is given by
n₁ sin θ₁ = n₂ sin θ₂.
When the incident angle equals the critical angle (θ₁ = θ_c), the angle of refraction is 90° (θ₂ = 90°). Noting that sin 90° = 1, Snell’s law in this case becomes
n₁ sin θ_c = n₂.
The critical angle θ_c for a given combination of materials is thus
θ_c = sin⁻¹(n₂/n₁) for n₁ > n₂.
Total internal reflection occurs for any incident angle greater than the critical angle θ_c, and it can only occur when the second medium has an index of refraction less than the first. Note the above equation is written for a light ray that travels in medium 1 and reflects from medium 2, as shown in the figure.
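As a quick check on θ_c = sin⁻¹(n₂/n₁), the Python sketch below evaluates the critical angle for a few medium pairs; the index values are typical reference numbers assumed for illustration rather than quoted from this text.

```python
import math

def critical_angle(n1, n2):
    """Critical angle (degrees) for light traveling in medium 1 toward medium 2.
    Total internal reflection is only possible when n1 > n2."""
    if n1 <= n2:
        raise ValueError("requires n1 > n2")
    return math.degrees(math.asin(n2 / n1))

print(critical_angle(1.50, 1.00))   # glass to air:   ~41.8 degrees
print(critical_angle(1.333, 1.00))  # water to air:   ~48.6 degrees
print(critical_angle(2.419, 1.00))  # diamond to air: ~24.4 degrees
```

Any ray striking the boundary at more than the printed angle stays inside the denser medium, which is the working principle behind the fiber optics and diamond examples that follow.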
### Fiber Optics: Endoscopes to Telephones
Fiber optics is one application of total internal reflection that is in wide use. In communications, it is used to transmit telephone, internet, and cable TV signals. Fiber optics employs the transmission of light down fibers of plastic or glass. Because the fibers are thin, light entering one is likely to strike the inside surface at an angle greater than the critical angle and, thus, be totally reflected. (See .) The index of refraction outside the fiber must be smaller than inside, a condition that is easily satisfied by coating the outside of the fiber with a material having an appropriate refractive index. In fact, most fibers have a varying refractive index to allow more light to be guided along the fiber through total internal reflection. Rays are reflected around corners as shown, making the fibers into tiny light pipes.
Bundles of fibers can be used to transmit an image without a lens, as illustrated in . The output of a device called an endoscope is shown in (b). Endoscopes are used to explore the body through various orifices or minor incisions. Light is transmitted down one fiber bundle to illuminate internal parts, and the reflected light is transmitted back out through another to be observed. Surgery can be performed, such as arthroscopic surgery on the knee joint, employing cutting tools attached to and observed with the endoscope. Samples can also be obtained, such as by lassoing an intestinal polyp for external examination.
Fiber optics has revolutionized surgical techniques and observations within the body. There are a host of medical diagnostic and therapeutic uses. The flexibility of the fiber optic bundle allows it to navigate around difficult and small regions in the body, such as the intestines, the heart, blood vessels, and joints. Transmission of an intense laser beam to burn away obstructing plaques in major arteries as well as delivering light to activate chemotherapy drugs are becoming commonplace. Optical fibers have in fact enabled microsurgery and remote surgery where the incisions are small and the surgeon’s fingers do not need to touch the diseased tissue.
Fibers in bundles are surrounded by a cladding material that has a lower index of refraction than the core. (See .) The cladding prevents light from being transmitted between fibers in a bundle. Without cladding, light could pass between fibers in contact, since their indices of refraction are identical. Since no light gets into the cladding (there is total internal reflection back into the core), none can be transmitted between clad fibers that are in contact with one another. The cladding prevents light from escaping out of the fiber; instead most of the light is propagated along the length of the fiber, minimizing the loss of signal and ensuring that a quality image is formed at the other end. The cladding and an additional protective layer make optical fibers flexible and durable.
Special tiny lenses that can be attached to the ends of bundles of fibers are being designed and fabricated. Light emerging from a fiber bundle can be focused and a tiny spot can be imaged. In some cases the spot can be scanned, allowing quality imaging of a region inside the body. Special minute optical filters inserted at the end of the fiber bundle have the capacity to image tens of microns below the surface without cutting the surface—non-intrusive diagnostics. This is particularly useful for determining the extent of cancers in the stomach and bowel.
Most telephone conversations and Internet communications are now carried by laser signals along optical fibers. Extensive optical fiber cables have been placed on the ocean floor and underground to enable optical communications. Optical fiber communication systems offer several advantages over electrical (copper) based systems, particularly for long distances. The fibers can be made so transparent that light can travel many kilometers before it becomes dim enough to require amplification—much superior to copper conductors. This property of optical fibers is called low loss. Lasers emit light with characteristics that allow far more conversations in one fiber than are possible with electric signals on a single conductor. This property of optical fibers is called high bandwidth. Optical signals in one fiber do not produce undesirable effects in other adjacent fibers. This property of optical fibers is called reduced crosstalk. We shall explore the unique characteristics of laser radiation in a later chapter.
### Corner Reflectors and Diamonds
A light ray that strikes an object consisting of two mutually perpendicular reflecting surfaces is reflected back exactly parallel to the direction from which it came. This is true whenever the reflecting surfaces are perpendicular, and it is independent of the angle of incidence. Such an object, shown in , is called a corner reflector, since the light bounces from its inside corner. Many inexpensive reflector buttons on bicycles, cars, and warning signs have corner reflectors designed to return light in the direction from which it originated. It was more expensive for astronauts to place one on the moon. Laser signals can be bounced from that corner reflector to measure the gradually increasing distance to the moon with great precision.
Corner reflectors are perfectly efficient when the conditions for total internal reflection are satisfied. With common materials, it is easy to obtain a critical angle that is less than 45°. One use of these perfect mirrors is in binoculars, as shown in . Another use is in periscopes found in submarines.
### The Sparkle of Diamonds
Total internal reflection, coupled with a large index of refraction, explains why diamonds sparkle more than other materials. The critical angle for a diamond-to-air surface is only 24.4°, and so when light enters a diamond, it has trouble getting back out. (See .) Although light freely enters the diamond, it can exit only if it makes an angle less than 24.4°. Facets on diamonds are specifically intended to make this unlikely, so that the light can exit only in certain places. Good diamonds are very clear, so that the light makes many internal reflections and is concentrated at the few places it can exit—hence the sparkle. (Zircon is a natural gemstone that has an exceptionally large index of refraction, but not as large as diamond, so it is not as highly prized. Cubic zirconia is manufactured and has an even higher index of refraction than zircon, but still less than that of diamond.) The colors you see emerging from a sparkling diamond are not due to the diamond’s color, which is usually nearly colorless. Those colors result from dispersion, the topic of Dispersion: The Rainbow and Prisms. Colored diamonds get their color from structural defects of the crystal lattice and the inclusion of minute quantities of graphite and other materials. The Argyle Mine in Western Australia produces around 90% of the world’s pink, red, champagne, and cognac diamonds, while around 50% of the world’s clear diamonds come from central and southern Africa.
### Test Prep for AP Courses
### Section Summary
1. The incident angle that produces an angle of refraction of 90° is called the critical angle, θ_c.
2. Total internal reflection is a phenomenon that occurs at the boundary between two mediums, such that if the incident angle in the first medium is greater than the critical angle, then all the light is reflected back into that medium.
3. Fiber optics involves the transmission of light down fibers of plastic or glass, applying the principle of total internal reflection.
4. Endoscopes are used to explore the body through various orifices or minor incisions, based on the transmission of light through optical fibers.
5. Cladding prevents light from being transmitted between fibers in a bundle.
6. Diamonds sparkle due to total internal reflection coupled with a large index of refraction.
### Conceptual Questions
### Problems & Exercises
# Geometric Optics
## Dispersion: The Rainbow and Prisms
### Learning Objectives
By the end of this section, you will be able to:
1. Explain the phenomenon of dispersion and discuss its advantages and disadvantages.
Everyone enjoys the spectacle and surprise of rainbows. They’ve been hailed as symbols of hope and spirituality and are the subject of stories and myths across the world’s cultures. Just how does sunlight falling on water droplets cause the multicolored image we see, and what else does this phenomenon tell us about light, color, and radiation? Working in his native Persia (now Iran), Kamal al-Din Hasan ibn Ali ibn Hasan al-Farisi (1267–1319) designed a series of innovative experiments to answer this question and clarify the explanations of many earlier scientists. At that time, there were no microscopes to examine tiny drops of water similar to those in the atmosphere, so Farisi created an enormous drop of water. He filled a large glass vessel with water and placed it inside a camera obscura, in which he could carefully control the entry of light. Using a series of careful observations on the resulting multicolored spectra of light, he deduced and confirmed that the droplets split—or decompose—white light into the colors of the rainbow. Farisi’s contemporary, Theodoric of Freiberg (in Germany), performed similar experiments using other equipment. Both relied on the prior work of Ibn al-Haytham, often known as the founder of optics and among the first to formalize a scientific method.
We see about six colors in a rainbow—red, orange, yellow, green, blue, and violet; sometimes indigo is listed, too. Those colors are associated with different wavelengths of light, as shown in . When our eye receives pure-wavelength light, we tend to see only one of the six colors, depending on wavelength. The thousands of other hues we can sense in other situations are our eye’s response to various mixtures of wavelengths. White light, in particular, is a fairly uniform mixture of all visible wavelengths. Sunlight, considered to be white, actually appears to be a bit yellow because of its mixture of wavelengths, but it does contain all visible wavelengths. The sequence of colors in rainbows is the same sequence as the colors plotted versus wavelength in . What this implies is that white light is spread out according to wavelength in a rainbow. Dispersion is defined as the spreading of white light into its full spectrum of wavelengths. More technically, dispersion occurs whenever there is a process that changes the direction of light in a manner that depends on wavelength. Dispersion, as a general phenomenon, can occur for any type of wave and always involves wavelength-dependent processes.
Refraction is responsible for dispersion in rainbows and many other situations. The angle of refraction depends on the index of refraction, as we saw in The Law of Refraction. We know that the index of refraction n depends on the medium. But for a given medium, n also depends on wavelength. (See . Note that, for a given medium, n increases as wavelength decreases and is greatest for violet light.) Thus violet light is bent more than red light, as shown for a prism in (b), and the light is dispersed into the same sequence of wavelengths as seen in and .
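To put a number on how small a wavelength-dependent change in n still produces visible dispersion, here is a short Python sketch. The crown-glass indices for red and violet light (1.512 and 1.530) are representative values assumed for illustration.

```python
import math

def refracted_angle(n1, n2, theta1_deg):
    """Snell's law: n1 sin(theta1) = n2 sin(theta2); returns theta2 in degrees."""
    return math.degrees(math.asin(n1 * math.sin(math.radians(theta1_deg)) / n2))

n_red, n_violet = 1.512, 1.530   # assumed crown-glass indices at red and violet
theta_incident = 60.0            # degrees, coming from air (n ~ 1.000)

print(refracted_angle(1.000, n_red, theta_incident))     # ~34.9 degrees
print(refracted_angle(1.000, n_violet, theta_incident))  # ~34.5 degrees
# Violet refracts slightly more than red; over the length of a prism (or a water
# drop) that fraction of a degree is enough to spread white light into a spectrum.
```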
Rainbows are produced by a combination of refraction and reflection. You may have noticed that you see a rainbow only when you look away from the sun. Light enters a drop of water and is reflected from the back of the drop, as shown in . The light is refracted both as it enters and as it leaves the drop. Since the index of refraction of water varies with wavelength, the light is dispersed, and a rainbow is observed, as shown in (a). (There is no dispersion caused by reflection at the back surface, since the law of reflection does not depend on wavelength.) The actual rainbow of colors seen by an observer depends on the myriad of rays being refracted and reflected toward the observer’s eyes from numerous drops of water. The effect is most spectacular when the background is dark, as in stormy weather, but can also be observed in waterfalls and lawn sprinklers. The arc of a rainbow comes from the need to be looking at a specific angle relative to the direction of the sun, as illustrated in (b). (If there are two reflections of light within the water drop, another “secondary” rainbow is produced. This rare event produces an arc that lies above the primary rainbow arc—see (c).)
Dispersion may produce beautiful rainbows, but it can cause problems in optical systems. White light used to transmit messages in a fiber is dispersed, spreading out in time and eventually overlapping with other messages. Since a laser produces a nearly pure wavelength, its light experiences little dispersion, an advantage over white light for transmission of information. In contrast, dispersion of electromagnetic waves coming to us from outer space can be used to determine the amount of matter they pass through. As with many phenomena, dispersion can be useful or a nuisance, depending on the situation and our human goals.
### Section Summary
1. The spreading of white light into its full spectrum of wavelengths is called dispersion.
2. Rainbows are produced by a combination of refraction and reflection and involve the dispersion of sunlight into a continuous distribution of colors.
3. Dispersion produces beautiful rainbows but also causes problems in certain optical systems.
### Problems & Exercises
# Geometric Optics
## Image Formation by Lenses
### Learning Objectives
By the end of this section, you will be able to:
1. List the rules for ray tracing for thin lenses.
2. Illustrate the formation of images using the technique of ray tracing.
3. Determine the power of a lens given the focal length.
Lenses are found in a huge array of optical instruments, ranging from a simple magnifying glass to the eye to a camera’s zoom lens. In this section, we will use the law of refraction to explore the properties of lenses and how they form images.
The word lens derives from the Latin word for a lentil bean, the shape of which is similar to the convex lens in . The convex lens shown has been shaped so that all light rays that enter it parallel to its axis cross one another at a single point on the opposite side of the lens. (The axis is defined to be a line normal to the lens at its center, as shown in .) Such a lens is called a converging (or convex) lens for the converging effect it has on light rays. An expanded view of the path of one ray through the lens is shown, to illustrate how the ray changes direction both as it enters and as it leaves the lens. Since the index of refraction of the lens is greater than that of air, the ray moves towards the perpendicular as it enters and away from the perpendicular as it leaves. (This is in accordance with the law of refraction.) Due to the lens’s shape, light is thus bent toward the axis at both surfaces. The point at which the rays cross is defined to be the focal point F of the lens. The distance from the center of the lens to its focal point is defined to be the focal length of the lens. shows how a converging lens, such as that in a magnifying glass, can converge the nearly parallel light rays from the sun to a small spot.
The greater effect a lens has on light rays, the more powerful it is said to be. For example, a powerful converging lens will focus parallel light rays closer to itself and will have a smaller focal length than a weak lens. The light will also focus into a smaller and more intense spot for a more powerful lens. The power P of a lens is defined to be the inverse of its focal length. In equation form, this is
P = 1/f.
shows a concave lens and the effect it has on rays of light that enter it parallel to its axis (the path taken by ray 2 in the figure is the axis of the lens). The concave lens is a diverging lens, because it causes the light rays to bend away (diverge) from its axis. In this case, the lens has been shaped so that all light rays entering it parallel to its axis appear to originate from the same point, F, defined to be the focal point of a diverging lens. The distance from the center of the lens to the focal point is again called the focal length f of the lens. Note that the focal length and power of a diverging lens are defined to be negative. For example, if the distance to F in is 5.00 cm, then the focal length is f = −5.00 cm and the power of the lens is P = −20.0 D. An expanded view of the path of one ray through the lens is shown in the figure to illustrate how the shape of the lens, together with the law of refraction, causes the ray to follow its particular path and be diverged.
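To make the sign convention concrete, here is a small Python sketch that converts focal lengths to powers using P = 1/f, with power expressed in diopters (1 D = 1 m⁻¹), the unit used for the lens powers quoted in this chapter.

```python
def lens_power(focal_length_m):
    """Power of a thin lens in diopters (D), from P = 1/f with f in meters."""
    return 1.0 / focal_length_m

print(lens_power(0.0500))    # converging lens, f = +5.00 cm  ->  +20.0 D
print(lens_power(-0.0500))   # diverging lens,  f = -5.00 cm  ->  -20.0 D
```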
As noted in the initial discussion of the law of refraction in The Law of Refraction, the paths of light rays are exactly reversible. This means that the direction of the arrows could be reversed for all of the rays in and . For example, if a point light source is placed at the focal point of a convex lens, as shown in , parallel light rays emerge from the other side.
### Ray Tracing and Thin Lenses
Ray tracing is the technique of determining or following (tracing) the paths that light rays take. For rays passing through matter, the law of refraction is used to trace the paths. Here we use ray tracing to help us understand the action of lenses in situations ranging from forming images on film to magnifying small print to correcting nearsightedness. While ray tracing for complicated lenses, such as those found in sophisticated cameras, may require computer techniques, there is a set of simple rules for tracing rays through thin lenses. A thin lens is defined to be one whose thickness allows rays to refract, as illustrated in , but does not allow properties such as dispersion and aberrations. An ideal thin lens has two refracting surfaces but the lens is thin enough to assume that light rays bend only once. A thin symmetrical lens has two focal points, one on either side and both at the same distance from the lens. (See .) Another important characteristic of a thin lens is that light rays through its center are deflected by a negligible amount, as seen in .
Using paper, pencil, and a straight edge, ray tracing can accurately describe the operation of a lens. The rules for ray tracing for thin lenses are based on the illustrations already discussed:
1. A ray entering a converging lens parallel to its axis passes through the focal point F of the lens on the other side. (See rays 1 and 3 in .)
2. A ray entering a diverging lens parallel to its axis seems to come from the focal point F. (See rays 1 and 3 in .)
3. A ray passing through the center of either a converging or a diverging lens does not change direction. (See , and see ray 2 in and .)
4. A ray entering a converging lens through its focal point exits parallel to its axis. (The reverse of rays 1 and 3 in .)
5. A ray that enters a diverging lens by heading toward the focal point on the opposite side exits parallel to the axis. (The reverse of rays 1 and 3 in .)
### Image Formation by Thin Lenses
In some circumstances, a lens forms an obvious image, such as when a movie projector casts an image onto a screen. In other cases, the image is less obvious. Where, for example, is the image formed by eyeglasses? We will use ray tracing for thin lenses to illustrate how they form images, and we will develop equations to describe the image formation quantitatively.
Consider an object some distance away from a converging lens, as shown in . To find the location and size of the image formed, we trace the paths of selected light rays originating from one point on the object, in this case the top of the person’s head. The figure shows three rays from the top of the object that can be traced using the ray tracing rules given above. (Rays leave this point going in many directions, but we concentrate on only a few with paths that are easy to trace.) The first ray is one that enters the lens parallel to its axis and passes through the focal point on the other side (rule 1). The second ray passes through the center of the lens without changing direction (rule 3). The third ray passes through the nearer focal point on its way into the lens and leaves the lens parallel to its axis (rule 4). The three rays cross at the same point on the other side of the lens. The image of the top of the person’s head is located at this point. All rays that come from the same point on the top of the person’s head are refracted in such a way as to cross at the point shown. Rays from another point on the object, such as her belt buckle, will also cross at another common point, forming a complete image, as shown. Although three rays are traced in , only two are necessary to locate the image. It is best to trace rays for which there are simple ray tracing rules. Before applying ray tracing to other situations, let us consider the example shown in in more detail.
The image formed in is a real image, meaning that it can be projected. That is, light rays from one point on the object actually cross at the location of the image and can be projected onto a screen, a piece of film, or the retina of an eye, for example. shows how such an image would be projected onto film by a camera lens. This figure also shows how a real image is projected onto the retina by the lens of an eye. Note that the image is there whether it is projected onto a screen or not.
Several important distances appear in . We define d_o to be the object distance, the distance of an object from the center of a lens. Image distance d_i is defined to be the distance of the image from the center of a lens. The height of the object and height of the image are given the symbols h_o and h_i, respectively. Images that appear upright relative to the object have heights that are positive and those that are inverted have negative heights. Using the rules of ray tracing and making a scale drawing with paper and pencil, like that in , we can accurately describe the location and size of an image. But the real benefit of ray tracing is in visualizing how images are formed in a variety of situations. To obtain numerical information, we use a pair of equations that can be derived from a geometric analysis of ray tracing for thin lenses. The thin lens equations are
1/d_o + 1/d_i = 1/f
and
m = h_i/h_o = −d_i/d_o.
We define the ratio of image height to object height (h_i/h_o) to be the magnification m. (The minus sign in the equation above will be discussed shortly.) The thin lens equations are broadly applicable to all situations involving thin lenses (and “thin” mirrors, as we will see later). We will explore many features of image formation in the following worked examples.
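As a numeric companion to the thin lens equations, the Python sketch below solves 1/d_o + 1/d_i = 1/f for the image distance and then computes the magnification; the object distances and focal lengths are arbitrary illustrative values rather than numbers from a worked example in this text.

```python
def thin_lens_image(d_o, f):
    """Return (image distance d_i, magnification m) for a thin lens,
    using 1/d_o + 1/d_i = 1/f and m = -d_i/d_o (any single length unit)."""
    d_i = 1.0 / (1.0 / f - 1.0 / d_o)
    m = -d_i / d_o
    return d_i, m

# Object beyond the focal length of a converging lens -> real, inverted image.
print(thin_lens_image(d_o=30.0, f=10.0))   # d_i = +15.0, m = -0.5
# Object inside the focal length -> virtual, upright, magnified image.
print(thin_lens_image(d_o=5.0, f=10.0))    # d_i = -10.0, m = +2.0
# Diverging lens -> virtual, upright, reduced image.
print(thin_lens_image(d_o=20.0, f=-10.0))  # d_i ~ -6.67, m ~ +0.33
```

Positive d_i corresponds to a real image on the far side of the lens; negative d_i corresponds to a virtual image on the same side as the object, matching the three cases summarized later in this section.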
Real images, such as the one considered in the previous example, are formed by converging lenses whenever an object is farther from the lens than its focal length. This is true for movie projectors, cameras, and the eye. We shall refer to these as case 1 images. A case 1 image is formed when d_o > f and f is positive, as in (a). (A summary of the three cases or types of image formation appears at the end of this section.)
A different type of image is formed when an object, such as a person's face, is held close to a convex lens. The image is upright and larger than the object, as seen in (b), and so the lens is called a magnifier. If you slowly pull the magnifier away from the face, you will see that the magnification steadily increases until the image begins to blur. Pulling the magnifier even farther away produces an inverted image as seen in (a). The distance at which the image blurs, and beyond which it inverts, is the focal length of the lens. To use a convex lens as a magnifier, the object must be closer to the converging lens than its focal length. This is called a case 2 image. A case 2 image is formed when d_o < f and f is positive.
uses ray tracing to show how an image is formed when an object is held closer to a converging lens than its focal length. Rays coming from a common point on the object continue to diverge after passing through the lens, but all appear to originate from a point at the location of the image. The image is on the same side of the lens as the object and is farther away from the lens than the object. This image, like all case 2 images, cannot be projected and, hence, is called a virtual image. Light rays only appear to originate at a virtual image; they do not actually pass through that location in space. A screen placed at the location of a virtual image will receive only diffuse light from the object, not focused rays from the lens. Additionally, a screen placed on the opposite side of the lens will receive rays that are still diverging, and so no image will be projected on it. We can see the magnified image with our eyes, because the lens of the eye converges the rays into a real image projected on our retina. Finally, we note that a virtual image is upright and larger than the object, meaning that the magnification is positive and greater than 1.
A third type of image is formed by a diverging or concave lens. Try looking through eyeglasses meant to correct nearsightedness. (See .) You will see an image that is upright but smaller than the object. This means that the magnification is positive but less than 1. The ray diagram in shows that the image is on the same side of the lens as the object and, hence, cannot be projected—it is a virtual image. Note that the image is closer to the lens than the object. This is a case 3 image, formed for any object by a negative focal length or diverging lens.
summarizes the three types of images formed by single thin lenses. These are referred to as case 1, 2, and 3 images. Convex (converging) lenses can form either real or virtual images (cases 1 and 2, respectively), whereas concave (diverging) lenses can form only virtual images (always case 3). Real images are always inverted, but they can be either larger or smaller than the object. For example, a slide projector forms an image larger than the slide, whereas a camera makes an image smaller than the object being photographed. Virtual images are always upright and cannot be projected. Virtual images are larger than the object only in case 2, where a convex lens is used. The virtual image produced by a concave lens is always smaller than the object—a case 3 image. We can see and photograph virtual images only by using an additional lens to form a real image.
In Image Formation by Mirrors, we shall see that mirrors can form exactly the same types of images as lenses.
### Problem-Solving Strategies for Lenses
Step 1. Examine the situation to determine that image formation by a lens is involved.
Step 2. Determine whether ray tracing, the thin lens equations, or both are to be employed. A sketch is very useful even if ray tracing is not specifically required by the problem. Write symbols and values on the sketch.
Step 3. Identify exactly what needs to be determined in the problem (identify the unknowns).
Step 4. Make a list of what is given or can be inferred from the problem as stated (identify the knowns). It is helpful to determine whether the situation involves a case 1, 2, or 3 image. While these are just names for types of images, they have certain characteristics (given in ) that can be of great use in solving problems.
Step 5. If ray tracing is required, use the ray tracing rules listed near the beginning of this section.
Step 6. Most quantitative problems require the use of the thin lens equations. These are solved in the usual manner by substituting knowns and solving for unknowns. Several worked examples serve as guides.
Step 7. Check to see if the answer is reasonable: Does it make sense? If you have identified the type of image (case 1, 2, or 3), you should assess whether your answer is consistent with the type of image, magnification, and so on.
### Test Prep for AP Courses
### Section Summary
1. Light rays entering a converging lens parallel to its axis cross one another at a single point on the opposite side.
2. For a converging lens, the focal point is the point at which converging light rays cross; for a diverging lens, the focal point is the point from which diverging light rays appear to originate.
3. The distance from the center of the lens to its focal point is called the focal length f.
4. Power of a lens is defined to be the inverse of its focal length, P = 1/f.
5. A lens that causes the light rays to bend away from its axis is called a diverging lens.
6. Ray tracing is the technique of graphically determining the paths that light rays take.
7. The image in which light rays from one point on the object actually cross at the location of the image and can be projected onto a screen, a piece of film, or the retina of an eye is called a real image.
8. Thin lens equations are 1/d_o + 1/d_i = 1/f and m = h_i/h_o = −d_i/d_o (magnification).
9. The distance of the image from the center of the lens is called image distance.
10. An image that is on the same side of the lens as the object and cannot be projected on a screen is called a virtual image.
### Conceptual Questions
### Problems & Exercises
# Geometric Optics
## Image Formation by Mirrors
### Learning Objectives
By the end of this section, you will be able to:
1. Illustrate image formation in a flat mirror.
2. Explain with ray diagrams the formation of an image using spherical mirrors.
3. Determine focal length and magnification given radius of curvature, distance of object and image.
We only have to look as far as the nearest bathroom to find an example of an image formed by a mirror. Images in flat mirrors are the same size as the object and are located behind the mirror. Like lenses, mirrors can form a variety of images. For example, dental mirrors may produce a magnified image, just as makeup mirrors do. Security mirrors in shops, on the other hand, form images that are smaller than the object. We will use the law of reflection to understand how mirrors form images, and we will find that mirror images are analogous to those formed by lenses.
helps illustrate how a flat mirror forms an image. Two rays are shown emerging from the same point, striking the mirror, and being reflected into the observer’s eye. The rays can diverge slightly, and both still get into the eye. If the rays are extrapolated backward, they seem to originate from a common point behind the mirror, locating the image. (The paths of the reflected rays into the eye are the same as if they had come directly from that point behind the mirror.) Using the law of reflection—the angle of reflection equals the angle of incidence—we can see that the image and object are the same distance from the mirror. This is a virtual image, since it cannot be projected—the rays only appear to originate from a common point behind the mirror. Obviously, if you walk behind the mirror, you cannot see the image, since the rays do not go there. But in front of the mirror, the rays behave exactly as if they had come from behind the mirror, so that is where the image is situated.
Now let us consider the focal length of a mirror—for example, the concave spherical mirrors in . Rays of light that strike the surface follow the law of reflection. For a mirror that is large compared with its radius of curvature, as in (a), we see that the reflected rays do not cross at the same point, and the mirror does not have a well-defined focal point. If the mirror had the shape of a parabola, the rays would all cross at a single point, and the mirror would have a well-defined focal point. But parabolic mirrors are much more expensive to make than spherical mirrors. The solution is to use a mirror that is small compared with its radius of curvature, as shown in (b). (This is the mirror equivalent of the thin lens approximation.) To a very good approximation, this mirror has a well-defined focal point at F that is the focal distance from the center of the mirror. The focal length of a concave mirror is positive, since it is a converging mirror.
Just as for lenses, the shorter the focal length, the more powerful the mirror; thus, P = 1/f for a mirror, too. A more strongly curved mirror has a shorter focal length and a greater power. Using the law of reflection and some simple trigonometry, it can be shown that the focal length is half the radius of curvature, or
f = R/2,
where R is the radius of curvature of a spherical mirror. The smaller the radius of curvature, the smaller the focal length and, thus, the more powerful the mirror.
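A brief Python sketch of f = R/2 and P = 1/f for spherical mirrors, using the same sign conventions as for thin lenses (positive for concave/converging, negative for convex/diverging); the 50.0 cm radius is an arbitrary illustrative value.

```python
def mirror_focal_length(radius_m):
    """Focal length of a spherical mirror small compared with its radius, f = R/2."""
    return radius_m / 2.0

R = 0.500                      # radius of curvature in meters (illustrative)
f_concave = mirror_focal_length(R)
print(f_concave, 1.0 / f_concave)     # concave mirror: f = +0.25 m, P = +4.0 D
print(-f_concave, 1.0 / -f_concave)   # same curvature as a convex mirror: -0.25 m, -4.0 D
```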
The convex mirror shown in also has a focal point. Parallel rays of light reflected from the mirror seem to originate from the point F at the focal distance behind the mirror. The focal length and power of a convex mirror are negative, since it is a diverging mirror.
Ray tracing is as useful for mirrors as for lenses. The rules for ray tracing for mirrors are based on the illustrations just discussed:
1. A ray approaching a concave converging mirror parallel to its axis is reflected through the focal point F of the mirror on the same side. (See rays 1 and 3 in (b).)
2. A ray approaching a convex diverging mirror parallel to its axis is reflected so that it seems to come from the focal point F behind the mirror. (See rays 1 and 3 in .)
3. Any ray striking the center of a mirror is followed by applying the law of reflection; it makes the same angle with the axis when leaving as when approaching. (See ray 2 in .)
4. A ray approaching a concave converging mirror through its focal point is reflected parallel to its axis. (The reverse of rays 1 and 3 in .)
5. A ray approaching a convex diverging mirror by heading toward its focal point on the opposite side is reflected parallel to the axis. (The reverse of rays 1 and 3 in .)
We will use ray tracing to illustrate how images are formed by mirrors, and we can use ray tracing quantitatively to obtain numerical information. But since we assume each mirror is small compared with its radius of curvature, we can use the thin lens equations for mirrors just as we did for lenses.
Consider the situation shown in , concave spherical mirror reflection, in which an object is placed farther from a concave (converging) mirror than its focal length. That is, f is positive and d_o > f, so that we may expect an image similar to the case 1 real image formed by a converging lens. Ray tracing in shows that the rays from a common point on the object all cross at a point on the same side of the mirror as the object. Thus a real image can be projected onto a screen placed at this location. The image distance is positive, and the image is inverted, so its magnification is negative. This is a case 1 image for mirrors. It differs from the case 1 image for lenses only in that the image is on the same side of the mirror as the object. It is otherwise identical.
### Problem-Solving Strategy for Mirrors
Step 1. Examine the situation to determine that image formation by a mirror is involved.
Step 2. Refer to the Problem-Solving Strategies for Lenses. The same strategies are valid for mirrors as for lenses with one qualification—use the ray tracing rules for mirrors listed earlier in this section.
### Test Prep for AP Courses
### Section Summary
1. The characteristics of an image formed by a flat mirror are: (a) The image and object are the same distance from the mirror, (b) The image is a virtual image, and (c) The image is situated behind the mirror.
2. Focal length is half the radius of curvature: f = R/2.
3. A convex mirror is a diverging mirror and forms only one type of image, namely a virtual image.
### Conceptual Questions
### Problems & Exercises
# Vision and Optical Instruments
## Connection for AP® Courses
Seeing faces and objects we love and cherish—one’s favorite teddy bear, a picture on the wall, or the sun rising over the mountains—is a delight. Intricate images help us understand nature and are invaluable for developing techniques and technologies in order to improve the quality of life. The image of a red blood cell that almost fills the cross-sectional area of a tiny capillary makes us wonder how blood makes it through and does not get stuck. We are able to see bacteria and viruses and understand their structure. It is the knowledge of physics that provides the fundamental understanding and the models required to develop new techniques and instruments. Therefore, physics is called an enabling science—it enables development and advancement in other areas. It is through optics and imaging that physics enables advancement in major areas of biosciences.
This chapter builds an understanding of vision and optical instruments on the idea that waves can transfer energy and momentum without the transfer of matter. In support of Big Idea 6, the way light waves travel is addressed using both conceptual and mathematical models. Throughout this unit, the direction of this travel is manipulated through the use of instruments like microscopes and telescopes, in support of Enduring Understanding 6.E.
When light enters a new transparent medium, like the crystalline lens of your eye or the glass lens of a microscope, it is bent either away or toward the line perpendicular to the boundary surface. This process is called “refraction,” as outlined in Essential Knowledge 6.E.3. In both the eye and the microscope, lenses use refraction in order to redirect light and form images. These images, alluded to by Essential Knowledge 6.E.4, can be magnified, shrunk, or inverted, depending upon the lens arrangement.
When a new medium is not fully transparent, the incident light may be reflected or absorbed, and some light may be transmitted. This idea, referenced in Essential Knowledge 6.E.1, is utilized in the construction of telescopes. By relying on the law of reflection and the idea that reflective surfaces can be used to form images, telescopes can be constructed using mirrors to distort the path of light. This distortion allows the person using the telescope to see objects at great distance. While household telescopes utilize wavelengths in the visible light range, telescopes like the Chandra X-ray Observatory and Square Kilometre Array are capable of collecting wavelengths of considerably different size. Essential Knowledge 6.E.2, 6.E.4, and 6.F.1 are all addressed within this telescope discussion.
While ray tracing may easily predict the images formed by lenses and mirrors, only the wave model can be used to describe observations of color. This concept, covered in Section 26.3, underlines Essential Knowledge 6.F.4, the idea that different models of light are appropriate at different scales. The understanding and utilization of both the particle and wave models of light, as described in Enduring Understanding 6.F, is critical to success throughout this chapter.
Big Idea 6 Waves can transfer energy and momentum from one location to another without the permanent transfer of mass and serve as a mathematical model for the description of other phenomena.
Enduring Understanding 6.E The direction of propagation of a wave such as light may be changed when the wave encounters an interface between two media.
Essential Knowledge 6.E.1 When light travels from one medium to another, some of the light is transmitted, some is reflected, and some is absorbed.
Essential Knowledge 6.E.2 When light hits a smooth reflecting surface at an angle, it reflects at the same angle on the other side of the line perpendicular to the surface (specular reflection); and this law of reflection accounts for the size and location of images seen in plane mirrors.
Essential Knowledge 6.E.3 When light travels across a boundary from one transparent material to another, the speed of propagation changes. At a non-normal incident angle, the path of the light ray bends closer to the perpendicular in the optically slower substance. This is called refraction.
Essential Knowledge 6.E.4 The reflection of light from surfaces can be used to form images.
Essential Knowledge 6.E.5 The refraction of light as it travels from one transparent medium to another can be used to form images.
Enduring Understanding 6.F Electromagnetic radiation can be modeled as waves or as fundamental particles.
Essential Knowledge 6.F.1 Types of electromagnetic radiation are characterized by their wavelengths, and certain ranges of wavelength have been given specific names. These include (in order of increasing wavelength spanning a range from picometers to kilometers) gamma rays, x-rays, ultraviolet, visible light, infrared, microwaves, and radio waves.
Essential Knowledge 6.F.4 The nature of light requires that different models of light are most appropriate at different scales.
# Vision and Optical Instruments
## Physics of the Eye
### Learning Objectives
By the end of this section, you will be able to:
1. Explain the image formation by the eye.
2. Explain why peripheral images lack detail and color.
3. Define refractive indices.
4. Analyze the accommodation of the eye for distant and near vision.
Early thinkers had a wide array of theories regarding vision. Euclid and Ptolemy believed that the eyes emitted rays of light; others promoted the idea that objects gave off some particle or substance that was discerned by the eye. Ibn al-Haytham (sometimes called Alhazen), who was mentioned earlier as an originator of the scientific method, conducted a number of experiments to illustrate how the anatomical construction of the eye led to its ability to form images. He recognized that light reflected from objects entered the eye through the lens and was passed to the optic nerve. Al-Haytham did not fully understand the mechanisms involved, but many subsequent discoveries in vision, reflection, and magnification built on his discoveries and methods.
The eye is perhaps the most interesting of all optical instruments. The eye is remarkable in how it forms images and in the richness of detail and color it can detect. However, our eyes commonly need some correction, to reach what is called “normal” vision, but should be called ideal rather than normal. Image formation by our eyes and common vision correction are easy to analyze with the optics discussed in Geometric Optics.
shows the basic anatomy of the eye. The cornea and lens form a system that, to a good approximation, acts as a single thin lens. For clear vision, a real image must be projected onto the light-sensitive retina, which lies at a fixed distance from the lens. The lens of the eye adjusts its power to produce an image on the retina for objects at different distances. The center of the image falls on the fovea, which has the greatest density of light receptors and the greatest acuity (sharpness) in the visual field. The variable opening (or pupil) of the eye along with chemical adaptation allows the eye to detect light intensities from the lowest observable to about 10^10 times greater (without damage). This is an incredible range of detection. Our eyes perform a vast number of functions, such as sensing direction, movement, sophisticated colors, and distance. Processing of visual nerve impulses begins with interconnections in the retina and continues in the brain. The optic nerve conveys signals received by the eye to the brain.
Refractive indices are crucial to image formation using lenses. shows refractive indices relevant to the eye. The biggest change in the refractive index, and bending of rays, occurs at the cornea rather than the lens. The ray diagram in shows image formation by the cornea and lens of the eye. The rays bend according to the refractive indices provided in . The cornea provides about two-thirds of the power of the eye, owing to the fact that the speed of light changes considerably while traveling from air into the cornea. The lens provides the remaining power needed to produce an image on the retina. The cornea and lens can be treated as a single thin lens, even though the light rays pass through several layers of material (such as cornea, aqueous humor, several layers in the lens, and vitreous humor), changing direction at each interface. The image formed is much like the one produced by a single convex lens. This is a case 1 image. Images formed in the eye are inverted but the brain inverts them once more to make them seem upright.
As noted, the image must fall precisely on the retina to produce clear vision — that is, the image distance must equal the lens-to-retina distance. Because the lens-to-retina distance does not change, the image distance must be the same for objects at all distances. The eye manages this by varying the power (and focal length) of the lens to accommodate for objects at various distances. The process of adjusting the eye’s focal length is called accommodation. A person with normal (ideal) vision can see objects clearly at distances ranging from 25 cm to essentially infinity. However, although the near point (the shortest distance at which a sharp focus can be obtained) increases with age (becoming meters for some older people), we will consider it to be 25 cm in our treatment here.
shows the accommodation of the eye for distant and near vision. Since light rays from a nearby object can diverge and still enter the eye, the lens must be more converging (more powerful) for close vision than for distant vision. To be more converging, the lens is made thicker by the action of the ciliary muscle surrounding it. The eye is most relaxed when viewing distant objects, one reason that microscopes and telescopes are designed to produce distant images. Vision of very distant objects is called totally relaxed, while close vision is termed accommodated, with the closest vision being fully accommodated.
We will use the thin lens equations to examine image formation by the eye quantitatively. First, note the power of a lens is given as P = 1/f, so we rewrite the thin lens equations as
P = 1/d_o + 1/d_i
and
h_i/h_o = −d_i/d_o = m.
We understand that d_i must equal the lens-to-retina distance to obtain clear vision, and that normal vision is possible for objects at distances of d_o = 25 cm to infinity.
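A small numeric sketch of accommodation, in Python, using P = 1/d_o + 1/d_i with the 2.00 cm lens-to-retina distance assumed in this chapter's problem exercises and the 25 cm normal near point.

```python
def eye_power(d_o_m, lens_to_retina_m=0.0200):
    """Total power of the eye (diopters) needed to focus an object at d_o
    onto the retina, from P = 1/d_o + 1/d_i with d_i fixed by the eye's size."""
    return 1.0 / d_o_m + 1.0 / lens_to_retina_m

print(eye_power(float("inf")))  # distant vision (totally relaxed):        50.0 D
print(eye_power(0.25))          # near point at 25 cm (fully accommodated): 54.0 D
```

The roughly 8% increase in power between relaxed and fully accommodated vision is supplied by the ciliary muscle thickening the lens, as described above.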
The eye can detect an impressive amount of detail, considering how small the image is on the retina. To get some idea of how small the image can be, consider the following example.
### Test Prep for AP Courses
### Section Summary
1. Image formation by the eye is adequately described by the thin lens equations: P = 1/d_o + 1/d_i and h_i/h_o = −d_i/d_o = m.
2. The eye produces a real image on the retina by adjusting its focal length and power in a process called accommodation.
3. For close vision, the eye is fully accommodated and has its greatest power, whereas for distant vision, it is totally relaxed and has its smallest power.
4. The loss of the ability to accommodate with age is called presbyopia, which is corrected by the use of a converging lens to add power for close vision.
### Conceptual Questions
### Problem Exercises
Unless otherwise stated, the lens-to-retina distance is 2.00 cm.
# Vision and Optical Instruments
## Vision Correction
### Learning Objectives
By the end of this section, you will be able to:
1. Identify and discuss common vision defects.
2. Explain nearsightedness and farsightedness corrections.
3. Explain laser vision correction.
The need for some type of vision correction is very common. Common vision defects are easy to understand, and some are simple to correct. illustrates two common vision defects. Nearsightedness, or myopia, is the inability to see distant objects clearly while close objects are clear. The eye overconverges the nearly parallel rays from a distant object, and the rays cross in front of the retina. More divergent rays from a close object are converged on the retina for a clear image. The distance to the farthest object that can be seen clearly is called the far point of the eye (normally infinity). Farsightedness, or hyperopia, is the inability to see close objects clearly while distant objects may be clear. A farsighted eye does not converge sufficient rays from a close object to make the rays meet on the retina. Less diverging rays from a distant object can be converged for a clear image. The distance to the closest object that can be seen clearly is called the near point of the eye (normally 25 cm).
Since the nearsighted eye overconverges light rays, the correction for nearsightedness is to place a diverging spectacle lens in front of the eye. This reduces the power of an eye that is too powerful. Another way of thinking about this is that a diverging spectacle lens produces a case 3 image, which is closer to the eye than the object (see ). To determine the spectacle power needed for correction, you must know the person’s far point—that is, you must know the greatest distance at which the person can see clearly. Then the image produced by a spectacle lens must be at this distance or closer for the nearsighted person to be able to see it clearly. It is worth noting that wearing glasses does not change the eye in any way. The eyeglass lens is simply used to create an image of the object at a distance where the nearsighted person can see it clearly. Whereas someone not wearing glasses can see clearly objects that fall between their near point and their far point, someone wearing glasses can see images that fall between their near point and their far point.
Since the farsighted eye underconverges light rays, the correction for farsightedness is to place a converging spectacle lens in front of the eye. This increases the power of an eye that is too weak. Another way of thinking about this is that a converging spectacle lens produces a case 2 image, which is farther from the eye than the object (see ). To determine the spectacle power needed for correction, you must know the person’s near point—that is, you must know the smallest distance at which the person can see clearly. Then the image produced by a spectacle lens must be at this distance or farther for the farsighted person to be able to see it clearly.
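The correction logic of the last two paragraphs can be made quantitative with the thin lens equation: the spectacle forms a virtual image of the object at a distance the eye can handle. The Python sketch below uses the common simplifying assumption that the lens sits essentially at the eye (ignoring the small eye-to-lens separation); the far point and near point values are illustrative, not from a worked example in this text.

```python
def spectacle_power(d_o_m, d_i_m):
    """Spectacle lens power (diopters) that images an object at d_o to a
    virtual image at d_i (negative d_i: image on the same side as the object)."""
    return 1.0 / d_o_m + 1.0 / d_i_m

# Nearsighted person with a 30 cm far point: bring distant objects in to -0.30 m.
print(spectacle_power(float("inf"), -0.30))   # about -3.3 D (diverging lens)

# Farsighted person with a 1.00 m near point: push an object at 25 cm out to -1.00 m.
print(spectacle_power(0.25, -1.00))           # +3.0 D (converging lens)
```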
Another common vision defect is astigmatism, an unevenness or asymmetry in the focus of the eye. For example, rays passing through a vertical region of the eye may focus closer than rays passing through a horizontal region, resulting in the image appearing elongated. This is mostly due to irregularities in the shape of the cornea but can also be due to lens irregularities or unevenness in the retina. Because of these irregularities, different parts of the lens system produce images at different locations. The eye-brain system can compensate for some of these irregularities, but they generally manifest themselves as less distinct vision or sharper images along certain axes. shows a chart used to detect astigmatism. Astigmatism can be at least partially corrected with a spectacle having the opposite irregularity of the eye. If an eyeglass prescription has a cylindrical correction, it is there to correct astigmatism. The normal corrections for short- or farsightedness are spherical corrections, uniform along all axes.
Contact lenses have advantages over glasses beyond their cosmetic aspects. One problem with glasses is that as the eye moves, it is not at a fixed distance from the spectacle lens. Contacts rest on and move with the eye, eliminating this problem. Because contacts cover a significant portion of the cornea, they provide superior peripheral vision compared with eyeglasses. Contacts also correct some corneal astigmatism caused by surface irregularities. The tear layer between the smooth contact and the cornea fills in the irregularities. Since the index of refraction of the tear layer and the cornea are very similar, you now have a regular optical surface in place of an irregular one. If the curvature of a contact lens is not the same as the cornea (as may be necessary with some individuals to obtain a comfortable fit), the tear layer between the contact and cornea acts as a lens. If the tear layer is thinner in the center than at the edges, it has a negative power, for example. Skilled optometrists will adjust the power of the contact to compensate.
Other advances in vision correction demonstrate the interconnectedness and value of scientific research. In the 1980s, Donna Strickland and Gérard Mourou worked on ways to make small but powerful lasers. Up until that time, powerful lasers had to be quite large in order to function properly. Essentially, the intensity of the beam itself would modify the instrument’s ability to function and create too much heat to be practical. Strickland and Mourou used ultrashort laser pulses passed over a grating that modified the beam but retained its power. Chirped pulse amplification, as it became known, has been used to develop most of the highest-powered lasers in the world, but also some of the smallest and most common. Decades after their initial discovery, Strickland and Mourou were awarded the Nobel Prize for Physics (with Strickland becoming the third woman to receive the award) partly due to CPA’s pivotal role in the increasingly common practice of laser vision correction—an application neither planned during their initial research.
Laser vision correction has progressed rapidly in the last few years. It is the latest and by far the most successful in a series of procedures that correct vision by reshaping the cornea. As noted at the beginning of this section, the cornea accounts for about two-thirds of the power of the eye. Thus, small adjustments of its curvature have the same effect as putting a lens in front of the eye. To a reasonable approximation, the power of multiple lenses placed close together equals the sum of their powers. For example, a concave spectacle lens (for nearsightedness) having P = −3.00 D has the same effect on vision as reducing the power of the eye itself by 3.00 D. So to correct the eye for nearsightedness, the cornea is flattened to reduce its power. Similarly, to correct for farsightedness, the curvature of the cornea is enhanced to increase the power of the eye—the same effect as the positive power spectacle lens used for farsightedness. Laser vision correction uses high intensity electromagnetic radiation to ablate (to remove material from the surface) and reshape the corneal surfaces.
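A one-line illustration of the "powers add" approximation used above; the 50.0 D relaxed-eye power is the illustrative value from the accommodation sketch earlier, not a measured quantity.

```python
def combined_power(*powers_d):
    """Approximate total power (D) of thin lenses in close contact: powers simply add."""
    return sum(powers_d)

relaxed_eye = 50.0   # illustrative relaxed-eye power, in diopters
spectacle = -3.00    # diverging spectacle lens for nearsightedness
print(combined_power(relaxed_eye, spectacle))   # 47.0 D: the corrected system is weaker
```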
Today, the most commonly used laser vision correction procedure is Laser in situ Keratomileusis (LASIK). The top layer of the cornea is surgically peeled back and the underlying tissue ablated by multiple bursts of finely controlled ultraviolet radiation produced by an excimer laser. Lasers are used because they not only produce well-focused intense light, but they also emit very pure wavelength electromagnetic radiation that can be controlled more accurately than mixed wavelength light. The 193 nm wavelength UV commonly used is very strongly absorbed by corneal tissue, allowing precise evaporation of very thin layers. A computer-controlled program applies more bursts, usually at a rate of 10 per second, to the areas that require deeper removal. Typically a spot less than 1 mm in diameter and about in thickness is removed by each burst. Nearsightedness, farsightedness, and astigmatism can be corrected with an accuracy that produces normal distant vision in more than 90% of the patients, in many cases right away. The corneal flap is replaced; healing takes place rapidly and is nearly painless. More than 1 million Americans per year undergo LASIK (see ).
### Test Prep for AP Courses
### Section Summary
1. Nearsightedness, or myopia, is the inability to see distant objects and is corrected with a diverging lens to reduce power.
2. Farsightedness, or hyperopia, is the inability to see close objects and is corrected with a converging lens to increase power.
3. In myopia and hyperopia, the corrective lenses produce images at a distance that the person can see clearly—the far point and near point, respectively.
### Conceptual Questions
### Problem Exercises
# Vision and Optical Instruments
## Color and Color Vision
### Learning Objectives
By the end of this section, you will be able to:
1. Explain the simple theory of color vision.
2. Outline the coloring properties of light sources.
3. Describe the retinex theory of color vision.
The gift of vision is made richer by the existence of color. Objects and lights abound with thousands of hues that stimulate our eyes, brains, and emotions. Two basic questions are addressed in this brief treatment—what does color mean in scientific terms, and how do we, as humans, perceive it?
### Simple Theory of Color Vision
We have already noted that color is associated with the wavelength of visible electromagnetic radiation. When our eyes receive pure-wavelength light, we tend to see only a few colors. Six of these (most often listed) are red, orange, yellow, green, blue, and violet. These are the rainbow of colors produced when white light is dispersed according to different wavelengths. There are thousands of other hues that we can perceive. These include brown, teal, gold, pink, and white. One simple theory of color vision implies that all these hues are our eye’s response to different combinations of wavelengths. This is true to an extent, but we find that color perception is even subtler than our eye’s response to various wavelengths of light.
The two major types of light-sensing cells (photoreceptors) in the retina are rods and cones. Rods are more sensitive than cones by a factor of about 1000 and are solely responsible for peripheral vision as well as vision in very dark environments. They are also important for motion detection. There are about 120 million rods in the human retina. Rods do not yield color information. You may notice that you lose color vision when it is very dark, but you retain the ability to discern grey scales.
Cones are most concentrated in the fovea, the central region of the retina. There are no rods here. The fovea is at the center of the macula, a 5 mm diameter region responsible for our central vision. The cones work best in bright light and are responsible for high resolution vision. There are about 6 million cones in the human retina. There are three types of cones, and each type is sensitive to different ranges of wavelengths, as illustrated in . A simplified theory of color vision is that there are three primary colors corresponding to the three types of cones. The thousands of other hues that we can distinguish among are created by various combinations of stimulations of the three types of cones. Color television uses a three-color system in which the screen is covered with equal numbers of red, green, and blue phosphor dots. The broad range of hues a viewer sees is produced by various combinations of these three colors. For example, you will perceive yellow when red and green are illuminated with the correct ratio of intensities. White may be sensed when all three are illuminated. Then, it would seem that all hues can be produced by adding three primary colors in various proportions. But there is an indication that color vision is more sophisticated. There is no unique set of three primary colors. Another set that works is yellow, green, and blue. A further indication of the need for a more complex theory of color vision is that various different combinations can produce the same hue. Yellow can be sensed with yellow light, or with a combination of red and green, and also with white light from which violet has been removed. The three-primary-colors aspect of color vision is well established; more sophisticated theories expand on it rather than deny it.
Consider why various objects display color—that is, why are feathers blue and red in a crimson rosella? The true color of an object is defined by its absorptive or reflective characteristics. shows white light falling on three different objects, one pure blue, one pure red, and one black, as well as pure red light falling on a white object. Other hues are created by more complex absorption characteristics. Pink, for example on a galah cockatoo, can be due to weak absorption of all colors except red. An object can appear a different color under non-white illumination. For example, a pure blue object illuminated with pure red light will appear black, because it absorbs all the red light falling on it. But, the true color of the object is blue, which is independent of illumination.
Similarly, light sources have colors that are defined by the wavelengths they produce. A helium-neon laser emits pure red light. In fact, the phrase “pure red light” is defined by having a sharp constrained spectrum, a characteristic of laser light. The Sun produces a broad yellowish spectrum, fluorescent lights emit bluish-white light, and incandescent lights emit reddish-white hues as seen in . As you would expect, you sense these colors when viewing the light source directly or when illuminating a white object with them. All of this fits neatly into the simplified theory that a combination of wavelengths produces various hues.
### Color Constancy and a Modified Theory of Color Vision
The eye-brain color-sensing system can, by comparing various objects in its view, perceive the true color of an object under varying lighting conditions—an ability that is called color constancy. We can sense that a white tablecloth, for example, is white whether it is illuminated by sunlight, fluorescent light, or candlelight. The wavelengths entering the eye are quite different in each case, as the graphs in imply, but our color vision can detect the true color by comparing the tablecloth with its surroundings.
Theories that take color constancy into account are based on a large body of anatomical evidence as well as perceptual studies. There are nerve connections among the light receptors on the retina, and there are far fewer nerve connections to the brain than there are rods and cones. This means that there is signal processing in the eye before information is sent to the brain. For example, the eye makes comparisons between adjacent light receptors and is very sensitive to edges as seen in . Rather than responding simply to the light entering the eye, which is uniform in the various rectangles in this figure, the eye responds to the edges and senses false darkness variations.
One theory that takes various factors into account was advanced by Edwin Land (1909 – 1991), the creative founder of the Polaroid Corporation. Land proposed, based partly on his many elegant experiments, that the three types of cones are organized into systems called retinexes. Each retinex forms an image that is compared with the others, and the eye-brain system thus can compare a candle-illuminated white table cloth with its generally reddish surroundings and determine that it is actually white. This retinex theory of color vision is an example of modified theories of color vision that attempt to account for its subtleties. One striking experiment performed by Land demonstrates that some type of image comparison may produce color vision. Two pictures are taken of a scene on black-and-white film, one using a red filter, the other a blue filter. Resulting black-and-white slides are then projected and superimposed on a screen, producing a black-and-white image, as expected. Then a red filter is placed in front of the slide taken with a red filter, and the images are again superimposed on a screen. You would expect an image in various shades of pink, but instead, the image appears to humans in full color with all the hues of the original scene. This implies that color vision can be induced by comparison of the black-and-white and red images. Color vision is not completely understood or explained, and the retinex theory is not totally accepted. It is apparent that color vision is much subtler than what a first look might imply.
### Test Prep for AP Courses
### Section Summary
1. The eye has four types of light receptors—rods and three types of color-sensitive cones.
2. The rods are good for night vision, peripheral vision, and motion changes, while the cones are responsible for central vision and color.
3. We perceive many hues from light having mixtures of wavelengths.
4. A simplified theory of color vision states that there are three primary colors, which correspond to the three types of cones, and that various combinations of the primary colors produce all the hues.
5. The true color of an object is related to its relative absorption of various wavelengths of light. The color of a light source is related to the wavelengths it produces.
6. Color constancy is the ability of the eye-brain system to discern the true color of an object illuminated by various light sources.
7. The retinex theory of color vision explains color constancy by postulating the existence of three retinexes, or image systems, associated with the three types of cones, which are compared to obtain sophisticated information.
### Conceptual Questions
# Vision and Optical Instruments
## Microscopes
### Learning Objectives
By the end of this section, you will be able to:
1. Investigate different types of microscopes.
2. Learn how an image is formed in a compound microscope.
Although the eye is marvelous in its ability to see objects large and small, it obviously has limitations to the smallest details it can detect. Human desire to see beyond what is possible with the naked eye led to the use of optical instruments. In this section we will examine microscopes, instruments for enlarging the detail that we cannot see with the unaided eye. The microscope is a multiple-element system having more than a single lens or mirror. (See ) A microscope can be made from two convex lenses. The image formed by the first element becomes the object for the second element. The second element forms its own image, which is the object for the third element, and so on. Ray tracing helps to visualize the image formed. If the device is composed of thin lenses and mirrors that obey the thin lens equations, then it is not difficult to describe their behavior numerically.
Microscopes were first developed in the early 1600s by eyeglass makers in The Netherlands and Denmark. The simplest compound microscope is constructed from two convex lenses as shown schematically in . The first lens is called the objective lens, and has typical magnification values from to . In standard microscopes, the objectives are mounted such that when you switch between objectives, the sample remains in focus. Objectives arranged in this way are described as parfocal. The second, the eyepiece, also referred to as the ocular, has several lenses which slide inside a cylindrical barrel. The focusing ability is provided by the movement of both the objective lens and the eyepiece. The purpose of a microscope is to magnify small objects, and both lenses contribute to the final magnification. Additionally, the final enlarged image is produced in a location far enough from the observer to be easily viewed, since the eye cannot focus on objects or images that are too close.
To see how the microscope in forms an image, we consider its two lenses in succession. The object is slightly farther away from the objective lens than its focal length $f_{\mathrm{o}}$, producing a case 1 image that is larger than the object. This first image is the object for the second lens, or eyepiece. The eyepiece is intentionally located so it can further magnify the image. The eyepiece is placed so that the first image is closer to it than its focal length $f_{\mathrm{e}}$. Thus the eyepiece acts as a magnifying glass, and the final image is made even larger. The final image remains inverted, but it is farther from the observer, making it easy to view (the eye is most relaxed when viewing distant objects and normally cannot focus closer than 25 cm). Since each lens produces a magnification that multiplies the height of the image, it is apparent that the overall magnification $m$ is the product of the individual magnifications:

$m = m_{\mathrm{o}} m_{\mathrm{e}},$

where $m_{\mathrm{o}}$ is the magnification of the objective and $m_{\mathrm{e}}$ is the magnification of the eyepiece. This equation can be generalized for any combination of thin lenses and mirrors that obey the thin lens equations.
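As a quick numerical illustration of this product rule, here is a minimal Python sketch; the 40x objective and 10x eyepiece values are assumed for the example, not taken from the text.

```python
# Sketch: the overall magnification of a compound microscope is the product of
# the objective and eyepiece magnifications, m = m_o * m_e.

def microscope_magnification(m_objective, m_eyepiece):
    return m_objective * m_eyepiece

# Example (assumed values): a 40x objective with a 10x eyepiece gives 400x overall.
print(microscope_magnification(40, 10))  # -> 400
```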
Normal optical microscopes can magnify up to with a theoretical resolution of . The lenses can be quite complicated and are composed of multiple elements to reduce aberrations. Microscope objective lenses are particularly important as they primarily gather light from the specimen. Three parameters describe microscope objectives: the numerical aperture ($NA$), the magnification ($m$), and the working distance. The $NA$ is related to the light gathering ability of a lens and is obtained using the angle of acceptance $\theta$ formed by the maximum cone of rays focusing on the specimen (see (a)) and is given by

$NA = n \sin \alpha,$

where $n$ is the refractive index of the medium between the lens and the specimen and $\alpha = \theta/2$. As the angle of acceptance given by $\theta$ increases, $NA$ becomes larger and more light is gathered from a smaller focal region, giving higher resolution. An objective with a larger $NA$ therefore resolves more detail than one with a smaller $NA$.
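The numerical aperture formula is easy to evaluate directly. The sketch below uses assumed, illustrative values (a 70 degree acceptance angle, and indices of 1.00 for air and 1.51 for immersion oil); none of these numbers come from the text.

```python
import math

# Sketch: numerical aperture NA = n * sin(alpha), where alpha is half the
# acceptance angle theta and n is the index of the medium between the lens
# and the specimen. The values below are illustrative assumptions.

def numerical_aperture(n_medium, acceptance_angle_deg):
    alpha = math.radians(acceptance_angle_deg / 2.0)
    return n_medium * math.sin(alpha)

print(round(numerical_aperture(1.00, 70.0), 2))   # dry objective in air  -> 0.57
print(round(numerical_aperture(1.51, 70.0), 2))   # oil-immersion objective -> 0.87
```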
While the numerical aperture can be used to compare resolutions of various objectives, it does not indicate how far the lens could be from the specimen. This is specified by the “working distance,” which is the distance (in mm usually) from the front lens element of the objective to the specimen, or cover glass. The higher the $NA$, the closer the lens will be to the specimen and the more chances there are of breaking the cover slip and damaging both the specimen and the lens. The focal length of an objective lens is different than the working distance. This is because objective lenses are made of a combination of lenses and the focal length is measured from inside the barrel. The working distance is a parameter that microscopists can use more readily as it is measured from the outermost lens. The working distance decreases as the $NA$ and magnification both increase.
The term $f/\#$ in general is called the f-number and is used to denote the light per unit area reaching the image plane. In photography, an image of an object at infinity is formed at the focal point and the f-number is given by the ratio of the focal length $f$ of the lens and the diameter $D$ of the aperture controlling the light into the lens (see (b)). If the acceptance angle is small, the $NA$ of the lens can also be used, as given below.

$f/\# = \frac{f}{D} \approx \frac{1}{2\,NA}$

As the f-number decreases, the camera is able to gather light from a larger angle, giving wide-angle photography. As usual there is a trade-off. A greater $f/\#$ means less light reaches the image plane. A large f-number setting (small aperture diameter) usually allows one to take pictures in bright sunlight. In optical fibers, light needs to be focused into the fiber. shows the angle used in calculating the $NA$ of an optical fiber.
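A short sketch of these two relations follows; the 50 mm focal length, 25 mm aperture, and NA of 0.25 are assumed values used only for illustration.

```python
# Sketch: f-number = focal length / aperture diameter; for small acceptance
# angles it is approximately 1/(2*NA). Values below are illustrative assumptions.

def f_number(focal_length_mm, aperture_diameter_mm):
    return focal_length_mm / aperture_diameter_mm

def f_number_from_na(na):
    return 1.0 / (2.0 * na)

print(f_number(50.0, 25.0))              # -> 2.0, i.e. an f/2 lens
print(round(f_number_from_na(0.25), 1))  # -> 2.0 for an assumed NA of 0.25
```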
Can the $NA$ be larger than 1.00? The answer is ‘yes’ if we use immersion lenses in which a medium such as oil, glycerine or water is placed between the objective and the microscope cover slip. This minimizes the mismatch in refractive indices as light rays go through different media, generally providing a greater light-gathering ability and an increase in resolution. shows light rays when using air and immersion lenses.
When using a microscope we do not see the entire extent of the sample. Depending on the eyepiece and objective lens we see a restricted region which we say is the field of view. The objective is then manipulated in two-dimensions above the sample to view other regions of the sample. Electronic scanning of either the objective or the sample is used in scanning microscopy. The image formed at each point during the scanning is combined using a computer to generate an image of a larger region of the sample at a selected magnification.
When using a microscope, we rely on gathering light to form an image. Hence most specimens need to be illuminated, particularly at higher magnifications, when observing details that are so small that they reflect only small amounts of light. To make such objects easily visible, the intensity of light falling on them needs to be increased. Special illuminating systems called condensers are used for this purpose. The type of condenser that is suitable for an application depends on how the specimen is examined, whether by transmission, scattering or reflecting. See for an example of each. White light sources are common and lasers are often used. Laser light illumination tends to be quite intense and it is important to ensure that the light does not result in the degradation of the specimen.
We normally associate microscopes with visible light, but x ray and electron microscopes provide greater resolution. The focusing and basic physics are the same as that just described, even though the lenses require different technology. The electron microscope requires vacuum chambers so that the electrons can proceed unimpeded. Magnifications of 50 million times provide the ability to determine positions of individual atoms within materials. An electron microscope is shown in . We do not use our eyes to form images; rather images are recorded electronically and displayed on computers. In fact observing and saving images formed by optical microscopes on computers is now done routinely. Video recordings of what occurs in a microscope can be made for viewing by many people at later dates. Advances in this powerful technology continue. In the 1990s, Pratibha L. Gai invented the environmental transmission electron microscope (ETEM), which was the first device capable of observing individual atoms in chemical reactions.
### Test Prep for AP Courses
### Section Summary
1. The microscope is a multiple-element system having more than a single lens or mirror.
2. Many optical devices contain more than a single lens or mirror. These are analysed by considering each element sequentially. The image formed by the first is the object for the second, and so on. The same ray tracing and thin lens techniques apply to each lens element.
3. The overall magnification of a multiple-element system is the product of the magnifications of its individual elements. For a two-element system with an objective and an eyepiece, this is $m = m_{\mathrm{o}} m_{\mathrm{e}}$, where $m_{\mathrm{o}}$ is the magnification of the objective and $m_{\mathrm{e}}$ is the magnification of the eyepiece, such as for a microscope.
4. Microscopes are instruments for allowing us to see detail we would not be able to see with the unaided eye and consist of a range of components.
5. The eyepiece and objective contribute to the magnification. The numerical aperture $NA$ of an objective is given by $NA = n \sin \alpha$, where $n$ is the refractive index and $\alpha$ the angle of acceptance.
6. Immersion techniques are often used to improve the light gathering ability of microscopes. The specimen is illuminated by transmitted, scattered or reflected light through a condenser.
7. The $f/\#$ describes the light gathering ability of a lens. It is given by $f/\# = \frac{f}{D} \approx \frac{1}{2\,NA}$.
### Conceptual Questions
### Problem Exercises
# Vision and Optical Instruments
## Telescopes
### Learning Objectives
By the end of this section, you will be able to:
1. Outline the invention of the telescope.
2. Describe the working of a telescope.
Telescopes are meant for viewing distant objects, producing an image that is larger than the image that can be seen with the unaided eye. Telescopes gather far more light than the eye, allowing dim objects to be observed with greater magnification and better resolution. Although Galileo is often credited with inventing the telescope, he actually did not. What he did was more important. He constructed several early telescopes, was the first to study the heavens with them, and made monumental discoveries using them. Among these are the moons of Jupiter, the craters and mountains on the Moon, the details of sunspots, and the fact that the Milky Way is composed of vast numbers of individual stars.
(a) shows a telescope made of two lenses, the convex objective and the concave eyepiece, the same construction used by Galileo. Such an arrangement produces an upright image and is used in spyglasses and opera glasses.
The most common two-lens telescope, like the simple microscope, uses two convex lenses and is shown in (b). The object is so far away from the telescope that it is essentially at infinity compared with the focal lengths of the lenses ($d_{\mathrm{o}} \approx \infty$). The first image is thus produced at $d_{\mathrm{i}} = f_{\mathrm{o}}$, as shown in the figure. To prove this, note that

$\frac{1}{d_{\mathrm{i}}} = \frac{1}{f_{\mathrm{o}}} - \frac{1}{d_{\mathrm{o}}} = \frac{1}{f_{\mathrm{o}}} - \frac{1}{\infty}.$

Because $1/\infty \approx 0$, this simplifies to

$\frac{1}{d_{\mathrm{i}}} = \frac{1}{f_{\mathrm{o}}},$

which implies that $d_{\mathrm{i}} = f_{\mathrm{o}}$, as claimed. It is true that for any distant object and any lens or mirror, the image is at the focal length.
The first image formed by a telescope objective as seen in (b) will not be large compared with what you might see by looking at the object directly. For example, the spot formed by sunlight focused on a piece of paper by a magnifying glass is the image of the Sun, and it is small. The telescope eyepiece (like the microscope eyepiece) magnifies this first image. The distance between the eyepiece and the objective lens is made slightly less than the sum of their focal lengths so that the first image is closer to the eyepiece than its focal length. That is, the distance from the first image to the eyepiece is less than $f_{\mathrm{e}}$, and so the eyepiece forms a case 2 image that is large and to the left for easy viewing. If the angle subtended by an object as viewed by the unaided eye is $\theta$, and the angle subtended by the telescope image is $\theta'$, then the angular magnification $M$ is defined to be their ratio. That is, $M = \theta'/\theta$. It can be shown that the angular magnification of a telescope is related to the focal lengths of the objective and eyepiece and is given by

$M = \frac{\theta'}{\theta} = -\frac{f_{\mathrm{o}}}{f_{\mathrm{e}}}.$
The minus sign indicates the image is inverted. To obtain the greatest angular magnification, it is best to have a long focal length objective and a short focal length eyepiece. The greater the angular magnification , the larger an object will appear when viewed through a telescope, making more details visible. Limits to observable details are imposed by many factors, including lens quality and atmospheric disturbance.
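This relation is simple to compute. The sketch below uses assumed focal lengths (a 75.0 cm objective and a 3.0 cm eyepiece) purely for illustration.

```python
# Sketch: angular magnification of a two-lens telescope, M = -f_o / f_e.
# The negative sign indicates the final image is inverted. Illustrative values.

def telescope_magnification(f_objective_cm, f_eyepiece_cm):
    return -f_objective_cm / f_eyepiece_cm

# Example (assumed values): a 75.0 cm objective with a 3.0 cm eyepiece gives
# M = -25, a 25x magnified, inverted image.
print(telescope_magnification(75.0, 3.0))  # -> -25.0
```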
The image in most telescopes is inverted, which is unimportant for observing the stars but a real problem for other applications, such as telescopes on ships or telescopic gun sights. If an upright image is needed, Galileo’s arrangement in (a) can be used. But a more common arrangement is to use a third convex lens as an eyepiece, increasing the distance between the first two and inverting the image once again as seen in .
A telescope can also be made with a concave mirror as its first element or objective, since a concave mirror acts like a convex lens as seen in . Flat mirrors are often employed in optical instruments to make them more compact or to send light to cameras and other sensing devices. There are many advantages to using mirrors rather than lenses for telescope objectives. Mirrors can be constructed much larger than lenses and can, thus, gather large amounts of light, as needed to view distant galaxies, for example. Large and relatively flat mirrors have very long focal lengths, so that great angular magnification is possible.
Telescopes, like microscopes, can utilize a range of frequencies from the electromagnetic spectrum. (a) shows the Australia Telescope Compact Array, which uses six 22-m antennas for mapping the southern skies using radio waves. (b) shows the focusing of x rays on the Chandra X-ray Observatory—a satellite orbiting earth since 1999 and looking at high-temperature events such as exploding stars, quasars, and black holes. X rays, with much more energy and shorter wavelengths than RF and light, are mainly absorbed and not reflected when incident perpendicular to the medium. But they can be reflected when incident at small glancing angles, much like a rock will skip on a lake if thrown at a small angle. The mirrors for the Chandra consist of a long barrelled pathway and 4 pairs of mirrors to focus the rays at a point 10 meters away from the entrance. The mirrors are extremely smooth and consist of a glass ceramic base with a thin coating of metal (iridium). Four pairs of precision manufactured mirrors are exquisitely shaped and aligned so that x rays ricochet off the mirrors like bullets off a wall, focusing on a spot.
A current exciting development is a collaborative effort involving 17 countries to construct a Square Kilometre Array (SKA) of telescopes capable of covering from 80 MHz to 2 GHz. The initial stage of the project is the construction of the Australian Square Kilometre Array Pathfinder in Western Australia (see ). The project will use cutting-edge technologies such as adaptive optics in which the lens or mirror is constructed from lots of carefully aligned tiny lenses and mirrors that can be manipulated using computers. A range of rapidly changing distortions can be minimized by deforming or tilting the tiny lenses and mirrors. The use of adaptive optics in vision correction is a current area of research.
### Test Prep for AP Courses
### Section Summary
1. Simple telescopes can be made with two lenses. They are used for viewing objects at large distances and utilize the entire range of the electromagnetic spectrum.
2. The angular magnification $M$ for a telescope is given by $M = \frac{\theta'}{\theta} = -\frac{f_{\mathrm{o}}}{f_{\mathrm{e}}}$, where $\theta$ is the angle subtended by an object viewed by the unaided eye, $\theta'$ is the angle subtended by a magnified image, and $f_{\mathrm{o}}$ and $f_{\mathrm{e}}$ are the focal lengths of the objective and the eyepiece.
### Conceptual Questions
### Problem Exercises
Unless otherwise stated, the lens-to-retina distance is 2.00 cm.
# Vision and Optical Instruments
## Aberrations
### Learning Objectives
By the end of this section, you will be able to:
1. Describe optical aberration.
Real lenses behave somewhat differently from how they are modeled using the thin lens equations, producing aberrations. An aberration is a distortion in an image. There are a variety of aberrations that depend on a lens's size, material, and thickness, and on the position of the object. One common type of aberration is chromatic aberration, which is related to color. Since the index of refraction of lenses depends on color or wavelength, images are produced at different places and with different magnifications for different colors. (The law of reflection is independent of wavelength, and so mirrors do not have this problem. This is another advantage for mirrors in optical systems such as telescopes.) (a) shows chromatic aberration for a single convex lens and its partial correction with a two-lens system. Violet rays are bent more than red, since they have a higher index of refraction and are thus focused closer to the lens. The diverging lens partially corrects this, although it is usually not possible to do so completely. Lenses of different materials and having different dispersions may be used. For example an achromatic doublet consisting of a converging lens made of crown glass and a diverging lens made of flint glass in contact can dramatically reduce chromatic aberration (see (b)).
Quite often in an imaging system the object is off-center. Consequently, different parts of a lens or mirror do not refract or reflect the image to the same point. This type of aberration is called a coma and is shown in . The image in this case often appears pear-shaped. Another common aberration is spherical aberration where rays converging from the outer edges of a lens converge to a focus closer to the lens and rays closer to the axis focus farther (see ). Aberrations due to astigmatism in the lenses of the eyes are discussed in Vision Correction, and a chart used to detect astigmatism is shown in . Such aberrations can also be an issue with manufactured lenses.
The image produced by an optical system needs to be bright enough to be discerned. It is often a challenge to obtain a sufficiently bright image. The brightness is determined by the amount of light passing through the optical system. The optical components determining the brightness are the diameter of the lens and the diameter of pupils, diaphragms or aperture stops placed in front of lenses. Optical systems often have entrance and exit pupils to specifically reduce aberrations but they inevitably reduce brightness as well. Consequently, optical systems need to strike a balance between the various components used. The iris in the eye dilates and constricts, acting as an entrance pupil. You can see objects more clearly by looking through a small hole made with your hand in the shape of a fist. Squinting, or using a small hole in a piece of paper, also will make the object sharper.
So how are aberrations corrected? The lenses may also have specially shaped surfaces, as opposed to the simple spherical shape that is relatively easy to produce. Expensive camera lenses are large in diameter, so that they can gather more light, and need several elements to correct for various aberrations. Further, advances in materials science have resulted in lenses with a range of refractive indices—technically referred to as graded index (GRIN) lenses. Spectacles often have the ability to provide a range of focusing ability using similar techniques. GRIN lenses are particularly important at the end of optical fibers in endoscopes. Advanced computing techniques allow for a range of corrections on images after the image has been collected and certain characteristics of the optical system are known. Some of these techniques are sophisticated versions of what are available on commercial packages like Adobe Photoshop.
### Section Summary
1. Aberrations or image distortions can arise due to the finite thickness of optical instruments, imperfections in the optical components, and limitations on the ways in which the components are used.
2. The means for correcting aberrations range from better components to computational techniques.
### Conceptual Questions
### Problem Exercises
# Wave Optics
## Connection for AP® Courses
If you have ever looked at the reds, blues, and greens in a sunlit soap bubble and wondered how straw-colored soapy water could produce them, you have hit upon one of the many phenomena that can only be explained by the wave character of light. The same is true for the colors seen in an oil slick or in the light reflected from an optical data disk. These and other interesting phenomena, such as the dispersion of white light into a rainbow of colors when passed through a narrow slit, cannot be explained fully by geometric optics. In these cases, light interacts with objects and exhibits a number of wave characteristics. The branch of optics that considers the behavior of light when it exhibits wave characteristics is called “wave optics” (or sometimes “physical optics”).
These soap bubbles exhibit brilliant colors when exposed to sunlight. How are the colors produced if they are not pigments in the soap?
This chapter supports Big Idea 6 in its coverage of wave optics by presenting explanations and examples of many phenomena that can only be explained by the wave aspect of light. You will learn how only waves can exhibit diffraction and interference patterns that we observe in light (Enduring Understanding 6.C). As explained by Huygens’s principle, diffraction is the bending of waves around the edges of a nontransparent object or after passing through an opening (Essential Knowledge 6.C.4). Interference results from the superposition of two or more traveling waves (Enduring Understanding 6.D, Essential Knowledge 6.D.1). Superposition causes variations in the resultant wave amplitude (Essential Knowledge 6.D.2). The interference can be described as constructive interference, which increases amplitude, and destructive interference, which decreases amplitude. Based on an understanding of diffraction and interference of light, this chapter also explains experimental observations that occur when light passes through an opening or set of openings with dimensions comparable to the wavelength of the light – specifically the effects of double-slit, multiple-slit (Essential Knowledge 6.C.3), and single-slit (Essential Knowledge 6.C.2) openings. Another aspect of light waves that you will learn about in this chapter is polarization, a phenomenon in which light waves all vibrate in a single plane. The explanation for this phenomenon is based on the fact that light is a traveling electromagnetic wave (Enduring Understanding 6.A) that propagates via transverse oscillations of both electric and magnetic field vectors (Essential Knowledge 6.A.1). Light waves can be polarized by passing through filters. Many sunglasses contain polarizing filters to reduce glare, and certain types of 3-D glasses use polarization to create an effect of depth on the movie screen.
Big Idea 6 Waves can transfer energy and momentum from one location to another without the permanent transfer of mass and serve as a mathematical model for the description of other phenomena.
Enduring Understanding 6.A A wave is a traveling disturbance that transfers energy and momentum.
Essential Knowledge 6.A.1 Waves can propagate via different oscillation modes such as transverse and longitudinal.
Enduring Understanding 6.C Only waves exhibit interference and diffraction.
Essential Knowledge 6.C.2 When waves pass through an opening whose dimensions are comparable to the wavelength, a diffraction pattern can be observed.
Essential Knowledge 6.C.3 When waves pass through a set of openings whose spacing is comparable to the wavelength, an interference pattern can be observed. Examples should include monochromatic double-slit interference.
Essential Knowledge 6.C.4 When waves pass by an edge, they can diffract into the “shadow region” behind the edge. Examples should include hearing around corners, but not seeing around them, and water waves bending around obstacles.
Enduring Understanding 6.D Interference and superposition lead to standing waves and beats.
Essential Knowledge 6.D.1 Two or more wave pulses can interact in such a way as to produce amplitude variations in the resultant wave. When two pulses cross, they travel through each other; they do not bounce off each other. Where the pulses overlap, the resulting displacement can be determined by adding the displacements of the two pulses. This is called superposition.
Essential Knowledge 6.D.2 Two or more traveling waves can interact in such a way as to produce amplitude variations in the resultant wave.
# Wave Optics
## The Wave Aspect of Light: Interference
### Learning Objectives
By the end of this section, you will be able to:
1. Discuss the wave character of light.
2. Identify the changes when light enters a medium.
We know that visible light is the type of electromagnetic wave to which our eyes respond. Like all other electromagnetic waves, it obeys the equation

$c = f\lambda,$

where $c$ is the speed of light in vacuum, $f$ is the frequency of the electromagnetic waves, and $\lambda$ is its wavelength. The range of visible wavelengths is approximately 380 to 760 nm. As is true for all waves, light travels in straight lines and acts like a ray when it interacts with objects several times as large as its wavelength. However, when it interacts with smaller objects, it displays its wave characteristics prominently. Interference is the hallmark of a wave, and in both the ray and wave characteristics of light can be seen. The laser beam emitted by the observatory epitomizes a ray, traveling in a straight line. However, passing a pure-wavelength beam through vertical slits with a size close to the wavelength of the beam reveals the wave character of light, as the beam spreads out horizontally into a pattern of bright and dark regions caused by systematic constructive and destructive interference. Rather than spreading out, a ray would continue traveling straight ahead after passing through slits.
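As a quick check of this relation, the sketch below computes the frequencies corresponding to the 380 nm and 760 nm limits of the visible range.

```python
# Sketch: c = f * lambda for electromagnetic waves in vacuum, so the visible
# range of 380-760 nm corresponds to frequencies of roughly 4-8 x 10^14 Hz.

c = 2.998e8                          # speed of light in vacuum, m/s
for lam_nm in (380.0, 760.0):
    f = c / (lam_nm * 1e-9)          # frequency in Hz
    print(f"{lam_nm:.0f} nm -> {f:.2e} Hz")
```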
Light has wave characteristics in various media as well as in a vacuum. When light goes from a vacuum to some medium, like water, its speed and wavelength change, but its frequency remains the same. (We can think of light as a forced oscillation that must have the frequency of the original source.) The speed of light in a medium is $v = c/n$, where $n$ is its index of refraction. If we divide both sides of the equation $c = f\lambda$ by $n$, we get $c/n = v = f\lambda/n$. This implies that $v = f\lambda_n$, where $\lambda_n$ is the wavelength in a medium and

$\lambda_n = \frac{\lambda}{n},$

where $\lambda$ is the wavelength in vacuum and $n$ is the medium’s index of refraction. Therefore, the wavelength of light is smaller in any medium than it is in vacuum. In water, for example, which has $n = 1.333$, the range of visible wavelengths is $(380\ \text{nm})/1.333$ to $(760\ \text{nm})/1.333$, or $\lambda_n = 285$ to $570\ \text{nm}$. Although wavelengths change while traveling from one medium to another, colors do not, since colors are associated with frequency.
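The wavelength reduction is straightforward to compute, as in the sketch below, which reproduces the water example using $n = 1.333$.

```python
# Sketch: when light enters a medium its frequency is unchanged while its
# wavelength shrinks to lambda_n = lambda / n. Uses n = 1.333 for water.

def wavelength_in_medium(wavelength_vacuum_nm, n):
    return wavelength_vacuum_nm / n

for lam in (380.0, 760.0):                            # visible-range limits
    print(round(wavelength_in_medium(lam, 1.333)))    # -> 285 and 570 (nm)
```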
### Section Summary
1. Wave optics is the branch of optics that must be used when light interacts with small objects or whenever the wave characteristics of light are considered.
2. Wave characteristics are those associated with interference and diffraction.
3. Visible light is the type of electromagnetic wave to which our eyes respond and has a wavelength in the range of 380 to 760 nm.
4. Like all EM waves, the following relationship is valid in vacuum: $c = f\lambda$, where $c$ is the speed of light, $f$ is the frequency of the electromagnetic wave, and $\lambda$ is its wavelength in vacuum.
5. The wavelength $\lambda_n$ of light in a medium with index of refraction $n$ is $\lambda_n = \lambda/n$. Its frequency is the same as in vacuum.
### Conceptual Questions
### Problems & Exercises
# Wave Optics
## Huygens's Principle: Diffraction
### Learning Objectives
By the end of this section, you will be able to:
1. Discuss the propagation of transverse waves.
2. Discuss Huygens’s principle.
3. Explain the bending of light.
shows how a transverse wave looks as viewed from above and from the side. A light wave can be imagined to propagate like this, although we do not actually see it wiggling through space. From above, we view the wavefronts (or wave crests) as we would by looking down on the ocean waves. The side view would be a graph of the electric or magnetic field. The view from above is perhaps the most useful in developing concepts about wave optics.
The Dutch scientist Christiaan Huygens (1629–1695) developed a useful technique for determining in detail how and where waves propagate. Starting from some known position, Huygens’s principle states that:
Every point on a wavefront is a source of wavelets that spread out in the forward direction at the same speed as the wave itself. The new wavefront is a line tangent to all of the wavelets.
 shows how Huygens’s principle is applied. A wavefront is the long edge that moves, for example, the crest or the trough. Each point on the wavefront emits a semicircular wave that moves at the propagation speed $v$. These are drawn at a time $t$ later, so that they have moved a distance $s = vt$. The new wavefront is a line tangent to the wavelets and is where we would expect the wave to be a time $t$ later. Huygens’s principle works for all types of waves, including water waves, sound waves, and light waves. We will find it useful not only in describing how light waves propagate, but also in explaining the laws of reflection and refraction. In addition, we will see that Huygens’s principle tells us how and where light rays interfere.
shows how a mirror reflects an incoming wave at an angle equal to the incident angle, verifying the law of reflection. As the wavefront strikes the mirror, wavelets are first emitted from the left part of the mirror and then the right. The wavelets closer to the left have had time to travel farther, producing a wavefront traveling in the direction shown.
The law of refraction can be explained by applying Huygens’s principle to a wavefront passing from one medium to another (see ). Each wavelet in the figure was emitted when the wavefront crossed the interface between the media. Since the speed of light is smaller in the second medium, the waves do not travel as far in a given time, and the new wavefront changes direction as shown. This explains why a ray changes direction to become closer to the perpendicular when light slows down. Snell’s law can be derived from the geometry in , but this is left as an exercise for ambitious readers.
What happens when a wave passes through an opening, such as light shining through an open door into a dark room? For light, we expect to see a sharp shadow of the doorway on the floor of the room, and we expect no light to bend around corners into other parts of the room. When sound passes through a door, we expect to hear it everywhere in the room and, thus, expect that sound spreads out when passing through such an opening (see ). What is the difference between the behavior of sound waves and light waves in this case? The answer is that light has very short wavelengths and acts like a ray. Sound has wavelengths on the order of the size of the door and bends around corners (for a frequency of 1000 Hz, $\lambda = v_{\mathrm{s}}/f \approx (343\ \text{m/s})/(1000\ \text{Hz}) \approx 0.34\ \text{m}$, about three times smaller than the width of the doorway).
If we pass light through smaller openings, often called slits, we can use Huygens’s principle to see that light bends as sound does (see ). The bending of a wave around the edges of an opening or an obstacle is called diffraction. Diffraction is a wave characteristic and occurs for all types of waves. If diffraction is observed for some phenomenon, it is evidence that the phenomenon is a wave. Thus the horizontal diffraction of the laser beam after it passes through slits in is evidence that light is a wave.
### Test Prep for AP Courses
### Section Summary
1. An accurate technique for determining how and where waves propagate is given by Huygens’s principle: Every point on a wavefront is a source of wavelets that spread out in the forward direction at the same speed as the wave itself. The new wavefront is a line tangent to all of the wavelets.
2. Diffraction is the bending of a wave around the edges of an opening or other obstacle.
### Conceptual Questions
# Wave Optics
## Young’s Double Slit Experiment
### Learning Objectives
By the end of this section, you will be able to:
1. Explain the phenomena of interference.
2. Define constructive interference for a double slit and destructive interference for a double slit.
Although Christiaan Huygens thought that light was a wave, Isaac Newton did not. Newton felt that there were other explanations for color, and for the interference and diffraction effects that were observable at the time. Owing to Newton’s tremendous stature, his view generally prevailed. The fact that Huygens’s principle worked was not considered evidence that was direct enough to prove that light is a wave. The acceptance of the wave character of light came many years later when, in 1801, the English physicist and physician Thomas Young (1773–1829) did his now-classic double slit experiment (see ).
Why do we not ordinarily observe wave behavior for light, such as observed in Young’s double slit experiment? First, light must interact with something small, such as the closely spaced slits used by Young, to show pronounced wave effects. Furthermore, Young first passed light from a single source (the Sun) through a single slit to make the light somewhat coherent. By coherent, we mean waves are in phase or have a definite phase relationship. Incoherent means the waves have random phase relationships. Why did Young then pass the light through a double slit? The answer to this question is that two slits provide two coherent light sources that then interfere constructively or destructively. Young used sunlight, where each wavelength forms its own pattern, making the effect more difficult to see. We illustrate the double slit experiment with monochromatic (single $\lambda$) light to clarify the effect. shows the pure constructive and destructive interference of two waves having the same wavelength and amplitude.
When light passes through narrow slits, it is diffracted into semicircular waves, as shown in (a). Pure constructive interference occurs where the waves are crest to crest or trough to trough. Pure destructive interference occurs where they are crest to trough. The light must fall on a screen and be scattered into our eyes for us to see the pattern. An analogous pattern for water waves is shown in (b). Note that regions of constructive and destructive interference move out from the slits at well-defined angles to the original beam. These angles depend on wavelength and the distance between the slits, as we shall see below.
To understand the double slit interference pattern, we consider how two waves travel from the slits to the screen, as illustrated in . Each slit is a different distance from a given point on the screen. Thus different numbers of wavelengths fit into each path. Waves start out from the slits in phase (crest to crest), but they may end up out of phase (crest to trough) at the screen if the paths differ in length by half a wavelength, interfering destructively as shown in (a). If the paths differ by a whole wavelength, then the waves arrive in phase (crest to crest) at the screen, interfering constructively as shown in (b). More generally, if the paths taken by the two waves differ by any half-integral number of wavelengths [$\tfrac{1}{2}\lambda$, $\tfrac{3}{2}\lambda$, $\tfrac{5}{2}\lambda$, etc.], then destructive interference occurs. Similarly, if the paths taken by the two waves differ by any integral number of wavelengths ($\lambda$, $2\lambda$, $3\lambda$, etc.), then constructive interference occurs.
 shows how to determine the path length difference for waves traveling from two slits to a common point on a screen. If the screen is a large distance away compared with the distance between the slits, then the angle $\theta$ between the path and a line from the slits to the screen (see the figure) is nearly the same for each path. The difference between the paths is shown in the figure; simple trigonometry shows it to be $d \sin\theta$, where $d$ is the distance between the slits. To obtain constructive interference for a double slit, the path length difference must be an integral multiple of the wavelength, or

$d \sin\theta = m\lambda, \ \text{for} \ m = 0, 1, -1, 2, -2, \dots \ \text{(constructive)}.$

Similarly, to obtain destructive interference for a double slit, the path length difference must be a half-integral multiple of the wavelength, or

$d \sin\theta = \left(m + \tfrac{1}{2}\right)\lambda, \ \text{for} \ m = 0, 1, -1, 2, -2, \dots \ \text{(destructive)},$

where $\lambda$ is the wavelength of the light, $d$ is the distance between slits, and $\theta$ is the angle from the original direction of the beam as discussed above. We call $m$ the order of the interference. For example, $m = 4$ is fourth-order interference.
The equations for double slit interference imply that a series of bright and dark lines are formed. For vertical slits, the light spreads out horizontally on either side of the incident beam into a pattern called interference fringes, illustrated in . The intensity of the bright fringes falls off on either side, being brightest at the center. The closer the slits are, the more the bright fringes spread apart. We can see this by examining the equation

$d \sin\theta = m\lambda, \ \text{for} \ m = 0, 1, -1, 2, -2, \dots$

For fixed $\lambda$ and $m$, the smaller $d$ is, the larger $\theta$ must be, since $\sin\theta = m\lambda/d$. This is consistent with our contention that wave effects are most noticeable when the object the wave encounters (here, slits a distance $d$ apart) is small. Small $d$ gives large $\theta$, hence a large effect.
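The two interference conditions can be turned into fringe angles directly, as in the sketch below; the 0.0100 mm slit separation and 633 nm wavelength are assumed, illustrative values.

```python
import math

# Sketch: angles of double-slit maxima (d sin(theta) = m*lambda) and minima
# (d sin(theta) = (m + 1/2)*lambda). The slit separation and wavelength used
# here are illustrative assumptions, not values from the text.

def bright_fringe_angle(m, wavelength_m, slit_separation_m):
    return math.degrees(math.asin(m * wavelength_m / slit_separation_m))

def dark_fringe_angle(m, wavelength_m, slit_separation_m):
    return math.degrees(math.asin((m + 0.5) * wavelength_m / slit_separation_m))

d = 0.0100e-3          # 0.0100 mm slit separation (assumed)
lam = 633e-9           # red laser light (assumed)
for m in range(3):
    print(m, round(bright_fringe_angle(m, lam, d), 2),
          round(dark_fringe_angle(m, lam, d), 2))
```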
### Test Prep for AP Courses
### Section Summary
1. Young’s double slit experiment gave definitive proof of the wave character of light.
2. An interference pattern is obtained by the superposition of light from two slits.
3. There is constructive interference when $d \sin\theta = m\lambda$ (for $m = 0, 1, -1, 2, -2, \dots$), where $d$ is the distance between the slits, $\theta$ is the angle relative to the incident direction, and $m$ is the order of the interference.
4. There is destructive interference when $d \sin\theta = \left(m + \tfrac{1}{2}\right)\lambda$ (for $m = 0, 1, -1, 2, -2, \dots$).
### Conceptual Questions
### Problems & Exercises
# Wave Optics
## Multiple Slit Diffraction
### Learning Objectives
By the end of this section, you will be able to:
1. Discuss the pattern obtained from diffraction grating.
2. Explain diffraction grating effects.
An interesting thing happens if you pass light through a large number of evenly spaced parallel slits, called a diffraction grating. An interference pattern is created that is very similar to the one formed by a double slit (see ). A diffraction grating can be manufactured by scratching glass with a sharp tool in a number of precisely positioned parallel lines, with the untouched regions acting like slits. These can be photographically mass produced rather cheaply. Diffraction gratings work both for transmission of light, as in , and for reflection of light, as on butterfly wings and the Australian opal in or the CD pictured in the opening photograph of this chapter. In addition to their use as novelty items, diffraction gratings are commonly used for spectroscopic dispersion and analysis of light. What makes them particularly useful is the fact that they form a sharper pattern than double slits do. That is, their bright regions are narrower and brighter, while their dark regions are darker. shows idealized graphs demonstrating the sharper pattern. Natural diffraction gratings occur in the feathers of certain birds. Tiny, finger-like structures in regular patterns act as reflection gratings, producing constructive interference that gives the feathers colors not solely due to their pigmentation. This is called iridescence.
The analysis of a diffraction grating is very similar to that for a double slit (see ). As we know from our discussion of double slits in Young's Double Slit Experiment, light is diffracted by each slit and spreads out after passing through. Rays traveling in the same direction (at an angle $\theta$ relative to the incident direction) are shown in the figure. Each of these rays travels a different distance to a common point on a screen far away. The rays start in phase, and they can be in or out of phase when they reach a screen, depending on the difference in the path lengths traveled. As seen in the figure, each ray travels a distance $d \sin\theta$ different from that of its neighbor, where $d$ is the distance between slits. If this distance equals an integral number of wavelengths, the rays all arrive in phase, and constructive interference (a maximum) is obtained. Thus, the condition necessary to obtain constructive interference for a diffraction grating is

$d \sin\theta = m\lambda, \ \text{for} \ m = 0, 1, -1, 2, -2, \dots,$

where $d$ is the distance between slits in the grating, $\lambda$ is the wavelength of light, and $m$ is the order of the maximum. Note that this is exactly the same equation as for double slits separated by $d$. However, the slits are usually closer in diffraction gratings than in double slits, producing fewer maxima at larger angles.
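The same condition can be applied numerically, as in the sketch below; the 10,000 lines per centimeter grating and 550 nm wavelength are assumed values used only for illustration.

```python
import math

# Sketch: maxima of a diffraction grating satisfy d sin(theta) = m*lambda,
# the same condition as a double slit. The grating spacing is derived from an
# assumed 10,000 lines per centimeter; the wavelength is also assumed.

lines_per_cm = 10_000
d = 1e-2 / lines_per_cm          # slit spacing in meters (1.00 micrometer)
lam = 550e-9                     # green light, illustrative

m = 1
theta = math.degrees(math.asin(m * lam / d))
print(round(theta, 1))           # first-order maximum, roughly 33 degrees
```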
Where are diffraction gratings used? Diffraction gratings are key components of monochromators used, for example, in optical imaging of particular wavelengths from biological or medical samples. A diffraction grating can be chosen to specifically analyze a wavelength emitted by molecules in diseased cells in a biopsy sample or to help excite strategic molecules in the sample with a selected frequency of light. Another vital use is in optical fiber technologies where fibers are designed to provide optimum performance at specific wavelengths. A range of diffraction gratings are available for selecting specific wavelengths for such use.
### Test Prep for AP Courses
### Section Summary
1. A diffraction grating is a large collection of evenly spaced parallel slits that produces an interference pattern similar to but sharper than that of a double slit.
2. There is constructive interference for a diffraction grating when $d \sin\theta = m\lambda$ (for $m = 0, 1, -1, 2, -2, \dots$), where $d$ is the distance between slits in the grating, $\lambda$ is the wavelength of light, and $m$ is the order of the maximum.
### Conceptual Questions
### Problems & Exercises
# Wave Optics
## Single Slit Diffraction
### Learning Objectives
By the end of this section, you will be able to:
1. Discuss the single slit diffraction pattern.
Light passing through a single slit forms a diffraction pattern somewhat different from those formed by double slits or diffraction gratings. shows a single slit diffraction pattern. Note that the central maximum is larger than those on either side, and that the intensity decreases rapidly on either side. In contrast, a diffraction grating produces evenly spaced lines that dim slowly on either side of center.
The analysis of single slit diffraction is illustrated in . Here we consider light coming from different parts of the same slit. According to Huygens’s principle, every part of the wavefront in the slit emits wavelets. These are like rays that start out in phase and head in all directions. (Each ray is perpendicular to the wavefront of a wavelet.) Assuming the screen is very far away compared with the size of the slit, rays heading toward a common destination are nearly parallel. When they travel straight ahead, as in (a), they remain in phase, and a central maximum is obtained. However, when rays travel at an angle $\theta$ relative to the original direction of the beam, each travels a different distance to a common location, and they can arrive in or out of phase. In (b), the ray from the bottom travels a distance of one wavelength $\lambda$ farther than the ray from the top. Thus a ray from the center travels a distance $\lambda/2$ farther than the one on the left, arrives out of phase, and interferes destructively. A ray from slightly above the center and one from slightly above the bottom will also cancel one another. In fact, each ray from the slit will have another to interfere destructively, and a minimum in intensity will occur at this angle. There will be another minimum at the same angle to the right of the incident direction of the light.
At the larger angle shown in (c), the path lengths differ by $3\lambda/2$ for rays from the top and bottom of the slit. One ray travels a distance $\lambda$ different from the ray from the bottom and arrives in phase, interfering constructively. Two rays, each from slightly above those two, will also add constructively. Most rays from the slit will have another to interfere with constructively, and a maximum in intensity will occur at this angle. However, all rays do not interfere constructively for this situation, and so the maximum is not as intense as the central maximum. Finally, in (d), the angle shown is large enough to produce a second minimum. As seen in the figure, the difference in path length for rays from either side of the slit is $D \sin\theta$, and we see that a destructive minimum is obtained when this distance is an integral multiple of the wavelength.
Thus, to obtain destructive interference for a single slit,

$D \sin\theta = m\lambda, \ \text{for} \ m = 1, -1, 2, -2, 3, \dots,$

where $D$ is the slit width, $\lambda$ is the light’s wavelength, $\theta$ is the angle relative to the original direction of the light, and $m$ is the order of the minimum. shows a graph of intensity for single slit interference, and it is apparent that the maxima on either side of the central maximum are much less intense and not as wide. This is consistent with the illustration in (b).
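The minima are easy to locate numerically, as in the sketch below; the 2.0 micrometer slit width and 550 nm wavelength are assumed, illustrative values.

```python
import math

# Sketch: single-slit minima satisfy D sin(theta) = m*lambda for m = 1, 2, ...
# (there is no m = 0 minimum). The slit width and wavelength are assumptions.

def single_slit_minimum_angle(m, wavelength_m, slit_width_m):
    return math.degrees(math.asin(m * wavelength_m / slit_width_m))

D = 2.0e-6      # 2.0 micrometer slit (assumed)
lam = 550e-9    # green light (assumed)
for m in (1, 2, 3):
    print(m, round(single_slit_minimum_angle(m, lam, D), 1))
```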
### Test Prep for AP Courses
### Section Summary
1. A single slit produces an interference pattern characterized by a broad central maximum with narrower and dimmer maxima to the sides.
2. There is destructive interference for a single slit when $D \sin\theta = m\lambda$ (for $m = 1, -1, 2, -2, 3, \dots$), where $D$ is the slit width, $\lambda$ is the light’s wavelength, $\theta$ is the angle relative to the original direction of the light, and $m$ is the order of the minimum. Note that there is no $m = 0$ minimum.
### Conceptual Questions
### Problems & Exercises
# Wave Optics
## Limits of Resolution: The Rayleigh Criterion
### Learning Objectives
By the end of this section, you will be able to:
1. Discuss the Rayleigh criterion.
Light diffracts as it moves through space, bending around obstacles, interfering constructively and destructively. While this can be used as a spectroscopic tool—a diffraction grating disperses light according to wavelength, for example, and is used to produce spectra—diffraction also limits the detail we can obtain in images. (a) shows the effect of passing light through a small circular aperture. Instead of a bright spot with sharp edges, a spot with a fuzzy edge surrounded by circles of light is obtained. This pattern is caused by diffraction similar to that produced by a single slit. Light from different parts of the circular aperture interferes constructively and destructively. The effect is most noticeable when the aperture is small, but the effect is there for large apertures, too.
How does diffraction affect the detail that can be observed when light passes through an aperture? (b) shows the diffraction pattern produced by two point light sources that are close to one another. The pattern is similar to that for a single point source, and it is just barely possible to tell that there are two light sources rather than one. If they were closer together, as in (c), we could not distinguish them, thus limiting the detail or resolution we can obtain. This limit is an inescapable consequence of the wave nature of light.
There are many situations in which diffraction limits the resolution. The acuity of our vision is limited because light passes through the pupil, the circular aperture of our eye. Be aware that the diffraction-like spreading of light is due to the limited diameter of a light beam, not the interaction with an aperture. Thus light passing through a lens with a diameter shows this effect and spreads, blurring the image, just as light passing through an aperture of diameter does. So diffraction limits the resolution of any system having a lens or mirror. Telescopes are also limited by diffraction, because of the finite diameter of their primary mirror.
Just what is the limit? To answer that question, consider the diffraction pattern for a circular aperture, which has a central maximum that is wider and brighter than the maxima surrounding it (similar to a slit) [see (a)]. It can be shown that, for a circular aperture of diameter $D$, the first minimum in the diffraction pattern occurs at $\theta = 1.22\frac{\lambda}{D}$ (providing the aperture is large compared with the wavelength of light, which is the case for most optical instruments). The accepted criterion for determining the diffraction limit to resolution based on this angle was developed by Lord Rayleigh in the 19th century. The Rayleigh criterion for the diffraction limit to resolution states that two images are just resolvable when the center of the diffraction pattern of one is directly over the first minimum of the diffraction pattern of the other. See (b). The first minimum is at an angle of $\theta = 1.22\frac{\lambda}{D}$, so that two point objects are just resolvable if they are separated by the angle
$\theta = 1.22\frac{\lambda}{D},$
where $\lambda$ is the wavelength of light (or other electromagnetic radiation) and $D$ is the diameter of the aperture, lens, mirror, etc., with which the two objects are observed. In this expression, $\theta$ has units of radians.
Diffraction is not only a problem for optical instruments but also for the electromagnetic radiation itself. Any beam of light having a finite diameter $D$ and a wavelength $\lambda$ exhibits diffraction spreading. The beam spreads out with an angle $\theta$ given by the equation $\theta = 1.22\frac{\lambda}{D}$. For example, a laser beam made of rays as parallel as possible (angles between rays as close to $0^\circ$ as possible) instead spreads out at an angle $\theta = 1.22\lambda/D$, where $D$ is the diameter of the beam and $\lambda$ is its wavelength. This spreading is impossible to observe for a flashlight, because its beam is not very parallel to start with. However, for long-distance transmission of laser beams or microwave signals, diffraction spreading can be significant (see ). To avoid this, we can increase $D$. This is done for laser light sent to the Moon to measure its distance from the Earth. The laser beam is expanded through a telescope to make $D$ much larger and $\theta$ smaller.
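The same relation $\theta = 1.22\lambda/D$ sets the scale of the laser-ranging example just described. The sketch below uses assumed values (a 532-nm beam expanded to a 2.4-m diameter, and the approximate Earth-Moon distance); none of these numbers come from the text.

```python
import math

# Diffraction-limited spreading of an expanded laser beam (assumed values).
wavelength = 532e-9      # m
D = 2.4                  # expanded beam diameter (m)
L = 3.84e8               # approximate Earth-Moon distance (m)

theta = 1.22 * wavelength / D        # half-angle of spreading (rad)
extra_diameter = 2 * theta * L       # growth of the beam diameter over distance L

print(f"theta = {theta:.2e} rad")
print(f"beam diameter grows by about {extra_diameter:.0f} m at the Moon")
```

Doubling the expanded beam diameter halves the spreading angle, which is why the beam is sent through a telescope before it leaves the Earth.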
In most biology laboratories, resolution is presented when the use of the microscope is introduced. The ability of a lens to produce sharp images of two closely spaced point objects is called resolution. The smaller the distance $x$ by which two objects can be separated and still be seen as distinct, the greater the resolution. The resolving power of a lens is defined as that distance $x$. An expression for resolving power is obtained from the Rayleigh criterion. In (a) we have two point objects separated by a distance $x$. According to the Rayleigh criterion, resolution is possible when the minimum angular separation is
$\theta = 1.22\frac{\lambda}{D} = \frac{x}{d},$
where $d$ is the distance between the specimen and the objective lens, and we have used the small angle approximation (i.e., we have assumed that $x$ is much smaller than $d$), so that $\tan\theta \approx \sin\theta \approx \theta$.
Therefore, the resolving power is
$x = 1.22\frac{\lambda d}{D}.$
Another way to look at this is by re-examining the concept of Numerical Aperture ($NA$) discussed in Microscopes. There, $NA$ is a measure of the maximum acceptance angle at which the fiber will take light and still contain it within the fiber. (b) shows a lens and an object at point P. The $NA$ here is a measure of the ability of the lens to gather light and resolve fine detail. The angle subtended by the lens at its focus is defined to be $\theta = 2\alpha$. From the figure and again using the small angle approximation, we can write
$\sin\alpha = \frac{D/2}{d} = \frac{D}{2d}.$
The $NA$ for a lens is $NA = n \sin\alpha$, where $n$ is the index of refraction of the medium between the objective lens and the object at point P.
From this definition for $NA$, we can see that
$x = 1.22\frac{\lambda d}{D} = 1.22\frac{\lambda}{2\sin\alpha} = 0.61\frac{\lambda n}{NA}.$
In a microscope, $NA$ is important because it relates to the resolving power of a lens. A lens with a large $NA$ will be able to resolve finer details. Lenses with larger $NA$ will also be able to collect more light and so give a brighter image. Another way to describe this situation is that the larger the $NA$, the larger the cone of light that can be brought into the lens, and so more of the diffraction modes will be collected. Thus the microscope has more information to form a clear image, and so its resolving power will be higher.
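To see that the aperture form and the $NA$ form of the resolving power give the same answer in the small-angle regime, the sketch below evaluates both expressions for one set of assumed values (the specific numbers are illustrative only, not taken from the text).

```python
import math

# Compare the two resolving-power expressions from this section (assumed values).
wavelength = 550e-9   # m
d = 2.0e-3            # specimen-to-objective distance (m)
D = 0.40e-3           # objective aperture diameter (m)
n = 1.00              # air between objective and specimen

x_aperture = 1.22 * wavelength * d / D

alpha = math.atan((D / 2) / d)        # half-angle subtended by the lens
NA = n * math.sin(alpha)              # numerical aperture
x_na = 0.61 * wavelength * n / NA

print(f"x (aperture form) = {x_aperture*1e6:.2f} micrometers")
print(f"x (NA form)       = {x_na*1e6:.2f} micrometers")
```

The two results agree to within the small-angle approximation; for a high-$NA$ immersion objective that approximation is poorer, and the $NA$ form is the one normally quoted.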
One of the consequences of diffraction is that the focal point of a beam has a finite width and intensity distribution. Consider focusing as described by geometric optics alone, shown in (a). The focal point is infinitely small with a huge intensity and the capacity to incinerate most samples irrespective of the $NA$ of the objective lens. For wave optics, due to diffraction, the focal point spreads to become a focal spot (see (b)) with the size of the spot decreasing with increasing $NA$. Consequently, the intensity in the focal spot increases with increasing $NA$. The higher the $NA$, the greater the chances of photodegrading the specimen. However, the spot never becomes a true point.
### Test Prep for AP Courses
### Section Summary
1. Diffraction limits resolution.
2. For a circular aperture, lens, or mirror, the Rayleigh criterion states that two images are just resolvable when the center of the diffraction pattern of one is directly over the first minimum of the diffraction pattern of the other.
3. This occurs for two point objects separated by the angle $\theta = 1.22\frac{\lambda}{D}$, where $\lambda$ is the wavelength of light (or other electromagnetic radiation) and $D$ is the diameter of the aperture, lens, mirror, etc. This equation also gives the angular spreading of a source of light having a diameter $D$.
### Conceptual Questions
### Problems & Exercises
|
# Wave Optics
## Thin Film Interference
### Learning Objectives
By the end of this section, you will be able to:
1. Discuss the rainbow formation by thin films.
The bright colors seen in an oil slick floating on water or in a sunlit soap bubble are caused by interference. The brightest colors are those that interfere constructively. This interference is between light reflected from different surfaces of a thin film; thus, the effect is known as thin film interference. As noticed before, interference effects are most prominent when light interacts with something having a size similar to its wavelength. A thin film is one having a thickness smaller than a few times the wavelength of light, . Since color is associated indirectly with and since all interference depends in some way on the ratio of to the size of the object involved, we should expect to see different colors for different thicknesses of a film, as in . Some of the earliest measurements of such films and their effects were conducted by Agnes Pockels, a self-taught German chemist who investigated the characteristics of soapy and greasy films in water. Using homemade materials, Pockels developed a trough for measuring surface films and began conducting experiments. While scientific and societal barriers for women prevented her from publishing on her own, renowned scientist Lord Rayleigh supported her efforts and pushed for her work to be shared in the journal Nature. The trough Pockels invented became the basis for the contemporary version, as described below.
What causes thin film interference? shows how light reflected from the top and bottom surfaces of a film can interfere. Incident light is only partially reflected from the top surface of the film (ray 1). The remainder enters the film and is itself partially reflected from the bottom surface. Part of the light reflected from the bottom surface can emerge from the top of the film (ray 2) and interfere with light reflected from the top (ray 1). Since the ray that enters the film travels a greater distance, it may be in or out of phase with the ray reflected from the top. However, consider for a moment, again, the bubbles in . The bubbles are darkest where they are thinnest. Furthermore, if you observe a soap bubble carefully, you will note it gets dark at the point where it breaks. For very thin films, the difference in path lengths of ray 1 and ray 2 in is negligible; so why should they interfere destructively and not constructively? The answer is that a phase change can occur upon reflection. The rule is as follows:
When light reflects from a medium having an index of refraction greater than that of the medium in which it is traveling, a $180^\circ$ phase change (or a $\lambda/2$ shift) occurs.
If the film in is a soap bubble (essentially water with air on both sides), then there is a $\lambda/2$ shift for ray 1 and none for ray 2. Thus, when the film is very thin, the path length difference between the two rays is negligible, they are exactly out of phase, and destructive interference will occur at all wavelengths and so the soap bubble will be dark here.
The thickness of the film relative to the wavelength of light is the other crucial factor in thin film interference. Ray 2 in travels a greater distance than ray 1. For light incident perpendicular to the surface, ray 2 travels a distance approximately $2t$ farther than ray 1, where $t$ is the thickness of the film. When this distance is an integral or half-integral multiple of the wavelength in the medium ($\lambda_n = \lambda/n$, where $\lambda$ is the wavelength in vacuum and $n$ is the index of refraction), constructive or destructive interference occurs, depending also on whether there is a phase change in either ray.
Thin-film interference has created an entire field of research and industrial applications. Its foundations were laid by Irving Langmuir and Katharine Burr Blodgett, working at General Electric in the 1920s and 1930s. Langmuir had pioneered a method for producing ultra-thin layers on materials. Blodgett built on these practices by creating a method to precisely stack and compress these layers in order to produce a film of a desired thickness and quality. The device they developed became known as the Langmuir-Blodgett trough, built from principles developed by Agnes Pockels and still used in laboratories today. The earliest widely applied use of these principles was non-reflective glass, which Blodgett patented in 1938 and which was used almost immediately in the making of the film Gone With the Wind. The movie is viewed as a tremendous leap in cinematography; cameras, microscopes, telescopes, and many other instruments rely on Blodgett's invention as well.
Thin film interference is most constructive or most destructive when the path length difference for the two rays is an integral or half-integral wavelength, respectively. That is, for rays incident perpendicularly, $2t = \lambda_n,\ 2\lambda_n,\ 3\lambda_n,\ \dots$ or $2t = \lambda_n/2,\ 3\lambda_n/2,\ 5\lambda_n/2,\ \dots$. To know whether interference is constructive or destructive, you must also determine if there is a phase change upon reflection. Thin film interference thus depends on film thickness, the wavelength of light, and the refractive indices. For white light incident on a film that varies in thickness, you will observe rainbow colors of constructive interference for various wavelengths as the thickness varies.
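A minimal numerical sketch, assuming a soap film in air (so only ray 1 suffers the $\lambda/2$ shift) and illustrative values of $n = 1.33$ and $\lambda = 650$ nm: constructive reflection then requires $2t = \lambda_n/2,\ 3\lambda_n/2,\ \dots$, giving the film thicknesses below.

```python
# Thinnest soap-film thicknesses that reflect 650-nm light strongly (assumed values).
n = 1.33              # index of refraction of the film
wavelength = 650e-9   # vacuum wavelength (m)

lambda_n = wavelength / n             # wavelength inside the film
for m in range(3):
    t = (2 * m + 1) * lambda_n / 4    # from 2t = (m + 1/2) * lambda_n
    print(f"t = {t*1e9:.0f} nm")
```

The thinnest such film is about a quarter of the in-film wavelength, roughly 120 nm for these numbers.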
Another example of thin film interference can be seen when microscope slides are separated (see ). The slides are very flat, so that the wedge of air between them increases in thickness very uniformly. A phase change occurs at the second surface but not the first, and so there is a dark band where the slides touch. The rainbow colors of constructive interference repeat, going from violet to red again and again as the distance between the slides increases. As the layer of air increases, the bands become more difficult to see, because slight changes in incident angle have greater effects on path length differences. If pure-wavelength light instead of white light is used, then bright and dark bands are obtained rather than repeating rainbow colors.
An important application of thin film interference is found in the manufacturing of optical instruments. A lens or mirror can be compared with a master as it is being ground, allowing it to be shaped to an accuracy of less than a wavelength over its entire surface. illustrates the phenomenon called Newton’s rings, which occurs when the plane surfaces of two lenses are placed together. (The circular bands are called Newton’s rings because Isaac Newton described them and their use in detail. Newton did not discover them; Robert Hooke did, and Newton did not believe they were due to the wave character of light.) Each successive ring of a given color indicates an increase of only one wavelength in the distance between the lens and the blank, so that great precision can be obtained. Once the lens is perfect, there will be no rings.
The wings of certain moths and butterflies have nearly iridescent colors due to thin film interference. In addition to pigmentation, the wing’s color is affected greatly by constructive interference of certain wavelengths reflected from its film-coated surface. Car manufacturers are offering special paint jobs that use thin film interference to produce colors that change with angle. This expensive option is based on variation of thin film path length differences with angle. Security features on credit cards, banknotes, driving licenses and similar items prone to forgery use thin film interference, diffraction gratings, or holograms. Australia led the way with dollar bills printed on polymer with a diffraction grating security feature making the currency difficult to forge. Other countries such as New Zealand and Taiwan are using similar technologies, while the United States currency includes a thin film interference effect.
### Problem-Solving Strategies for Wave Optics
Step 1. Examine the situation to determine that interference is involved. Identify whether slits or thin film interference are considered in the problem.
Step 2. If slits are involved, note that diffraction gratings and double slits produce very similar interference patterns, but that gratings have narrower (sharper) maxima. Single slit patterns are characterized by a large central maximum and smaller maxima to the sides.
Step 3. If thin film interference is involved, take note of the path length difference between the two rays that interfere. Be certain to use the wavelength in the medium involved, since it differs from the wavelength in vacuum. Note also that there is an additional phase shift when light reflects from a medium with a greater index of refraction.
Step 4. Identify exactly what needs to be determined in the problem (identify the unknowns). A written list is useful. Draw a diagram of the situation. Labeling the diagram is useful.
Step 5. Make a list of what is given or can be inferred from the problem as stated (identify the knowns).
Step 6. Solve the appropriate equation for the quantity to be determined (the unknown), and enter the knowns. Slits, gratings, and the Rayleigh limit involve equations.
Step 7. For thin film interference, you will have constructive interference for a total shift that is an integral number of wavelengths. You will have destructive interference for a total shift of a half-integral number of wavelengths. Always keep in mind that crest to crest is constructive whereas crest to trough is destructive.
Step 8. Check to see if the answer is reasonable: Does it make sense? Angles in interference patterns cannot be greater than , for example.
### Test Prep for AP Courses
### Section Summary
1. Thin film interference occurs between the light reflected from the top and bottom surfaces of a film. In addition to the path length difference, there can be a phase change.
2. When light reflects from a medium having an index of refraction greater than that of the medium in which it is traveling, a $180^\circ$ phase change (or a $\lambda/2$ shift) occurs.
### Conceptual Questions
### Problems & Exercises
|
# Wave Optics
## Polarization
### Learning Objectives
By the end of this section, you will be able to:
1. Discuss the meaning of polarization.
2. Discuss the property of optical activity of certain materials.
Polaroid sunglasses are familiar to most of us. They have a special ability to cut the glare of light reflected from water or glass (see ). Polaroids have this ability because of a wave characteristic of light called polarization. What is polarization? How is it produced? What are some of its uses? The answers to these questions are related to the wave character of light.
Light is one type of electromagnetic (EM) wave. As noted earlier, EM waves are transverse waves consisting of varying electric and magnetic fields that oscillate perpendicular to the direction of propagation (see ). There are specific directions for the oscillations of the electric and magnetic fields. Polarization is the attribute that a wave’s oscillations have a definite direction relative to the direction of propagation of the wave. (This is not the same type of polarization as that discussed for the separation of charges.) Waves having such a direction are said to be polarized. For an EM wave, we define the direction of polarization to be the direction parallel to the electric field. Thus we can think of the electric field arrows as showing the direction of polarization, as in .
To examine this further, consider the transverse waves in the ropes shown in . The oscillations in one rope are in a vertical plane and are said to be vertically polarized. Those in the other rope are in a horizontal plane and are horizontally polarized. If a vertical slit is placed on the first rope, the waves pass through. However, a vertical slit blocks the horizontally polarized waves. For EM waves, the direction of the electric field is analogous to the disturbances on the ropes.
The Sun and many other light sources produce waves that are randomly polarized (see ). Such light is said to be unpolarized because it is composed of many waves with all possible directions of polarization. Polaroid materials, invented by the founder of Polaroid Corporation, Edwin Land, act as a polarizing slit for light, allowing only polarization in one direction to pass through. Polarizing filters are composed of long molecules aligned in one direction. Thinking of the molecules as many slits, analogous to those for the oscillating ropes, we can understand why only light with a specific polarization can get through. The axis of a polarizing filter is the direction along which the filter passes the electric field of an EM wave (see ).
shows the effect of two polarizing filters on originally unpolarized light. The first filter polarizes the light along its axis. When the axes of the first and second filters are aligned (parallel), then all of the polarized light passed by the first filter is also passed by the second. If the second polarizing filter is rotated, only the component of the light parallel to the second filter’s axis is passed. When the axes are perpendicular, no light is passed by the second.
Only the component of the EM wave parallel to the axis of a filter is passed. Let us call the angle between the direction of polarization and the axis of a filter $\theta$. If the electric field has an amplitude $E$, then the transmitted part of the wave has an amplitude $E\cos\theta$ (see ). Since the intensity of a wave is proportional to its amplitude squared, the intensity $I$ of the transmitted wave is related to the incident wave by
$I = I_0 \cos^2\theta,$
where $I_0$ is the intensity of the polarized wave before passing through the filter. (The above equation is known as Malus’s law.)
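The sketch below simply tabulates Malus's law, $I = I_0\cos^2\theta$, for a few representative angles (chosen for illustration).

```python
import numpy as np

# Transmitted fraction I/I0 = cos^2(theta) for polarized light and a filter.
for theta_deg in (0, 30, 45, 60, 90):
    fraction = np.cos(np.radians(theta_deg)) ** 2
    print(f"theta = {theta_deg:2d} deg -> I/I0 = {fraction:.3f}")
```

Half of the polarized intensity is passed at $45^\circ$ and none at $90^\circ$, matching the behavior of the crossed filters described above.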
### Polarization by Reflection
By now you can probably guess that Polaroid sunglasses cut the glare in reflected light because that light is polarized. You can check this for yourself by holding Polaroid sunglasses in front of you and rotating them while looking at light reflected from water or glass. As you rotate the sunglasses, you will notice the light gets bright and dim, but not completely black. This implies the reflected light is partially polarized and cannot be completely blocked by a polarizing filter.
illustrates what happens when unpolarized light is reflected from a surface. Vertically polarized light is preferentially refracted at the surface, so that the reflected light is left more horizontally polarized. The reasons for this phenomenon are beyond the scope of this text, but a convenient mnemonic for remembering this is to imagine the polarization direction to be like an arrow. Vertical polarization would be like an arrow perpendicular to the surface and would be more likely to stick and not be reflected. Horizontal polarization is like an arrow bouncing on its side and would be more likely to be reflected. Sunglasses with vertical axes would then block more reflected light than unpolarized light from other sources.
Since the part of the light that is not reflected is refracted, the amount of polarization depends on the indices of refraction of the media involved. It can be shown that reflected light is completely polarized at an angle of reflection $\theta_b$, given by
$\tan\theta_b = \frac{n_2}{n_1},$
where $n_1$ is the index of refraction of the medium in which the incident and reflected light travel and $n_2$ is the index of refraction of the medium that forms the interface that reflects the light. This equation is known as Brewster’s law, and $\theta_b$ is known as Brewster’s angle, named after the 19th-century Scottish physicist who discovered them.
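As a quick check of Brewster's law, $\tan\theta_b = n_2/n_1$, the sketch below computes the angle for light in air reflecting from water and from crown glass (typical index values, assumed here for illustration).

```python
import math

# Brewster's angle for reflection from two common media (assumed indices).
n1 = 1.00                                  # air
for name, n2 in (("water", 1.333), ("crown glass", 1.52)):
    theta_b = math.degrees(math.atan(n2 / n1))
    print(f"{name}: Brewster angle = {theta_b:.1f} degrees")
```

Light reflected from a lake at about $53^\circ$ from the vertical is therefore almost completely horizontally polarized, which is why sunglasses with vertical polarization axes cut that glare so effectively.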
### Polarization by Scattering
If you hold your Polaroid sunglasses in front of you and rotate them while looking at blue sky, you will see the sky get bright and dim. This is a clear indication that light scattered by air is partially polarized. helps illustrate how this happens. Since light is a transverse EM wave, it vibrates the electrons of air molecules perpendicular to the direction it is traveling. The electrons then radiate like small antennae. Since they are oscillating perpendicular to the direction of the light ray, they produce EM radiation that is polarized perpendicular to the direction of the ray. When viewing the light along a line perpendicular to the original ray, as in , there can be no polarization in the scattered light parallel to the original ray, because that would require the original ray to be a longitudinal wave. Along other directions, a component of the other polarization can be projected along the line of sight, and the scattered light will only be partially polarized. Furthermore, multiple scattering can bring light to your eyes from other directions and can contain different polarizations.
Photographs of the sky can be darkened by polarizing filters, a trick used by many photographers to make clouds brighter by contrast. Scattering from other particles, such as smoke or dust, can also polarize light. Detecting polarization in scattered EM waves can be a useful analytical tool in determining the scattering source.
There is a range of optical effects used in sunglasses. Besides being Polaroid, other sunglasses have colored pigments embedded in them, while others use non-reflective or even reflective coatings. A recent development is photochromic lenses, which darken in the sunlight and become clear indoors. Photochromic lenses are embedded with organic microcrystalline molecules that change their properties when exposed to UV in sunlight, but become clear in artificial lighting with no UV.
### Liquid Crystals and Other Polarization Effects in Materials
While you are undoubtedly aware of liquid crystal displays (LCDs) found in watches, calculators, computer screens, cellphones, flat screen televisions, and other myriad places, you may not be aware that they are based on polarization. Liquid crystals are so named because their molecules can be aligned even though they are in a liquid. Liquid crystals have the property that they can rotate the polarization of light passing through them by . Furthermore, this property can be turned off by the application of a voltage, as illustrated in . It is possible to manipulate this characteristic quickly and in small well-defined regions to create the contrast patterns we see in so many LCD devices.
In flat screen LCD televisions, there is a large light at the back of the TV. The light travels to the front screen through millions of tiny units called pixels (picture elements). One of these is shown in (a) and (b). Each unit has three cells, with red, blue, or green filters, each controlled independently. When the voltage across a liquid crystal is switched off, the liquid crystal passes the light through the particular filter. One can vary the picture contrast by varying the strength of the voltage applied to the liquid crystal.
Many crystals and solutions rotate the plane of polarization of light passing through them. Such substances are said to be optically active. Examples include sugar water, insulin, and collagen (see ). In addition to depending on the type of substance, the amount and direction of rotation depend on a number of factors, including the concentration of the substance, the distance the light travels through it, and the wavelength of light. Optical activity is due to the asymmetric shape of molecules in the substance, such as being helical. Measurements of the rotation of polarized light passing through substances can thus be used to measure concentrations, a standard technique for sugars. It can also give information on the shapes of molecules, such as proteins, and factors that affect their shapes, such as temperature and pH.
Glass and plastic become optically active when stressed; the greater the stress, the greater the effect. Optical stress analysis on complicated shapes can be performed by making plastic models of them and observing them through crossed filters, as seen in . It is apparent that the effect depends on wavelength as well as stress. The wavelength dependence is sometimes also used for artistic purposes.
Another interesting phenomenon associated with polarized light is the ability of some crystals to split an unpolarized beam of light into two. Such crystals are said to be birefringent (see ). Each of the separated rays has a specific polarization. One behaves normally and is called the ordinary ray, whereas the other does not obey Snell’s law and is called the extraordinary ray. Birefringent crystals can be used to produce polarized beams from unpolarized light. Some birefringent materials preferentially absorb one of the polarizations. These materials are called dichroic and can produce polarization by this preferential absorption. This is fundamentally how polarizing filters and other polarizers work. The interested reader is invited to further pursue the numerous properties of materials related to polarization.
### Test Prep for AP Courses
### Section Summary
1. Polarization is the attribute that wave oscillations have a definite direction relative to the direction of propagation of the wave.
2. EM waves are transverse waves that may be polarized.
3. The direction of polarization is defined to be the direction parallel to the electric field of the EM wave.
4. Unpolarized light is composed of many rays having random polarization directions.
5. Light can be polarized by passing it through a polarizing filter or other polarizing material. The intensity of polarized light after passing through a polarizing filter is $I = I_0 \cos^2\theta$, where $I_0$ is the original intensity and $\theta$ is the angle between the direction of polarization and the axis of the filter.
6. Polarization is also produced by reflection.
7. Brewster’s law states that reflected light will be completely polarized at the angle of reflection $\theta_b$, known as Brewster’s angle, given by $\tan\theta_b = \frac{n_2}{n_1}$, where $n_1$ is the index of refraction of the medium in which the incident and reflected light travel and $n_2$ is the index of refraction of the medium that forms the interface that reflects the light.
8. Polarization can also be produced by scattering.
9. There are a number of types of optically active substances that rotate the direction of polarization of light passing through them.
### Conceptual Questions
### Problems & Exercises
|
# Wave Optics
## *Extended Topic* Microscopy Enhanced by the Wave Characteristics of Light
### Learning Objectives
By the end of this section, you will be able to:
1. Discuss the different types of microscopes.
Physics research underpins advances in microscopy. As we gain knowledge of the wave nature of electromagnetic waves and methods to analyze and interpret signals, new microscopes that enable us to “see” more are being developed. This section describes that evolution and the newer generations of microscopes.
The use of microscopes (microscopy) to observe small details is limited by the wave nature of light. Because light diffracts significantly around small objects, it becomes impossible to observe details significantly smaller than the wavelength of light. One rule of thumb has it that all details smaller than about $\lambda$ are difficult to observe. Radar, for example, can detect the size of an aircraft, but not its individual rivets, since the wavelength of most radar is several centimeters or greater. Similarly, visible light cannot detect individual atoms, since atoms are about 0.1 nm in size and visible wavelengths range from 380 to 760 nm. Ironically, special techniques used to obtain the best possible resolution with microscopes take advantage of the same wave characteristics of light that ultimately limit the detail.
The most obvious method of obtaining better detail is to utilize shorter wavelengths. Ultraviolet (UV) microscopes have been constructed with special lenses that transmit UV rays and utilize photographic or electronic techniques to record images. The shorter UV wavelengths allow somewhat greater detail to be observed, but drawbacks, such as the hazard of UV to living tissue and the need for special detection devices and lenses (which tend to be dispersive in the UV), severely limit the use of UV microscopes. Elsewhere, we will explore practical uses of very short wavelength EM waves, such as x rays, and other short-wavelength probes, such as electrons in electron microscopes, to detect small details.
Another difficulty in microscopy is the fact that many microscopic objects do not absorb much of the light passing through them. The lack of contrast makes image interpretation very difficult. Contrast is the difference in intensity between objects and the background on which they are observed. Stains (such as dyes, fluorophores, etc.) are commonly employed to enhance contrast, but these tend to be application specific. More general wave interference techniques can be used to produce contrast. shows the passage of light through a sample. Since the indices of refraction differ, the number of wavelengths in the paths differs. Light emerging from the object is thus out of phase with light from the background and will interfere differently, producing enhanced contrast, especially if the light is coherent and monochromatic—as in laser light.
Interference microscopes enhance contrast between objects and background by superimposing a reference beam of light upon the light emerging from the sample. Since light from the background and objects differ in phase, there will be different amounts of constructive and destructive interference, producing the desired contrast in final intensity. shows schematically how this is done. Parallel rays of light from a source are split into two beams by a half-silvered mirror. These beams are called the object and reference beams. Each beam passes through identical optical elements, except that the object beam passes through the object we wish to observe microscopically. The light beams are recombined by another half-silvered mirror and interfere. Since the light rays passing through different parts of the object have different phases, interference will be significantly different and, hence, have greater contrast between them.
Another type of microscope utilizing wave interference and differences in phases to enhance contrast is called the phase-contrast microscope. While its principle is the same as the interference microscope, the phase-contrast microscope is simpler to use and construct. Its impact (and the principle upon which it is based) was so important that its developer, the Dutch physicist Frits Zernike (1888–1966), was awarded the Nobel Prize in 1953. shows the basic construction of a phase-contrast microscope. Phase differences between light passing through the object and background are produced by passing the rays through different parts of a phase plate (so called because it shifts the phase of the light passing through it). These two light rays are superimposed in the image plane, producing contrast due to their interference.
A polarization microscope also enhances contrast by utilizing a wave characteristic of light. Polarization microscopes are useful for objects that are optically active or birefringent, particularly if those characteristics vary from place to place in the object. Polarized light is sent through the object and then observed through a polarizing filter that is perpendicular to the original polarization direction. Nearly transparent objects can then appear with strong color and in high contrast. Many polarization effects are wavelength dependent, producing color in the processed image. Contrast results from the action of the polarizing filter in passing only components parallel to its axis.
Apart from the UV microscope, the variations of microscopy discussed so far in this section are available as attachments to fairly standard microscopes or as slight variations. The next level of sophistication is provided by commercial confocal microscopes, which use the extended focal region shown in (b) to obtain three-dimensional images rather than two-dimensional images. Here, only a single plane or region of focus is identified; out-of-focus regions above and below this plane are subtracted out by a computer so the image quality is much better. This type of microscope makes use of fluorescence, where a laser provides the excitation light. Laser light passing through a tiny aperture called a pinhole forms an extended focal region within the specimen. The reflected light passes through the objective lens to a second pinhole and the photomultiplier detector, see . The second pinhole is the key here and serves to block much of the light from points that are not at the focal point of the objective lens. The pinhole is conjugate (coupled) to the focal point of the lens. The second pinhole and detector are scanned, allowing reflected light from a small region or section of the extended focal region to be imaged at any one time. The out-of-focus light is excluded. Each image is stored in a computer, and a full scanned image is generated in a short time. Live cell processes can also be imaged at adequate scanning speeds allowing the imaging of three-dimensional microscopic movement. Confocal microscopy enhances images over conventional optical microscopy, especially for thicker specimens, and so has become quite popular.
The next level of sophistication is provided by microscopes attached to instruments that isolate and detect only a small wavelength band of light—monochromators and spectral analyzers. Here, the monochromatic light from a laser is scattered from the specimen. This scattered light shifts up or down as it excites particular energy levels in the sample. The uniqueness of the observed scattered light can give detailed information about the chemical composition of a given spot on the sample with high contrast—like molecular fingerprints. Applications are in materials science, nanotechnology, and the biomedical field. Fine details in biochemical processes over time can even be detected. The ultimate in microscopy is the electron microscope—to be discussed later. Research is being conducted into the development of new prototype microscopes that can become commercially available, providing better diagnostic and research capacities.
### Section Summary
1. To improve microscope images, various techniques utilizing the wave characteristics of light have been developed. Many of these enhance contrast with interference effects.
### Conceptual Questions
|
# Special Relativity
## Connection for AP® Courses
In this chapter you will be introduced to the theory of special relativity, which was first described by Albert Einstein in the year 1905. The chapter opens with a discussion of Einstein’s postulates that form the basis of special relativity. You will learn about an essential physics framework that is used to describe the observations and measurements made by an observer in what is called the “inertial frame of reference” (Enduring Understanding 3.A). Special relativity is a universally accepted theory that defines a relationship between space and time (Essential Knowledge 1.D.3). When the speed of an object approaches the speed of light, Newton’s laws no longer hold, which means that classical (Newtonian) mechanics (Enduring Understanding 1.D) is not sufficient to define the physical properties of such a system. This is where special relativity comes into play. Many interesting and counterintuitive physical results follow from the theory of special relativity. In this chapter we will explore the concepts of simultaneity, time dilation, and length contraction.
Further into the chapter you will find information that supports the concepts of relativistic velocity addition, relativistic momentum, and energy (Enduring Understanding 4.C). Learning these concepts will help you understand how the mass (Enduring Understanding 1.C and Essential Knowledge 4.C.4) of an object can appear to be different for different observers and how matter can be converted into energy and then back to matter so that the energy of the system remains conserved. (Essential Knowledge 1.C.4 and Enduring Understanding 5.B). The information and examples presented in the chapter support Big Ideas 1, 3, 4, and 5 of the AP® Physics Curriculum Framework.
The content of this chapter supports:
Big Idea 1 Objects and systems have properties such as mass and charge. Systems may have internal structure.
Enduring Understanding 1.C Objects and systems have properties of inertial mass and gravitational mass that are experimentally verified to be the same and that satisfy conservation principles.
Essential Knowledge 1.C.4 In certain processes, mass can be converted to energy and energy can be converted to mass according to $E = mc^2$, the equation derived from the theory of special relativity.
Enduring Understanding 1.D Classical mechanics cannot describe all properties of objects.
Essential Knowledge 1.D.3 Properties of space and time cannot always be treated as absolute.
Big Idea 3 The interactions of an object with other objects can be described by forces.
Enduring Understanding 3.A All forces share certain common characteristics when considered by observers in inertial reference frames.
Essential Knowledge 3.A.1 An observer in a particular reference frame can describe the motion of an object using such quantities as position, displacement, distance, velocity, speed, and acceleration.
Big Idea 4 Interactions between systems can result in changes in those systems.
Enduring Understanding 4.C Interactions with other objects or systems can change the total energy of a system.
Essential Knowledge 4.C.4 Mass can be converted into energy and energy can be converted into mass.
Big Idea 5 Changes that occur as a result of interactions are constrained by conservation laws.
Enduring Understanding 5.B The energy of a system is conserved.
Essential Knowledge 5.B.11 Beyond the classical approximation, mass is actually part of the internal energy of an object or system with $E = mc^2$.
It is important to note that although classical mechanics, in general, and classical relativity, in particular, are limited, they are extremely good approximations for large, slow-moving objects. Otherwise, we could not use classical physics to launch satellites or build bridges. In the classical limit (objects larger than submicroscopic and moving slower than about 1% of the speed of light), relativistic mechanics becomes the same as classical mechanics. This fact will be noted at appropriate places throughout this chapter. |
# Special Relativity
## Einstein’s Postulates
### Learning Objectives
By the end of this section, you will be able to:
1. State and explain both of Einstein’s postulates.
2. Explain what an inertial frame of reference is.
3. Describe one way the speed of light can be changed.
Have you ever used the Pythagorean Theorem and gotten a wrong answer? Probably not, unless you made a mistake in either your algebra or your arithmetic. Each time you perform the same calculation, you know that the answer will be the same. Trigonometry is reliable because of the certainty that one part always flows from another in a logical way. Each part is based on a set of postulates, and you can always connect the parts by applying those postulates. Physics is the same way with the exception that all parts must describe nature. If we are careful to choose the correct postulates, then our theory will follow and will be verified by experiment.
Einstein essentially did the theoretical aspect of this method for relativity. With two deceptively simple postulates and a careful consideration of how measurements are made, he produced the theory of special relativity.
### Einstein’s First Postulate
The first postulate upon which Einstein based the theory of special relativity relates to reference frames. All velocities are measured relative to some frame of reference. For example, a car’s motion is measured relative to its starting point or the road it is moving over, a projectile’s motion is measured relative to the surface it was launched from, and a planet’s orbit is measured relative to the star it is orbiting around. The simplest frames of reference are those that are not accelerated and are not rotating. Newton’s first law, the law of inertia, holds exactly in such a frame.
The laws of physics seem to be simplest in inertial frames. For example, when you are in a plane flying at a constant altitude and speed, physics seems to work exactly the same as if you were standing on the surface of the Earth. However, in a plane that is taking off, matters are somewhat more complicated. In these cases, the net force on an object, , is not equal to the product of mass and acceleration, . Instead, is equal to plus a fictitious force. This situation is not as simple as in an inertial frame. Not only are laws of physics simplest in inertial frames, but they should be the same in all inertial frames, since there is no preferred frame and no absolute motion. Einstein incorporated these ideas into his first postulate of special relativity.
As with many fundamental statements, there is more to this postulate than meets the eye. The laws of physics include only those that satisfy this postulate. We shall find that the definitions of relativistic momentum and energy must be altered to fit. Another outcome of this postulate is the famous equation $E = mc^2$.
### Einstein’s Second Postulate
The second postulate upon which Einstein based his theory of special relativity deals with the speed of light. Late in the 19th century, the major tenets of classical physics were well established. Two of the most important were the laws of electricity and magnetism and Newton’s laws. In particular, the laws of electricity and magnetism predict that light travels at in a vacuum, but they do not specify the frame of reference in which light has this speed.
There was a contradiction between this prediction and Newton’s laws, in which velocities add like simple vectors. If the latter were true, then two observers moving at different speeds would see light traveling at different speeds. Imagine what a light wave would look like to a person traveling along with it at a speed . If such a motion were possible then the wave would be stationary relative to the observer. It would have electric and magnetic fields that varied in strength at various distances from the observer but were constant in time. This is not allowed by Maxwell’s equations. So either Maxwell’s equations are wrong, or an object with mass cannot travel at speed . Einstein concluded that the latter is true. An object with mass cannot travel at speed . This conclusion implies that light in a vacuum must always travel at speed relative to any observer. Maxwell’s equations are correct, and Newton’s addition of velocities is not correct for light.
Investigations such as Young’s double slit experiment in the early 1800s had convincingly demonstrated that light is a wave. Many types of waves were known, and all traveled in some medium. Scientists therefore assumed that a medium carried light, even in a vacuum, and light traveled at a speed $c$ relative to that medium. Starting in the mid-1880s, the American physicist A. A. Michelson, later aided by E. W. Morley, made a series of direct measurements of the speed of light. The results of their measurements were startling.
The eventual conclusion derived from this result is that light, unlike mechanical waves such as sound, does not need a medium to carry it. Furthermore, the Michelson-Morley results implied that the speed of light is independent of the motion of the source relative to the observer. That is, everyone observes light to move at speed regardless of how they move relative to the source or one another. For a number of years, many scientists tried unsuccessfully to explain these results and still retain the general applicability of Newton’s laws.
It was not until 1905, when Einstein published his first paper on special relativity, that the currently accepted conclusion was reached. Based mostly on his analysis that the laws of electricity and magnetism would not allow another speed for light, and only slightly aware of the Michelson-Morley experiment, Einstein detailed his second postulate of special relativity.
Deceptively simple and counterintuitive, this and the first postulate leave all else open for change. Some fundamental concepts do change. Among the changes are the loss of agreement on the elapsed time for an event, the variation of distance with speed, and the realization that matter and energy can be converted into one another. You will read about these concepts in the following sections.
### Test Prep for AP Courses
### Section Summary
1. Relativity is the study of how different observers measure the same event.
2. Modern relativity is divided into two parts. Special relativity deals with observers who are in uniform (unaccelerated) motion, whereas general relativity includes accelerated relative motion and gravity. Modern relativity is correct in all circumstances and, in the limit of low velocity and weak gravitation, gives the same predictions as classical relativity.
3. An inertial frame of reference is a reference frame in which a body at rest remains at rest and a body in motion moves at a constant speed in a straight line unless acted on by an outside force.
4. Modern relativity is based on Einstein’s two postulates. The first postulate of special relativity is the idea that the laws of physics are the same and can be stated in their simplest form in all inertial frames of reference. The second postulate of special relativity is the idea that the speed of light is a constant, independent of the relative motion of the source.
5. The Michelson-Morley experiment demonstrated that the speed of light in a vacuum is independent of the motion of the Earth about the Sun.
### Conceptual Questions
|
# Special Relativity
## Simultaneity And Time Dilation
### Learning Objectives
By the end of this section, you will be able to:
1. Describe simultaneity.
2. Describe time dilation.
3. Calculate γ.
4. Compare proper time and the observer’s measured time.
5. Explain why the twin paradox is a false paradox.
Do time intervals depend on who observes them? Intuitively, we expect the time for a process, such as the elapsed time for a foot race, to be the same for all observers. Our experience has been that disagreements over elapsed time have to do with the accuracy of measuring time. When we carefully consider just how time is measured, however, we will find that elapsed time depends on the relative motion of an observer with respect to the process being measured.
### Simultaneity
Consider how we measure elapsed time. If we use a stopwatch, for example, how do we know when to start and stop the watch? One method is to use the arrival of light from the event, such as observing a light turning green to start a drag race. The timing will be more accurate if some sort of electronic detection is used, avoiding human reaction times and other complications.
Now suppose we use this method to measure the time interval between two flashes of light produced by flash lamps. (See .) Two flash lamps with observer A midway between them are on a rail car that moves to the right relative to observer B. Observer B arranges for the light flashes to be emitted just as A passes B, so that both A and B are equidistant from the lamps when the light is emitted. Observer B measures the time interval between the arrival of the light flashes. According to postulate 2, the speed of light is not affected by the motion of the lamps relative to B. Therefore, light travels equal distances to him at equal speeds. Thus observer B measures the flashes to be simultaneous.
Now consider what observer A sees happening. Since both lamps are the same distance from her in her reference frame and the train is moving to the right, she perceives the flash from the right-hand bulb occurring before the left-hand bulb. Here a relative velocity between observers affects whether two events are observed to be simultaneous. Simultaneity is not absolute.
This illustrates the power of clear thinking. We might have guessed incorrectly that if light is emitted simultaneously, then two observers halfway between the sources would see the flashes simultaneously. But careful analysis shows this not to be the case. Einstein was brilliant at this type of thought experiment (in German, “Gedankenexperiment”). He very carefully considered how an observation is made and disregarded what might seem obvious. The validity of thought experiments, of course, is determined by actual observation. The genius of Einstein is evidenced by the fact that experiments have repeatedly confirmed his theory of relativity.
In summary: Two events are defined to be simultaneous if an observer measures them as occurring at the same time (such as by receiving light from the events). Two events are not necessarily simultaneous to all observers.
### Time Dilation
The consideration of the measurement of elapsed time and simultaneity leads to an important relativistic effect.
Suppose, for example, an astronaut measures the time it takes for light to cross her ship, bounce off a mirror, and return. (See .) How does the elapsed time the astronaut measures compare with the elapsed time measured for the same event by a person on the Earth? Asking this question (another thought experiment) produces a profound result. We find that the elapsed time for a process depends on who is measuring it. In this case, the time measured by the astronaut is smaller than the time measured by the Earth-bound observer. The passage of time is different for the observers because the distance the light travels in the astronaut’s frame is smaller than in the Earth-bound frame. Light travels at the same speed in each frame, and so it will take longer to travel the greater distance in the Earth-bound frame.
To quantitatively verify that time depends on the observer, consider the paths followed by light as seen by each observer. (See (c).) The astronaut sees the light travel straight across and back for a total distance of , twice the width of her ship. The Earth-bound observer sees the light travel a total distance . Since the ship is moving at speed to the right relative to the Earth, light moving to the right hits the mirror in this frame. Light travels at a speed in both frames, and because time is the distance divided by speed, the time measured by the astronaut is
This time has a separate name to distinguish it from the time measured by the Earth-bound observer.
In the case of the astronaut observing the reflected light, the astronaut measures proper time. The time measured by the Earth-bound observer is
To find the relationship between and , consider the triangles formed by and . (See (c).) The third side of these similar triangles is , the distance the astronaut moves as the light goes across her ship. In the frame of the Earth-bound observer,
Using the Pythagorean Theorem, the distance is found to be
Substituting into the expression for the time interval gives
We square this equation, which yields
Note that if we square the first expression we had for , we get . This term appears in the preceding equation, giving us a means to relate the two time intervals. Thus,
Gathering terms, we solve for :
Thus,
Taking the square root yields an important relationship between elapsed times:
$\Delta t = \frac{\Delta t_0}{\sqrt{1 - v^2/c^2}} = \gamma \Delta t_0,$
where
$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}.$
This equation for is truly remarkable. First, as contended, elapsed time is not the same for different observers moving relative to one another, even though both are in inertial frames. Proper time measured by an observer, like the astronaut moving with the apparatus, is smaller than time measured by other observers. Since those other observers measure a longer time , the effect is called time dilation. The Earth-bound observer sees time dilate (get longer) for a system moving relative to the Earth. Alternatively, according to the Earth-bound observer, time slows in the moving frame, since less time passes there. All clocks moving relative to an observer, including biological clocks such as aging, are observed to run slow compared with a clock stationary relative to the observer.
Note that if the relative velocity is much less than the speed of light (), then is extremely small, and the elapsed times and are nearly equal. At low velocities, modern relativity approaches classical physics—our everyday experiences have very small relativistic effects.
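The numerical sketch below evaluates $\gamma = 1/\sqrt{1 - v^2/c^2}$ for an everyday speed and for half the speed of light (both speeds chosen purely for illustration), showing how close $\gamma$ stays to 1 in ordinary experience.

```python
import math

# Gamma for an everyday speed versus a relativistic one (assumed speeds).
c = 3.00e8                                   # speed of light (m/s)
for label, v in (("airliner, 250 m/s", 250.0), ("half the speed of light", 0.5 * c)):
    gamma = 1 / math.sqrt(1 - (v / c) ** 2)
    print(f"{label}: gamma = {gamma:.15f} (gamma - 1 = {gamma - 1:.1e})")
```

Even at airliner speeds the correction is a few parts in $10^{13}$, far below what an ordinary clock can detect.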
The equation also implies that relative velocity cannot exceed the speed of light. As $v$ approaches $c$, $\gamma$ approaches infinity, so the elapsed time $\Delta t$ measured on the Earth for a process in the astronaut’s frame becomes arbitrarily long; in this limit, the astronaut’s clock would appear to stop. If $v$ exceeded $c$, then we would be taking the square root of a negative number, producing an imaginary value for $\gamma$.
There is considerable experimental evidence that the equation is correct. One example is found in cosmic ray particles that continuously rain down on the Earth from deep space. Some collisions of these particles with nuclei in the upper atmosphere result in short-lived particles called muons. The half-life (amount of time for half of a material to decay) of a muon is
when it is at rest relative to the observer who measures the half-life. This is the proper time
. Muons produced by cosmic ray particles have a range of velocities, with some moving near the speed of light. It has been found that the muon’s half-life as measured by an Earth-bound observer () varies with velocity exactly as predicted by the equation . The faster the muon moves, the longer it lives. We on the Earth see the muon’s half-life time dilated—as viewed from our frame, the muon decays more slowly than it does when at rest relative to us.
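The sketch below reproduces the muon check numerically. Since the specific numbers are elided above, it assumes a rest-frame half-life of 1.52 μs and a speed of 0.950c, values commonly used for this example; the factor of about 3.20 it produces matches the statement in the next paragraph.

```python
import math

# Dilated half-life of a fast muon, t = gamma * t0 (illustrative assumed values).
t0 = 1.52e-6            # proper (rest-frame) half-life (s)
beta = 0.950            # v/c

gamma = 1 / math.sqrt(1 - beta ** 2)
t = gamma * t0
print(f"gamma = {gamma:.2f}")
print(f"half-life seen from Earth = {t*1e6:.2f} microseconds")
```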
Another implication of the preceding example is that everything an astronaut does when moving at of the speed of light relative to the Earth takes 3.20 times longer when observed from the Earth. Does the astronaut sense this? Only if she looks outside her spaceship. All methods of measuring time in her frame will be affected by the same factor of 3.20. This includes her wristwatch, heart rate, cell metabolism rate, nerve impulse rate, and so on. She will have no way of telling, since all of her clocks will agree with one another because their relative velocities are zero. Motion is relative, not absolute. But what if she does look out the window?
### The Twin Paradox
An intriguing consequence of time dilation is that a space traveler moving at a high velocity relative to the Earth would age less than her Earth-bound twin. Imagine the astronaut moving at such a velocity that , as in . A trip that takes 2.00 years in her frame would take 60.0 years in her Earth-bound twin’s frame. Suppose the astronaut traveled 1.00 year to another star system. She briefly explored the area, and then traveled 1.00 year back. If the astronaut was 40 years old when she left, she would be 42 upon her return. Everything on the Earth, however, would have aged 60.0 years. Her twin, if still alive, would be 100 years old.
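A quick numerical check of these trip figures, as a sketch: taking $\gamma = 30$ (inferred from the 2.00-year versus 60.0-year comparison above), the code below confirms the Earth-frame duration and finds the speed such a trip would require.

```python
import math

# Verify the twin-trip numbers for gamma = 30 (value inferred from the text).
gamma = 30.0
proper_time = 2.00                           # years elapsed for the astronaut

earth_time = gamma * proper_time             # years elapsed on Earth
beta = math.sqrt(1 - 1 / gamma ** 2)         # required v/c for this gamma

print(f"Earth-frame duration = {earth_time:.1f} years")
print(f"required speed       = {beta:.5f} c")
```

The required speed is within about 0.06% of the speed of light, which is one reason such trips remain thought experiments.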
The situation would seem different to the astronaut. Because motion is relative, the spaceship would seem to be stationary and the Earth would appear to move. (This is the sensation you have when flying in a jet.) If the astronaut looks out the window of the spaceship, she will see time slow down on the Earth by a factor of $\gamma = 30.0$. To her, the Earth-bound sister will have aged only 2.00/30.0 = 1/15 of a year, while she aged 2.00 years. The two sisters cannot both be correct.
As with all paradoxes, the premise is faulty and leads to contradictory conclusions. In fact, the astronaut’s motion is significantly different from that of the Earth-bound twin. The astronaut accelerates to a high velocity and then decelerates to view the star system. To return to the Earth, she again accelerates and decelerates. The Earth-bound twin does not experience these accelerations. So the situation is not symmetric, and it is not correct to claim that the astronaut will observe the same effects as her Earth-bound twin. If you use special relativity to examine the twin paradox, you must keep in mind that the theory is expressly based on inertial frames, which by definition are not accelerated or rotating. Einstein developed general relativity to deal with accelerated frames and with gravity, a prime source of acceleration. You can also use general relativity to address the twin paradox and, according to general relativity, the astronaut will age less. Some important conceptual aspects of general relativity are discussed in General Relativity and Quantum Gravity of this course.
In 1971, American physicists Joseph Hafele and Richard Keating verified time dilation at low relative velocities by flying extremely accurate atomic clocks around the Earth on commercial aircraft. They measured elapsed time to an accuracy of a few nanoseconds and compared it with the time measured by clocks left behind. Hafele and Keating’s results were within experimental uncertainties of the predictions of relativity. Both special and general relativity had to be taken into account, since gravity and accelerations were involved as well as relative motion.
### Section Summary
1. Two events are defined to be simultaneous if an observer measures them as occurring at the same time. They are not necessarily simultaneous to all observers—simultaneity is not absolute.
2. Time dilation is the phenomenon of time passing slower for an observer who is moving relative to another observer.
3. Observers moving at a relative velocity $v$ do not measure the same elapsed time for an event. Proper time $\Delta t_0$ is the time measured by an observer at rest relative to the event being observed. Proper time is related to the time $\Delta t$ measured by an Earth-bound observer by the equation
   $$\Delta t = \gamma \Delta t_0,$$
   where
   $$\gamma = \frac{1}{\sqrt{1 - \dfrac{v^2}{c^2}}}.$$
4. The equation relating proper time and time measured by an Earth-bound observer implies that relative velocity cannot exceed the speed of light.
5. The twin paradox asks why a twin traveling at a relativistic speed away and then back towards the Earth ages less than the Earth-bound twin. The premise to the paradox is faulty because the traveling twin is accelerating. Special relativity does not apply to accelerating frames of reference.
6. Time dilation is usually negligible at low relative velocities, but it does occur, and it has been verified by experiment.
### Conceptual Questions
### Problems & Exercises
# Special Relativity
## Length Contraction
### Learning Objectives
By the end of this section, you will be able to:
1. Describe proper length.
2. Calculate length contraction.
3. Explain why we don’t notice these effects at everyday scales.
Have you ever driven on a road that seems like it goes on forever? If you look ahead, you might say you have about 10 km left to go. Another traveler might say the road ahead looks like it’s about 15 km long. If you both measured the road, however, you would agree. Traveling at everyday speeds, the distance you both measure would be the same. You will read in this section, however, that this is not true at relativistic speeds. Close to the speed of light, distances measured are not the same when measured by different observers.
### Proper Length
One thing all observers agree upon is relative speed. Even though clocks measure different elapsed times for the same process, they still agree that relative speed, which is distance divided by elapsed time, is the same. This implies that distance, too, depends on the observer’s relative motion. If two observers see different times, then they must also see different distances for relative speed to be the same to each of them.
The muon discussed above illustrates this concept. To an observer on the Earth, the muon travels at a speed $v$ for the dilated time $\Delta t$ from the time it is produced until it decays. Thus it travels a distance $L_0 = v\Delta t$ relative to the Earth. In the muon’s frame of reference, its lifetime is only the proper time $\Delta t_0$. It has enough time to travel only the shorter distance $L = v\Delta t_0$.
The distance between the same two events (production and decay of a muon) depends on who measures it and how they are moving relative to it.
The Earth-bound observer measures the proper length $L_0$, because the points at which the muon is produced and decays are stationary relative to the Earth. To the muon, the Earth, air, and clouds are moving, and so the distance $L$ it sees is not the proper length.
### Length Contraction
To develop an equation relating distances measured by different observers, we note that the velocity relative to the Earth-bound observer in our muon example is given by
$$v = \frac{L_0}{\Delta t}.$$
The time relative to the Earth-bound observer is $\Delta t$, since the object being timed is moving relative to this observer. The velocity relative to the moving observer is given by
$$v = \frac{L}{\Delta t_0}.$$
The moving observer travels with the muon and therefore observes the proper time $\Delta t_0$. The two velocities are identical; thus,
$$\frac{L_0}{\Delta t} = \frac{L}{\Delta t_0}.$$
We know that $\Delta t = \gamma \Delta t_0$. Substituting this equation into the relationship above gives
$$L = \frac{L_0}{\gamma}.$$
Substituting for $\gamma$ gives an equation relating the distances measured by different observers:
$$L = L_0\sqrt{1 - \frac{v^2}{c^2}}.$$
If we measure the length of anything moving relative to our frame, we find its length to be smaller than the proper length that would be measured if the object were stationary. For example, in the muon’s reference frame, the distance between the points where it was produced and where it decayed is shorter. Those points are fixed relative to the Earth but moving relative to the muon. Clouds and other objects are also contracted along the direction of motion in the muon’s reference frame.
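The following short Python sketch (an illustration added here, with assumed numbers) applies the length contraction relation $L = L_0\sqrt{1 - v^2/c^2}$; the 3.0 km proper length echoes the accelerator discussion below, but the specific speeds are hypothetical.

```python
import math

C = 3.00e8  # speed of light in m/s

def contracted_length(proper_length, v):
    """Length seen by an observer relative to whom the object moves at speed v:
    L = L0 * sqrt(1 - v^2/c^2) = L0 / gamma."""
    return proper_length * math.sqrt(1.0 - (v / C) ** 2)

# Illustrative values (assumptions): a 3.0 km proper length viewed at two speeds
L0 = 3.0e3  # meters
for v in (30.0, 0.9999999 * C):
    print(f"v = {v:.4g} m/s  contracted length = {contracted_length(L0, v):.6g} m")
```

At highway speed the contraction is far below any measurable amount, while at a speed extremely close to $c$ the same 3.0 km shrinks to about a meter.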
People could be sent very large distances (thousands or even millions of light years) and age only a few years on the way if they traveled at extremely high velocities. But, like emigrants of centuries past, they would leave the Earth they know forever. Even if they returned, thousands to millions of years would have passed on the Earth, obliterating most of what now exists. There is also a more serious practical obstacle to traveling at such velocities: immensely greater energies than classical physics predicts would be needed to achieve such high velocities. This will be discussed in Relativistic Energy.
Why don’t we notice length contraction in everyday life? The distance to the grocery shop does not seem to depend on whether we are moving or not. Examining the equation $L = L_0\sqrt{1 - v^2/c^2}$, we see that at low velocities ($v \ll c$) the lengths are nearly equal, the classical expectation. But length contraction is real, if not commonly experienced. For example, a charged particle, like an electron, traveling at relativistic velocity has electric field lines that are compressed along the direction of motion as seen by a stationary observer. (See .) As the electron passes a detector, such as a coil of wire, its field interacts much more briefly, an effect observed at particle accelerators such as the 3 km long Stanford Linear Accelerator (SLAC). In fact, to an electron traveling down the beam pipe at SLAC, the accelerator and the Earth are all moving by and are length contracted. The relativistic effect is so great that the accelerator is only 0.5 m long to the electron. It is actually easier to get the electron beam down the pipe, since the beam does not have to be as precisely aimed to get down a short pipe as it would down one 3 km long. This, again, is an experimental verification of the Special Theory of Relativity.
### Summary
1. All observers agree upon relative speed.
2. Distance depends on an observer’s motion. Proper length is the distance between two points measured by an observer who is at rest relative to both of the points. Earth-bound observers measure proper length when measuring the distance between two points that are stationary relative to the Earth.
3. Length contraction is the shortening of the measured length of an object moving relative to the observer’s frame:
   $$L = L_0\sqrt{1 - \frac{v^2}{c^2}} = \frac{L_0}{\gamma}.$$
### Conceptual Questions
### Problems & Exercises
# Special Relativity
## Relativistic Addition of Velocities
### Learning Objectives
By the end of this section, you will be able to:
1. Calculate relativistic velocity addition.
2. Explain when relativistic velocity addition should be used instead of classical addition of velocities.
3. Calculate relativistic Doppler shift.
If you’ve ever seen a kayak move down a fast-moving river, you know that remaining in the same place would be hard. The river current pulls the kayak along. Pushing the oars back against the water can move the kayak forward in the water, but that only accounts for part of the velocity. The kayak’s motion is an example of classical addition of velocities. In classical physics, velocities add as vectors. The kayak’s velocity is the vector sum of its velocity relative to the water and the water’s velocity relative to the riverbank.
### Classical Velocity Addition
For simplicity, we restrict our consideration of velocity addition to one-dimensional motion. Classically, velocities add like regular numbers in one-dimensional motion. (See .) Suppose, for example, a girl is riding in a sled at a speed 1.0 m/s relative to an observer. She throws a snowball first forward, then backward at a speed of 1.5 m/s relative to the sled. We denote direction with plus and minus signs in one dimension; in this example, forward is positive. Let $v$ be the velocity of the sled relative to the Earth, $u$ the velocity of the snowball relative to the Earth-bound observer, and $u'$ the velocity of the snowball relative to the sled.
Classically, these velocities simply add: $u = v + u'$. Thus, when the girl throws the snowball forward, $u = 1.0\ \text{m/s} + 1.5\ \text{m/s} = 2.5\ \text{m/s}$. It makes good intuitive sense that the snowball will head towards the Earth-bound observer faster, because it is thrown forward from a moving vehicle. When the girl throws the snowball backward, $u = 1.0\ \text{m/s} + (-1.5\ \text{m/s}) = -0.5\ \text{m/s}$. The minus sign means the snowball moves away from the Earth-bound observer.
### Relativistic Velocity Addition
The second postulate of relativity (verified by extensive experimental observation) says that classical velocity addition does not apply to light. Imagine a car traveling at night along a straight road, as in . If classical velocity addition applied to light, then the light from the car’s headlights would approach the observer on the sidewalk at a speed $u = v + c$. But we know that light will move away from the car at speed $c$ relative to the driver of the car, and light will move towards the observer on the sidewalk at speed $c$, too.
Relativistic velocity addition gives the correct result:
$$u = \frac{v + u'}{1 + \dfrac{v u'}{c^2}},$$
where $v$ is the relative velocity between two observers, $u$ is the velocity of an object relative to one observer, and $u'$ is the velocity relative to the other observer. Velocities cannot add to greater than the speed of light, provided that $v$ is less than $c$ and $u'$ does not exceed $c$. The following example illustrates that relativistic velocity addition is not as symmetric as classical velocity addition.
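A minimal Python sketch of the relativistic addition formula (illustrative, with assumed speeds) shows both key behaviors: adding $c$ to any sub-light speed still gives $c$, and two sub-light speeds never combine to exceed $c$.

```python
C = 3.00e8  # speed of light in m/s

def relativistic_add(v, u_prime):
    """One-dimensional relativistic velocity addition:
    u = (v + u') / (1 + v*u'/c^2)."""
    return (v + u_prime) / (1.0 + v * u_prime / C**2)

# Illustrative check (assumed values): a source moving at 0.500c emits light forward.
print(relativistic_add(0.500 * C, C) / C)          # -> 1.0 (light still moves at c)

# Two sub-light speeds never add to more than c:
print(relativistic_add(0.750 * C, 0.750 * C) / C)  # -> 0.96, not 1.5
```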
### Doppler Shift
Although the speed of light does not change with relative velocity, the frequencies and wavelengths of light do. First discussed for sound waves, a Doppler shift occurs in any wave when there is relative motion between source and observer.
For light, the relativistic Doppler shift in wavelength is
$$\lambda_{\text{obs}} = \lambda_s\sqrt{\frac{1 + \dfrac{u}{c}}{1 - \dfrac{u}{c}}}.$$
In the Doppler equation, $\lambda_{\text{obs}}$ is the observed wavelength, $\lambda_s$ is the source wavelength, and $u$ is the relative velocity of the source to the observer. The velocity $u$ is positive for motion away from an observer and negative for motion toward an observer. In terms of source frequency $f_s$ and observed frequency $f_{\text{obs}}$, this equation can be written
$$f_{\text{obs}} = f_s\sqrt{\frac{1 - \dfrac{u}{c}}{1 + \dfrac{u}{c}}}.$$
Notice that the – and + signs are different than in the wavelength equation.
The relativistic Doppler shift is easy to observe. This equation has everyday applications ranging from Doppler-shifted radar velocity measurements of transportation to Doppler-radar storm monitoring. In astronomical observations, the relativistic Doppler shift provides velocity information such as the motion and distance of stars.
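As a rough illustration (not from the text), the sketch below evaluates the relativistic Doppler wavelength shift for an assumed 550 nm source receding or approaching at one-tenth the speed of light.

```python
import math

C = 3.00e8  # speed of light in m/s

def doppler_wavelength(lambda_source, u):
    """Relativistic Doppler shift for wavelength.
    u > 0: source receding (red shift); u < 0: source approaching (blue shift)."""
    return lambda_source * math.sqrt((1.0 + u / C) / (1.0 - u / C))

# Illustrative example (assumed values): a 550 nm source at +/- 0.100c
print(doppler_wavelength(550e-9, 0.100 * C))   # longer wavelength: red shift
print(doppler_wavelength(550e-9, -0.100 * C))  # shorter wavelength: blue shift
```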
### Test Prep for AP Courses
### Section Summary
1. With classical velocity addition, velocities add like regular numbers in one-dimensional motion: $u = v + u'$, where $v$ is the velocity between two observers, $u$ is the velocity of an object relative to one observer, and $u'$ is the velocity relative to the other observer.
2. Velocities cannot add to be greater than the speed of light. Relativistic velocity addition describes the velocities of an object moving at a relativistic speed:
   $$u = \frac{v + u'}{1 + \dfrac{v u'}{c^2}}.$$
3. An observer of electromagnetic radiation sees relativistic Doppler effects if the source of the radiation is moving relative to the observer. The wavelength of the radiation is longer (called a red shift) than that emitted by the source when the source moves away from the observer and shorter (called a blue shift) when the source moves toward the observer. The shifted wavelength is described by the equation
   $$\lambda_{\text{obs}} = \lambda_s\sqrt{\frac{1 + \dfrac{u}{c}}{1 - \dfrac{u}{c}}},$$
   where $\lambda_{\text{obs}}$ is the observed wavelength, $\lambda_s$ is the source wavelength, and $u$ is the relative velocity of the source to the observer.
### Conceptual Questions
### Problems & Exercises
# Special Relativity
## Relativistic Momentum
### Learning Objectives
By the end of this section, you will be able to:
1. Calculate relativistic momentum.
2. Explain why the only mass it makes sense to talk about is rest mass.
In classical physics, momentum is a simple product of mass and velocity. However, we saw in the last section that when special relativity is taken into account, massive objects have a speed limit. What effect do you think mass and velocity have on the momentum of objects moving at relativistic speeds?
Momentum is one of the most important concepts in physics. The broadest form of Newton’s second law is stated in terms of momentum. Momentum is conserved whenever the net external force on a system is zero. This makes momentum conservation a fundamental tool for analyzing collisions. All of Linear Momentum and Collisions is devoted to momentum, and momentum has been important for many other topics as well, particularly where collisions were involved. We will see that momentum has the same importance in modern physics. Relativistic momentum is conserved, and much of what we know about subatomic structure comes from the analysis of collisions of accelerator-produced relativistic particles.
The first postulate of relativity states that the laws of physics are the same in all inertial frames. Does the law of conservation of momentum survive this requirement at high velocities? The answer is yes, provided that the momentum is defined as
$$p = \gamma m u, \qquad \text{where } \gamma = \frac{1}{\sqrt{1 - \dfrac{u^2}{c^2}}},$$
$m$ is the rest mass of the object, and $u$ is its velocity relative to an observer.
Note that we use $u$ for velocity here to distinguish it from the relative velocity $v$ between observers. Only one observer is being considered here. With $p$ defined in this way, total momentum is conserved whenever the net external force is zero, just as in classical physics. Again we see that the relativistic quantity becomes virtually the same as the classical at low velocities. That is, relativistic momentum $\gamma m u$ becomes the classical $m u$ at low velocities, because $\gamma$ is very nearly equal to 1 at low velocities.
Relativistic momentum has the same intuitive feel as classical momentum. It is greatest for large masses moving at high velocities, but, because of the factor $\gamma$, relativistic momentum approaches infinity as $u$ approaches $c$. (See .) This is another indication that an object with mass cannot reach the speed of light. If it did, its momentum would become infinite, an unreasonable value.
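A brief Python sketch (with an electron mass and assumed speeds chosen only for illustration) shows how the ratio of relativistic to classical momentum, which is just $\gamma$, grows without bound as $u$ approaches $c$.

```python
import math

C = 3.00e8             # speed of light, m/s
M_ELECTRON = 9.11e-31  # electron rest mass, kg

def relativistic_momentum(m, u):
    """p = gamma * m * u, with gamma = 1/sqrt(1 - u^2/c^2)."""
    gamma = 1.0 / math.sqrt(1.0 - (u / C) ** 2)
    return gamma * m * u

# Illustrative comparison (assumed speeds): classical m*u underestimates p near c
for u in (0.10 * C, 0.90 * C, 0.99 * C):
    p_rel = relativistic_momentum(M_ELECTRON, u)
    p_classical = M_ELECTRON * u
    print(f"u = {u/C:.2f}c  p_rel/p_classical = {p_rel / p_classical:.3f}")
```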
Relativistic momentum is defined in such a way that the conservation of momentum will hold in all inertial frames. Whenever the net external force on a system is zero, relativistic momentum is conserved, just as is the case for classical momentum. This has been verified in numerous experiments.
In Relativistic Energy, the relationship of relativistic momentum to energy is explored. That subject will produce our first inkling that objects without mass may also have momentum.
### Section Summary
1. The law of conservation of momentum is valid whenever the net external force is zero and for relativistic momentum. Relativistic momentum $p$ is classical momentum multiplied by the relativistic factor $\gamma$.
2. $p = \gamma m u$, where $m$ is the rest mass of the object, $u$ is its velocity relative to an observer, and the relativistic factor $\gamma = \dfrac{1}{\sqrt{1 - \dfrac{u^2}{c^2}}}$.
3. At low velocities, relativistic momentum is equivalent to classical momentum.
4. Relativistic momentum approaches infinity as $u$ approaches $c$. This implies that an object with mass cannot reach the speed of light.
5. Relativistic momentum is conserved, just as classical momentum is conserved.
### Conceptual Questions
### Problems & Exercises
# Special Relativity
## Relativistic Energy
### Learning Objectives
By the end of this section, you will be able to:
1. Compute total energy of a relativistic object.
2. Compute the kinetic energy of a relativistic object.
3. Describe rest energy, and explain how it can be converted to other forms.
4. Explain why massive particles cannot reach the speed of light $c$.
A tokamak is a form of experimental fusion reactor, which can change mass to energy. Accomplishing this requires an understanding of relativistic energy. Nuclear reactors are proof of the conservation of relativistic energy.
Conservation of energy is one of the most important laws in physics. Not only does energy have many important forms, but each form can be converted to any other. We know that classically the total amount of energy in a system remains constant. Relativistically, energy is still conserved, provided its definition is altered to include the possibility of mass changing to energy, as in the reactions that occur within a nuclear reactor. Relativistic energy is intentionally defined so that it will be conserved in all inertial frames, just as is the case for relativistic momentum. As a consequence, we learn that several fundamental quantities are related in ways not known in classical physics. All of these relationships are verified by experiment and have fundamental consequences. The altered definition of energy contains some of the most fundamental and spectacular new insights into nature found in recent history.
### Total Energy and Rest Energy
The first postulate of relativity states that the laws of physics are the same in all inertial frames. Einstein showed that the law of conservation of energy is valid relativistically, if we define energy to include a relativistic factor. Total energy is defined as
$$E = \gamma m c^2, \qquad \text{where } \gamma = \frac{1}{\sqrt{1 - \dfrac{v^2}{c^2}}},$$
and the rest energy of an object at rest (for which $\gamma = 1$) is
$$E_0 = m c^2.$$
This is the correct form of Einstein’s most famous equation, which for the first time showed that energy is related to the mass of an object at rest. For example, if energy is stored in the object, its rest mass increases. This also implies that mass can be destroyed to release energy. The implications of these first two equations regarding relativistic energy are so broad that they were not completely recognized for some years after Einstein published them in 1907, nor was the experimental proof that they are correct widely recognized at first. Einstein, it should be noted, did understand and describe the meanings and implications of his theory.
Today, the practical applications of the conversion of mass into another form of energy, such as in nuclear weapons and nuclear power plants, are well known. But examples also existed when Einstein first proposed the correct form of relativistic energy, and he did describe some of them. Nuclear radiation had been discovered in the previous decade, and it had been a mystery as to where its energy originated. The explanation was that, in certain nuclear processes, a small amount of mass is destroyed and energy is released and carried by nuclear radiation. But the amount of mass destroyed is so small that it is difficult to detect that any is missing. Although Einstein proposed this as the source of energy in the radioactive salts then being studied, it was many years before there was broad recognition that mass could be and, in fact, commonly is converted to energy. (See .)
Because of the relationship of rest energy to mass, we now consider mass to be a form of energy rather than something separate. There had not even been a hint of this prior to Einstein’s work. Such conversion is now known to be the source of the Sun’s energy, the energy of nuclear decay, and even the source of energy keeping Earth’s interior hot.
### Stored Energy and Potential Energy
What happens to energy stored in an object at rest, such as the energy put into a battery by charging it, or the energy stored in a toy gun’s compressed spring? The energy input becomes part of the total energy of the object and, thus, increases its rest mass. All stored and potential energy becomes mass in a system. Why is it we don’t ordinarily notice this? In fact, conservation of mass (meaning total mass is constant) was one of the great laws verified by 19th-century science. Why was it not noticed to be incorrect? The following example helps answer these questions.
### Kinetic Energy and the Ultimate Speed Limit
Kinetic energy is energy of motion. Classically, kinetic energy has the familiar expression $\frac{1}{2}mv^2$. The relativistic expression for kinetic energy is obtained from the work-energy theorem. This theorem states that the net work on a system goes into kinetic energy. If our system starts from rest, then the work-energy theorem is
$$W_{\text{net}} = KE.$$
Relativistically, at rest we have rest energy $E_0 = mc^2$. The work increases this to the total energy $E = \gamma mc^2$. Thus,
$$W_{\text{net}} = E - E_0 = \gamma mc^2 - mc^2 = (\gamma - 1)mc^2.$$
Relativistically, we have $W_{\text{net}} = KE_{\text{rel}}$, so that
$$KE_{\text{rel}} = (\gamma - 1)mc^2.$$
When motionless, we have $v = 0$ and
$$\gamma = \frac{1}{\sqrt{1 - \dfrac{v^2}{c^2}}} = 1,$$
so that $KE_{\text{rel}} = 0$ at rest, as expected. But the expression for relativistic kinetic energy (like total energy and rest energy) does not look much like the classical $\frac{1}{2}mv^2$. To show that the classical expression for kinetic energy is obtained at low velocities, we note that the binomial expansion for $\gamma$ at low velocities gives
$$\gamma \approx 1 + \frac{1}{2}\frac{v^2}{c^2}.$$
A binomial expansion is a way of expressing an algebraic quantity as a sum of an infinite series of terms. In some cases, as in the limit of small velocity here, most terms are very small. Thus the expression derived for $\gamma$ here is not exact, but it is a very accurate approximation. Thus, at low velocities,
$$\gamma - 1 \approx \frac{1}{2}\frac{v^2}{c^2}.$$
Entering this into the expression for relativistic kinetic energy gives
$$KE_{\text{rel}} \approx \frac{1}{2}\frac{v^2}{c^2}\, mc^2 = \frac{1}{2}mv^2 = KE_{\text{class}}.$$
So, in fact, relativistic kinetic energy does become the same as classical kinetic energy when $v \ll c$.
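The comparison can also be checked numerically. In the sketch below (illustrative mass and speeds, not values from the text), the relativistic and classical expressions agree closely at low speed and diverge sharply near $c$.

```python
import math

C = 3.00e8  # speed of light, m/s

def ke_relativistic(m, v):
    """Relativistic kinetic energy KE_rel = (gamma - 1) m c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C**2

def ke_classical(m, v):
    """Classical kinetic energy (1/2) m v^2."""
    return 0.5 * m * v**2

# Illustrative mass and speeds (assumptions): 1.0 kg at three very different speeds
m = 1.0  # kg
for v in (30.0, 0.10 * C, 0.90 * C):
    print(f"v = {v:.3g} m/s  KE_rel = {ke_relativistic(m, v):.4g} J  "
          f"KE_classical = {ke_classical(m, v):.4g} J")
```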
It is even more interesting to investigate what happens to kinetic energy when the velocity of an object approaches the speed of light. We know that $\gamma$ becomes infinite as $v$ approaches $c$, so that $KE_{\text{rel}}$ also becomes infinite as the velocity approaches the speed of light. (See .) An infinite amount of work (and, hence, an infinite amount of energy input) is required to accelerate a mass to the speed of light.

So the speed of light is the ultimate speed limit for any particle having mass. All of this is consistent with the fact that velocities less than $c$ always add to less than $c$. Both the relativistic form for kinetic energy and the ultimate speed limit being $c$ have been confirmed in detail in numerous experiments. No matter how much energy is put into accelerating a mass, its velocity can only approach—not reach—the speed of light.
### Relativistic Energy and Momentum
We know classically that kinetic energy and momentum are related to each other, since
$$KE_{\text{class}} = \frac{p^2}{2m}.$$
Relativistically, we can obtain a relationship between energy and momentum by algebraically manipulating their definitions. This produces
$$E^2 = (pc)^2 + (mc^2)^2,$$
where $E$ is the relativistic total energy and $p$ is the relativistic momentum. This relationship between relativistic energy and relativistic momentum is more complicated than the classical, but we can gain some interesting new insights by examining it. First, total energy is related to momentum and rest mass. At rest, momentum is zero, and the equation gives the total energy to be the rest energy $mc^2$ (so this equation is consistent with the discussion of rest energy above). However, as the mass is accelerated, its momentum $p$ increases, thus increasing the total energy. At sufficiently high velocities, the rest energy term $(mc^2)^2$ becomes negligible compared with the momentum term $(pc)^2$; thus, $E = pc$ at extremely relativistic velocities.
If we consider momentum $p$ to be distinct from mass, we can determine the implications of the equation $E^2 = (pc)^2 + (mc^2)^2$ for a particle that has no mass. If we take $m$ to be zero in this equation, then $E = pc$, or $p = E/c$. Massless particles have this momentum. There are several massless particles found in nature, including photons (these are quanta of electromagnetic radiation). Another implication is that a massless particle must travel at speed $c$ and only at speed $c$. While it is beyond the scope of this text to examine the relationship in the equation $E^2 = (pc)^2 + (mc^2)^2$ in detail, we can see that the relationship has important implications in special relativity.
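The following sketch (illustrative; the chosen momentum value is an assumption) evaluates $E = \sqrt{(pc)^2 + (mc^2)^2}$ for an electron and for a massless particle, showing that the massless case reduces to $E = pc$.

```python
import math

C = 3.00e8             # speed of light, m/s
M_ELECTRON = 9.11e-31  # electron rest mass, kg

def total_energy(m, p):
    """Relativistic total energy E = sqrt((pc)^2 + (m c^2)^2)."""
    return math.sqrt((p * C) ** 2 + (m * C**2) ** 2)

# Electron at an assumed momentum: compare E with its rest energy m c^2
p = 1.0e-21  # kg*m/s, illustrative value
print(total_energy(M_ELECTRON, p), M_ELECTRON * C**2)

# Massless particle (m = 0): the relation reduces to E = pc
print(total_energy(0.0, p), p * C)
```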
### Test Prep for AP Courses
### Section Summary
1. Relativistic energy is conserved as long as we define it to include the possibility of mass changing to energy.
2. Total energy is defined as
   $$E = \gamma mc^2,$$
   where
   $$\gamma = \frac{1}{\sqrt{1 - \dfrac{v^2}{c^2}}}.$$
3. Rest energy is
   $$E_0 = mc^2,$$
   meaning that mass is a form of energy. If energy is stored in an object, its mass increases. Mass can be destroyed to release energy.
4. We do not ordinarily notice the increase or decrease in mass of an object because the change in mass is so small for a large increase in energy.
5. The relativistic work-energy theorem is $W_{\text{net}} = E - E_0 = \gamma mc^2 - mc^2 = (\gamma - 1)mc^2$.
6. Relativistically, $W_{\text{net}} = KE_{\text{rel}}$, where $KE_{\text{rel}}$ is the relativistic kinetic energy.
7. Relativistic kinetic energy is $KE_{\text{rel}} = (\gamma - 1)mc^2$, where $\gamma = \dfrac{1}{\sqrt{1 - \dfrac{v^2}{c^2}}}$. At low velocities, relativistic kinetic energy reduces to classical kinetic energy.
8. No object with mass can attain the speed of light because an infinite amount of work and an infinite amount of energy input is required to accelerate a mass to the speed of light.
9. The equation $E^2 = (pc)^2 + (mc^2)^2$ relates the relativistic total energy $E$ and the relativistic momentum $p$. At extremely high velocities, the rest energy $mc^2$ becomes negligible, and $E = pc$.
### Conceptual Questions
### Problems & Exercises
# Quantum Physics
## Connection for AP® Courses
In this chapter, the basic principles of quantum mechanics are introduced. Quantum mechanics is the branch of physics needed to deal with submicroscopic objects. Because these objects are smaller than those we can observe directly with our senses, such as computers, books, or cars, and generally must be observed with the aid of instruments, parts of quantum mechanics seem as foreign and bizarre as the effects of relative motion near the speed of light. Yet through experimental results, quantum mechanics has been shown to be valid. Truth is often stranger than fiction.
Quantum theory was developed initially to explain the behavior of electromagnetic energy in certain situations, such as blackbody radiation or the photoelectric effect, which could not be understood in terms of classical electrodynamics (Essential Knowledge 1.D.2). In the quantum model, light is treated as a packet of energy called a photon, which has both the properties of a wave and a particle (Essential Knowledge 6.F.3). The energy of a photon is directly proportional to its frequency.
This new model for light provided the foundation for one of the most important ideas in quantum theory: wave-particle duality. Just as light has properties of both waves and particles, matter also has the properties of waves and particles (Essential Knowledge 1.D.1). This interpretation of matter and energy explained observations at the atomic level that could not be explained by classical mechanics or electromagnetic theory (Enduring Understanding 1.D). The quantum interpretation of energy and matter at the atomic level, most notably the internal structure of atoms, supports Big Idea 1 of the AP Physics Curriculum Framework.
Big Idea 1 is also supported by the correspondence principle. Classical mechanics cannot accurately describe systems at the atomic level, whereas quantum mechanics is able to describe systems at both levels. However, the properties of matter that are described by waves become insignificant at the macroscopic level, so that for large systems of matter, the quantum description closely approaches, or corresponds to, the classical description (Essential Knowledge 6.G.1, Essential Knowledge 6.G.2, Essential Knowledge 6.F.3).
Big Ideas 5 and 6 are supported by the descriptions of energy and momentum transfer at the quantum level. Although quantum mechanics overturned a number of fundamental ideas of classical physics, the most important principles, such as energy conservation and momentum conservation, remained intact (Enduring Understanding 5.B, Enduring Understanding 5.D). Quantum mechanics expands on these principles, so that the particle-like behavior of electromagnetic energy describes momentum transfer, while the wave-like behavior of matter accounts for why electrons produce diffraction patterns when they pass through the atomic lattices of crystals.
At the quantum level, the effects of measurement are very different from those at the macroscopic level. Because the wave properties of matter are more prominent for small particles, such as electrons, and a wave does not have a specific location, the position and momentum of matter cannot be measured with absolute precision (Essential Knowledge 1.D.3). Rather, the particle has a certain probability of being in a location interval for a specific momentum, or being located within a particular interval of time for a specific energy (Enduring Understanding 7.C, Essential Knowledge 7.C.1). These probabilistic limits on measurement are described by Heisenberg’s uncertainty principle, which connects wave-particle duality to the non-absolute properties of space and time. At the quantum level, measurements affect the system being measured, and so restrict the degree to which properties can be known. The discussion of this probabilistic interpretation supports Big Idea 7 of the AP Physics Curriculum Framework.
The concepts in this chapter support:
Big Idea 1 Objects and systems have properties such as mass and charge. Systems may have internal structure.
Enduring Understanding 1.D Classical mechanics cannot describe all properties of objects.
Essential Knowledge 1.D.1 Objects classically thought of as particles can exhibit properties of waves.
Essential Knowledge 1.D.2 Certain phenomena classically thought of as waves can exhibit properties of particles.
Essential Knowledge 1.D.3 Properties of space and time cannot always be treated as absolute.
Big Idea 5 Changes that occur as a result of interactions are constrained by conservation laws.
Enduring Understanding 5.B The energy of a system is conserved.
Essential Knowledge 5.B.8 Energy transfer occurs when photons are absorbed or emitted, for example, by atoms or nuclei.
Enduring Understanding 5.D The linear momentum of a system is conserved.
Essential Knowledge 5.D.1 In a collision between objects, linear momentum is conserved. In an elastic collision, kinetic energy is the same before and after.
Big Idea 6 Waves can transfer energy and momentum from one location to another without the permanent transfer of mass and serve as a mathematical model for the description of other phenomena.
Enduring Understanding 6.F Electromagnetic radiation can be modeled as waves or as fundamental particles.
Essential Knowledge 6.F.3 Photons are individual energy packets of electromagnetic waves, with Ephoton = hf, where h is Planck’s constant and f is the frequency of the associated light wave.
Essential Knowledge 6.F.4 The nature of light requires that different models of light are most appropriate at different scales.
Enduring Understanding 6.G All matter can be modeled as waves or as particles.
Essential Knowledge 6.G.1 Under certain regimes of energy or distance, matter can be modeled as a classical particle.
Essential Knowledge 6.G.2 Under certain regimes of energy or distance, matter can be modeled as a wave. The behavior in these regimes is described by quantum mechanics.
Big Idea 7. The mathematics of probability can be used to describe the behavior of complex systems and to interpret the behavior of quantum mechanical systems.
Enduring Understanding 7.C At the quantum scale, matter is described by a wave function, which leads to a probabilistic description of the microscopic world.
Essential Knowledge 7.C.1 The probabilistic description of matter is modeled by a wave function, which can be assigned to an object and used to describe its motion and interactions. The absolute value of the wave function is related to the probability of finding a particle in some spatial region. (Qualitative treatment only, using graphical analysis.)
Atoms, molecules, and fundamental electron and proton charges are all examples of physical entities that are quantized—that is, they appear only in certain discrete values and do not have every conceivable value. Quantized is the opposite of continuous. We cannot have a fraction of an atom, or part of an electron’s charge, or 14-1/3 cents, for example. Rather, everything is built of integral multiples of these substructures. Quantum physics is the branch of physics that deals with small objects and the quantization of various entities, including energy and angular momentum. Just as with classical physics, quantum physics has several subfields, such as mechanics and the study of electromagnetic forces. The correspondence principle states that in the classical limit (large, slow-moving objects), quantum mechanics becomes the same as classical physics. In this chapter, we begin the development of quantum mechanics and its description of the strange submicroscopic world. In later chapters, we will examine many areas, such as atomic and nuclear physics, in which quantum mechanics is crucial.
# Quantum Physics
## Quantization of Energy
### Learning Objectives
By the end of this section, you will be able to:
1. Explain Max Planck’s contribution to the development of quantum mechanics.
2. Explain why atomic spectra indicate quantization.
### Planck’s Contribution
Energy is quantized in some systems, meaning that the system can have only certain energies and not a continuum of energies, unlike the classical case. This would be like having only certain speeds at which a car can travel because its kinetic energy can have only certain values. We also find that some forms of energy transfer take place with discrete lumps of energy. While most of us are familiar with the quantization of matter into lumps called atoms, molecules, and the like, we are less aware that energy, too, can be quantized. Some of the earliest clues about the necessity of quantum mechanics over classical physics came from the quantization of energy.
Where is the quantization of energy observed? Let us begin by considering the emission and absorption of electromagnetic (EM) radiation. The EM spectrum radiated by a hot solid is linked directly to the solid’s temperature. (See .) An ideal radiator is one that has an emissivity of 1 at all wavelengths and, thus, is jet black. Ideal radiators are therefore called blackbodies, and their EM radiation is called blackbody radiation. It was discussed that the total intensity of the radiation varies as the fourth power of the absolute temperature of the body, and that the peak of the spectrum shifts to shorter wavelengths at higher temperatures. All of this seems quite continuous, but it was the curve of the spectrum of intensity versus wavelength that gave a clue that the energies of the atoms in the solid are quantized. In fact, providing a theoretical explanation for the experimentally measured shape of the spectrum was a mystery at the turn of the century. When this “ultraviolet catastrophe” was eventually solved, the answers led to new technologies such as computers and the sophisticated imaging techniques described in earlier chapters. Once again, physics as an enabling science changed the way we live.
The German physicist Max Planck (1858–1947) used the idea that atoms and molecules in a body act like oscillators to absorb and emit radiation. The energies of the oscillating atoms and molecules had to be quantized to correctly describe the shape of the blackbody spectrum. Planck deduced that the energy of an oscillator having a frequency $f$ is given by
$$E = \left(n + \frac{1}{2}\right)hf.$$
Here $n$ is any nonnegative integer (0, 1, 2, 3, …). The symbol $h$ stands for Planck’s constant, given by
$$h = 6.626 \times 10^{-34}\ \text{J}\cdot\text{s}.$$
The equation means that an oscillator having a frequency $f$ (emitting and absorbing EM radiation of frequency $f$) can have its energy increase or decrease only in discrete steps of size
$$\Delta E = hf.$$
It might be helpful to mention some macroscopic analogies of this quantization of energy phenomena. This is like a pendulum that has a characteristic oscillation frequency but can swing with only certain amplitudes. Quantization of energy also resembles a standing wave on a string that allows only particular harmonics described by integers. It is also similar to going up and down a hill using discrete stair steps rather than being able to move up and down a continuous slope. Your potential energy takes on discrete values as you move from step to step.
Using the quantization of oscillators, Planck was able to correctly describe the experimentally known shape of the blackbody spectrum. This was the first indication that energy is sometimes quantized on a small scale and earned him the Nobel Prize in Physics in 1918. Although Planck’s theory comes from observations of a macroscopic object, its analysis is based on atoms and molecules. It was such a revolutionary departure from classical physics that Planck himself was reluctant to accept his own idea that energy states are not continuous. The general acceptance of Planck’s energy quantization was greatly enhanced by Einstein’s explanation of the photoelectric effect (discussed in the next section), which took energy quantization a step further. Planck was fully involved in the development of both early quantum mechanics and relativity. He quickly embraced Einstein’s special relativity, published in 1905, and in 1906 Planck was the first to suggest the correct formula for relativistic momentum, $p = \gamma m u$.
Note that Planck’s constant $h$ is a very small number. So for an infrared frequency of $10^{14}\ \text{Hz}$ being emitted by a blackbody, for example, the difference between energy levels is only $\Delta E = hf = (6.63 \times 10^{-34}\ \text{J}\cdot\text{s})(10^{14}\ \text{Hz}) = 6.63 \times 10^{-20}\ \text{J}$, or about 0.4 eV. This 0.4 eV of energy is significant compared with typical atomic energies, which are on the order of an electron volt, or thermal energies, which are typically fractions of an electron volt. But on a macroscopic or classical scale, energies are typically on the order of joules. Even if macroscopic energies are quantized, the quantum steps are too small to be noticed. This is an example of the correspondence principle. For a large object, quantum mechanics produces results indistinguishable from those of classical physics.
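A quick numerical check (a sketch added for illustration) converts the quantum step $\Delta E = hf$ to electron volts for the infrared frequency quoted above and, for contrast, for a hypothetical 1 Hz macroscopic oscillator.

```python
H_JS = 6.626e-34  # Planck's constant in J*s
EV = 1.602e-19    # joules per electron volt

def energy_step_ev(frequency):
    """Quantum step Delta E = h f, expressed in electron volts."""
    return H_JS * frequency / EV

# Infrared oscillator at 1.0e14 Hz (the frequency used as an illustration above)
print(energy_step_ev(1.0e14))   # ~0.4 eV: comparable to atomic energy scales

# A hypothetical 1 Hz macroscopic oscillator: the step is ~4e-15 eV, unnoticeable
print(energy_step_ev(1.0))
```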
### Atomic Spectra
Now let us turn our attention to the emission and absorption of EM radiation by gases. The Sun is the most common example of a body containing gases emitting an EM spectrum that includes visible light. We also see examples in neon signs and candle flames. Studies of emissions of hot gases began more than two centuries ago, and it was soon recognized that these emission spectra contained huge amounts of information. The type of gas and its temperature, for example, could be determined. We now know that these EM emissions come from electrons transitioning between energy levels in individual atoms and molecules; thus, they are called atomic spectra. Atomic spectra remain an important analytical tool today. shows an example of an emission spectrum obtained by passing an electric discharge through a material. One of the most important characteristics of these spectra is that they are discrete. By this we mean that only certain wavelengths, and hence frequencies, are emitted. This is called a line spectrum. If frequency and energy are associated as $\Delta E = hf$, then the energies of the electrons in the emitting atoms and molecules are quantized. This is discussed in more detail later in this chapter.
It was a major puzzle that atomic spectra are quantized. Some of the best minds of 19th-century science failed to explain why this might be. Not until the second decade of the 20th century did an answer based on quantum mechanics begin to emerge. Again a macroscopic or classical body of gas was involved in the studies, but the effect, as we shall see, is due to individual atoms and molecules.
### Test Prep for AP Courses
### Section Summary
1. The first indication that energy is sometimes quantized came from blackbody radiation, which is the emission of EM radiation by an object with an emissivity of 1.
2. Planck recognized that the energy levels of the emitting atoms and molecules were quantized, with only the allowed values of $E = \left(n + \frac{1}{2}\right)hf$, where $n$ is any non-negative integer (0, 1, 2, 3, …).
3. $h$ is Planck’s constant, whose value is $h = 6.626 \times 10^{-34}\ \text{J}\cdot\text{s}$.
4. Thus, the oscillatory absorption and emission energies of atoms and molecules in a blackbody could increase or decrease only in steps of size $\Delta E = hf$, where $f$ is the frequency of the oscillatory nature of the absorption and emission of EM radiation.
5. Another indication of energy levels being quantized in atoms and molecules comes from the lines in atomic spectra, which are the EM emissions of individual atoms and molecules.
### Conceptual Questions
### Problems & Exercises
# Quantum Physics
## The Photoelectric Effect
### Learning Objectives
By the end of this section, you will be able to:
1. Describe a typical photoelectric-effect experiment.
2. Determine the maximum kinetic energy of photoelectrons ejected by photons of one energy or wavelength, when given the maximum kinetic energy of photoelectrons for a different photon energy or wavelength.
When light strikes materials, it can eject electrons from them. This is called the photoelectric effect, meaning that light (photo) produces electricity. One common use of the photoelectric effect is in light meters, such as those that adjust the automatic iris on various types of cameras. In a similar way, another use is in solar cells, as you probably have in your calculator or have seen on a roof top or a roadside sign. These make use of the photoelectric effect to convert light into electricity for running different devices.
This effect has been known for more than a century and can be studied using a device such as that shown in . This figure shows an evacuated tube with a metal plate and a collector wire that are connected by a variable voltage source, with the collector more negative than the plate. When light (or other EM radiation) strikes the plate in the evacuated tube, it may eject electrons. If the electrons have energy in electron volts (eV) greater than the potential difference between the plate and the wire in volts, some electrons will be collected on the wire. Since the electron energy in eV is $qV$, where $q$ is the electron charge and $V$ is the potential difference, the electron energy can be measured by adjusting the retarding voltage between the wire and the plate. The voltage that stops the electrons from reaching the wire equals the energy in eV. For example, if $-3.00\ \text{V}$ barely stops the electrons, their energy is 3.00 eV. The number of electrons ejected can be determined by measuring the current between the wire and plate. The more light, the more electrons; a little circuitry allows this device to be used as a light meter.
What is really important about the photoelectric effect is what Albert Einstein deduced from it. Einstein realized that there were several characteristics of the photoelectric effect that could be explained only if EM radiation is itself quantized: the apparently continuous stream of energy in an EM wave is actually composed of energy quanta called photons. In his explanation of the photoelectric effect, Einstein defined a quantized unit or quantum of EM energy, which we now call a photon, with an energy proportional to the frequency of EM radiation. In equation form, the photon energy is
$$E = hf,$$
where $E$ is the energy of a photon of frequency $f$ and $h$ is Planck’s constant. This revolutionary idea looks similar to Planck’s quantization of energy states in blackbody oscillators, but it is quite different. It is the quantization of EM radiation itself. EM waves are composed of photons and are not continuous smooth waves as described in previous chapters on optics. Their energy is absorbed and emitted in lumps, not continuously. This is exactly consistent with Planck’s quantization of energy levels in blackbody oscillators, since these oscillators increase and decrease their energy in steps of $hf$ by absorbing and emitting photons having $E = hf$. We do not observe this with our eyes, because there are so many photons in common light sources that individual photons go unnoticed. (See .) The next section of the text (Photon Energies and the Electromagnetic Spectrum) is devoted to a discussion of photons and some of their characteristics and implications. For now, we will use the photon concept to explain the photoelectric effect, much as Einstein did.
The photoelectric effect has the properties discussed below. All these properties are consistent with the idea that individual photons of EM radiation are absorbed by individual electrons in a material, with the electron gaining the photon’s energy. Some of these properties are inconsistent with the idea that EM radiation is a simple wave. For simplicity, let us consider what happens with monochromatic EM radiation in which all photons have the same energy $hf$.
1. If we vary the frequency of the EM radiation falling on a material, we find the following: For a given material, there is a threshold frequency for the EM radiation below which no electrons are ejected, regardless of intensity. Individual photons interact with individual electrons. Thus if the photon energy is too small to break an electron away, no electrons will be ejected. If EM radiation was a simple wave, sufficient energy could be obtained by increasing the intensity.
2. Once EM radiation falls on a material, electrons are ejected without delay. As soon as an individual photon of a sufficiently high frequency is absorbed by an individual electron, the electron is ejected. If the EM radiation were a simple wave, several minutes would be required for sufficient energy to be deposited to the metal surface to eject an electron.
3. The number of electrons ejected per unit time is proportional to the intensity of the EM radiation and to no other characteristic. High-intensity EM radiation consists of large numbers of photons per unit area, with all photons having the same characteristic energy .
4. If we vary the intensity of the EM radiation and measure the energy of ejected electrons, we find the following: The maximum kinetic energy of ejected electrons is independent of the intensity of the EM radiation. Since there are so many electrons in a material, it is extremely unlikely that two photons will interact with the same electron at the same time, thereby increasing the energy given it. Instead (as noted in 3 above), increased intensity results in more electrons of the same energy being ejected. If EM radiation were a simple wave, a higher intensity could give more energy, and higher-energy electrons would be ejected.
5. The kinetic energy of an ejected electron equals the photon energy minus the binding energy of the electron in the specific material. An individual photon can give all of its energy to an electron. The photon’s energy is partly used to break the electron away from the material. The remainder goes into the ejected electron’s kinetic energy. In equation form, this is given by
   $$KE_e = hf - \text{BE},$$
   where $KE_e$ is the maximum kinetic energy of the ejected electron, $hf$ is the photon’s energy, and BE is the binding energy of the electron to the particular material. (BE is sometimes called the work function of the material.) This equation, due to Einstein in 1905, explains the properties of the photoelectric effect quantitatively. An individual photon of EM radiation (it does not come any other way) interacts with an individual electron, supplying enough energy, BE, to break it away, with the remainder going to kinetic energy. The binding energy is $\text{BE} = hf_0$, where $f_0$ is the threshold frequency for the particular material. shows a graph of maximum $KE_e$ versus the frequency of incident EM radiation falling on a particular material. A short numerical sketch follows this list.
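As a numerical illustration (the frequency and binding energy below are assumed values, not data from this section), the sketch evaluates $KE_e = hf - \text{BE}$ and returns zero below the threshold frequency.

```python
H_EV = 4.14e-15  # Planck's constant in eV*s

def max_kinetic_energy_ev(frequency, binding_energy_ev):
    """Photoelectric effect: KE_max = h f - BE (returns 0 below threshold)."""
    ke = H_EV * frequency - binding_energy_ev
    return max(ke, 0.0)

# Illustrative numbers (assumptions): UV light at 1.0e15 Hz on a metal with BE = 2.3 eV
print(max_kinetic_energy_ev(1.0e15, 2.3))   # ~1.84 eV ejected-electron energy

# Below the threshold frequency f0 = BE/h, no electrons are ejected:
print(max_kinetic_energy_ev(4.0e14, 2.3))   # 0.0
```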
Einstein’s idea that EM radiation is quantized was crucial to the beginnings of quantum mechanics. It is a far more general concept than its explanation of the photoelectric effect might imply. All EM radiation can also be modeled in the form of photons, and the characteristics of EM radiation are entirely consistent with this fact. (As we will see in the next section, many aspects of EM radiation, such as the hazards of ultraviolet (UV) radiation, can be explained only by photon properties.) More famous for modern relativity, Einstein planted an important seed for quantum mechanics in 1905, the same year he published his first paper on special relativity. His explanation of the photoelectric effect was the basis for the Nobel Prize awarded to him in 1921. Although his other contributions to theoretical physics were also noted in that award, special and general relativity were not fully recognized in spite of having been partially verified by experiment by 1921. Although hero-worshipped, this great man never received Nobel recognition for his most famous work—relativity.
### Test Prep for AP Courses
### Section Summary
1. The photoelectric effect is the process in which EM radiation ejects electrons from a material.
2. Einstein proposed photons to be quanta of EM radiation having energy $E = hf$, where $f$ is the frequency of the radiation.
3. All EM radiation is composed of photons. As Einstein explained, all characteristics of the photoelectric effect are due to the interaction of individual photons with individual electrons.
4. The maximum kinetic energy of ejected electrons (photoelectrons) is given by $KE_e = hf - \text{BE}$, where $hf$ is the photon energy and BE is the binding energy (or work function) of the electron to the particular material.
### Conceptual Questions
### Problems & Exercises
# Quantum Physics
## Photon Energies and the Electromagnetic Spectrum
### Learning Objectives
By the end of this section, you will be able to:
1. Explain the relationship between the energy of a photon in joules or electron volts and its wavelength or frequency.
2. Calculate the number of photons per second emitted by a monochromatic source of specific wavelength and power.
### Ionizing Radiation
A photon is a quantum of EM radiation. Its energy is given by $E = hf$ and is related to the frequency $f$ and wavelength $\lambda$ of the radiation by
$$E = hf = \frac{hc}{\lambda},$$
where $E$ is the energy of a single photon and $c$ is the speed of light. When working with small systems, energy in eV is often useful. Note that Planck’s constant in these units is
$$h = 4.14 \times 10^{-15}\ \text{eV}\cdot\text{s}.$$
Since many wavelengths are stated in nanometers (nm), it is also useful to know that
$$hc = 1240\ \text{eV}\cdot\text{nm}.$$
These will make many calculations a little easier.
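Both learning objectives for this section can be illustrated with a few lines of Python; the wavelengths and the 1.0 mW source power below are assumed example values, not figures from the text.

```python
HC_EV_NM = 1240.0  # h*c in eV*nm (to three significant figures)

def photon_energy_ev(wavelength_nm):
    """Photon energy E = hc / lambda, with the wavelength in nanometers."""
    return HC_EV_NM / wavelength_nm

def photons_per_second(power_watts, wavelength_nm):
    """Number of photons emitted each second by a monochromatic source."""
    energy_joules = photon_energy_ev(wavelength_nm) * 1.602e-19
    return power_watts / energy_joules

# Illustrative values (assumptions): red (700 nm) and violet (400 nm) light,
# and a 1.0 mW source at 650 nm
print(photon_energy_ev(700), photon_energy_ev(400))  # ~1.77 eV and ~3.10 eV
print(photons_per_second(1.0e-3, 650))               # ~3e15 photons per second
```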
All EM radiation is composed of photons. shows various divisions of the EM spectrum plotted against wavelength, frequency, and photon energy. Previously in this book, photon characteristics were alluded to in the discussion of some of the characteristics of UV, x rays, and $\gamma$ rays, the first of which start with frequencies just above violet in the visible spectrum. It was noted that these types of EM radiation have characteristics much different than visible light. We can now see that such properties arise because photon energy is larger at high frequencies.
Photons act as individual quanta and interact with individual electrons, atoms, molecules, and so on. The energy a photon carries is, thus, crucial to the effects it has. lists representative submicroscopic energies in eV. When we compare photon energies from the EM spectrum in with energies in the table, we can see how effects vary with the type of EM radiation.
Gamma rays, a form of nuclear and cosmic EM radiation, can have the highest frequencies and, hence, the highest photon energies in the EM spectrum. For example, a $\gamma$-ray photon with $f = 10^{21}\ \text{Hz}$ has an energy $E = hf = 6.63 \times 10^{-13}\ \text{J} = 4.14\ \text{MeV}$. This is sufficient energy to ionize thousands of atoms and molecules, since only 10 to 1000 eV are needed per ionization. In fact, $\gamma$ rays are one type of ionizing radiation, as are x rays and UV, because they produce ionization in materials that absorb them. Because so much ionization can be produced, a single $\gamma$-ray photon can cause significant damage to biological tissue, killing cells or damaging their ability to properly reproduce. When cell reproduction is disrupted, the result can be cancer, one of the known effects of exposure to ionizing radiation. Since cancer cells are rapidly reproducing, they are exceptionally sensitive to the disruption produced by ionizing radiation. This means that ionizing radiation has positive uses in cancer treatment as well as risks in producing cancer.
High photon energy also enables $\gamma$ rays to penetrate materials, since a collision with a single atom or molecule is unlikely to absorb all the $\gamma$ ray’s energy. This can make $\gamma$ rays useful as a probe, and they are sometimes used in medical imaging. X rays, as you can see in , overlap with the low-frequency end of the $\gamma$-ray range. Since x rays have energies of keV and up, individual x-ray photons also can produce large amounts of ionization. At lower photon energies, x rays are not as penetrating as $\gamma$ rays and are slightly less hazardous. X rays are ideal for medical imaging, their most common use, and a fact that was recognized immediately upon their discovery in 1895 by the German physicist W. C. Roentgen (1845–1923). (See .) Within one year of their discovery, x rays (for a time called Roentgen rays) were used for medical diagnostics. Roentgen received the 1901 Nobel Prize for the discovery of x rays.
While $\gamma$ rays originate in nuclear decay, x rays are produced by the process shown in . Electrons ejected by thermal agitation from a hot filament in a vacuum tube are accelerated through a high voltage, gaining kinetic energy from the electrical potential energy. When they strike the anode, the electrons convert their kinetic energy to a variety of forms, including thermal energy. But since an accelerated charge radiates EM waves, and since the electrons act individually, photons are also produced. Some of these x-ray photons obtain the kinetic energy of the electron. The accelerated electrons originate at the cathode, so such a tube is called a cathode ray tube (CRT), and various versions of them are found in older TV and computer screens as well as in x-ray machines.
shows the spectrum of x rays obtained from an x-ray tube. There are two distinct features to the spectrum. First, the smooth distribution results from electrons being decelerated in the anode material. A curve like this is obtained by detecting many photons, and it is apparent that the maximum energy is unlikely. This decelerating process produces radiation that is called bremsstrahlung (German for braking radiation). The second feature is the existence of sharp peaks in the spectrum; these are called characteristic x rays, since they are characteristic of the anode material. Characteristic x rays come from atomic excitations unique to a given type of anode material. They are akin to lines in atomic spectra, implying the energy levels of atoms are quantized. Phenomena such as discrete atomic spectra and characteristic x rays are explored further in Atomic Physics.
Ultraviolet radiation (approximately 4 eV to 300 eV) overlaps with the low end of the energy range of x rays, but UV is typically lower in energy. UV comes from the de-excitation of atoms that may be part of a hot solid or gas. These atoms can be given energy that they later release as UV by numerous processes, including electric discharge, nuclear explosion, thermal agitation, and exposure to x rays. A UV photon has sufficient energy to ionize atoms and molecules, which makes its effects different from those of visible light. UV thus has some of the same biological effects as $\gamma$ rays and x rays. For example, it can cause skin cancer and is used as a sterilizer. The major difference is that several UV photons are required to disrupt cell reproduction or kill a bacterium, whereas single $\gamma$-ray and x-ray photons can do the same damage. But since UV does have the energy to alter molecules, it can do what visible light cannot. One of the beneficial aspects of UV is that it triggers the production of vitamin D in the skin, whereas visible light has insufficient energy per photon to alter the molecules that trigger this production. Infantile jaundice is treated by exposing the baby to UV (with eye protection), called phototherapy, the beneficial effects of which are thought to be related to its ability to help prevent the buildup of potentially toxic bilirubin in the blood.
### Visible Light
The range of photon energies for visible light from red to violet is 1.63 to 3.26 eV, respectively (left for this chapter’s Problems and Exercises to verify). These energies are on the order of those between outer electron shells in atoms and molecules. This means that these photons can be absorbed by atoms and molecules. A single photon can actually stimulate the retina, for example, by altering a receptor molecule that then triggers a nerve impulse. Photons can be absorbed or emitted only by atoms and molecules that have precisely the correct quantized energy step to do so. For example, if a red photon of frequency f encounters a molecule that has an energy step equal to hf, then the photon can be absorbed. Violet flowers absorb red and reflect violet; this implies there is no energy step between levels in the receptor molecule equal to the violet photon’s energy, but there is an energy step for the red.
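As a quick check of the red-to-violet range quoted above, photon energy follows from E = hc/λ. A minimal sketch, assuming the commonly quoted visible limits of 760 nm (red) and 380 nm (violet):

```python
# Verify the photon energy range of visible light, E = hc / lambda.
# Assumed wavelength limits: 760 nm (red) and 380 nm (violet).
hc = 1240.0  # Planck's constant times c, in the convenient units of eV·nm

for color, wavelength_nm in [("red", 760.0), ("violet", 380.0)]:
    energy_eV = hc / wavelength_nm
    print(f"{color}: {energy_eV:.2f} eV")
# Output: red is about 1.63 eV and violet about 3.26 eV, matching the text.
```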
There are some noticeable differences in the characteristics of light between the two ends of the visible spectrum that are due to photon energies. Red light has insufficient photon energy to expose most black-and-white film, and it is thus used to illuminate darkrooms where such film is developed. Since violet light has a higher photon energy, dyes that absorb violet tend to fade more quickly than those that do not. (See .) Take a look at some faded color posters in a storefront some time, and you will notice that the blues and violets are the last to fade. This is because other dyes, such as red and green dyes, absorb blue and violet photons, the higher energies of which break up their weakly bound molecules. (Complex molecules such as those in dyes and DNA tend to be weakly bound.) Blue and violet dyes reflect those colors and, therefore, do not absorb these more energetic photons, thus suffering less molecular damage.
Transparent materials, such as some glasses, do not absorb any visible light, because there is no energy step in the atoms or molecules that could absorb the light. Since individual photons interact with individual atoms, it is nearly impossible to have two photons absorbed simultaneously to reach a large energy step. Because of its lower photon energy, visible light can sometimes pass through many kilometers of a substance, while higher frequencies like UV, x ray, and rays are absorbed, because they have sufficient photon energy to ionize the material.
### Lower-Energy Photons
Infrared radiation (IR) has even lower photon energies than visible light and cannot significantly alter atoms and molecules. IR can be absorbed and emitted by atoms and molecules, particularly between closely spaced states. IR is extremely strongly absorbed by water, for example, because water molecules have many states separated by energies that fall well within the IR and microwave energy ranges. This is why in the IR range, skin is almost jet black, with an emissivity near 1—there are many states in water molecules in the skin that can absorb a large range of IR photon energies. Not all molecules have this property. Air, for example, is nearly transparent to many IR frequencies.
Microwaves are the highest frequencies that can be produced by electronic circuits, although they are also produced naturally. Thus microwaves are similar to IR but do not extend to as high frequencies. There are states in water and other molecules that have the same frequency and energy as microwaves. This is one reason why food absorbs microwaves more strongly than many other materials, making microwave ovens an efficient way of putting energy directly into food.
Photon energies for both IR and microwaves are so low that huge numbers of photons are involved in any significant energy transfer by IR or microwaves (such as warming yourself with a heat lamp or cooking pizza in the microwave). Visible light, IR, microwaves, and all lower frequencies cannot produce ionization with single photons and do not ordinarily have the hazards of higher frequencies. When visible, IR, or microwave radiation is hazardous, such as the inducement of cataracts by microwaves, the hazard is due to huge numbers of photons acting together (not to an accumulation of photons, such as sterilization by weak UV). The negative effects of visible, IR, or microwave radiation can be thermal effects, which could be produced by any heat source. But one difference is that at very high intensity, strong electric and magnetic fields can be produced by photons acting together. Such electromagnetic fields (EMF) can actually ionize materials.
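To see just how many photons are involved, consider a microwave oven. The sketch below assumes a 1000-W oven operating at 2.45 GHz; both numbers are typical but assumed here, not taken from the text.

```python
# Estimate how many microwave photons per second a microwave oven emits.
# Assumptions: 1000 W of microwave power at 2.45 GHz (illustrative values).
h = 6.626e-34          # Planck's constant, J·s

power = 1000.0         # W (assumed)
frequency = 2.45e9     # Hz (assumed)

photon_energy = h * frequency          # energy per photon, J
photons_per_second = power / photon_energy

print(f"Energy per photon: {photon_energy:.2e} J")
print(f"Photons per second: {photons_per_second:.1e}")
# Roughly 6 x 10^26 photons each second, so individual photons are undetectable here.
```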
It is virtually impossible to detect individual photons having frequencies below microwave frequencies, because of their low photon energy. But the photons are there. A continuous EM wave can be modeled as photons. At low frequencies, EM waves are generally treated as time- and position-varying electric and magnetic fields with no discernible quantization. This is another example of the correspondence principle in situations involving huge numbers of photons.
### Test Prep for AP Courses
### Section Summary
1. Photon energy is responsible for many characteristics of EM radiation, being particularly noticeable at high frequencies.
2. Photons have both wave and particle characteristics.
### Conceptual Questions
### Problems & Exercises
|
# Quantum Physics
## Photon Momentum
### Learning Objectives
By the end of this section, you will be able to:
1. Relate the linear momentum of a photon to its energy or wavelength, and apply linear momentum conservation to simple processes involving the emission, absorption, or reflection of photons.
2. Account qualitatively for the increase of photon wavelength that is observed, and explain the significance of the Compton wavelength.
### Measuring Photon Momentum
The quantum of EM radiation we call a photon has properties analogous to those of particles we can see, such as grains of sand. A photon interacts as a unit in collisions or when absorbed, rather than as an extensive wave. Massive quanta, like electrons, also act like macroscopic particles—something we expect, because they are the smallest units of matter. Particles carry momentum as well as energy. Despite photons having no mass, there has long been evidence that EM radiation carries momentum. (Maxwell and others who studied EM waves predicted that they would carry momentum.) It is now a well-established fact that photons do have momentum. In fact, photon momentum is suggested by the photoelectric effect, where photons knock electrons out of a substance. shows macroscopic evidence of photon momentum.
shows a comet with two prominent tails. What most people do not know about the tails is that they always point away from the Sun rather than trailing behind the comet (like the tail of Bo Peep’s sheep). Comet tails are composed of gases and dust evaporated from the body of the comet and ionized gas. The dust particles recoil away from the Sun when photons scatter from them. Evidently, photons carry momentum in the direction of their motion (away from the Sun), and some of this momentum is transferred to dust particles in collisions. Gas atoms and molecules in the blue tail are most affected by other particles of radiation, such as protons and electrons emanating from the Sun, rather than by the momentum of photons.
Momentum is conserved in quantum mechanics just as it is in relativity and classical physics. Some of the earliest direct experimental evidence of this came from scattering of x-ray photons by electrons in substances, named Compton scattering after the American physicist Arthur H. Compton (1892–1962). Around 1923, Compton observed that x rays scattered from materials had a decreased energy and correctly analyzed this as being due to the scattering of photons from electrons. This phenomenon could be handled as a collision between two particles—a photon and an electron at rest in the material. Energy and momentum are conserved in the collision. (See .) He won a Nobel Prize in 1927 for the discovery of this scattering, now called the Compton effect, because it helped prove that photon momentum is given by
p = h/λ,
where h is Planck’s constant and λ is the photon wavelength. (Note that relativistic momentum given as p = γmu is valid only for particles having mass.)
We can see that photon momentum is small, since p = h/λ and h is very small. It is for this reason that we do not ordinarily observe photon momentum. Our mirrors do not recoil when light reflects from them (except perhaps in cartoons). Compton saw the effects of photon momentum because he was observing x rays, which have a small wavelength and a relatively large momentum, interacting with the lightest of particles, the electron.
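A quick numerical comparison makes the point. The sketch below computes p = h/λ for an x-ray photon and a visible-light photon; the wavelengths are illustrative choices, not values from the text.

```python
# Compare photon momenta, p = h / lambda, for an x ray and visible light.
# Assumed wavelengths: 0.05 nm (x ray) and 500 nm (visible), illustrative only.
h = 6.626e-34  # Planck's constant, J·s

for name, wavelength_m in [("x ray", 0.05e-9), ("visible", 500e-9)]:
    p = h / wavelength_m
    print(f"{name}: p = {p:.2e} kg·m/s")
# The x-ray photon carries about 10,000 times the momentum of the visible photon,
# yet both are tiny compared with the momentum of everyday objects.
```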
### Relativistic Photon Momentum
There is a relationship between photon momentum and photon energy that is consistent with the relation given previously for the relativistic total energy of a particle as E² = (pc)² + (mc²)². We know m is zero for a photon, but p is not, so that E² = (pc)² + (mc²)²
becomes E = pc,
or p = E/c (for photons).
To check the validity of this relation, note that E = hc/λ for a photon. Substituting this into p = E/c yields p = h/λ,
as determined experimentally and discussed above. Thus, p = E/c is equivalent to Compton’s result p = h/λ. For a further verification of the relationship between photon energy and momentum, see .
### Test Prep for AP Courses
### Section Summary
1. Photons have momentum, given by p = h/λ, where λ is the photon wavelength.
2. Photon energy and momentum are related by p = E/c, where E = hf = hc/λ for a photon.
### Conceptual Questions
### Problems & Exercises
|
# Quantum Physics
## The Particle-Wave Duality
### Learning Objectives
By the end of this section, you will be able to:
1. Explain what the term particle-wave duality means, and why it is applied to EM radiation.
We have long known that EM radiation is a wave, capable of interference and diffraction. We now see that light can be modeled as photons, which are massless particles. This may seem contradictory, since we ordinarily deal with large objects that never act like both wave and particle. An ocean wave, for example, looks nothing like a rock. To understand small-scale phenomena, we make analogies with the large-scale phenomena we observe directly. When we say something behaves like a wave, we mean it shows interference effects analogous to those seen in overlapping water waves. (See .) Two examples of waves are sound and EM radiation. When we say something behaves like a particle, we mean that it interacts as a discrete unit with no interference effects. Examples of particles include electrons, atoms, and photons of EM radiation. How do we talk about a phenomenon that acts like both a particle and a wave?
There is no doubt that EM radiation interferes and has the properties of wavelength and frequency. There is also no doubt that it behaves as particles—photons with discrete energy. We call this twofold nature the particle-wave duality, meaning that EM radiation has both particle and wave properties. This so-called duality is simply a term for properties of the photon analogous to phenomena we can observe directly, on a macroscopic scale. If this term seems strange, it is because we do not ordinarily observe details on the quantum level directly, and our observations yield either particle or wavelike properties, but never both simultaneously.
Since we have a particle-wave duality for photons, and since we have seen connections between photons and matter in that both have momentum, it is reasonable to ask whether there is a particle-wave duality for matter as well. If the EM radiation we once thought to be a pure wave has particle properties, is it possible that matter has wave properties? The answer is yes. The consequences are tremendous, as we will begin to see in the next section.
### Test Prep for AP Courses
### Section Summary
1. EM radiation can behave like either a particle or a wave.
2. This is termed particle-wave duality. |
# Quantum Physics
## The Wave Nature of Matter
### Learning Objectives
By the end of this section, you will be able to:
1. Describe the Davisson-Germer experiment, and explain how it provides evidence for the wave nature of electrons.
### De Broglie Wavelength
In 1923 a French physics graduate student named Prince Louis-Victor de Broglie (1892–1987) made a radical proposal based on the hope that nature is symmetric. If EM radiation has both particle and wave properties, then nature would be symmetric if matter also had both particle and wave properties. If what we once thought of as an unequivocal wave (EM radiation) is also a particle, then what we think of as an unequivocal particle (matter) may also be a wave. De Broglie’s suggestion, made as part of his doctoral thesis, was so radical that it was greeted with some skepticism. A copy of his thesis was sent to Einstein, who said it was not only probably correct, but that it might be of fundamental importance. With the support of Einstein and a few other prominent physicists, de Broglie was awarded his doctorate.
De Broglie took both relativity and quantum mechanics into account to develop the proposal that all particles have a wavelength, given by
λ = h/p,
where h is Planck’s constant and p is momentum. This is defined to be the de Broglie wavelength. (Note that we already have this for photons, from the equation p = h/λ.) The hallmark of a wave is interference. If matter is a wave, then it must exhibit constructive and destructive interference. Why isn’t this ordinarily observed? The answer is that in order to see significant interference effects, a wave must interact with an object about the same size as its wavelength. Since h is very small, λ is also small, especially for macroscopic objects. A 3-kg bowling ball moving at 10 m/s, for example, has λ = h/(mv) = (6.63 × 10⁻³⁴ J·s)/[(3 kg)(10 m/s)] ≈ 2 × 10⁻³⁵ m. This means that to see its wave characteristics, the bowling ball would have to interact with something about 10⁻³⁵ m in size—far smaller than anything known. When waves interact with objects much larger than their wavelength, they show negligible interference effects and move in straight lines (such as light rays in geometric optics). To get easily observed interference effects from particles of matter, the longest wavelength and hence smallest mass possible would be useful. Therefore, this effect was first observed with electrons.
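The contrast between a bowling ball and an electron is easy to make quantitative with λ = h/(mv). A minimal sketch, using the 3-kg, 10-m/s bowling ball from the text and an electron at an assumed speed of 1.0 × 10⁶ m/s:

```python
# De Broglie wavelengths, lambda = h / (m v).
h = 6.626e-34        # Planck's constant, J·s
m_e = 9.11e-31       # electron mass, kg

# Bowling-ball values are from the text; the electron speed is an assumed example.
cases = [("bowling ball", 3.0, 10.0), ("electron", m_e, 1.0e6)]

for name, mass, speed in cases:
    wavelength = h / (mass * speed)
    print(f"{name}: lambda = {wavelength:.2e} m")
# The ball's wavelength (~2e-35 m) is hopelessly unobservable; the electron's
# (~7e-10 m) is comparable to atomic spacings, so diffraction can be seen.
```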
American physicists Clinton J. Davisson and Lester H. Germer in 1925 and, independently, British physicist G. P. Thomson (son of J. J. Thomson, discoverer of the electron) in 1926 scattered electrons from crystals and found diffraction patterns. These patterns are exactly consistent with interference of electrons having the de Broglie wavelength and are somewhat analogous to light interacting with a diffraction grating. (See .)
De Broglie’s proposal of a wave nature for all particles initiated a remarkably productive era in which the foundations for quantum mechanics were laid. In 1926, the Austrian physicist Erwin Schrödinger (1887–1961) published four papers in which the wave nature of particles was treated explicitly with wave equations. At the same time, many others began important work. Among them was German physicist Werner Heisenberg (1901–1976) who, among many other contributions to quantum mechanics, formulated a mathematical treatment of the wave nature of matter that used matrices rather than wave equations. We will deal with some specifics in later sections, but it is worth noting that de Broglie’s work was a watershed for the development of quantum mechanics. De Broglie was awarded the Nobel Prize in 1929 for his vision, as were Davisson and G. P. Thomson in 1937 for their experimental verification of de Broglie’s hypothesis.
### Electron Microscopes
One consequence or use of the wave nature of matter is found in the electron microscope. As we have discussed, there is a limit to the detail observed with any probe having a wavelength. Resolution, or observable detail, is limited to about one wavelength. Since a potential of only 54 V can produce electrons with sub-nanometer wavelengths, it is easy to get electrons with much smaller wavelengths than those of visible light (hundreds of nanometers). Electron microscopes can, thus, be constructed to detect much smaller details than optical microscopes. (See .)
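The 54-V figure quoted above is easy to check. For a nonrelativistic electron accelerated through a potential difference V, the kinetic energy is q_eV, the momentum is p = √(2m_e q_e V), and the de Broglie wavelength is λ = h/p. A sketch under those assumptions (the 10-kV case is an added illustrative value):

```python
# De Broglie wavelength of an electron accelerated through a voltage V
# (nonrelativistic approximation, valid for the low voltages used here).
import math

h = 6.626e-34      # Planck's constant, J·s
m_e = 9.11e-31     # electron mass, kg
q_e = 1.602e-19    # elementary charge, C

def electron_wavelength(V):
    p = math.sqrt(2.0 * m_e * q_e * V)   # momentum from KE = q_e * V
    return h / p                          # de Broglie wavelength, m

for V in (54.0, 10e3):   # 54 V from the text; 10 kV is an assumed TEM-like value
    print(f"V = {V:>8.0f} V -> lambda = {electron_wavelength(V) * 1e9:.4f} nm")
# 54 V gives about 0.17 nm, already far below visible-light wavelengths.
```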
There are basically two types of electron microscopes. The transmission electron microscope (TEM) accelerates electrons that are emitted from a hot filament (the cathode). The beam is broadened and then passes through the sample. A magnetic lens focuses the beam image onto a fluorescent screen, a photographic plate, or (most probably) a CCD (light-sensitive camera), from which it is transferred to a computer. The TEM is similar to the optical microscope, but it requires a thin sample examined in a vacuum. However, it can resolve details as small as 0.1 nm, providing magnifications of 100 million times the size of the original object. The TEM has allowed us to see individual atoms and the structure of cell nuclei.
The scanning electron microscope (SEM) provides images by using secondary electrons produced by the primary beam interacting with the surface of the sample (see ). The SEM also uses magnetic lenses to focus the beam onto the sample. However, it moves the beam around electrically to “scan” the sample in the x and y directions. A CCD detector is used to process the data for each electron position, producing images like the one at the beginning of this chapter. The SEM has the advantage of not requiring a thin sample and of providing a 3-D view. However, its resolution is about ten times less than a TEM.
Electrons were the first particles with mass to be directly confirmed to have the wavelength proposed by de Broglie. Subsequently, protons, helium nuclei, neutrons, and many others have been observed to exhibit interference when they interact with objects having sizes similar to their de Broglie wavelength. The de Broglie wavelength for massless particles was well established in the 1920s for photons, and it has since been observed that all massless particles have a de Broglie wavelength λ = h/p. The wave nature of all particles is a universal characteristic of nature. We shall see in following sections that implications of the de Broglie wavelength include the quantization of energy in atoms and molecules, and an alteration of our basic view of nature on the microscopic scale. The next section, for example, shows that there are limits to the precision with which we may make predictions, regardless of how hard we try. There are even limits to the precision with which we may measure an object’s location or energy.
### Test Prep for AP Courses
### Section Summary
1. Particles of matter also have a wavelength, called the de Broglie wavelength, given by λ = h/p, where p is momentum.
2. Matter is found to have the same interference characteristics as any other wave.
### Conceptual Questions
### Problems & Exercises
|
# Quantum Physics
## Probability: The Heisenberg Uncertainty Principle
### Learning Objectives
By the end of this section, you will be able to:
1. Use both versions of Heisenberg’s uncertainty principle in calculations.
2. Explain the implications of Heisenberg’s uncertainty principle for measurements.
### Probability Distribution
Matter and photons are waves, implying they are spread out over some distance. What is the position of a particle, such as an electron? Is it at the center of the wave? The answer lies in how you measure the position of an electron. Experiments show that you will find the electron at some definite location, unlike a wave. But if you set up exactly the same situation and measure it again, you will find the electron in a different location, often far outside any experimental uncertainty in your measurement. Repeated measurements will display a statistical distribution of locations that appears wavelike. (See .)
After de Broglie proposed the wave nature of matter, many physicists, including Schrödinger and Heisenberg, explored the consequences. The idea quickly emerged that, because of its wave character, a particle’s trajectory and destination cannot be precisely predicted for each particle individually. However, each particle goes to a definite place (as illustrated in ). After compiling enough data, you get a distribution related to the particle’s wavelength and diffraction pattern. There is a certain probability of finding the particle at a given location, and the overall pattern is called a probability distribution. Those who developed quantum mechanics devised equations that predicted the probability distribution in various circumstances.
It is somewhat disquieting to think that you cannot predict exactly where an individual particle will go, or even follow it to its destination. Let us explore what happens if we try to follow a particle. Consider the double-slit patterns obtained for electrons and photons in . First, we note that these patterns are identical, following d sin θ = mλ, the equation for double-slit constructive interference developed in Photon Energies and the Electromagnetic Spectrum, where d is the slit separation and λ is the electron or photon wavelength.
Both patterns build up statistically as individual particles fall on the detector. This can be observed for photons or electrons—for now, let us concentrate on electrons. You might imagine that the electrons are interfering with one another as any waves do. To test this, you can lower the intensity until there is never more than one electron between the slits and the screen. The same interference pattern builds up! This implies that a particle’s probability distribution spans both slits, and the particles actually interfere with themselves. Does this also mean that the electron goes through both slits? An electron is a basic unit of matter that is not divisible. But it is a fair question, and so we should look to see if the electron traverses one slit or the other, or both. One possibility is to have coils around the slits that detect charges moving through them. What is observed is that an electron always goes through one slit or the other; it does not split to go through both. But there is a catch. If you determine that the electron went through one of the slits, you no longer get a double slit pattern—instead, you get single slit interference. There is no escape by using another method of determining which slit the electron went through. Knowing the particle went through one slit forces a single-slit pattern. If you do not observe which slit the electron goes through, you obtain a double-slit pattern.
### Heisenberg Uncertainty
How does knowing which slit the electron passed through change the pattern? The answer is fundamentally important—measurement affects the system being observed. Information can be lost, and in some cases it is impossible to measure two physical quantities simultaneously to exact precision. For example, you can measure the position of a moving electron by scattering light or other electrons from it. Those probes have momentum themselves, and by scattering from the electron, they change its momentum in a manner that loses information. There is a limit to absolute knowledge, even in principle.
It was Werner Heisenberg who first stated this limit to knowledge in 1927 as a result of his work on quantum mechanics and the wave characteristics of all particles. (See ). Specifically, consider simultaneously measuring the position and momentum of an electron (it could be any particle). There is an uncertainty in position Δx that is approximately equal to the wavelength of the particle. That is,
Δx ≈ λ.
As discussed above, a wave is not located at one point in space. If the electron’s position is measured repeatedly, a spread in locations will be observed, implying an uncertainty in position Δx. To detect the position of the particle, we must interact with it, such as having it collide with a detector. In the collision, the particle will lose momentum. This change in momentum could be anywhere from close to zero to the total momentum of the particle, p = h/λ. It is not possible to tell how much momentum will be transferred to a detector, and so there is an uncertainty in momentum Δp, too. In fact, the uncertainty in momentum may be as large as the momentum itself, which in equation form means that
Δp ≈ h/λ.
The uncertainty in position can be reduced by using a shorter-wavelength electron, since Δx ≈ λ. But shortening the wavelength increases the uncertainty in momentum, since Δp ≈ h/λ. Conversely, the uncertainty in momentum can be reduced by using a longer-wavelength electron, but this increases the uncertainty in position. Mathematically, you can express this trade-off by multiplying the uncertainties. The wavelength cancels, leaving
Δx Δp ≈ (λ)(h/λ) = h.
So if one uncertainty is reduced, the other must increase so that their product is approximately h.
With the use of advanced mathematics, Heisenberg showed that the best that can be done in a simultaneous measurement of position and momentum is
Δx Δp ≥ h/4π.
This is known as the Heisenberg uncertainty principle. It is impossible to measure position and momentum simultaneously with uncertainties Δx and Δp that multiply to be less than h/4π. Neither uncertainty can be zero. Neither uncertainty can become small without the other becoming large. A small wavelength allows accurate position measurement, but it increases the momentum of the probe to the point that it further disturbs the momentum of a system being measured. For example, if an electron is scattered from an atom and has a wavelength small enough to detect the position of electrons in the atom, its momentum can knock the electrons from their orbits in a manner that loses information about their original motion. It is therefore impossible to follow an electron in its orbit around an atom. If you measure the electron’s position, you will find it in a definite location, but the atom will be disrupted. Repeated measurements on identical atoms will produce interesting probability distributions for electrons around the atom, but they will not produce motion information. The probability distributions are referred to as electron clouds or orbitals. The shapes of these orbitals are often shown in general chemistry texts and are discussed in The Wave Nature of Matter Causes Quantization.
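As a numerical illustration of the trade-off, suppose an electron is known to be somewhere within an atom-sized region. A minimal sketch, assuming Δx = 1.0 × 10⁻¹⁰ m (an illustrative atomic dimension):

```python
# Minimum momentum uncertainty from Heisenberg's principle,
# delta_x * delta_p >= h / (4*pi), for an electron confined to an atom-sized region.
import math

h = 6.626e-34       # Planck's constant, J·s
m_e = 9.11e-31      # electron mass, kg

delta_x = 1.0e-10   # position uncertainty, m (assumed atomic size)
delta_p = h / (4.0 * math.pi * delta_x)   # minimum momentum uncertainty
delta_v = delta_p / m_e                   # corresponding velocity uncertainty

print(f"delta_p >= {delta_p:.2e} kg·m/s")
print(f"delta_v >= {delta_v:.2e} m/s")
# The velocity uncertainty (~6e5 m/s) is comparable to orbital speeds in atoms,
# which is why an electron's path inside an atom cannot be followed.
```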
Why don’t we notice Heisenberg’s uncertainty principle in everyday life? The answer is that Planck’s constant is very small. Thus the lower limit in the uncertainty of measuring the position and momentum of large objects is negligible. We can detect sunlight reflected from Jupiter and follow the planet in its orbit around the Sun. The reflected sunlight alters the momentum of Jupiter and creates an uncertainty in its momentum, but this is totally negligible compared with Jupiter’s huge momentum. The correspondence principle tells us that the predictions of quantum mechanics become indistinguishable from classical physics for large objects, which is the case here.
### Heisenberg Uncertainty for Energy and Time
There is another form of Heisenberg’s uncertainty principle for simultaneous measurements of energy and time. In equation form,
ΔE Δt ≥ h/4π,
where ΔE is the uncertainty in energy and Δt is the uncertainty in time. This means that within a time interval Δt, it is not possible to measure energy precisely—there will be an uncertainty ΔE in the measurement. In order to measure energy more precisely (to make ΔE smaller), we must increase Δt. This time interval may be the amount of time we take to make the measurement, or it could be the amount of time a particular state exists, as in the next example.
The uncertainty principle for energy and time can be of great significance if the lifetime of a system is very short. Then Δt is very small, and ΔE is consequently very large. Some nuclei and exotic particles have extremely short lifetimes, causing uncertainties in energy as great as many GeV (10⁹ eV). Stored energy appears as increased rest mass, and so this means that there is significant uncertainty in the rest mass of short-lived particles. When measured repeatedly, a spread of masses or decay energies is obtained. The spread is ΔE. You might ask whether this uncertainty in energy could be avoided by not measuring the lifetime. The answer is no. Nature knows the lifetime, and so its brevity affects the energy of the particle. This is so well established experimentally that the uncertainty in decay energy is used to calculate the lifetime of short-lived states. Some nuclei and particles are so short-lived that it is difficult to measure their lifetime. But if their decay energy can be measured, its spread is ΔE, and this is used in the uncertainty principle (ΔE Δt ≥ h/4π) to calculate the lifetime Δt.
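The sketch below runs this logic in the direction actually used in practice: from a measured spread in decay energy to a lifetime, using Δt ≥ h/(4π ΔE). The 1-GeV spread is an assumed, illustrative value.

```python
# Lifetime of a short-lived state from the spread in its decay energy,
# using delta_E * delta_t >= h / (4*pi).
import math

h = 6.626e-34        # Planck's constant, J·s
eV = 1.602e-19       # joules per electron volt

delta_E = 1.0e9 * eV                     # assumed 1-GeV energy spread, in J
delta_t = h / (4.0 * math.pi * delta_E)  # corresponding minimum lifetime

print(f"delta_t >= {delta_t:.1e} s")
# About 3e-25 s, consistent with the extremely short lifetimes discussed above.
```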
There is another consequence of the uncertainty principle for energy and time. If energy is uncertain by ΔE, then conservation of energy can be violated by ΔE for a time Δt. Neither the physicist nor nature can tell that conservation of energy has been violated, if the violation is temporary and smaller than the uncertainty in energy. While this sounds innocuous enough, we shall see in later chapters that it allows the temporary creation of matter from nothing and has implications for how nature transmits forces over very small distances.
Finally, note that in the discussion of particles and waves, we have stated that individual measurements produce precise or particle-like results. A definite position is determined each time we observe an electron, for example. But repeated measurements produce a spread in values consistent with wave characteristics. The great theoretical physicist Richard Feynman (1918–1988) commented, “What there are, are particles.” When you observe enough of them, they distribute themselves as you would expect for a wave phenomenon. However, what there are as they travel we cannot tell because, when we do try to measure, we affect the traveling.
### Section Summary
1. Matter is found to have the same interference characteristics as any other wave.
2. There is now a probability distribution for the location of a particle rather than a definite position.
3. Another consequence of the wave character of all particles is the Heisenberg uncertainty principle, which limits the precision with which certain physical quantities can be known simultaneously. For position and momentum, the uncertainty principle is Δx Δp ≥ h/4π, where Δx is the uncertainty in position and Δp is the uncertainty in momentum.
4. For energy and time, the uncertainty principle is ΔE Δt ≥ h/4π, where ΔE is the uncertainty in energy and Δt is the uncertainty in time.
5. These small limits are fundamentally important on the quantum-mechanical scale.
### Conceptual Questions
### Problems & Exercises
|
# Quantum Physics
## The Particle-Wave Duality Reviewed
### Learning Objectives
By the end of this section, you will be able to:
1. Explain the concept of particle-wave duality, and its scope.
Particle-wave duality—the fact that all particles have wave properties—is one of the cornerstones of quantum mechanics. We first came across it in the treatment of photons, those particles of EM radiation that exhibit both particle and wave properties, but not at the same time. Later it was noted that particles of matter have wave properties as well. The dual properties of particles and waves are found for all particles, whether massless like photons, or having a mass like electrons. (See .)
There are many submicroscopic particles in nature. Most have mass and are expected to act as particles, or the smallest units of matter. All these masses have wave properties, with wavelengths given by the de Broglie relationship λ = h/p. So, too, do combinations of these particles, such as nuclei, atoms, and molecules. As a combination of masses becomes large, particularly if it is large enough to be called macroscopic, its wave nature becomes difficult to observe. This is consistent with our common experience with matter.
Some particles in nature are massless. We have only treated the photon so far, but all massless entities travel at the speed of light, have a wavelength, and exhibit particle and wave behaviors. They have momentum given by a rearrangement of the de Broglie relationship, p = h/λ. In large combinations of these massless particles (such large combinations are common only for photons or EM waves), there is mostly wave behavior upon detection, and the particle nature becomes difficult to observe. This is also consistent with experience. (See .)
The particle-wave duality is a universal attribute. It is another connection between matter and energy. Not only has modern physics been able to describe nature for high speeds and small sizes, it has also discovered new connections and symmetries. There is greater unity and symmetry in nature than was known in the classical era—but they were dreamt of. A beautiful poem written by the English poet William Blake some two centuries ago contains the following four lines:
To see the World in a Grain of Sand
And a Heaven in a Wild Flower
Hold Infinity in the palm of your hand
And Eternity in an hour
### Integrated Concepts
The problem set for this section involves concepts from this chapter and several others. Physics is most interesting when applied to general situations involving more than a narrow set of physical principles. For example, photons have momentum, hence the relevance of Linear Momentum and Collisions. The following topics are involved in some or all of the problems in this section:
1. Dynamics: Newton’s Laws of Motion
2. Work, Energy, and Energy Resources
3. Linear Momentum and Collisions
4. Heat and Heat Transfer Methods
5. Electric Potential and Electric Field
6. Electric Current, Resistance, and Ohm’s Law
7. Wave Optics
8. Special Relativity
illustrates how these strategies are applied to an integrated-concept problem.
### Test Prep for AP Courses
### Section Summary
1. The particle-wave duality refers to the fact that all particles—those with mass and those without mass—have wave characteristics.
2. This is a further connection between mass and energy.
### Conceptual Questions
### Problems & Exercises
|
# Atomic Physics
## Connection for AP® Courses
Have you ever wondered how we know the composition of the Sun? After all, we cannot travel there to physically collect a sample due to the extreme conditions. Fortunately, our understanding of the internal structure of atoms gives us the tools to identify the elements in the Sun’s outer layers due to an atomic “fingerprint” in the Sun’s spectrum. You will learn about atoms and their substructures, as well as how these substructures determine the behavior of the atom, such as the absorption and emission of energy by electrons within an atom.
You will learn the stories of how we discovered the various properties of an atom (Essential Knowledge 1.A.4) through clever and imaginative experimentation (such as the Millikan oil drop experiment) and interpretation (such as Brownian motion). You will also learn about the probabilistic description we use to describe the nature of electrons (Essential Knowledge 7.C.1). At this scale, electrons can be thought of as discrete particles, but they also behave in a way that is consistent with a wave model of matter (Enduring Understanding 7.C). You will learn how we use the wave model to understand the energy levels in an atom (Essential Knowledge 7.C.2) and the properties of electrons.
The content in this chapter supports:
Big Idea 1 Objects and systems have properties such as mass and charge. Systems may have internal structure.
Enduring Understanding 1.A The internal structure of a system determines many properties of the system.
Essential Knowledge 1.A.4 Atoms have internal structures that determine their properties.
Essential Knowledge 1.A.5 Systems have properties determined by the properties and interactions of their constituent atomic and molecular substructures.
Enduring Understanding 1.B Electric charge is a property of an object or system that affects its interactions with other objects or systems containing charge.
Essential Knowledge 1.B.3 The smallest observed unit of charge that can be isolated is the electron charge, also known as the elementary charge.
Big Idea 5 Changes that occur as a result of interactions are constrained by conservation laws.
Enduring Understanding 5.B The energy of a system is conserved.
Essential Knowledge 5.B.8 Energy transfer occurs when photons are absorbed or emitted, for example, by atoms or nuclei.
Big Idea 7 The mathematics of probability can be used to describe the behavior of complex systems and to interpret the behavior of quantum mechanical systems.
Enduring Understanding 7.C At the quantum scale, matter is described by a wave function, which leads to a probabilistic description of the microscopic world.
Essential Knowledge 7.C.1 The probabilistic description of matter is modeled by a wave function, which can be assigned to an object and used to describe its motion and interactions. The absolute value of the wave function is related to the probability of finding a particle in some spatial region.
Essential Knowledge 7.C.2 The allowed states for an electron in an atom can be calculated from the wave model of an electron.
Essential Knowledge 7.C.4 Photon emission and absorption processes are described by probability. |
# Atomic Physics
## Discovery of the Atom
### Learning Objectives
By the end of this section, you will be able to:
1. Describe the basic structure of the atom, the substructure of all matter.
How do we know that atoms are really there if we cannot see them with our eyes? A brief account of the progression from the proposal of atoms by the Greeks to the first direct evidence of their existence follows.
People have long speculated about the structure of matter and the existence of atoms. The earliest significant ideas to survive are due to the ancient Greeks in the fifth century BCE, especially those of the philosophers Leucippus and Democritus. (There is some evidence that philosophers in both India and China made similar speculations, at about the same time.) They considered the question of whether a substance can be divided without limit into ever smaller pieces. There are only a few possible answers to this question. One is that infinitesimally small subdivision is possible. Another is what Democritus in particular believed—that there is a smallest unit that cannot be further subdivided. Democritus called this the atom. We now know that atoms themselves can be subdivided, but their identity is destroyed in the process, so the Greeks were correct in a respect. The Greeks also felt that atoms were in constant motion, another correct notion.
The Greeks and others speculated about the properties of atoms, proposing that only a few types existed and that all matter was formed as various combinations of these types. The famous proposal that the basic elements were earth, air, fire, and water was brilliant, but incorrect. The Greeks had identified the most common examples of the four states of matter (solid, gas, plasma, and liquid), rather than the basic elements. More than 2000 years passed before observations could be made with equipment capable of revealing the true nature of atoms.
Over the centuries, discoveries were made regarding the properties of substances and their chemical reactions. Certain systematic features were recognized, but similarities between common and rare elements resulted in efforts to transmute them (lead into gold, in particular) for financial gain. Secrecy was endemic. Alchemists discovered and rediscovered many facts but did not make them broadly available. As the Middle Ages ended, alchemy gradually faded, and the science of chemistry arose. It was no longer possible, nor considered desirable, to keep discoveries secret. Collective knowledge grew, and by the beginning of the 19th century, an important fact was well established—the masses of reactants in specific chemical reactions always have a particular mass ratio. This is very strong indirect evidence that there are basic units (atoms and molecules) that have these same mass ratios. The English chemist John Dalton (1766–1844) did much of this work, with significant contributions by the Italian physicist Amedeo Avogadro (1776–1856). It was Avogadro who developed the idea of a fixed number of atoms and molecules in a mole, and this special number is called Avogadro’s number in his honor. The Austrian physicist Johann Josef Loschmidt was the first to measure the value of the constant in 1865 using the kinetic theory of gases.
Knowledge of the properties of elements and compounds grew, culminating in the mid-19th-century development of the periodic table of the elements by Dmitri Mendeleev (1834–1907), the great Russian chemist. Mendeleev proposed an ingenious array that highlighted the periodic nature of the properties of elements. Believing in the systematics of the periodic table, he also predicted the existence of then-unknown elements to complete it. Once these elements were discovered and determined to have properties predicted by Mendeleev, his periodic table became universally accepted.
Also during the 19th century, the kinetic theory of gases was developed. Kinetic theory is based on the existence of atoms and molecules in random thermal motion and provides a microscopic explanation of the gas laws, heat transfer, and thermodynamics (see Introduction to Temperature, Kinetic Theory, and the Gas Laws and Introduction to Laws of Thermodynamics). Kinetic theory works so well that it is another strong indication of the existence of atoms. But it is still indirect evidence—individual atoms and molecules had not been observed. There were heated debates about the validity of kinetic theory until direct evidence of atoms was obtained.
The first truly direct evidence of atoms is credited to Robert Brown, a Scottish botanist. In 1827, he noticed that tiny pollen grains suspended in still water moved about in complex paths. This can be observed with a microscope for any small particles in a fluid. The motion is caused by the random thermal motions of fluid molecules colliding with particles in the fluid, and it is now called Brownian motion. (See .) Statistical fluctuations in the numbers of molecules striking the sides of a visible particle cause it to move first this way, then that. Although the molecules cannot be directly observed, their effects on the particle can be. By examining Brownian motion, the size of molecules can be calculated. The smaller and more numerous they are, the smaller the fluctuations in the numbers striking different sides.
It was Albert Einstein who, starting in his epochal year of 1905, published several papers that explained precisely how Brownian motion could be used to measure the size of atoms and molecules. (In 1905 Einstein created special relativity, proposed photons as quanta of EM radiation, and produced a theory of Brownian motion that allowed the size of atoms to be determined. All of this was done in his spare time, since he worked days as a patent examiner. Any one of these very basic works could have been the crowning achievement of an entire career—yet Einstein did even more in later years.) Their sizes were only approximately known to be about 10⁻¹⁰ m, based on a comparison of latent heat of vaporization and surface tension made in about 1805 by Thomas Young of double-slit fame and the famous astronomer and mathematician Pierre-Simon Laplace.
Using Einstein’s ideas, the French physicist Jean-Baptiste Perrin (1870–1942) carefully observed Brownian motion; not only did he confirm Einstein’s theory, he also produced accurate sizes for atoms and molecules. Since molecular weights and densities of materials were well established, knowing atomic and molecular sizes allowed a precise value for Avogadro’s number to be obtained. (If we know how big an atom is, we know how many fit into a certain volume.) Perrin also used these ideas to explain atomic and molecular agitation effects in sedimentation, and he received the 1926 Nobel Prize for his achievements. Most scientists were already convinced of the existence of atoms, but the accurate observation and analysis of Brownian motion was conclusive—it was the first truly direct evidence.
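Perrin’s logic—if we know how big an atom is, we know how many fit into a certain volume—can be sketched numerically. The example below assumes a water molecule roughly 3 × 10⁻¹⁰ m across (an illustrative size, not a measured value from the text) and packs molecules cube by cube to estimate Avogadro’s number from the molar volume of water.

```python
# Order-of-magnitude estimate of Avogadro's number from molecular size.
# Assumptions: water molecule ~3e-10 m across; molecules packed roughly cube-by-cube.
molar_mass = 0.018      # kg/mol for water
density = 1000.0        # kg/m^3 for liquid water
d = 3.0e-10             # assumed molecular diameter, m

molar_volume = molar_mass / density     # volume of one mole of water, m^3
molecule_volume = d ** 3                # rough volume claimed by one molecule, m^3
N_A_estimate = molar_volume / molecule_volume

print(f"Estimated Avogadro's number: {N_A_estimate:.1e}")
# Gives roughly 7e23, the right order of magnitude compared with 6.02e23.
```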
A huge array of direct and indirect evidence for the existence of atoms now exists. For example, it has become possible to accelerate ions (much as electrons are accelerated in cathode-ray tubes) and to detect them individually as well as measure their masses (see More Applications of Magnetism for a discussion of mass spectrometers). Other devices that observe individual atoms, such as the scanning tunneling electron microscope, will be discussed elsewhere. (See .) All of our understanding of the properties of matter is based on and consistent with the atom. The atom’s substructures, such as electron shells and the nucleus, are both interesting and important. The nucleus in turn has a substructure, as do the particles of which it is composed. These topics, and the question of whether there is a smallest basic structure to matter, will be explored in later parts of the text.
### Section Summary
1. Atoms are the smallest unit of elements; atoms combine to form molecules, the smallest unit of compounds.
2. The first direct observation of atoms was in Brownian motion.
3. Analysis of Brownian motion gave accurate sizes for atoms (about 10⁻¹⁰ m on average) and a precise value for Avogadro’s number.
### Conceptual Questions
### Problems & Exercises
|
# Atomic Physics
## Discovery of the Parts of the Atom: Electrons and Nuclei
### Learning Objectives
By the end of this section, you will be able to:
1. Describe how electrons were discovered.
2. Explain the Millikan oil drop experiment.
3. Describe Rutherford’s gold foil experiment.
4. Describe Rutherford’s planetary model of the atom.
Just as atoms are a substructure of matter, electrons and nuclei are substructures of the atom. The experiments that were used to discover electrons and nuclei reveal some of the basic properties of atoms and can be readily understood using ideas such as electrostatic and magnetic force, already covered in previous chapters.
### The Electron
Gas discharge tubes, such as that shown in , consist of an evacuated glass tube containing two metal electrodes and a rarefied gas. When a high voltage is applied to the electrodes, the gas glows. These tubes were the precursors to today’s neon lights. They were first studied seriously by Heinrich Geissler, a German inventor and glassblower, starting in the 1860s. The English scientist William Crookes, among others, continued to study what for some time were called Crookes tubes, wherein electrons are freed from atoms and molecules in the rarefied gas inside the tube and are accelerated from the cathode (negative) to the anode (positive) by the high potential. These “cathode rays” collide with the gas atoms and molecules and excite them, resulting in the emission of electromagnetic (EM) radiation that makes the electrons’ path visible as a ray that spreads and fades as it moves away from the cathode.
Gas discharge tubes today are most commonly called cathode-ray tubes, because the rays originate at the cathode. Crookes showed that the electrons carry momentum (they can make a small paddle wheel rotate). He also found that their normally straight path is bent by a magnet in the direction expected for a negative charge moving away from the cathode. These were the first direct indications of electrons and their charge.
The English physicist J. J. Thomson (1856–1940) improved and expanded the scope of experiments with gas discharge tubes. (See and .) He verified the negative charge of the cathode rays with both magnetic and electric fields. Additionally, he collected the rays in a metal cup and found an excess of negative charge. Thomson was also able to measure the ratio of the charge of the electron to its mass, q_e/m_e—an important step to finding the actual values of both q_e and m_e. shows a cathode-ray tube, which produces a narrow beam of electrons that passes through charging plates connected to a high-voltage power supply. An electric field is produced between the charging plates, and the cathode-ray tube is placed between the poles of a magnet so that the electric field is perpendicular to the magnetic field of the magnet. These fields, being perpendicular to each other, produce opposing forces on the electrons. As discussed for mass spectrometers in More Applications of Magnetism, if the net force due to the fields vanishes, then the velocity of the charged particle is v = E/B. In this manner, Thomson determined the velocity of the electrons and then moved the beam up and down by adjusting the electric field.
To see how the amount of deflection is used to calculate q_e/m_e, note that the deflection is proportional to the electric force on the electron:
F = q_e E.
But the vertical deflection is also related to the electron’s mass, since the electron’s acceleration is
a = F/m_e.
The value of F is not known, since q_e was not yet known. Substituting the expression for electric force into the expression for acceleration yields
a = F/m_e = q_e E/m_e.
Gathering terms, we have
q_e/m_e = a/E.
The deflection is analyzed to get a, and E is determined from the applied voltage and distance between the plates; thus, q_e/m_e can be determined. With the velocity known, another measurement of q_e/m_e can be obtained by bending the beam of electrons with the magnetic field. Since the magnetic force supplies the centripetal force, q_e vB = m_e v²/r, we have q_e/m_e = v/(Br). Consistent results are obtained using magnetic deflection.
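A short sketch of the two-step measurement: crossed fields give the velocity v = E/B, and the radius of the purely magnetic deflection then gives q_e/m_e = v/(Br). The field and radius values below are illustrative assumptions, not Thomson’s actual numbers.

```python
# Thomson-style charge-to-mass measurement (illustrative values, not historical data).
E = 1.0e4       # electric field between the plates, V/m (assumed)
B = 1.0e-3      # magnetic field, T (assumed)
r = 0.057       # radius of the beam's circular path with E turned off, m (assumed)

v = E / B               # crossed fields balanced: q_e E = q_e v B
q_over_m = v / (B * r)  # from q_e v B = m_e v^2 / r

print(f"v = {v:.2e} m/s")
print(f"q_e/m_e = {q_over_m:.2e} C/kg")
# With these assumed numbers the result is ~1.8e11 C/kg, close to the accepted value.
```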
What is so important about q_e/m_e, the ratio of the electron’s charge to its mass? The value obtained is
q_e/m_e = −1.76 × 10¹¹ C/kg (electron).
This is a huge number, as Thomson realized, and it implies that the electron has a very small mass. It was known from electroplating that about 10⁸ C/kg is needed to plate a material, a factor of about 1000 less than the charge per kilogram of electrons. Thomson went on to do the same experiment for positively charged hydrogen ions (now known to be bare protons) and found a charge per kilogram about 1000 times smaller than that for the electron, implying that the proton is about 1000 times more massive than the electron. Today, we know more precisely that
q_p/m_p = 9.58 × 10⁷ C/kg (proton),
where q_p is the charge of the proton and m_p is its mass. This ratio (to four significant figures) is 1836 times less charge per kilogram than for the electron. Since the charges of electrons and protons are equal in magnitude, this implies m_p = 1836 m_e.
Thomson performed a variety of experiments using differing gases in discharge tubes and employing other methods, such as the photoelectric effect, for freeing electrons from atoms. He always found the same properties for the electron, proving it to be an independent particle. For his work, the important pieces of which he began to publish in 1897, Thomson was awarded the 1906 Nobel Prize in Physics. In retrospect, it is difficult to appreciate how astonishing it was to find that the atom has a substructure. Thomson himself said, “It was only when I was convinced that the experiment left no escape from it that I published my belief in the existence of bodies smaller than atoms.”
Thomson attempted to measure the charge of individual electrons, but his method could determine its charge only to the order of magnitude expected.
Since Faraday’s experiments with electroplating in the 1830s, it had been known that about 100,000 C per mole was needed to plate singly ionized ions. Dividing this by the number of ions per mole (that is, by Avogadro’s number), which was approximately known, the charge per ion was calculated to be about 1.6 × 10⁻¹⁹ C, close to the actual value.
An American physicist, Robert Millikan (1868–1953) (see ), decided to improve upon Thomson’s experiment for measuring and was eventually forced to try another approach, which is now a classic experiment performed by students. The Millikan oil drop experiment is shown in .
In the Millikan oil drop experiment, fine drops of oil are sprayed from an atomizer. Some of these are charged by the process and can then be suspended between metal plates by a voltage between the plates. In this situation, the weight of the drop is balanced by the electric force:
m_drop g = qE.
The electric field is produced by the applied voltage, hence E = V/d, and V is adjusted to just balance the drop’s weight. The drops can be seen as points of reflected light using a microscope, but they are too small to directly measure their size and mass. The mass of the drop is determined by observing how fast it falls when the voltage is turned off. Since air resistance is very significant for these submicroscopic drops, the more massive drops fall faster than the less massive, and sophisticated sedimentation calculations can reveal their mass. Oil is used rather than water, because it does not readily evaporate, and so mass is nearly constant. Once the mass of the drop is known, the charge on the drop is given by rearranging the previous equation:
q = m_drop g d / V,
where d is the separation of the plates and V is the voltage that holds the drop motionless. (The same drop can be observed for several hours to see that it really is motionless.) By 1913 Millikan had measured the charge of the electron to an accuracy of 1%, and he improved this by a factor of 10 within a few years to a value of −1.60 × 10⁻¹⁹ C. He also observed that all charges were multiples of the basic electron charge and that sudden changes could occur in which electrons were added or removed from the drops. For this very fundamental direct measurement of q_e and for his studies of the photoelectric effect, Millikan was awarded the 1923 Nobel Prize in Physics.
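A minimal sketch of the balance condition q = m_drop g d/V, using illustrative numbers (not Millikan’s actual data) for the drop mass, plate separation, and balancing voltage:

```python
# Millikan-style oil-drop charge calculation (illustrative values).
g = 9.80          # m/s^2
m_drop = 3.0e-15  # drop mass inferred from its fall speed, kg (assumed)
d = 0.010         # plate separation, m (assumed)
V = 1.84e3        # voltage that holds the drop motionless, V (assumed)

q = m_drop * g * d / V        # charge on the drop, from qV/d = m_drop * g
e = 1.602e-19                 # accepted elementary charge, C

print(f"q = {q:.2e} C, which is {q / e:.2f} electron charges")
# A drop carrying one extra electron balances at about this voltage; drops with
# two or three extra electrons balance at 1/2 or 1/3 of it.
```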
With the charge of the electron known and the charge-to-mass ratio known, the electron’s mass can be calculated. It is
m_e = q_e / (q_e/m_e).
Substituting known values yields
m_e = (1.60 × 10⁻¹⁹ C) / (1.76 × 10¹¹ C/kg),
or
m_e = 9.11 × 10⁻³¹ kg (electron’s mass),
where the round-off errors have been corrected. The mass of the electron has been verified in many subsequent experiments and is now known to an accuracy of better than one part in one million. It is an incredibly small mass and remains the smallest known mass of any particle that has mass. (Some particles, such as photons, are massless and cannot be brought to rest, but travel at the speed of light.) A similar calculation gives the masses of other particles, including the proton. To three digits, the mass of the proton is now known to be
m_p = 1.67 × 10⁻²⁷ kg (proton’s mass),
which is nearly identical to the mass of a hydrogen atom. What Thomson and Millikan had done was to prove the existence of one substructure of atoms, the electron, and further to show that it had only a tiny fraction of the mass of an atom. The nucleus of an atom contains most of its mass, and the nature of the nucleus was completely unanticipated.
Another important characteristic of quantum mechanics was also beginning to emerge. All electrons are identical to one another. The charge and mass of electrons are not average values; rather, they are unique values that all electrons have. This is true of other fundamental entities at the submicroscopic level. All protons are identical to one another, and so on.
### The Nucleus
Here, we examine the first direct evidence of the size and mass of the nucleus. In later chapters, we will examine many other aspects of nuclear physics, but the basic information on nuclear size and mass is so important to understanding the atom that we consider it here.
Nuclear radioactivity was discovered in 1896, and it was soon the subject of intense study by a number of the best scientists in the world. Among them was New Zealander Lord Ernest Rutherford, who made numerous fundamental discoveries and earned the title of “father of nuclear physics.” Born in Nelson, Rutherford did his postgraduate studies at the Cavendish Laboratories in England before taking up a position at McGill University in Canada where he did the work that earned him a Nobel Prize in Chemistry in 1908. In the area of atomic and nuclear physics, there is much overlap between chemistry and physics, with physics providing the fundamental enabling theories. He returned to England in later years and had six future Nobel Prize winners as students. Rutherford used nuclear radiation to directly examine the size and mass of the atomic nucleus. The experiment he devised is shown in . A radioactive source that emits alpha radiation was placed in a lead container with a hole in one side to produce a beam of alpha particles, which are a type of ionizing radiation ejected by the nuclei of a radioactive source. A thin gold foil was placed in the beam, and the scattering of the alpha particles was observed by the glow they caused when they struck a phosphor screen.
Alpha particles were known to be the doubly charged positive nuclei of helium atoms that had kinetic energies on the order of 5 MeV when emitted in nuclear decay, which is the disintegration of the nucleus of an unstable nuclide by the spontaneous emission of charged particles. These particles interact with matter mostly via the Coulomb force, and the manner in which they scatter from nuclei can reveal nuclear size and mass. This is analogous to observing how a bowling ball is scattered by an object you cannot see directly. Because the alpha particle’s energy is so large compared with the typical energies associated with atoms (MeV versus eV), you would expect the alpha particles to simply crash through a thin foil much like a supersonic bowling ball would crash through a few dozen rows of bowling pins. Thomson had envisioned the atom to be a small sphere in which equal amounts of positive and negative charge were distributed evenly. The incident massive alpha particles would suffer only small deflections in such a model. Instead, Rutherford and his collaborators found that alpha particles occasionally were scattered to large angles, some even back in the direction from which they came! Detailed analysis using conservation of momentum and energy—particularly of the small number that came straight back—implied that gold nuclei are very small compared with the size of a gold atom, contain almost all of the atom’s mass, and are tightly bound. Since the gold nucleus is several times more massive than the alpha particle, a head-on collision would scatter the alpha particle straight back toward the source. In addition, the smaller the nucleus, the fewer alpha particles that would hit one head on.
Although the results of the experiment were published by his colleagues in 1909, it took Rutherford two years to convince himself of their meaning. Like Thomson before him, Rutherford was reluctant to accept such radical results. Nature on a small scale is so unlike our classical world that even those at the forefront of discovery are sometimes surprised. Rutherford later wrote: “It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you. On consideration, I realized that this scattering backwards ... [meant] ... the greatest part of the mass of the atom was concentrated in a tiny nucleus.” In 1911, Rutherford published his analysis together with a proposed model of the atom. The size of the nucleus was determined to be about $10^{-15}$ m, or 100,000 times smaller than the atom. This implies a huge density, on the order of $10^{14}\ \mathrm{g/cm^{3}}$, vastly unlike any macroscopic matter. Also implied is the existence of previously unknown nuclear forces to counteract the huge repulsive Coulomb forces among the positive charges in the nucleus. Huge forces would also be consistent with the large energies emitted in nuclear radiation.
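To make the density claim concrete, here is a rough, hedged back-of-the-envelope check (not part of Rutherford’s analysis): treating a gold nucleus as a sphere of radius about $7 \times 10^{-15}$ m that contains essentially the whole atomic mass of about 197 u, the implied density is enormous compared with ordinary matter. The radius and mass used below are illustrative assumptions.

```python
import math

u = 1.66e-27          # kg, atomic mass unit
m_gold = 197 * u      # kg, mass of a gold atom (nearly all of it in the nucleus)
r_nucleus = 7e-15     # m, rough gold nuclear radius (order 10^-15 m, assumed)

volume = (4 / 3) * math.pi * r_nucleus**3
density = m_gold / volume  # kg/m^3

print(f"Nuclear density ~ {density:.1e} kg/m^3")
# ~2e17 kg/m^3, i.e. ~1e14 g/cm^3 -- vastly denser than any macroscopic matter
```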
The small size of the nucleus also implies that the atom is mostly empty inside. In fact, in Rutherford’s experiment, most alphas went straight through the gold foil with very little scattering, since electrons have such small masses and since the atom was mostly empty with nothing for the alpha to hit. There were already hints of this at the time Rutherford performed his experiments, since energetic electrons had been observed to penetrate thin foils more easily than expected. shows a schematic of the atoms in a thin foil with circles representing the size of the atoms (about $10^{-10}$ m) and dots representing the nuclei. (The dots are not to scale—if they were, you would need a microscope to see them.) Most alpha particles miss the small nuclei and are only slightly scattered by electrons. Occasionally (about once in 8000 times in Rutherford’s experiment), an alpha hits a nucleus head-on and is scattered straight backward.
Based on the size and mass of the nucleus revealed by his experiment, as well as the mass of electrons, Rutherford proposed the planetary model of the atom. The planetary model of the atom pictures low-mass electrons orbiting a large-mass nucleus. The sizes of the electron orbits are large compared with the size of the nucleus, with mostly vacuum inside the atom. This picture is analogous to how low-mass planets in our solar system orbit the large-mass Sun at distances large compared with the size of the sun. In the atom, the attractive Coulomb force is analogous to gravitation in the planetary system. (See .) Note that a model or mental picture is needed to explain experimental results, since the atom is too small to be directly observed with visible light.
Rutherford’s planetary model of the atom was crucial to understanding the characteristics of atoms, and their interactions and energies, as we shall see in the next few sections. Also, it was an indication of how different nature is from the familiar classical world on the small, quantum mechanical scale. The discovery of a substructure to all matter in the form of atoms and molecules was now being taken a step further to reveal a substructure of atoms that was simpler than the 92 elements then known. We have continued to search for deeper substructures, such as those inside the nucleus, with some success. In later chapters, we will follow this quest in the discussion of quarks and other elementary particles, and we will look at the direction the search seems now to be heading.
### Test Prep for AP Courses
### Section Summary
1. Atoms are composed of negatively charged electrons, first proved to exist in cathode-ray-tube experiments, and a positively charged nucleus.
2. All electrons are identical and have a charge-to-mass ratio of
$$\frac{q_{e}}{m_{e}} = -1.76 \times 10^{11}\ \mathrm{C/kg}.$$
3. The positive charge in the nuclei is carried by particles called protons, which have a charge-to-mass ratio of
$$\frac{q_{p}}{m_{p}} = 9.58 \times 10^{7}\ \mathrm{C/kg}.$$
4. Mass of electron, $m_{e} = 9.11 \times 10^{-31}\ \mathrm{kg}$.
5. Mass of proton, $m_{p} = 1.67 \times 10^{-27}\ \mathrm{kg}$.
6. The planetary model of the atom pictures electrons orbiting the nucleus in the same way that planets orbit the sun.
### Conceptual Questions
### Problem Exercises
# Atomic Physics
## Bohr’s Theory of the Hydrogen Atom
### Learning Objectives
By the end of this section, you will be able to:
1. Describe the mysteries of atomic spectra.
2. Explain Bohr’s theory of the hydrogen atom.
3. Explain Bohr’s planetary model of the atom.
4. Illustrate energy state using the energy-level diagram.
5. Describe the triumphs and limits of Bohr’s theory.
The great Danish physicist Niels Bohr (1885–1962) made immediate use of Rutherford’s planetary model of the atom. (See .) Bohr became convinced of its validity and spent part of 1912 at Rutherford’s laboratory. In 1913, after returning to Copenhagen, he began publishing his theory of the simplest atom, hydrogen, based on the planetary model of the atom. For decades, many questions had been asked about atomic characteristics. From their sizes to their spectra, much was known about atoms, but little had been explained in terms of the laws of physics. Bohr’s theory explained the atomic spectrum of hydrogen and established new and broadly applicable principles in quantum mechanics.
### Mysteries of Atomic Spectra
As noted in Quantization of Energy, the energies of some small systems are quantized. Atomic and molecular emission and absorption spectra have been known for over a century to be discrete (or quantized). (See .) Maxwell and others had realized that there must be a connection between the spectrum of an atom and its structure, something like the resonant frequencies of musical instruments. But, in spite of years of efforts by many great minds, no one had a workable theory. (It was a running joke that any theory of atomic and molecular spectra could be destroyed by throwing a book of data at it, so complex were the spectra.) Following Einstein’s proposal of photons with quantized energies directly proportional to their frequencies, it became even more evident that electrons in atoms can exist only in discrete orbits.
In some cases, it had been possible to devise formulas that described the emission spectra. As you might expect, the simplest atom—hydrogen, with its single electron—has a relatively simple spectrum. The hydrogen spectrum had been observed in the infrared (IR), visible, and ultraviolet (UV), and several series of spectral lines had been observed. (See .) These series are named after early researchers who studied them in particular depth.
The observed hydrogen-spectrum wavelengths can be calculated using the following formula:
$$\frac{1}{\lambda} = R\left(\frac{1}{n_{\mathrm{f}}^{2}} - \frac{1}{n_{\mathrm{i}}^{2}}\right),$$
where $\lambda$ is the wavelength of the emitted EM radiation and $R$ is the Rydberg constant, determined by the experiment to be
$$R = 1.097 \times 10^{7}\ \mathrm{m}^{-1}.$$
The constant $n_{\mathrm{f}}$ is a positive integer associated with a specific series. For the Lyman series, $n_{\mathrm{f}} = 1$; for the Balmer series, $n_{\mathrm{f}} = 2$; for the Paschen series, $n_{\mathrm{f}} = 3$; and so on. The Lyman series is entirely in the UV, while part of the Balmer series is visible with the remainder UV. The Paschen series and all the rest are entirely IR. There are apparently an unlimited number of series, although they lie progressively farther into the infrared and become difficult to observe as $n_{\mathrm{f}}$ increases. The constant $n_{\mathrm{i}}$ is a positive integer, but it must be greater than $n_{\mathrm{f}}$. Thus, for the Balmer series, $n_{\mathrm{f}} = 2$ and $n_{\mathrm{i}} = 3, 4, 5, \ldots$. Note that $n_{\mathrm{i}}$ can approach infinity. While the formula in the wavelengths equation was just a recipe designed to fit data and was not based on physical principles, it did imply a deeper meaning. Balmer first devised the formula for his series alone, and it was later found to describe all the other series by using different values of $n_{\mathrm{f}}$. Bohr was the first to comprehend the deeper meaning. Again, we see the interplay between experiment and theory in physics. Experimentally, the spectra were well established, an equation was found to fit the experimental data, but the theoretical foundation was missing.
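As a quick numerical illustration of the Rydberg formula (a sketch, not part of the original text), the following snippet computes the first few wavelengths of the Lyman, Balmer, and Paschen series; the familiar red Balmer line near 656 nm falls out directly. The function name and loop structure are ours.

```python
R = 1.097e7  # Rydberg constant, 1/m

def wavelength_nm(n_f, n_i):
    """Hydrogen emission wavelength for a transition n_i -> n_f (requires n_i > n_f)."""
    inv_lambda = R * (1 / n_f**2 - 1 / n_i**2)
    return 1e9 / inv_lambda  # convert m to nm

for name, n_f in [("Lyman", 1), ("Balmer", 2), ("Paschen", 3)]:
    lines = [wavelength_nm(n_f, n_i) for n_i in range(n_f + 1, n_f + 4)]
    print(name, [f"{lam:.1f} nm" for lam in lines])
# The Balmer n_i = 3 line comes out near 656 nm (red); the Lyman series lands
# in the UV and the Paschen series in the IR, as stated above.
```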
### Bohr’s Solution for Hydrogen
Bohr was able to derive the formula for the hydrogen spectrum using basic physics, the planetary model of the atom, and some very important new proposals. His first proposal is that only certain orbits are allowed: we say that the orbits of electrons in atoms are quantized. Each orbit has a different energy, and electrons can move to a higher orbit by absorbing energy and drop to a lower orbit by emitting energy. If the orbits are quantized, the amount of energy absorbed or emitted is also quantized, producing discrete spectra. Photon absorption and emission are among the primary methods of transferring energy into and out of atoms. The energies of the photons are quantized, and their energy is explained as being equal to the change in energy of the electron when it moves from one orbit to another. In equation form, this is
$$\Delta E = hf = E_{\mathrm{i}} - E_{\mathrm{f}}.$$
Here, $\Delta E$ is the change in energy between the initial and final orbits, and $hf$ is the energy of the absorbed or emitted photon. It is quite logical (that is, expected from our everyday experience) that energy is involved in changing orbits. A blast of energy is required for the space shuttle, for example, to climb to a higher orbit. What is not expected is that atomic orbits should be quantized. This is not observed for satellites or planets, which can have any orbit given the proper energy. (See .)
shows an energy-level diagram, a convenient way to display energy states. In the present discussion, we take these to be the allowed energy levels of the electron. Energy is plotted vertically with the lowest or ground state at the bottom and with excited states above. Given the energies of the lines in an atomic spectrum, it is possible (although sometimes very difficult) to determine the energy levels of an atom. Energy-level diagrams are used for many systems, including molecules and nuclei. A theory of the atom or any other system must predict its energies based on the physics of the system.
Bohr was clever enough to find a way to calculate the electron orbital energies in hydrogen. This was an important first step that has been improved upon, but it is well worth repeating here, because it does correctly describe many characteristics of hydrogen. Assuming circular orbits, Bohr proposed that the angular momentum $L$ of an electron in its orbit is quantized, that is, it has only specific, discrete values. The value for $L$ is given by the formula
$$L = m_{e}vr_{n} = n\frac{h}{2\pi} \quad (n = 1, 2, 3, \ldots),$$
where $L$ is the angular momentum, $m_{e}$ is the electron’s mass, $r_{n}$ is the radius of the $n$th orbit, and $h$ is Planck’s constant. Note that angular momentum is $L = I\omega$. For a small object at a radius $r$, $I = mr^{2}$ and $\omega = v/r$, so that $L = (mr^{2})(v/r) = mvr$. Quantization says that this value of $mvr$ can only be equal to $h/2\pi$, $2h/2\pi$, $3h/2\pi$, etc. At the time, Bohr himself did not know why angular momentum should be quantized, but using this assumption he was able to calculate the energies in the hydrogen spectrum, something no one else had done at the time.
From Bohr’s assumptions, we will now derive a number of important properties of the hydrogen atom from the classical physics we have covered in the text. We start by noting that the centripetal force causing the electron to follow a circular path is supplied by the Coulomb force. To be more general, we note that this analysis is valid for any single-electron atom. So, if a nucleus has $Z$ protons ($Z = 1$ for hydrogen, 2 for helium, etc.) and only one electron, that atom is called a hydrogen-like atom. The spectra of hydrogen-like ions are similar to hydrogen, but shifted to higher energy by the greater attractive force between the electron and nucleus. The magnitude of the centripetal force is $m_{e}v^{2}/r_{n}$, while the Coulomb force is $kZq_{e}^{2}/r_{n}^{2}$. The tacit assumption here is that the nucleus is so much more massive than the electron that it can be treated as stationary, with the electron orbiting about it. This is consistent with the planetary model of the atom. Equating these,
$$\frac{m_{e}v^{2}}{r_{n}} = \frac{kZq_{e}^{2}}{r_{n}^{2}}.$$
Angular momentum quantization is stated in an earlier equation. We solve that equation for $v$, substitute it into the above, and rearrange the expression to obtain the radius of the orbit. This yields:
$$r_{n} = \frac{n^{2}}{Z}a_{\mathrm{B}}, \quad \text{for allowed orbits } (n = 1, 2, 3, \ldots),$$
where $a_{\mathrm{B}}$ is defined to be the Bohr radius, since for the lowest orbit ($n = 1$) and for hydrogen ($Z = 1$), $r_{1} = a_{\mathrm{B}}$. It is left for this chapter’s Problems and Exercises to show that the Bohr radius is
$$a_{\mathrm{B}} = \frac{h^{2}}{4\pi^{2}m_{e}kq_{e}^{2}} = 0.529 \times 10^{-10}\ \mathrm{m}.$$
These last two equations can be used to calculate the radii of the allowed (quantized) electron orbits in any hydrogen-like atom. It is impressive that the formula gives the correct size of hydrogen, which is measured experimentally to be very close to the Bohr radius. The earlier equation also tells us that the orbital radius is proportional to $n^{2}$, as illustrated in .
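To check that these formulas reproduce the quoted Bohr radius, here is a small, hedged numerical sketch (the constants are standard SI values and the symbol names are ours, not the text’s):

```python
import math

h = 6.626e-34      # J*s, Planck's constant
m_e = 9.109e-31    # kg, electron mass
k = 8.988e9        # N*m^2/C^2, Coulomb constant
q_e = 1.602e-19    # C, elementary charge

a_B = h**2 / (4 * math.pi**2 * m_e * k * q_e**2)
print(f"Bohr radius a_B = {a_B:.3e} m")    # ~0.529e-10 m

def r_n(n, Z=1):
    """Radius of the nth allowed orbit of a hydrogen-like atom."""
    return n**2 / Z * a_B

for n in (1, 2, 3):
    print(f"r_{n} = {r_n(n):.3e} m")        # scales as n^2, as stated above
```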
To get the electron orbital energies, we start by noting that the electron energy is the sum of its kinetic and potential energy:
$$E_{n} = \mathrm{KE} + \mathrm{PE}.$$
Kinetic energy is the familiar $\mathrm{KE} = \frac{1}{2}m_{e}v^{2}$, assuming the electron is not moving at relativistic speeds. Potential energy for the electron is electrical, or $\mathrm{PE} = q_{e}V$, where $V$ is the potential due to the nucleus, which looks like a point charge. The nucleus has a positive charge $Zq_{e}$; thus, $V = kZq_{e}/r_{n}$, recalling an earlier equation for the potential due to a point charge. Since the electron’s charge is negative, we see that $\mathrm{PE} = -kZq_{e}^{2}/r_{n}$. Entering the expressions for $\mathrm{KE}$ and $\mathrm{PE}$, we find
$$E_{n} = \frac{1}{2}m_{e}v^{2} - \frac{kZq_{e}^{2}}{r_{n}}.$$
Now we substitute $r_{n}$ and $v$ from earlier equations into the above expression for energy. Algebraic manipulation yields
$$E_{n} = -\frac{Z^{2}}{n^{2}}E_{0} \quad (n = 1, 2, 3, \ldots)$$
for the orbital energies of hydrogen-like atoms. Here, $E_{0}$ is the ground-state energy ($n = 1$) for hydrogen ($Z = 1$) and is given by
$$E_{0} = \frac{2\pi^{2}q_{e}^{4}m_{e}k^{2}}{h^{2}} = 13.6\ \mathrm{eV}.$$
Thus, for hydrogen,
$$E_{n} = -\frac{13.6\ \mathrm{eV}}{n^{2}} \quad (n = 1, 2, 3, \ldots).$$
shows an energy-level diagram for hydrogen that also illustrates how the various spectral series for hydrogen are related to transitions between energy levels.
Electron total energies are negative, since the electron is bound to the nucleus, analogous to being in a hole without enough kinetic energy to escape. As $n$ approaches infinity, the total energy becomes zero. This corresponds to a free electron with no kinetic energy, since $r_{n}$ gets very large for large $n$, and the electric potential energy thus becomes zero. Thus, 13.6 eV is needed to ionize hydrogen (to go from –13.6 eV to 0, or unbound), an experimentally verified number. Given more energy, the electron becomes unbound with some kinetic energy. For example, giving 15.0 eV to an electron in the ground state of hydrogen strips it from the atom and leaves it with 1.4 eV of kinetic energy.
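A short, hedged numerical sketch of the energy-level formula and the ionization example just described (the function name and printing format are ours):

```python
E0 = 13.6  # eV, ground-state (binding) energy of hydrogen

def E_n(n, Z=1):
    """Orbital energy of a hydrogen-like atom in eV (negative means bound)."""
    return -Z**2 / n**2 * E0

for n in (1, 2, 3, 4):
    print(f"E_{n} = {E_n(n):.2f} eV")

# Giving 15.0 eV to a ground-state electron (E_1 = -13.6 eV) ionizes the atom
# and leaves the freed electron with the difference as kinetic energy.
kinetic = 15.0 + E_n(1)
print(f"Leftover kinetic energy = {kinetic:.1f} eV")   # 1.4 eV
```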
Finally, let us consider the energy of a photon emitted in a downward transition, given by the equation to be
$$hf = E_{\mathrm{i}} - E_{\mathrm{f}}.$$
Substituting $E_{n} = -\left(Z^{2}/n^{2}\right)E_{0}$, we see that
$$hf = \left(\frac{1}{n_{\mathrm{f}}^{2}} - \frac{1}{n_{\mathrm{i}}^{2}}\right)Z^{2}E_{0}.$$
Dividing both sides of this equation by $hc$ gives an expression for $1/\lambda$:
$$\frac{1}{\lambda} = \left(\frac{1}{n_{\mathrm{f}}^{2}} - \frac{1}{n_{\mathrm{i}}^{2}}\right)\frac{Z^{2}E_{0}}{hc}.$$
It can be shown that
$$\frac{E_{0}}{hc} = \frac{(13.6\ \mathrm{eV})(1.602 \times 10^{-19}\ \mathrm{J/eV})}{(6.626 \times 10^{-34}\ \mathrm{J\cdot s})(2.998 \times 10^{8}\ \mathrm{m/s})} = 1.097 \times 10^{7}\ \mathrm{m}^{-1} = R$$
is the Rydberg constant. Thus, we have used Bohr’s assumptions to derive the formula first proposed by Balmer years earlier as a recipe to fit experimental data.
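The identification $R = E_{0}/hc$ can be checked numerically; a minimal sketch (constants and rounding are ours):

```python
E0 = 13.6 * 1.602e-19   # J, ground-state energy of hydrogen
h = 6.626e-34           # J*s, Planck's constant
c = 2.998e8             # m/s, speed of light

R = E0 / (h * c)
print(f"R = {R:.3e} 1/m")  # ~1.097e7 1/m, matching the experimental Rydberg constant
```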
We see that Bohr’s theory of the hydrogen atom answers the question as to why this previously known formula describes the hydrogen spectrum. It is because the energy levels are proportional to $1/n^{2}$, where $n$ is a positive integer. A downward transition releases energy, and so $n_{\mathrm{i}}$ must be greater than $n_{\mathrm{f}}$. The various series are those where the transitions end on a certain level. For the Lyman series, $n_{\mathrm{f}} = 1$—that is, all the transitions end in the ground state (see also ). For the Balmer series, $n_{\mathrm{f}} = 2$, or all the transitions end in the first excited state; and so on. What was once a recipe is now based in physics, and something new is emerging—angular momentum is quantized.
### Triumphs and Limits of the Bohr Theory
Bohr did what no one had been able to do before. Not only did he explain the spectrum of hydrogen, he correctly calculated the size of the atom from basic physics. Some of his ideas are broadly applicable. Electron orbital energies are quantized in all atoms and molecules. Angular momentum is quantized. The electrons do not spiral into the nucleus, as expected classically (accelerated charges radiate, so that the electron orbits classically would decay quickly, and the electrons would sit on the nucleus—matter would collapse). These are major triumphs.
But there are limits to Bohr’s theory. It cannot be applied to multielectron atoms, even one as simple as a two-electron helium atom. Bohr’s model is what we call semiclassical. The orbits are quantized (nonclassical) but are assumed to be simple circular paths (classical). As quantum mechanics was developed, it became clear that there are no well-defined orbits; rather, there are clouds of probability. Bohr’s theory also did not explain that some spectral lines are doublets (split into two) when examined closely. We shall examine many of these aspects of quantum mechanics in more detail, but it should be kept in mind that Bohr did not fail. Rather, he made very important steps along the path to greater knowledge and laid the foundation for all of atomic physics that has since evolved.
### Test Prep for AP Courses
### Section Summary
1. The planetary model of the atom pictures electrons orbiting the nucleus in the way that planets orbit the sun. Bohr used the planetary model to develop the first reasonable theory of hydrogen, the simplest atom. Atomic and molecular spectra are quantized, with hydrogen spectrum wavelengths given by the formula
$$\frac{1}{\lambda} = R\left(\frac{1}{n_{\mathrm{f}}^{2}} - \frac{1}{n_{\mathrm{i}}^{2}}\right),$$
where $\lambda$ is the wavelength of the emitted EM radiation and $R$ is the Rydberg constant, which has the value
$$R = 1.097 \times 10^{7}\ \mathrm{m}^{-1}.$$
2. The constants $n_{\mathrm{i}}$ and $n_{\mathrm{f}}$ are positive integers, and $n_{\mathrm{i}}$ must be greater than $n_{\mathrm{f}}$.
3. Bohr correctly proposed that the energy and radii of the orbits of electrons in atoms are quantized, with energy for transitions between orbits given by
$$\Delta E = hf = E_{\mathrm{i}} - E_{\mathrm{f}},$$
where $\Delta E$ is the change in energy between the initial and final orbits and $hf$ is the energy of an absorbed or emitted photon. It is useful to plot orbital energies on a vertical graph called an energy-level diagram.
4. Bohr proposed that the allowed orbits are circular and must have quantized orbital angular momentum given by
$$L = m_{e}vr_{n} = n\frac{h}{2\pi} \quad (n = 1, 2, 3, \ldots),$$
where $L$ is the angular momentum, $r_{n}$ is the radius of the $n$th orbit, and $h$ is Planck’s constant. For all one-electron (hydrogen-like) atoms, the radius of an orbit is given by
$$r_{n} = \frac{n^{2}}{Z}a_{\mathrm{B}} \quad (\text{allowed orbits } n = 1, 2, 3, \ldots),$$
where $Z$ is the atomic number of the element (the number of electrons it has when neutral) and $a_{\mathrm{B}}$ is defined to be the Bohr radius, which is
$$a_{\mathrm{B}} = \frac{h^{2}}{4\pi^{2}m_{e}kq_{e}^{2}} = 0.529 \times 10^{-10}\ \mathrm{m}.$$
5. Furthermore, the energies of hydrogen-like atoms are given by
$$E_{n} = -\frac{Z^{2}}{n^{2}}E_{0} \quad (n = 1, 2, 3, \ldots),$$
where $E_{0}$ is the ground-state energy and is given by
$$E_{0} = \frac{2\pi^{2}q_{e}^{4}m_{e}k^{2}}{h^{2}} = 13.6\ \mathrm{eV}.$$
Thus, for hydrogen,
$$E_{n} = -\frac{13.6\ \mathrm{eV}}{n^{2}} \quad (n = 1, 2, 3, \ldots).$$
6. The Bohr Theory gives accurate values for the energy levels in hydrogen-like atoms, but it has been improved upon in several respects.
### Conceptual Questions
### Problems & Exercises
# Atomic Physics
## X Rays: Atomic Origins and Applications
### Learning Objectives
By the end of this section, you will be able to:
1. Describe an x-ray tube and its spectrum.
2. Show how characteristic x-ray energies arise.
3. Specify the use of x rays in medical observations.
4. Explain the use of x rays in CT scanners in diagnostics.
Each type of atom (or element) has its own characteristic electromagnetic spectrum. X rays lie at the high-frequency end of an atom’s spectrum and are characteristic of the atom as well. In this section, we explore characteristic x rays and some of their important applications.
We have previously discussed x rays as a part of the electromagnetic spectrum in Photon Energies and the Electromagnetic Spectrum. That module illustrated how an x-ray tube (a specialized CRT) produces x rays. Electrons emitted from a hot filament are accelerated with a high voltage, gaining significant kinetic energy and striking the anode.
There are two processes by which x rays are produced in the anode of an x-ray tube. In one process, the deceleration of electrons produces x rays, and these x rays are called bremsstrahlung, or braking radiation. The second process is atomic in nature and produces characteristic x rays, so called because they are characteristic of the anode material. The x-ray spectrum in is typical of what is produced by an x-ray tube, showing a broad curve of bremsstrahlung radiation with characteristic x-ray peaks on it.
The spectrum in is collected over a period of time in which many electrons strike the anode, with a variety of possible outcomes for each hit. The broad range of x-ray energies in the bremsstrahlung radiation indicates that an incident electron’s energy is not usually converted entirely into photon energy. The highest-energy x ray produced is one for which all of the electron’s energy was converted to photon energy. Thus the accelerating voltage and the maximum x-ray energy are related by conservation of energy. Electric potential energy is converted to kinetic energy and then to photon energy, so that the maximum photon energy is $E_{\max} = hf_{\max} = q_{e}V$. Units of electron volts are convenient. For example, a 100-kV accelerating voltage produces x-ray photons with a maximum energy of 100 keV.
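As a hedged illustration of this energy bookkeeping, the following sketch converts an accelerating voltage into the maximum photon energy and the corresponding shortest (cutoff) wavelength; the function name and chosen voltages are ours.

```python
h = 4.136e-15   # eV*s, Planck's constant in eV units
c = 2.998e8     # m/s, speed of light

def max_photon(voltage_kV):
    """Maximum x-ray photon energy (keV) and cutoff wavelength (nm) for a tube voltage."""
    E_eV = voltage_kV * 1e3          # an electron gains qV = V electron volts
    lam_nm = h * c / E_eV * 1e9      # lambda = hc / E, converted from m to nm
    return voltage_kV, lam_nm

for V in (50.0, 100.0):
    E_keV, lam = max_photon(V)
    print(f"{V:.0f} kV tube -> E_max = {E_keV:.0f} keV, lambda_min = {lam:.4f} nm")
# A 100-kV tube gives 100-keV photons with a cutoff wavelength of about 0.0124 nm.
```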
Some electrons excite atoms in the anode. Part of the energy that they deposit by collision with an atom results in one or more of the atom’s inner electrons being knocked into a higher orbit or the atom being ionized. When the anode’s atoms de-excite, they emit characteristic electromagnetic radiation. The most energetic of these are produced when an inner-shell vacancy is filled—that is, when an $n = 1$ or $K$-shell electron has been excited to a higher level, and another electron falls into the vacant spot. A characteristic x ray (see Photon Energies and the Electromagnetic Spectrum) is electromagnetic (EM) radiation emitted by an atom when an inner-shell vacancy is filled. shows a representative energy-level diagram that illustrates the labeling of characteristic x rays. X rays created when an electron falls into an $n = 1$ or $K$-shell vacancy are called $K_{\alpha}$ x rays when they come from the next higher level; that is, an $n = 2$ to $n = 1$ transition. The labels come from the older alphabetical labeling of shells starting with $K$ rather than using the principal quantum numbers 1, 2, 3, …. A more energetic $K_{\beta}$ x ray is produced when an electron falls into an $n = 1$ or $K$-shell vacancy from the $n = 3$ or $M$ shell; that is, an $n = 3$ to $n = 1$ transition. Similarly, when an electron falls into the $n = 2$ or $L$ shell from the $n = 3$ or $M$ shell, an $L_{\alpha}$ x ray is created. The energies of these x rays depend on the energies of electron states in the particular atom and, thus, are characteristic of that element: every element has its own set of x-ray energies. This property can be used to identify elements, for example, to find trace (small) amounts of an element in an environmental or biological sample.
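One hedged way to estimate $K_{\alpha}$ energies, not derived in this section, is to treat the $n = 2$ to $n = 1$ transition with the Bohr energy formula and an effective nuclear charge of $Z - 1$, since the one remaining $K$-shell electron partially screens the nucleus (Moseley’s approximation). The sketch below uses that assumption, so the numbers are only rough.

```python
E0 = 13.6  # eV, hydrogen ground-state energy

def E_K_alpha(Z):
    """Rough K-alpha photon energy in eV: n=2 -> n=1 with effective charge Z-1."""
    return (Z - 1)**2 * E0 * (1 / 1**2 - 1 / 2**2)   # = 10.2 eV * (Z-1)^2

for element, Z in [("copper", 29), ("molybdenum", 42), ("tungsten", 74)]:
    print(f"{element}: ~{E_K_alpha(Z) / 1e3:.1f} keV")
# Tungsten comes out near 54 keV, the right order of magnitude for the hard
# x rays used in radiography.
```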
### Medical and Other Diagnostic Uses of X-rays
All of us can identify diagnostic uses of x-ray photons. Among these are the universal dental and medical x rays that have become an essential part of medical diagnostics. (See and .) X rays are also used to inspect our luggage at airports, as shown in , and for early detection of cracks in crucial aircraft components. An x ray is not only a noun meaning high-energy photon, it also is an image produced by x rays, and it has been made into a familiar verb—to be x-rayed.
The most common x-ray images are simple shadows. Since x-ray photons have high energies, they penetrate materials that are opaque to visible light. The more energy an x-ray photon has, the more material it will penetrate. So an x-ray tube may be operated at 50.0 kV for a chest x ray, whereas it may need to be operated at 100 kV to examine a broken leg in a cast. The depth of penetration is related to the density of the material as well as to the energy of the photon. The denser the material, the fewer x-ray photons get through and the darker the shadow. Thus x rays excel at detecting breaks in bones and in imaging other physiological structures, such as some tumors, that differ in density from surrounding material. Because of their high photon energy, x rays produce significant ionization in materials and damage cells in biological organisms. Modern uses minimize exposure to the patient and eliminate exposure to others. Biological effects of x rays will be explored in the next chapter along with other types of ionizing radiation such as those produced by nuclei.
As the x-ray energy increases, the Compton effect (see Photon Momentum) becomes more important in the attenuation of the x rays. Here, the x ray scatters from an outer electron shell of the atom, giving the ejected electron some kinetic energy while losing energy itself. The probability for attenuation of the x rays depends upon the number of electrons present (the material’s density) as well as the thickness of the material. Chemical composition of the medium, as characterized by its atomic number $Z$, is not important here. Low-energy x rays provide better contrast (sharper images). However, due to greater attenuation and less scattering, they are more strongly absorbed by thicker materials. Greater contrast can be achieved by injecting a substance with a large atomic number, such as barium or iodine. The structure of the part of the body that contains the substance (e.g., the gastro-intestinal tract or the abdomen) can easily be seen this way.
Breast cancer is the second-leading cause of death among women worldwide. Early detection can be very effective, hence the importance of x-ray diagnostics. A mammogram cannot diagnose a malignant tumor, only give evidence of a lump or region of increased density within the breast. X-ray absorption by different types of soft tissue is very similar, so contrast is difficult; this is especially true for younger women, who typically have denser breasts. For older women who are at greater risk of developing breast cancer, the presence of more fat in the breast gives the lump or tumor more contrast. MRI (Magnetic resonance imaging) has recently been used as a supplement to conventional x rays to improve detection and eliminate false positives. The subject’s radiation dose from x rays will be treated in a later chapter.
A standard x ray gives only a two-dimensional view of the object. Dense bones might hide images of soft tissue or organs. If you took another x ray from the side of the person (the first one being from the front), you would gain additional information. While shadow images are sufficient in many applications, far more sophisticated images can be produced with modern technology. shows the use of a computed tomography (CT) scanner, also called computed axial tomography (CAT) scanner. X rays are passed through a narrow section (called a slice) of the patient’s body (or body part) over a range of directions. An array of many detectors on the other side of the patient registers the x rays. The system is then rotated around the patient and another image is taken, and so on. The x-ray tube and detector array are mechanically attached and so rotate together. Complex computer image processing of the relative absorption of the x rays along different directions produces a highly-detailed image. Different slices are taken as the patient moves through the scanner on a table. Multiple images of different slices can also be computer analyzed to produce three-dimensional information, sometimes enhancing specific types of tissue, as shown in . G. Hounsfield (UK) and A. Cormack (US) won the Nobel Prize in Medicine in 1979 for their development of computed tomography.
### X-Ray Diffraction and Crystallography
Since x-ray photons are very energetic, they have relatively short wavelengths. For example, a 54.4-keV x ray has a wavelength $\lambda = hc/E = 0.0228\ \mathrm{nm}$. Thus, typical x-ray photons act like rays when they encounter macroscopic objects, like teeth, and produce sharp shadows; however, since atoms are on the order of 0.1 nm in size, x rays can be used to detect the location, shape, and size of atoms and molecules. The process is called x-ray diffraction, because it involves the diffraction and interference of x rays to produce patterns that can be analyzed for information about the structures that scattered the x rays. Perhaps the most famous example of x-ray diffraction is the discovery of the double-helix structure of DNA in 1953 by an international team of scientists working at the Cavendish Laboratory—American James Watson, Englishman Francis Crick, and New Zealand–born Maurice Wilkins. Using x-ray diffraction data produced by Rosalind Franklin, they were the first to discern the structure of DNA that is so crucial to life. For this, Watson, Crick, and Wilkins were awarded the 1962 Nobel Prize in Physiology or Medicine. There is much debate and controversy over the fact that Rosalind Franklin was not included in the prize.
shows a diffraction pattern produced by the scattering of x rays from a crystal. This process is known as x-ray crystallography because of the information it can yield about crystal structure, and it was the type of data Rosalind Franklin supplied to Watson and Crick for DNA. Not only do x rays confirm the size and shape of atoms, they give information on the atomic arrangements in materials. For example, current research in high-temperature superconductors involves complex materials whose lattice arrangements are crucial to obtaining a superconducting material. These can be studied using x-ray crystallography.
Historically, the scattering of x rays from crystals was used to prove that x rays are energetic EM waves. This was suspected from the time of the discovery of x rays in 1895, but it was not until 1912 that the German Max von Laue (1879–1960) convinced two of his colleagues to scatter x rays from crystals. If a diffraction pattern is obtained, he reasoned, then the x rays must be waves, and their wavelength could be determined. (The spacing of atoms in various crystals was reasonably well known at the time, based on good values for Avogadro’s number.) The experiments were convincing, and the 1914 Nobel Prize in Physics was given to von Laue for his suggestion leading to the proof that x rays are EM waves. In 1915, the unique father-and-son team of Sir William Henry Bragg and his son Sir William Lawrence Bragg were awarded a joint Nobel Prize for inventing the x-ray spectrometer and the then-new science of x-ray analysis. The elder Bragg had migrated to Australia from England just after graduating in mathematics. He learned physics and chemistry during his career at the University of Adelaide. The younger Bragg was born in Adelaide but went back to the Cavendish Laboratories in England to a career in x-ray and neutron crystallography; he provided support for Watson, Crick, and Wilkins for their work on unraveling the mysteries of DNA and to Max Perutz for his 1962 Nobel Prize-winning work on the structure of hemoglobin. Here again, we witness the enabling nature of physics—establishing instruments and designing experiments as well as solving mysteries in the biomedical sciences.
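X-ray crystallography rests on constructive interference from planes of atoms. The standard Bragg condition, $n\lambda = 2d\sin\theta$, is not stated explicitly above but underlies the spectrometer work just described; a hedged sketch with purely illustrative numbers shows how a measured diffraction angle yields an atomic-scale plane spacing.

```python
import math

def plane_spacing(wavelength_nm, theta_deg, order=1):
    """Atomic plane spacing d from the Bragg condition n*lambda = 2*d*sin(theta)."""
    return order * wavelength_nm / (2 * math.sin(math.radians(theta_deg)))

# Illustrative values only: a 0.154-nm x ray (typical of copper K-alpha sources)
# diffracting at a first-order angle of 22.5 degrees.
d = plane_spacing(0.154, 22.5)
print(f"d = {d:.3f} nm")   # a few tenths of a nanometer, i.e. atomic-scale spacing
```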
Certain other uses for x rays will be studied in later chapters. X rays are useful in the treatment of cancer because of the inhibiting effect they have on cell reproduction. X rays observed coming from outer space are useful in determining the nature of their sources, such as neutron stars and possibly black holes. Created in nuclear bomb explosions, x rays can also be used to detect clandestine atmospheric tests of these weapons. X rays can cause excitations of atoms, which then fluoresce (emitting characteristic EM radiation), making x-ray-induced fluorescence a valuable analytical tool in a range of fields from art to archaeology.
### Section Summary
1. X rays are relatively high-frequency EM radiation. They are produced by transitions between inner-shell electron levels, which produce x rays characteristic of the atomic element, or by decelerating electrons.
2. X rays have many uses, including medical diagnostics and x-ray diffraction.
### Conceptual Questions
### Problem Exercises
# Atomic Physics
## Applications of Atomic Excitations and De-Excitations
### Learning Objectives
By the end of this section, you will be able to:
1. Define and discuss fluorescence.
2. Define metastable.
3. Describe how laser emission is produced.
4. Explain population inversion.
5. Define and discuss holography.
Many properties of matter and phenomena in nature are directly related to atomic energy levels and their associated excitations and de-excitations. The color of a rose, the output of a laser, and the transparency of air are but a few examples. (See .) While it may not appear that glow-in-the-dark pajamas and lasers have much in common, they are in fact different applications of similar atomic de-excitations.
The color of a material is due to the ability of its atoms to absorb certain wavelengths while reflecting or reemitting others. A simple red material, for example a tomato, absorbs all visible wavelengths except red. This is because the atoms of its hydrocarbon pigment (lycopene) have levels separated by a variety of energies corresponding to all visible photon energies except red. Air is another interesting example. It is transparent to visible light, because there are few energy levels that visible photons can excite in air molecules and atoms. Visible light, thus, cannot be absorbed. Furthermore, visible light is only weakly scattered by air, because visible wavelengths are so much greater than the sizes of the air molecules and atoms. Light must pass through kilometers of air to scatter enough to cause red sunsets and blue skies.
### Fluorescence and Phosphorescence
The ability of a material to emit various wavelengths of light is similarly related to its atomic energy levels. shows a scorpion illuminated by a UV lamp, sometimes called a black light. Some rocks also glow in black light, the particular colors being a function of the rock’s mineral composition. Black lights are also used to make certain posters glow.
In the fluorescence process, an atom is excited to a level several steps above its ground state by the absorption of a relatively high-energy UV photon. This is called atomic excitation. Once it is excited, the atom can de-excite in several ways, one of which is to re-emit a photon of the same energy as excited it, a single step back to the ground state. This is called atomic de-excitation. All other paths of de-excitation involve smaller steps, in which lower-energy (longer wavelength) photons are emitted. Some of these may be in the visible range, such as for the scorpion in . Fluorescence is defined to be any process in which an atom or molecule is excited by a photon of a given energy and de-excites by emitting a lower-energy photon.
Fluorescence can be induced by many types of energy input. Fluorescent paint, dyes, and even soap residues in clothes make colors seem brighter in sunlight by converting some UV into visible light. X rays can induce fluorescence, as is done in x-ray fluoroscopy to make brighter visible images. Electric discharges can induce fluorescence, as in so-called neon lights and in gas-discharge tubes that produce atomic and molecular spectra. Common fluorescent lights use an electric discharge in mercury vapor to cause atomic emissions from mercury atoms. The inside of a fluorescent light is coated with a fluorescent material that emits visible light over a broad spectrum of wavelengths. By choosing an appropriate coating, fluorescent lights can be made more like sunlight or like the reddish glow of candlelight, depending on needs. Fluorescent lights are more efficient in converting electrical energy into visible light than incandescent filaments (about four times as efficient), the blackbody radiation of which is primarily in the infrared due to temperature limitations.
This atom is excited to one of its higher levels by absorbing a UV photon. It can de-excite in a single step, re-emitting a photon of the same energy, or in several steps. The process is called fluorescence if the atom de-excites in smaller steps, emitting energy different from that which excited it. Fluorescence can be induced by a variety of energy inputs, such as UV, x-rays, and electrical discharge.
The spectacular Waitomo caves on North Island in New Zealand provide a natural habitat for glow-worms. The glow-worms hang up to 70 silk threads of about 30 or 40 cm each to trap prey that fly towards them in the dark. The fluorescence process is very efficient, with nearly 100% of the energy input turning into light. (In comparison, fluorescent lights are about 20% efficient.)
Fluorescence has many uses in biology and medicine. It is commonly used to label and follow a molecule within a cell. Such tagging allows one to study the structure of DNA and proteins. Fluorescent dyes and antibodies are usually used to tag the molecules, which are then illuminated with UV light and their emission of visible light is observed. Since the fluorescence of each element is characteristic, identification of elements within a sample can be done this way.
shows a commonly used fluorescent dye called fluorescein. Below that, reveals the diffusion of a fluorescent dye in water by observing it under UV light.
Once excited, an atom or molecule will usually spontaneously de-excite quickly. (The electrons raised to higher levels are attracted to lower ones by the positive charge of the nucleus.) Spontaneous de-excitation has a very short mean lifetime of typically about $10^{-8}$ s. However, some levels have significantly longer lifetimes, ranging from milliseconds to minutes or even hours. These energy levels are inhibited and are slow in de-exciting because their quantum numbers differ greatly from those of available lower levels. Although these level lifetimes are short in human terms, they are many orders of magnitude longer than is typical and, thus, are said to be metastable, meaning relatively stable. Phosphorescence is the de-excitation of a metastable state. Glow-in-the-dark materials, such as luminous dials on some watches and clocks and on children’s toys and pajamas, are made of phosphorescent substances. Visible light excites the atoms or molecules to metastable states that decay slowly, releasing the stored excitation energy partially as visible light. In some ceramics, atomic excitation energy can be frozen in after the ceramic has cooled from its firing. It is very slowly released, but the ceramic can be induced to phosphoresce by heating—a process called “thermoluminescence.” Since the release is slow, thermoluminescence can be used to date antiquities. The less light emitted, the older the ceramic. (See .)
### Lasers
Lasers today are commonplace. Lasers are used to read bar codes at stores and in libraries, laser shows are staged for entertainment, laser printers produce high-quality images at relatively low cost, and lasers send prodigious numbers of telephone messages through optical fibers. Among other things, lasers are also employed in surveying, weapons guidance, tumor eradication, retinal welding, and for reading DVDs, Blu-rays, and computer or game console CD-ROMs.
Why do lasers have so many varied applications? The answer is that lasers produce single-wavelength EM radiation that is also very coherent—that is, the emitted photons are in phase. Laser output can, thus, be more precisely manipulated than incoherent mixed-wavelength EM radiation from other sources. The reason laser output is so pure and coherent is based on how it is produced, which in turn depends on a metastable state in the lasing material. Suppose a material had the energy levels shown in . When energy is put into a large collection of these atoms, electrons are raised to all possible levels. Most return to the ground state in less than about $10^{-8}$ s, but those in the metastable state linger. This includes those electrons originally excited to the metastable state and those that fell into it from above. It is possible to get a majority of the atoms into the metastable state, a condition called a population inversion.
Once a population inversion is achieved, a very interesting thing can happen, as shown in . An electron spontaneously falls from the metastable state, emitting a photon. This photon finds another atom in the metastable state and stimulates it to decay, emitting a second photon of the same wavelength and in phase with the first, and so on. Stimulated emission is the emission of electromagnetic radiation in the form of photons of a given frequency, triggered by photons of the same frequency. For example, an excited atom, with an electron in an energy orbit higher than normal, releases a photon of a specific frequency when the electron drops back to a lower energy orbit. If this photon then strikes another electron in the same high-energy orbit in another atom, another photon of the same frequency is released. The emitted photons and the triggering photons are always in phase, have the same polarization, and travel in the same direction. The probability of absorption of a photon is the same as the probability of stimulated emission, and so a majority of atoms must be in the metastable state to produce energy. Einstein (again Einstein, and back in 1917!) was one of the important contributors to the understanding of stimulated emission of radiation. Decades before the technology was invented to even experiment with laser generation, Einstein was the first to realize that stimulated emission and absorption are equally probable. The laser acts as a temporary energy storage device that subsequently produces a massive energy output of single-wavelength, in-phase photons.
The name laser is an acronym for light amplification by stimulated emission of radiation, the process just described. The process was proposed and developed following the advances in quantum physics. A joint Nobel Prize was awarded in 1964 to American Charles Townes (1915–), and Nikolay Basov (1922–2001) and Aleksandr Prokhorov (1916–2002), from the Soviet Union, for the development of lasers. The Nobel Prize in 1981 went to Arthur Schawlow (1921–1999) for pioneering laser applications. The original devices were called masers, because they produced microwaves. The first working laser was created in 1960 at Hughes Research Labs (CA) by T. Maiman. It used a pulsed high-powered flash lamp and a ruby rod to produce red light. Today the name laser is used for all such devices developed to produce a variety of wavelengths, including microwave, infrared, visible, and ultraviolet radiation. shows how a laser can be constructed to enhance the stimulated emission of radiation. Energy input can be from a flash tube, electrical discharge, or other sources, in a process sometimes called optical pumping. A large percentage of the original pumping energy is dissipated in other forms, but a population inversion must be achieved. Mirrors can be used to enhance stimulated emission by multiple passes of the radiation back and forth through the lasing material. One of the mirrors is semitransparent to allow some of the light to pass through. The output of a laser is a mere 1% of the light passing back and forth inside it.
As described earlier in the section on laser vision correction, Donna Strickland and Gérard Mourou, working at the University of Rochester, developed a method to greatly increase the power of lasers, while also enabling them to be miniaturized. By passing the light over a specific type of grating, their method segments (or chirps) the delivery of the beam components in a manner that generates little heat at the source. Chirped pulse amplification is now used in some of the world’s most powerful lasers as well as those commonly used to make precise microcuts or burns in medical applications. Strickland and Mourou were awarded the Nobel Prize in Physics in 2018.
Lasers are constructed from many types of lasing materials, including gases, liquids, solids, and semiconductors. But all lasers are based on the existence of a metastable state or a phosphorescent material. Some lasers produce continuous output; others are pulsed in bursts as brief as $10^{-14}$ s. Some laser outputs are fantastically powerful—some greater than $10^{12}$ W—but the more common, everyday lasers produce something on the order of $10^{-3}$ W. The helium-neon laser that produces a familiar red light is very common. shows the energy levels of helium and neon, a pair of noble gases that work well together. An electrical discharge is passed through a helium-neon gas mixture in which the number of atoms of helium is ten times that of neon. The first excited state of helium is metastable and, thus, stores energy. This energy is easily transferred by collision to neon atoms, because they have an excited state at nearly the same energy as that in helium. That state in neon is also metastable, and this is the one that produces the laser output. (The most likely transition is to a nearby lower state, producing 1.96-eV photons, which have a wavelength of 633 nm and appear red.) A population inversion can be produced in neon, because there are so many more helium atoms and these put energy into the neon. Helium-neon lasers often have continuous output, because the population inversion can be maintained even while lasing occurs. Probably the most common lasers in use today, including the common laser pointer, are semiconductor or diode lasers, made of semiconducting materials such as gallium arsenide. Here, energy is pumped into the material by passing a current in the device to excite the electrons. Special coatings on the ends and fine cleavings of the semiconductor material allow light to bounce back and forth and a tiny fraction to emerge as laser light. Diode lasers can usually run continually and produce outputs in the milliwatt range.
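A quick, hedged check that the quoted 1.96-eV neon transition indeed corresponds to the familiar 633-nm red output (the rounded value of $hc$ is ours):

```python
hc = 1240.0  # eV*nm, approximate value of Planck's constant times c

E_photon = 1.96                 # eV, the He-Ne lasing transition quoted above
wavelength = hc / E_photon      # lambda = hc / E, in nm
print(f"lambda = {wavelength:.0f} nm")   # ~633 nm, the familiar red He-Ne line
```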
There are many medical applications of lasers. Lasers have the advantage that they can be focused to a small spot. They also have a well-defined wavelength. Many types of lasers are available today that provide wavelengths from the ultraviolet to the infrared. This is important, as one needs to be able to select a wavelength that will be preferentially absorbed by the material of interest. Objects appear a certain color because they absorb all other visible colors incident upon them. What wavelengths are absorbed depends upon the energy spacing between electron orbitals in that molecule. Unlike the hydrogen atom, biological molecules are complex and have a variety of absorption wavelengths or lines. But these can be determined and used in the selection of a laser with the appropriate wavelength. Water is transparent to the visible spectrum but will absorb light in the UV and IR regions. Blood (hemoglobin) strongly reflects red but absorbs most strongly in the UV.
Laser surgery uses a wavelength that is strongly absorbed by the tissue it is focused upon. One example of a medical application of lasers is shown in . A detached retina can result in total loss of vision. Burns made by a laser focused to a small spot on the retina form scar tissue that can hold the retina in place, salvaging the patient’s vision. Other light sources cannot be focused as precisely as a laser due to refractive dispersion of different wavelengths. Similarly, laser surgery in the form of cutting or burning away tissue is made more accurate because laser output can be very precisely focused and is preferentially absorbed because of its single wavelength. Depending upon what part or layer of the retina needs repairing, the appropriate type of laser can be selected. For the repair of tears in the retina, a green argon laser is generally used. This light is absorbed well by tissues containing blood, so coagulation or “welding” of the tear can be done.
In dentistry, the use of lasers is rising. Lasers are most commonly used for surgery on the soft tissue of the mouth. They can be used to remove ulcers, stop bleeding, and reshape gum tissue. Their use in cutting into bones and teeth is not quite so common; here the erbium YAG (yttrium aluminum garnet) laser is used.
The massive combination of lasers shown in can be used to induce nuclear fusion, the energy source of the sun and hydrogen bombs. Since lasers can produce very high power in very brief pulses, they can be used to focus an enormous amount of energy on a small glass sphere containing fusion fuel. Not only does the incident energy increase the fuel temperature significantly so that fusion can occur, it also compresses the fuel to great density, enhancing the probability of fusion. The compression or implosion is caused by the momentum of the impinging laser photons.
Before being largely replaced by streaming services and other storage methods, music CDs and DVDs were extremely common. They store information digitally and have a much larger information-storage capacity than their predecessors, audio and video cassette tapes. An entire encyclopedia can be stored on a single CD. illustrates how the information is stored and read from the CD. Pits made in the CD by a laser can be tiny and very accurately spaced to record digital information. These are read by having an inexpensive solid-state infrared laser beam scatter from pits as the CD spins, revealing their digital pattern and the information encoded upon them.
Holograms, such as those in , are true three-dimensional images recorded on film by lasers. Holograms are used for amusement, decoration on novelty items and magazine covers, security on credit cards and driver’s licenses (a laser and other equipment is needed to reproduce them), and for serious three-dimensional information storage. You can see that a hologram is a true three-dimensional image, because objects change relative position in the image when viewed from different angles.
The name hologram means “entire picture” (from the Greek holo, as in holistic), because the image is three-dimensional. Holography is the process of producing holograms and, although they are recorded on photographic film, the process is quite different from normal photography. Holography uses light interference or wave optics, whereas normal photography uses geometric optics. shows one method of producing a hologram. Coherent light from a laser is split by a mirror, with part of the light illuminating the object. The remainder, called the reference beam, shines directly on a piece of film. Light scattered from the object interferes with the reference beam, producing constructive and destructive interference. As a result, the exposed film looks foggy, but close examination reveals a complicated interference pattern stored on it. Where the interference was constructive, the film (a negative actually) is darkened. Holography is sometimes called lensless photography, because it uses the wave characteristics of light as contrasted to normal photography, which uses geometric optics and so requires lenses.
Light falling on a hologram can form a three-dimensional image. The process is complicated in detail, but the basics can be understood as shown in , in which a laser of the same type that exposed the film is now used to illuminate it. The myriad tiny exposed regions of the film are dark and block the light, while less exposed regions allow light to pass. The film thus acts much like a collection of diffraction gratings with various spacings. Light passing through the hologram is diffracted in various directions, producing both real and virtual images of the object used to expose the film. The interference pattern is the same as that produced by the object. Moving your eye to various places in the interference pattern gives you different perspectives, just as looking directly at the object would. The image thus looks like the object and is three-dimensional like the object.
The hologram illustrated in is a transmission hologram. Holograms that are viewed with reflected light, such as the white light holograms on credit cards, are reflection holograms and are more common. White light holograms often appear a little blurry with rainbow edges, because the diffraction patterns of various colors of light are at slightly different locations due to their different wavelengths. Further uses of holography include all types of 3-D information storage, such as of statues in museums and engineering studies of structures and 3-D images of human organs. Invented in the late 1940s by Dennis Gabor (1900–1979), who won the 1971 Nobel Prize in Physics for his work, holography became far more practical with the development of the laser. Since lasers produce coherent single-wavelength light, their interference patterns are more pronounced. The precision is so great that it is even possible to record numerous holograms on a single piece of film by just changing the angle of the film for each successive image. This is how the holograms that move as you walk by them are produced—a kind of lensless movie.
In a similar way, in the medical field, holograms have allowed complete 3-D holographic displays of objects from a stack of images. Storing these images for future use is relatively easy. With the use of an endoscope, high-resolution 3-D holographic images of internal organs and tissues can be made.
### Test Prep for AP Courses
### Section Summary
1. An important atomic process is fluorescence, defined to be any process in which an atom or molecule is excited by absorbing a photon of a given energy and de-excited by emitting a photon of a lower energy.
2. Some states live much longer than others and are termed metastable.
3. Phosphorescence is the de-excitation of a metastable state.
4. Lasers produce coherent single-wavelength EM radiation by stimulated emission, in which a metastable state is stimulated to decay.
5. Lasing requires a population inversion, in which a majority of the atoms or molecules are in their metastable state.
### Conceptual Questions
### Problem Exercises
# Atomic Physics
## The Wave Nature of Matter Causes Quantization
### Learning Objectives
By the end of this section, you will be able to:
1. Explain Bohr’s model of the atom.
2. Define and describe quantization of angular momentum.
3. Calculate the angular momentum for an orbit of the atom.
4. Define and describe the wave-like properties of matter.
After visiting some of the applications of different aspects of atomic physics, we now return to the basic theory that was built upon Bohr’s atom. Einstein once said it was important to keep asking the questions we eventually teach children not to ask. Why is angular momentum quantized? You already know the answer. Electrons have wave-like properties, as de Broglie later proposed. They can exist only where they interfere constructively, and only certain orbits meet proper conditions, as we shall see in the next module.
Following Bohr’s initial work on the hydrogen atom, a decade was to pass before de Broglie proposed that matter has wave properties. The wave-like properties of matter were subsequently confirmed by observations of electron interference when scattered from crystals. Electrons can exist only in locations where they interfere constructively. How does this affect electrons in atomic orbits? When an electron is bound to an atom, its wavelength must fit into a small space, something like a standing wave on a string. (See .) Allowed orbits are those orbits in which an electron constructively interferes with itself. Not all orbits produce constructive interference. Thus only certain orbits are allowed—the orbits are quantized.
For a circular orbit, constructive interference occurs when the electron’s wavelength fits neatly into the circumference, so that wave crests always align with crests and wave troughs align with troughs, as shown in (b). More precisely, when an integral multiple of the electron’s wavelength equals the circumference of the orbit, constructive interference is obtained. In equation form, the condition for constructive interference and an allowed electron orbit is
$$n\lambda_{n} = 2\pi r_{n} \quad (n = 1, 2, 3, \ldots),$$
where $\lambda_{n}$ is the electron’s wavelength and $r_{n}$ is the radius of that circular orbit. The de Broglie wavelength is $\lambda = h/p = h/(mv)$, and so here $\lambda_{n} = h/(m_{e}v)$. Substituting this into the previous condition for constructive interference produces an interesting result:
$$\frac{nh}{m_{e}v} = 2\pi r_{n}.$$
Rearranging terms, and noting that $L = mvr$ for a circular orbit, we obtain the quantization of angular momentum as the condition for allowed orbits:
$$L = m_{e}vr_{n} = n\frac{h}{2\pi} \quad (n = 1, 2, 3, \ldots).$$
This is what Bohr was forced to hypothesize as the rule for allowed orbits, as stated earlier. We now realize that it is the condition for constructive interference of an electron in a circular orbit. illustrates this for two different values of $n$.
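As a hedged consistency check, we can verify numerically that an integral number of de Broglie wavelengths fits around each Bohr orbit. The orbital speed below is taken from the quantization condition itself, so exact agreement is expected; the point is only to see the bookkeeping work out.

```python
import math

h = 6.626e-34     # J*s, Planck's constant
m_e = 9.109e-31   # kg, electron mass
a_B = 0.529e-10   # m, Bohr radius

for n in (1, 2, 3):
    r_n = n**2 * a_B                          # orbit radius for hydrogen (Z = 1)
    v_n = n * h / (2 * math.pi * m_e * r_n)   # speed from L = m_e v r_n = n h / (2 pi)
    lam = h / (m_e * v_n)                     # de Broglie wavelength
    print(f"n={n}: circumference / wavelength = {2 * math.pi * r_n / lam:.3f}")
# Each ratio comes out equal to n: n wavelengths fit around the nth orbit.
```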
Because of the wave character of matter, the idea of well-defined orbits gives way to a model in which there is a cloud of probability, consistent with Heisenberg’s uncertainty principle. shows how this applies to the ground state of hydrogen. If you try to follow the electron in some well-defined orbit using a probe that has a small enough wavelength to get some details, you will instead knock the electron out of its orbit. Each measurement of the electron’s position will find it to be in a definite location somewhere near the nucleus. Repeated measurements reveal a cloud of probability like that in the figure, with each speck the location determined by a single measurement. There is not a well-defined, circular-orbit type of distribution. Nature again proves to be different on a small scale than on a macroscopic scale.
There are many examples in which the wave nature of matter causes quantization in bound systems such as the atom. Whenever a particle is confined or bound to a small space, its allowed wavelengths are those which fit into that space. For example, the particle in a box model describes a particle free to move in a small space surrounded by impenetrable barriers. This is true in blackbody radiators (atoms and molecules) as well as in atomic and molecular spectra. Various atoms and molecules will have different sets of electron orbits, depending on the size and complexity of the system. When a system is large, such as a grain of sand, the tiny particle waves in it can fit in so many ways that it becomes impossible to see that the allowed states are discrete. Thus the correspondence principle is satisfied. As systems become large, they gradually look less grainy, and quantization becomes less evident. Unbound systems (small or not), such as an electron freed from an atom, do not have quantized energies, since their wavelengths are not constrained to fit in a certain volume.
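The particle-in-a-box idea mentioned above can be made concrete with a short, hedged sketch: a particle confined to a length $L$ supports only standing waves with $\lambda_{n} = 2L/n$, giving energies $E_{n} = n^{2}h^{2}/(8mL^{2})$ (a standard result, not derived in this section). For an electron in an atom-sized box the level spacing is tens of eV; for a macroscopic grain it is immeasurably small, which is the correspondence principle at work. The masses and lengths below are illustrative.

```python
h = 6.626e-34    # J*s, Planck's constant
eV = 1.602e-19   # J per electron volt

def box_energy(n, m, L):
    """Energy level of a particle of mass m confined to a 1-D box of length L."""
    return n**2 * h**2 / (8 * m * L**2)

m_e = 9.109e-31  # kg, electron mass
print(f"electron, L = 0.1 nm: E_1 = {box_energy(1, m_e, 1e-10) / eV:.1f} eV")

m_grain = 1e-9   # kg, a tiny grain of sand (illustrative)
print(f"grain, L = 1 mm: E_1 = {box_energy(1, m_grain, 1e-3) / eV:.1e} eV")
# The grain's quantized levels are so closely spaced that quantization is unobservable.
```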
### Test Prep for AP Courses
### Section Summary
1. Quantization of orbital energy is caused by the wave nature of matter. Allowed orbits in atoms occur for constructive interference of electrons in the orbit, requiring an integral number of wavelengths to fit in an orbit’s circumference; that is, $n\lambda_n = 2\pi r_n$,
where $\lambda_n$ is the electron’s de Broglie wavelength.
2. Owing to the wave nature of electrons and the Heisenberg uncertainty principle, there are no well-defined orbits; rather, there are clouds of probability.
3. Bohr correctly proposed that the energy and radii of the orbits of electrons in atoms are quantized, with energy for transitions between orbits given by $\Delta E = hf = E_{\mathrm{i}} - E_{\mathrm{f}}$,
where $\Delta E$ is the change in energy between the initial and final orbits and $hf$ is the energy of an absorbed or emitted photon.
4. It is useful to plot orbit energies on a vertical graph called an energy-level diagram.
5. The allowed orbits are circular, Bohr proposed, and must have quantized orbital angular momentum given by $L = m_e v r_n = n\,\frac{h}{2\pi} \ (n = 1, 2, 3, \dots)$,
where $L$ is the angular momentum, $r_n$ is the radius of orbit $n$, and $h$ is Planck’s constant.
### Conceptual Questions
# Atomic Physics
## Patterns in Spectra Reveal More Quantization
### Learning Objectives
By the end of this section, you will be able to:
1. State and discuss the Zeeman effect.
2. Define orbital magnetic field.
3. Define orbital angular momentum.
4. Define space quantization.
High-resolution measurements of atomic and molecular spectra show that the spectral lines are even more complex than they first appear. In this section, we will see that this complexity has yielded important new information about electrons and their orbits in atoms.
In order to explore the substructure of atoms (and knowing that magnetic fields affect moving charges), the Dutch physicist Hendrik Lorentz (1853–1930) suggested that his student Pieter Zeeman (1865–1943) study how spectra might be affected by magnetic fields. What they found became known as the Zeeman effect, which involved spectral lines being split into two or more separate emission lines by an external magnetic field, as shown in . For their discoveries, Zeeman and Lorentz shared the 1902 Nobel Prize in Physics.
Zeeman splitting is complex. Some lines split into three lines, some into five, and so on. But one general feature is that the amount the split lines are separated is proportional to the applied field strength, indicating an interaction with a moving charge. The splitting means that the quantized energy of an orbit is affected by an external magnetic field, causing the orbit to have several discrete energies instead of one. Even without an external magnetic field, very precise measurements showed that spectral lines are doublets (split into two), apparently by magnetic fields within the atom itself.
Bohr’s theory of circular orbits is useful for visualizing how an electron’s orbit is affected by a magnetic field. The circular orbit forms a current loop, which creates a magnetic field of its own, as seen in . Note that the orbital magnetic field and the orbital angular momentum are along the same line. The external magnetic field and the orbital magnetic field interact; a torque is exerted to align them. A torque rotating a system through some angle does work so that there is energy associated with this interaction. Thus, orbits at different angles to the external magnetic field have different energies. What is remarkable is that the energies are quantized—the magnetic field splits the spectral lines into several discrete lines that have different energies. This means that only certain angles are allowed between the orbital angular momentum and the external field, as seen in .
We already know that the magnitude of angular momentum is quantized for electron orbits in atoms. The new insight is that the direction of the orbital angular momentum is also quantized. The fact that the orbital angular momentum can have only certain directions is called space quantization. Like many aspects of quantum mechanics, this quantization of direction is totally unexpected. On the macroscopic scale, orbital angular momentum, such as that of the moon around the earth, can have any magnitude and be in any direction.
Detailed treatment of space quantization began to explain some complexities of atomic spectra, but certain patterns seemed to be caused by something else. As mentioned, spectral lines are actually closely spaced doublets, a characteristic called fine structure, as shown in . The doublet changes when a magnetic field is applied, implying that whatever causes the doublet interacts with a magnetic field. In 1925, Sem Goudsmit and George Uhlenbeck, two Dutch physicists, successfully argued that electrons have properties analogous to a macroscopic charge spinning on its axis. Electrons, in fact, have an internal or intrinsic angular momentum called intrinsic spin $S$. Since electrons are charged, their intrinsic spin creates an intrinsic magnetic field $B_{\mathrm{int}}$, which interacts with their orbital magnetic field $B_{\mathrm{orb}}$. Furthermore, electron spin is quantized in magnitude and direction, analogous to the situation for orbital angular momentum. The spin of the electron can have only one magnitude, and its direction can be at only one of two angles relative to a magnetic field, as seen in . We refer to this as spin up or spin down for the electron. Each spin direction has a different energy; hence, spectroscopic lines are split into two. Spectral doublets are now understood as being due to electron spin.
These two new insights—that the direction of angular momentum, whether orbital or spin, is quantized, and that electrons have intrinsic spin—help to explain many of the complexities of atomic and molecular spectra. In magnetic resonance imaging, it is the way that the intrinsic magnetic field of hydrogen and biological atoms interact with an external field that underlies the diagnostic fundamentals.
### Section Summary
1. The Zeeman effect—the splitting of lines when a magnetic field is applied—is caused by other quantized entities in atoms.
2. Both the magnitude and direction of orbital angular momentum are quantized.
3. The same is true for the magnitude and direction of the intrinsic spin of electrons.
### Conceptual Questions
# Atomic Physics
## Quantum Numbers and Rules
### Learning Objectives
By the end of this section, you will be able to:
1. Define quantum number.
2. Calculate angle of angular momentum vector with an axis.
3. Define spin quantum number.
Physical characteristics that are quantized—such as energy, charge, and angular momentum—are of such importance that names and symbols are given to them. The values of quantized entities are expressed in terms of quantum numbers, and the rules governing them are of the utmost importance in determining what nature is and does. This section covers some of the more important quantum numbers and rules—all of which apply in chemistry, material science, and far beyond the realm of atomic physics, where they were first discovered. Once again, we see how physics makes discoveries which enable other fields to grow.
The energy states of bound systems are quantized, because the particle wavelength can fit into the bounds of the system in only certain ways. This was elaborated for the hydrogen atom, for which the allowed energies are expressed as $E_n \propto 1/n^2$, where $n = 1, 2, 3, \dots$. We define $n$ to be the principal quantum number that labels the basic states of a system. The lowest-energy state has $n = 1$, the first excited state has $n = 2$, and so on. Thus the allowed values for the principal quantum number are

$n = 1, 2, 3, \dots$

This is more than just a numbering scheme, since the energy of the system, such as the hydrogen atom, can be expressed as some function of $n$, as can other characteristics (such as the orbital radii of the hydrogen atom).
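For hydrogen specifically, the allowed energies are $E_n = -13.6\ \text{eV}/n^2$. A minimal Python sketch of this dependence on the principal quantum number alone follows; the function name is an arbitrary choice.

```python
def hydrogen_energy_eV(n):
    """Allowed hydrogen energies E_n = -13.6 eV / n**2."""
    return -13.6 / n**2

for n in range(1, 5):
    print(f"n = {n}: E = {hydrogen_energy_eV(n):6.2f} eV")
```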
The fact that the magnitude of angular momentum is quantized was first recognized by Bohr in relation to the hydrogen atom; it is now known to be true in general. With the development of quantum mechanics, it was found that the magnitude of angular momentum can have only the values

$L = \sqrt{l(l+1)}\,\frac{h}{2\pi} \quad (l = 0, 1, 2, \dots, n-1),$

where $l$ is defined to be the angular momentum quantum number. The rule for $l$ in atoms is given in the parentheses. Given $n$, the value of $l$ can be any integer from zero up to $n-1$. For example, if $n = 4$, then $l$ can be 0, 1, 2, or 3.
Note that for $n = 1$, $l$ can only be zero. This means that the ground-state angular momentum for hydrogen is actually zero, not $h/2\pi$ as Bohr proposed. The picture of circular orbits is not valid, because there would be angular momentum for any circular orbit. A more valid picture is the cloud of probability shown for the ground state of hydrogen in . The electron actually spends time in and near the nucleus. The reason the electron does not remain in the nucleus is related to Heisenberg’s uncertainty principle—the electron’s energy would have to be much too large to be confined to the small space of the nucleus. Now the first excited state of hydrogen has $n = 2$, so that $l$ can be either 0 or 1, according to the rule in . Similarly, for $n = 3$, $l$ can be 0, 1, or 2. It is often most convenient to state the value of $l$, a simple integer, rather than calculating the value of $L$ from . For example, for $l = 2$, we see that

$L = \sqrt{2(2+1)}\,\frac{h}{2\pi} = \sqrt{6}\,\frac{h}{2\pi} \approx 2.58 \times 10^{-34}\ \text{J}\cdot\text{s}.$

It is much simpler to state $l = 2$.
As recognized in the Zeeman effect, the direction of angular momentum is quantized. We now know this is true in all circumstances. It is found that the component of angular momentum along one direction in space, usually called the $z$-axis, can have only certain values of $L_z$. The direction in space must be related to something physical, such as the direction of the magnetic field at that location. This is an aspect of relativity. Direction has no meaning if there is nothing that varies with direction, as does magnetic force. The allowed values of $L_z$ are

$L_z = m_l\,\frac{h}{2\pi} \quad (m_l = -l, -l+1, \dots, -1, 0, 1, \dots, l-1, l),$

where $L_z$ is the $z$-component of the angular momentum and $m_l$ is the angular momentum projection quantum number. The rule in parentheses for the values of $m_l$ is that it can range from $-l$ to $l$ in steps of one. For example, if $l = 2$, then $m_l$ can have the five values –2, –1, 0, 1, and 2. Each $m_l$ corresponds to a different energy in the presence of a magnetic field, so that they are related to the splitting of spectral lines into discrete parts, as discussed in the preceding section. If the $z$-component of angular momentum can have only certain values, then the angular momentum can have only certain directions, as illustrated in .
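The two counting rules (for $l$ given $n$, and for $m_l$ given $l$) are easy to enumerate. The sketch below, assuming $n = 3$ purely for illustration, lists the allowed $l$ and $m_l$ values and the corresponding magnitudes $L = \sqrt{l(l+1)}\,h/2\pi$.

```python
import math

hbar = 1.055e-34   # h / (2*pi), in J*s

n = 3
for l in range(n):                        # l = 0, 1, ..., n - 1
    L = math.sqrt(l * (l + 1)) * hbar     # magnitude of orbital angular momentum
    m_l_values = list(range(-l, l + 1))   # m_l runs from -l to +l in steps of 1
    print(f"l = {l}: L = {L:.2e} J*s, allowed m_l = {m_l_values}")
```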
### Intrinsic Spin Angular Momentum Is Quantized in Magnitude and Direction
There are two more quantum numbers of immediate concern. Both were first discovered for electrons in conjunction with fine structure in atomic spectra. It is now well established that electrons and other fundamental particles have intrinsic spin, roughly analogous to a planet spinning on its axis. This spin is a fundamental characteristic of particles, and only one magnitude of intrinsic spin is allowed for a given type of particle. Intrinsic angular momentum is quantized independently of orbital angular momentum. Additionally, the direction of the spin is also quantized. It has been found that the magnitude of the intrinsic (internal) spin angular momentum, $S$, of an electron is given by

$S = \sqrt{s(s+1)}\,\frac{h}{2\pi} \quad (s = 1/2 \text{ for electrons}),$

where $s$ is defined to be the spin quantum number. This is very similar to the quantization of $L$ given in , except that the only value allowed for $s$ for electrons is 1/2.
The direction of intrinsic spin is quantized, just as is the direction of orbital angular momentum. The direction of spin angular momentum along one direction in space, again called the $z$-axis, can have only the values

$S_z = m_s\,\frac{h}{2\pi} \quad \left(m_s = -\tfrac{1}{2}, +\tfrac{1}{2}\right)$

for electrons. $S_z$ is the $z$-component of spin angular momentum and $m_s$ is the spin projection quantum number. For electrons, $s$ can only be 1/2, and $m_s$ can be either +1/2 or –1/2. Spin projection $m_s = +1/2$ is referred to as spin up, whereas $m_s = -1/2$ is called spin down. These are illustrated in .
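For concreteness, the following short sketch evaluates the single allowed spin magnitude $S = \sqrt{s(s+1)}\,h/2\pi$ with $s = 1/2$ and its two allowed projections $S_z = \pm\frac{1}{2}\,\frac{h}{2\pi}$.

```python
import math

hbar = 1.055e-34   # h / (2*pi), in J*s
s = 0.5            # spin quantum number for an electron

S = math.sqrt(s * (s + 1)) * hbar   # magnitude of intrinsic spin angular momentum
for m_s in (+0.5, -0.5):            # spin up / spin down
    S_z = m_s * hbar
    print(f"S = {S:.3e} J*s, m_s = {m_s:+.1f}: S_z = {S_z:+.3e} J*s")
```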
To summarize, the state of a system, such as the precise nature of an electron in an atom, is determined by its particular quantum numbers. These are expressed in the form $(n, l, m_l, s, m_s)$—see . For electrons in atoms, the principal quantum number can have the values $n = 1, 2, 3, \dots$. Once $n$ is known, the values of the angular momentum quantum number are limited to $l = 0, 1, 2, \dots, n-1$. For a given value of $l$, the angular momentum projection quantum number can have only the values $m_l = -l, -l+1, \dots, -1, 0, 1, \dots, l-1, l$. Electron spin is independent of $n$, $l$, and $m_l$, always having $s = 1/2$. The spin projection quantum number can have two values, $m_s = +1/2$ or $-1/2$.
shows several hydrogen states corresponding to different sets of quantum numbers. Note that these clouds of probability are the locations of electrons as determined by making repeated measurements—each measurement finds the electron in a definite location, with a greater chance of finding the electron in some places rather than others. With repeated measurements, the pattern of probability shown in the figure emerges. The clouds of probability do not look like nor do they correspond to classical orbits. The uncertainty principle actually prevents us and nature from knowing how the electron gets from one place to another, and so an orbit really does not exist as such. Nature on a small scale is again much different from that on the large scale.
We will see that the quantum numbers discussed in this section are valid for a broad range of particles and other systems, such as nuclei. Some quantum numbers, such as intrinsic spin, are related to fundamental classifications of subatomic particles, and they obey laws that will give us further insight into the substructure of matter and its interactions.
### Section Summary
1. Quantum numbers are used to express the allowed values of quantized entities. The principal quantum number $n$ labels the basic states of a system and is given by

$n = 1, 2, 3, \dots$

2. The magnitude of angular momentum is given by

$L = \sqrt{l(l+1)}\,\frac{h}{2\pi} \quad (l = 0, 1, 2, \dots, n-1),$

where $l$ is the angular momentum quantum number. The direction of angular momentum is quantized, in that its component along an axis defined by a magnetic field, called the $z$-axis, is given by

$L_z = m_l\,\frac{h}{2\pi} \quad (m_l = -l, -l+1, \dots, -1, 0, 1, \dots, l-1, l),$

where $L_z$ is the $z$-component of the angular momentum and $m_l$ is the angular momentum projection quantum number. Similarly, the electron’s intrinsic spin angular momentum $S$ is given by

$S = \sqrt{s(s+1)}\,\frac{h}{2\pi} \quad (s = 1/2 \text{ for electrons}),$

where $s$ is defined to be the spin quantum number. Finally, the direction of the electron’s spin along the $z$-axis is given by

$S_z = m_s\,\frac{h}{2\pi} \quad \left(m_s = -\tfrac{1}{2}, +\tfrac{1}{2}\right),$

where $S_z$ is the $z$-component of spin angular momentum and $m_s$ is the spin projection quantum number. Spin projection $m_s = +1/2$ is referred to as spin up, whereas $m_s = -1/2$ is called spin down. summarizes the atomic quantum numbers and their allowed values.
### Conceptual Questions
### Problem Exercises
# Atomic Physics
## The Pauli Exclusion Principle
### Learning Objectives
By the end of this section, you will be able to:
1. Define the composition of an atom along with its electrons, neutrons, and protons.
2. Explain the Pauli exclusion principle and its application to the atom.
3. Specify the shell and subshell symbols and their positions.
4. Define the position of electrons in different shells of an atom.
5. State the position of each element in the periodic table according to shell filling.
### Multiple-Electron Atoms
All atoms except hydrogen are multiple-electron atoms. The physical and chemical properties of elements are directly related to the number of electrons a neutral atom has. The periodic table of the elements groups elements with similar properties into columns. This systematic organization is related to the number of electrons in a neutral atom, called the atomic number, . We shall see in this section that the exclusion principle is key to the underlying explanations, and that it applies far beyond the realm of atomic physics.
In 1925, the Austrian physicist Wolfgang Pauli (see ) proposed the following rule: No two electrons can have the same set of quantum numbers. That is, no two electrons can be in the same state. This statement is known as the Pauli exclusion principle, because it excludes electrons from being in the same state. The Pauli exclusion principle is extremely powerful and very broadly applicable. It applies to any identical particles with half-integral intrinsic spin—that is, having $s = 1/2, 3/2, \dots$. Thus no two electrons can have the same set of quantum numbers.
Let us examine how the exclusion principle applies to electrons in atoms. The quantum numbers involved were defined in Quantum Numbers and Rules as $n$, $l$, $m_l$, $s$, and $m_s$. Since $s$ is always 1/2 for electrons, it is redundant to list $s$, and so we omit it and specify the state of an electron by a set of four numbers $(n, l, m_l, m_s)$. For example, specifying all four of these quantum numbers completely determines the state of an electron in an atom.

Since no two electrons can have the same set of quantum numbers, there are limits to how many of them can be in the same energy state. Note that $n$ determines the energy state in the absence of a magnetic field. So we first choose $n$, and then we see how many electrons can be in this energy state or energy level. Consider the $n = 1$ level, for example. The only value $l$ can have is 0 (see for a list of possible values once $n$ is known), and thus $m_l$ can only be 0. The spin projection $m_s$ can be either +1/2 or –1/2, and so there can be two electrons in the $n = 1$ state. One has quantum numbers $(1, 0, 0, +1/2)$, and the other has $(1, 0, 0, -1/2)$. illustrates that there can be one or two electrons having $n = 1$, but not three.
### Shells and Subshells
Because of the Pauli exclusion principle, only hydrogen and helium can have all of their electrons in the $n = 1$ state. Lithium (see the periodic table) has three electrons, and so one must be in the $n = 2$ level. This leads to the concept of shells and shell filling. As we progress up in the number of electrons, we go from hydrogen to helium, lithium, beryllium, boron, and so on, and we see that there are limits to the number of electrons for each value of $n$. Higher values of the shell correspond to higher energies, and they can allow more electrons because of the various combinations of $l$, $m_l$, and $m_s$ that are possible. Each value of the principal quantum number $n$ thus corresponds to an atomic shell into which a limited number of electrons can go. Shells and the number of electrons in them determine the physical and chemical properties of atoms, since it is the outermost electrons that interact most with anything outside the atom.

The probability clouds of electrons with the lowest value of $l$ are closest to the nucleus and, thus, more tightly bound. Thus when shells fill, they start with $l = 0$, progress to $l = 1$, and so on. Each value of $l$ thus corresponds to a subshell.
The table given below lists symbols traditionally used to denote shells and subshells.
To denote shells and subshells, we write $nl$ with a number for $n$ and a letter for $l$. For example, an electron in the $n = 1$ state must have $l = 0$, and it is denoted as a $1s$ electron. Two electrons in the $n = 1$ state is denoted as $1s^2$. Another example is an electron in the $n = 2$ state with $l = 1$, written as $2p$. The case of three electrons with these quantum numbers is written $2p^3$. This notation, called spectroscopic notation, is generalized as shown in .
Counting the number of possible combinations of quantum numbers allowed by the exclusion principle, we can determine how many electrons it takes to fill each subshell and shell.
The number of electrons that can be in a subshell depends entirely on the value of $l$. Once $l$ is known, there are a fixed number of values of $m_l$, each of which can have two values for $m_s$. First, since $m_l$ goes from $-l$ to $l$ in steps of 1, there are $2l + 1$ possibilities. This number is multiplied by 2, since each electron can be spin up or spin down. Thus the maximum number of electrons that can be in a subshell is $2(2l + 1)$.

For example, the $2s$ subshell in has a maximum of 2 electrons in it, since $2(2l + 1) = 2(0 + 1) = 2$ for this subshell. Similarly, the $2p$ subshell has a maximum of 6 electrons, since $2(2l + 1) = 2(2 + 1) = 6$. For a shell, the maximum number is the sum of what can fit in the subshells. Some algebra shows that the maximum number of electrons that can be in a shell is $2n^2$.

For example, for the first shell $n = 1$, and so $2n^2 = 2$. We have already seen that only two electrons can be in the $n = 1$ shell. Similarly, for the second shell, $n = 2$, and so $2n^2 = 8$. As found in , the total number of electrons in the $n = 2$ shell is 8.
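The counting argument above is straightforward to mechanize. The sketch below computes the subshell capacities $2(2l+1)$ and checks that they sum to $2n^2$ for each shell; the function names are illustrative.

```python
def subshell_capacity(l):
    """Maximum electrons in a subshell: two spin states per allowed m_l value."""
    return 2 * (2 * l + 1)

def shell_capacity(n):
    """Maximum electrons in a shell; equals 2 * n**2."""
    return sum(subshell_capacity(l) for l in range(n))

for n in range(1, 5):
    caps = [subshell_capacity(l) for l in range(n)]
    print(f"n = {n}: subshells {caps}, shell total = {shell_capacity(n)} (2n^2 = {2 * n**2})")
```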
### Shell Filling and the Periodic Table
shows electron configurations for the first 20 elements in the periodic table, starting with hydrogen and its single electron and ending with calcium. The Pauli exclusion principle determines the maximum number of electrons allowed in each shell and subshell. But the order in which the shells and subshells are filled is complicated because of the large numbers of interactions between electrons.
Examining the above table, you can see that as the number of electrons in an atom increases from 1 in hydrogen to 2 in helium and so on, the lowest-energy shell gets filled first—that is, the $n = 1$ shell fills first, and then the $n = 2$ shell begins to fill. Within a shell, the subshells fill starting with the lowest $l$, or with the $s$ subshell, then the $p$, and so on, usually until all subshells are filled. The first exception to this occurs for potassium, where the $4s$ subshell begins to fill before any electrons go into the $3d$ subshell. The next exception is not shown in ; it occurs for rubidium, where the $5s$ subshell starts to fill before the $4d$ subshell. The reason for these exceptions is that $l = 0$ electrons have probability clouds that penetrate closer to the nucleus and, thus, are more tightly bound (lower in energy).
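The simple filling order described here, lowest $n$ first and lowest $l$ within a shell, with the $4s$ exception, is often summarized by the empirical Madelung rule: fill subshells in order of increasing $n + l$, breaking ties in favor of smaller $n$. The sketch below is a hypothetical illustration of that rule for a few of the first 20 elements; it does not capture the further exceptions that occur in heavier elements.

```python
subshell_letter = "spdf"

# Order subshells by (n + l), breaking ties with the smaller n first (Madelung rule).
order = sorted(((n, l) for n in range(1, 5) for l in range(n)),
               key=lambda nl: (nl[0] + nl[1], nl[0]))

def configuration(Z):
    """Fill Z electrons into subshells in Madelung order (illustrative only)."""
    remaining, parts = Z, []
    for n, l in order:
        if remaining == 0:
            break
        fill = min(remaining, 2 * (2 * l + 1))   # subshell capacity 2(2l + 1)
        parts.append(f"{n}{subshell_letter[l]}{fill}")
        remaining -= fill
    return " ".join(parts)

for Z, symbol in [(1, "H"), (2, "He"), (10, "Ne"), (19, "K"), (20, "Ca")]:
    print(f"{symbol:2s} (Z={Z:2d}): {configuration(Z)}")
```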
shows the periodic table of the elements, through element 118. Of special interest are elements in the main groups, namely, those in the columns numbered 1, 2, 13, 14, 15, 16, 17, and 18.
The number of electrons in the outermost subshell determines the atom’s chemical properties, since it is these electrons that are farthest from the nucleus and thus interact most with other atoms. If the outermost subshell can accept or give up an electron easily, then the atom will be highly reactive chemically. Each group in the periodic table is characterized by its outermost electron configuration. Perhaps the most familiar is Group 18 (Group VIII), the noble gases (helium, neon, argon, etc.). These gases are all characterized by a filled outer subshell that is particularly stable. This means that they have large ionization energies and do not readily give up an electron. Furthermore, if they were to accept an extra electron, it would be in a significantly higher level and thus loosely bound. Chemical reactions often involve sharing electrons. Noble gases can be forced into unstable chemical compounds only under high pressure and temperature.
Group 17 (Group VII) contains the halogens, such as fluorine, chlorine, iodine, and bromine, each of which has one less electron than a neighboring noble gas. Each halogen has 5 $p$ electrons (a $p^5$ configuration), while the $p$ subshell can hold 6 electrons. This means the halogens have one vacancy in their outermost subshell. They thus readily accept an extra electron (it becomes tightly bound, closing the shell as in noble gases) and are highly reactive chemically. The halogens are also likely to form singly negative ions, such as $\text{Cl}^-$, fitting an extra electron into the vacancy in the outer subshell. In contrast, alkali metals, such as sodium and potassium, all have a single $s$ electron in their outermost subshell (an $s^1$ configuration) and are members of Group 1 (Group I). These elements easily give up their extra electron and are thus highly reactive chemically. As you might expect, they also tend to form singly positive ions, such as $\text{Na}^+$, by losing their loosely bound outermost electron. They are metals (conductors), because the loosely bound outer electron can move freely.
Of course, other groups are also of interest. Carbon, silicon, and germanium, for example, have similar chemistries and are in Group 14 (Group IV). Carbon, in particular, is extraordinary in its ability to form many types of bonds and to be part of long chains, such as organic molecules. The large group of what are called transition elements is characterized by the filling of the $d$ subshells and crossing of energy levels. Heavier groups, such as the lanthanide series, are more complex—their shells do not fill in simple order. But the groups recognized by chemists such as Mendeleev have an explanation in the substructure of atoms.
### Section Summary
1. The state of a system is completely described by a complete set of quantum numbers. This set is written as $(n, l, m_l, m_s)$.
2. The Pauli exclusion principle says that no two electrons can have the same set of quantum numbers; that is, no two electrons can be in the same state.
3. This exclusion limits the number of electrons in atomic shells and subshells. Each value of $n$ corresponds to a shell, and each value of $l$ corresponds to a subshell.
4. The maximum number of electrons that can be in a subshell is $2(2l + 1)$.
5. The maximum number of electrons that can be in a shell is $2n^2$.
### Conceptual Questions
### Problem Exercises
# Radioactivity and Nuclear Physics
## Connection for AP® Courses
In this chapter, students will explore radioactivity and nuclear physics. Students will learn about the structure and properties of a nucleus (Enduring Understanding 1.A, Essential Knowledge 1.A.3), supporting Big Idea 1. Students will also study the forces that govern the behavior of the nucleus, including the weak force and the strong force (Enduring Understanding 3.G). This supports Big Idea 3 by explaining that interactions can be described by forces, such as the strong force between nucleons holding the nucleus together.
Students will also learn the conservation laws associated with nuclear physics, such as conservation of energy (Enduring Understanding 5.B), conservation of charge (Enduring Understanding 5.C) and conservation of nucleon number (Enduring Understanding 5.G). Students will study the processes that can be described using conservation laws (Big Idea 5), such as radioactive decay, nuclear absorption and emission of nuclear energy, usually regulated by photons (Essential Knowledge 5.B.8). As part of the study of conservation laws, students will explore the consequences of charge conservation (Essential Knowledge 5.C.1) during radioactive decay and during interactions between nuclei (Essential Knowledge 5.C.2). Students will also learn how conservation of nucleon number determines which nuclear reactions can occur (Essential Knowledge 5.G.1). Students will also study types of nuclear radiation, radioactivity, and the binding energy of a nucleus.
This chapter also supports Big Idea 7 by exploring how probability can describe the behavior of quantum mechanical systems. Students will study the process of radioactive decay, which can be described by probability theory. Students will also explore examples demonstrating spontaneous radioactive decay as a probabilistic statistical process (Essential Knowledge 7.C.3), thus making a connection between modeling matter with a wave function and probabilistic description of the microscopic world (Enduring Understanding 7.C).
The content in this chapter supports:
Big Idea 1 Objects and systems have properties such as mass and charge. Systems may have internal structure.
Enduring Understanding 1.A The internal structure of a system determines many properties of the system.
Essential Knowledge 1.A.3 Nuclei have internal structures that determine their properties.
Big Idea 3 The interactions of an object with other objects can be described by forces.
Enduring Understanding 3.G Certain types of forces are considered fundamental.
Essential Knowledge 3.G.3 The strong force is exerted at nuclear scales and dominates the interactions of nucleons.
Big Idea 5 Changes that occur as a result of interactions are constrained by conservation laws.
Enduring Understanding 5.B The energy of a system is conserved.
Essential Knowledge 5.B.8 Energy transfer occurs when photons are absorbed or emitted, for example, by atoms or nuclei.
Enduring Understanding 5.C The electric charge of a system is conserved.
Essential Knowledge 5.C.1 Electric charge is conserved in nuclear and elementary particle reactions, even when elementary particles are produced or destroyed. Examples should include equations representing nuclear decay.
Essential Knowledge 5.C.2 The exchange of electric charges among a set of objects in a system conserves electric charge.
Enduring Understanding 5.G Nucleon number is conserved.
Essential Knowledge 5.G.1 The possible nuclear reactions are constrained by the law of conservation of nucleon number.
Big Idea 7 The mathematics of probability can be used to describe the behavior of complex systems and to interpret the behavior of quantum mechanical systems.
Enduring Understanding 7.C At the quantum scale, matter is described by a wave function, which leads to a probabilistic description of the microscopic world.
Essential Knowledge 7.C.3 The spontaneous radioactive decay of an individual nucleus is described by probability.
# Radioactivity and Nuclear Physics
## Nuclear Radioactivity
### Learning Objectives
By the end of this section, you will be able to:
1. Explain nuclear radiation.
2. Explain the types of radiation—alpha emission, beta emission, and gamma emission.
3. Explain the ionization of radiation in an atom.
4. Define the range of radiation.
The discovery and study of nuclear radioactivity quickly revealed evidence of revolutionary new physics. In addition, uses for nuclear radiation also emerged quickly—for example, people such as Ernest Rutherford used it to determine the size of the nucleus and devices were painted with radium-doped paint to make them glow in the dark (see ). We therefore begin our study of nuclear physics with the discovery and basic features of nuclear radioactivity.
### Discovery of Nuclear Radioactivity
In 1896, the French physicist Antoine Henri Becquerel (1852–1908) accidentally found that a uranium-rich mineral called pitchblende emits invisible, penetrating rays that can darken a photographic plate enclosed in an opaque envelope. The rays therefore carry energy; but amazingly, the pitchblende emits them continuously without any energy input. This is an apparent violation of the law of conservation of energy, one that we now understand is due to the conversion of a small amount of mass into energy, as related in Einstein’s famous equation $E = mc^2$. It was soon evident that Becquerel’s rays originate in the nuclei of the atoms and have other unique characteristics. The emission of these rays is called nuclear radioactivity or simply radioactivity. The rays themselves are called nuclear radiation. A nucleus that spontaneously destroys part of its mass to emit radiation is said to decay (a term also used to describe the emission of radiation by atoms in excited states). A substance or object that emits nuclear radiation is said to be radioactive.
Two types of experimental evidence imply that Becquerel’s rays originate deep in the heart (or nucleus) of an atom. First, the radiation is found to be associated with certain elements, such as uranium. Radiation does not vary with chemical state—that is, uranium is radioactive whether it is in the form of an element or compound. In addition, radiation does not vary with temperature, pressure, or ionization state of the uranium atom. Since all of these factors affect electrons in an atom, the radiation cannot come from electron transitions, as atomic spectra do. The huge energy emitted during each event is the second piece of evidence that the radiation cannot be atomic. Nuclear radiation has energies of the order of $10^6\ \text{eV}$ per event, which is much greater than the typical atomic energies (a few $\text{eV}$), such as that observed in spectra and chemical reactions, and more than ten times as high as the most energetic characteristic x rays.

Becquerel did not vigorously pursue his discovery for very long. In 1898, Marie Curie (1867–1934), then a graduate student married to the already well-known French physicist Pierre Curie (1859–1906), began her doctoral study of Becquerel’s rays. She and her husband soon discovered two new radioactive elements, which she named polonium (after her native land) and radium (because it radiates). These two new elements filled holes in the periodic table and, further, displayed much higher levels of radioactivity per gram of material than uranium. Over a period of four years, working under poor conditions and spending their own funds, the Curies processed more than a ton of uranium ore to isolate a gram of radium salt. Radium became highly sought after, because it was about two million times as radioactive as uranium. Curie’s radium salt glowed visibly from the radiation that took its toll on them and other unaware researchers. Shortly after completing her Ph.D., both Curies and Becquerel shared the 1903 Nobel Prize in physics for their work on radioactivity. Pierre was killed in a horse cart accident in 1906, but Marie continued her study of radioactivity for nearly 30 more years. Awarded the 1911 Nobel Prize in chemistry for her discovery of two new elements, she remains the only person to win Nobel Prizes in physics and chemistry. Marie’s radioactive fingerprints on some pages of her notebooks can still expose film, and she suffered from radiation-induced lesions. She died of leukemia likely caused by radiation, but she was active in research almost until her death in 1934. The following year, her daughter and son-in-law, Irene and Frederic Joliot-Curie, were awarded the Nobel Prize in chemistry for their discovery of artificially induced radiation, adding to a remarkable family legacy.
### Alpha, Beta, and Gamma
Research begun by people such as New Zealander Ernest Rutherford soon after the discovery of nuclear radiation indicated that different types of rays are emitted. Eventually, three types were distinguished and named alpha ($\alpha$), beta ($\beta$), and gamma ($\gamma$), because, like x-rays, their identities were initially unknown. shows what happens if the rays are passed through a magnetic field. The $\gamma$s are unaffected, while the $\alpha$s and $\beta$s are deflected in opposite directions, indicating the $\alpha$s are positive, the $\beta$s negative, and the $\gamma$s uncharged. Rutherford used both magnetic and electric fields to show that $\alpha$s have a positive charge twice the magnitude of an electron’s. In the process, he found the $\alpha$s’ charge-to-mass ratio to be several thousand times smaller than the electron’s. Later on, Rutherford collected $\alpha$s from a radioactive source and passed an electric discharge through them, obtaining the spectrum of recently discovered helium gas. Among many important discoveries made by Rutherford and his collaborators was the proof that $\alpha$ radiation is the emission of a helium nucleus. Rutherford won the Nobel Prize in chemistry in 1908 for his early work. He continued to make important contributions until his death in 1937.
Other researchers had already proved that $\beta$s are negative and have the same mass and same charge-to-mass ratio as the recently discovered electron. By 1902, it was recognized that $\beta$ radiation is the emission of an electron. Although $\beta$s are electrons, they do not exist in the nucleus before it decays and are not ejected atomic electrons—the electron is created in the nucleus at the instant of decay.
Since $\gamma$s remain unaffected by electric and magnetic fields, it is natural to think they might be photons. Evidence for this grew, but it was not until 1914 that this was proved by Rutherford and collaborators. By scattering $\gamma$ radiation from a crystal and observing interference, they demonstrated that $\gamma$ radiation is the emission of a high-energy photon by a nucleus. In fact, $\gamma$ radiation comes from the de-excitation of a nucleus, just as an x ray comes from the de-excitation of an atom. The names "$\gamma$ ray" and "x ray" identify the source of the radiation. At the same energy, $\gamma$ rays and x rays are otherwise identical.
### Ionization and Range
Two of the most important characteristics of $\alpha$, $\beta$, and $\gamma$ rays were recognized very early. All three types of nuclear radiation produce ionization in materials, but they penetrate different distances in materials—that is, they have different ranges. Let us examine why they have these characteristics and what are some of the consequences.
Like x rays, nuclear radiation in the form of $\alpha$s, $\beta$s, and $\gamma$s has enough energy per event to ionize atoms and molecules in any material. The energy emitted in various nuclear decays ranges from a few $\text{keV}$ to more than $10\ \text{MeV}$, while only a few $\text{eV}$ are needed to produce ionization. The effects of x rays and nuclear radiation on biological tissues and other materials, such as solid state electronics, are directly related to the ionization they produce. All of them, for example, can damage electronics or kill cancer cells. In addition, methods for detecting x rays and nuclear radiation are based on ionization, directly or indirectly. All of them can ionize the air between the plates of a capacitor, for example, causing it to discharge. This is the basis of inexpensive personal radiation monitors, such as pictured in . Apart from $\alpha$, $\beta$, and $\gamma$, there are other forms of nuclear radiation as well, and these also produce ionization with similar effects. We define ionizing radiation as any form of radiation that produces ionization whether nuclear in origin or not, since the effects and detection of the radiation are related to ionization.
The range of radiation is defined to be the distance it can travel through a material. Range is related to several factors, including the energy of the radiation, the material encountered, and the type of radiation (see ). The higher the energy, the greater the range, all other factors being the same. This makes good sense, since radiation loses its energy in materials primarily by producing ionization in them, and each ionization of an atom or a molecule requires energy that is removed from the radiation. The amount of ionization is, thus, directly proportional to the energy of the particle of radiation, as is its range.
Radiation can be absorbed or shielded by materials, such as the lead aprons dentists drape on us when taking x rays. Lead is a particularly effective shield compared with other materials, such as plastic or air. How does the range of radiation depend on material? Ionizing radiation interacts best with charged particles in a material. Since electrons have small masses, they most readily absorb the energy of the radiation in collisions. The greater the density of a material and, in particular, the greater the density of electrons within a material, the smaller the range of radiation.
Different types of radiation have different ranges when compared at the same energy and in the same material. Alphas have the shortest range, betas penetrate farther, and gammas have the greatest range. This is directly related to charge and speed of the particle or type of radiation. At a given energy, each $\alpha$, $\beta$, or $\gamma$ will produce the same number of ionizations in a material (each ionization requires a certain amount of energy on average). The more readily the particle produces ionization, the more quickly it will lose its energy. The effect of charge is as follows: The $\alpha$ has a charge of $+2q_e$, the $\beta$ has a charge of $-q_e$, and the $\gamma$ is uncharged. The electromagnetic force exerted by the $\alpha$ is thus twice as strong as that exerted by the $\beta$ and it is more likely to produce ionization. Although chargeless, the $\gamma$ does interact weakly because it is an electromagnetic wave, but it is less likely to produce ionization in any encounter. More quantitatively, the change in momentum $\Delta p$ given to a particle in the material is $\Delta p = F\,\Delta t$, where $F$ is the force the $\alpha$, $\beta$, or $\gamma$ exerts over a time $\Delta t$. The smaller the charge, the smaller is $F$ and the smaller is the momentum (and energy) lost. Since the speed of alphas is about 5% to 10% of the speed of light, classical (non-relativistic) formulas apply.
The speed at which they travel is the other major factor affecting the range of $\alpha$s, $\beta$s, and $\gamma$s. The faster they move, the less time they spend in the vicinity of an atom or a molecule, and the less likely they are to interact. Since $\alpha$s and $\beta$s are particles with mass (helium nuclei and electrons, respectively), their energy is kinetic, given classically by $\frac{1}{2}mv^2$. The mass of the $\beta$ particle is thousands of times less than that of the $\alpha$s, so that $\beta$s must travel much faster than $\alpha$s to have the same energy. Since $\beta$s move faster (most at relativistic speeds), they have less time to interact than $\alpha$s. Gamma rays are photons, which must travel at the speed of light. They are even less likely to interact than a $\beta$, since they spend even less time near a given atom (and they have no charge). The range of $\gamma$s is thus greater than the range of $\beta$s.
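To make the speed comparison concrete, the sketch below compares an $\alpha$ and a $\beta$ of the same kinetic energy (4 MeV is an arbitrary illustrative choice): the $\alpha$ is slow enough to treat classically, while the $\beta$ is relativistic, so its speed follows from $KE = (\gamma - 1)mc^2$.

```python
import math

c       = 2.998e8     # speed of light, m/s
MeV     = 1.602e-13   # joules per MeV
m_alpha = 6.645e-27   # mass of a helium-4 nucleus, kg
m_e     = 9.109e-31   # electron mass, kg

KE = 4.0 * MeV        # same kinetic energy for both particles

# Alpha: classical kinetic energy KE = (1/2) m v^2
v_alpha = math.sqrt(2 * KE / m_alpha)

# Beta: relativistic, KE = (gamma - 1) m c^2
gamma  = 1 + KE / (m_e * c**2)
v_beta = c * math.sqrt(1 - 1 / gamma**2)

print(f"alpha: v = {v_alpha / c:.3f} c")   # roughly 5% of c
print(f"beta : v = {v_beta / c:.3f} c")    # close to c
```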
Alpha radiation from radioactive sources has a range much less than a millimeter of biological tissues, usually not enough to even penetrate the dead layers of our skin. On the other hand, the same $\alpha$ radiation can penetrate only a few centimeters of air, so mere distance from a source prevents $\alpha$ radiation from reaching us. This makes $\alpha$ radiation relatively safe for our body compared to $\beta$ and $\gamma$ radiation. Typical $\beta$ radiation can penetrate a few millimeters of tissue or about a meter of air. Beta radiation is thus hazardous even when not ingested. The range of $\beta$s in lead is about a millimeter, and so it is easy to store $\beta$ sources in lead radiation-proof containers. Gamma rays have a much greater range than either $\alpha$s or $\beta$s. In fact, if a given thickness of material, like a lead brick, absorbs 90% of the $\gamma$s, then a second lead brick will only absorb 90% of what got through the first. Thus, $\gamma$s do not have a well-defined range; we can only cut down the amount that gets through. Typically, $\gamma$s can penetrate many meters of air, go right through our bodies, and are effectively shielded (that is, reduced in intensity to acceptable levels) by many centimeters of lead. One benefit of $\gamma$s is that they can be used as radioactive tracers (see ).
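The statement that each lead brick removes the same fraction of whatever $\gamma$ intensity reaches it is just repeated multiplication, as the sketch below shows; the 90% absorption per brick is the illustrative figure used above, not a measured value for any particular brick or energy.

```python
# Each identical brick transmits the same fraction of whatever reaches it.
transmission_per_brick = 0.10   # illustrative: 90% absorbed per brick

intensity = 1.0                 # relative intensity entering the first brick
for brick in range(1, 5):
    intensity *= transmission_per_brick
    print(f"after {brick} brick(s): {intensity:.4%} of the original intensity remains")
```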
### Test Prep for AP Courses
### Section Summary
1. Some nuclei are radioactive—they spontaneously decay destroying some part of their mass and emitting energetic rays, a process called nuclear radioactivity.
2. Nuclear radiation, like x rays, is ionizing radiation, because energy sufficient to ionize matter is emitted in each decay.
3. The range (or distance traveled in a material) of ionizing radiation is directly related to the charge of the emitted particle and its energy, with greater-charge and lower-energy particles having the shortest ranges.
4. Radiation detectors are based directly or indirectly upon the ionization created by radiation, as are the effects of radiation on living and inert materials.
### Conceptual Questions
# Radioactivity and Nuclear Physics
## Radiation Detection and Detectors
### Learning Objectives
By the end of this section, you will be able to:
1. Explain the working principle of a Geiger tube.
2. Define and discuss radiation detectors.
It is well known that ionizing radiation affects us but does not trigger nerve impulses. Newspapers carry stories about unsuspecting victims of radiation poisoning who fall ill with radiation sickness, such as burns and blood count changes, but who never felt the radiation directly. This makes the detection of radiation by instruments more than an important research tool. This section is a brief overview of radiation detection and some of its applications.
### Human Application
The first direct detection of radiation was Becquerel’s fogged photographic plate. Photographic film is still the most common detector of ionizing radiation, being used routinely in medical and dental x rays. Nuclear radiation is also captured on film, such as seen in . The mechanism for film exposure by ionizing radiation is similar to that by photons. A quantum of energy interacts with the emulsion and alters it chemically, thus exposing the film. The quantum can come from an $\alpha$ particle, a $\beta$ particle, or a $\gamma$ photon, provided it has more than the few eV of energy needed to induce the chemical change (as does all ionizing radiation). The process is not 100% efficient, since not all incident radiation interacts and not all interactions produce the chemical change. The amount of film darkening is related to exposure, but the darkening also depends on the type of radiation, so that absorbers and other devices must be used to obtain energy, charge, and particle-identification information.
Another very common radiation detector is the Geiger tube. The clicking and buzzing sound we hear in dramatizations and documentaries, as well as in our own physics labs, is usually an audio output of events detected by a Geiger counter. These relatively inexpensive radiation detectors are based on the simple and sturdy Geiger tube, shown schematically in (b). A conducting cylinder with a wire along its axis is filled with an insulating gas so that a voltage applied between the cylinder and wire produces almost no current. Ionizing radiation passing through the tube produces free ion pairs (each pair consisting of one positively charged particle and one negatively charged particle) that are attracted to the wire and cylinder, forming a current that is detected as a count. The word count implies that there is no information on energy, charge, or type of radiation with a simple Geiger counter. They do not detect every particle, since some radiation can pass through without producing enough ionization to be detected. However, Geiger counters are very useful in producing a prompt output that reveals the existence and relative intensity of ionizing radiation.
Another radiation detection method records light produced when radiation interacts with materials. The energy of the radiation is sufficient to excite atoms in a material that may fluoresce, such as the phosphor used by Rutherford’s group. Materials called scintillators use a more complex collaborative process to convert radiation energy into light. Scintillators may be liquid or solid, and they can be very efficient. Their light output can provide information about the energy, charge, and type of radiation. Scintillator light flashes are very brief in duration, enabling the detection of a huge number of particles in short periods of time. Scintillator detectors are used in a variety of research and diagnostic applications. Among these are the detection by satellite-mounted equipment of the radiation from distant galaxies, the analysis of radiation from a person indicating body burdens, and the detection of exotic particles in accelerator laboratories.
Light from a scintillator is converted into electrical signals by devices such as the photomultiplier tube shown schematically in . These tubes are based on the photoelectric effect, which is multiplied in stages into a cascade of electrons, hence the name photomultiplier. Light entering the photomultiplier strikes a metal plate, ejecting an electron that is attracted by a positive potential difference to the next plate, giving it enough energy to eject two or more electrons, and so on. The final output current can be made proportional to the energy of the light entering the tube, which is in turn proportional to the energy deposited in the scintillator. Very sophisticated information can be obtained with scintillators, including energy, charge, particle identification, direction of motion, and so on.
Solid-state radiation detectors convert ionization produced in a semiconductor (like those found in computer chips) directly into an electrical signal. Semiconductors can be constructed that do not conduct current in one particular direction. When a voltage is applied in that direction, current flows only when ionization is produced by radiation, similar to what happens in a Geiger tube. Further, the amount of current in a solid-state detector is closely related to the energy deposited and, since the detector is solid, it can have a high efficiency (since ionizing radiation is stopped in a shorter distance in solids, fewer particles escape detection). As with scintillators, very sophisticated information can be obtained from solid-state detectors.
### Section Summary
1. Radiation detectors are based directly or indirectly upon the ionization created by radiation, as are the effects of radiation on living and inert materials.
### Conceptual Questions
### Problems & Exercises
# Radioactivity and Nuclear Physics
## Substructure of the Nucleus
### Learning Objectives
By the end of this section, you will be able to:
1. Define and discuss the nucleus in an atom.
2. Define atomic number.
3. Define and discuss isotopes.
4. Calculate the density of the nucleus.
5. Explain nuclear force.
What is inside the nucleus? Why are some nuclei stable while others decay? (See .) Why are there different types of decay ($\alpha$, $\beta$, and $\gamma$)? Why are nuclear decay energies so large? Pursuing natural questions like these has led to far more fundamental discoveries than you might imagine.
We have already identified protons as the particles that carry positive charge in the nuclei. However, there are actually two types of particles in the nuclei—the proton and the neutron, referred to collectively as nucleons, the constituents of nuclei. As its name implies, the neutron is a neutral particle ($q = 0$) that has nearly the same mass and intrinsic spin as the proton. compares the masses of protons, neutrons, and electrons. Note how close the proton and neutron masses are, but the neutron is slightly more massive once you look past the third digit. Both nucleons are much more massive than an electron. In fact, $m_p \approx 1836\,m_e$ (as noted in Medical Applications of Nuclear Physics) and $m_n \approx 1839\,m_e$.
also gives masses in terms of mass units that are more convenient than kilograms on the atomic and nuclear scale. The first of these is the unified atomic mass unit (u), defined as

$1\ \text{u} = 1.6605 \times 10^{-27}\ \text{kg}.$

This unit is defined so that a neutral carbon-12 atom has a mass of exactly 12 u. Masses are also expressed in units of $\text{MeV}/c^2$. These units are very convenient when considering the conversion of mass into energy (and vice versa), as is so prominent in nuclear processes. Using $E = mc^2$ with masses in units of $\text{MeV}/c^2$, we find that $c^2$ cancels and $E$ comes out conveniently in MeV. For example, if the rest mass of a proton is converted entirely into energy, then

$E = mc^2 = (938.27\ \text{MeV}/c^2)\,c^2 = 938.27\ \text{MeV}.$

It is useful to note that 1 u of mass converted to energy produces 931.5 MeV, or

$1\ \text{u} = 931.5\ \text{MeV}/c^2.$
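These conversions are easy to reproduce. The sketch below evaluates $E = mc^2$ in MeV for 1 u and for the proton and neutron rest masses, using standard values of the constants.

```python
c        = 2.998e8      # speed of light, m/s
MeV_in_J = 1.602e-13    # joules per MeV

def rest_energy_MeV(mass_kg):
    """Rest energy E = m c^2, expressed in MeV."""
    return mass_kg * c**2 / MeV_in_J

print(f"1 u     -> {rest_energy_MeV(1.6605e-27):.1f} MeV")   # about 931.5 MeV
print(f"proton  -> {rest_energy_MeV(1.6726e-27):.1f} MeV")   # about 938 MeV
print(f"neutron -> {rest_energy_MeV(1.6749e-27):.1f} MeV")   # about 940 MeV
```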
All properties of a nucleus are determined by the number of protons and neutrons it has. A specific combination of protons and neutrons is called a nuclide and is a unique nucleus. The following notation is used to represent a particular nuclide:

$^{A}_{Z}\text{X}_{N},$

where the symbols $A$, $\text{X}$, $Z$, and $N$ are defined as follows: The number of protons in a nucleus is the atomic number $Z$, as defined in Medical Applications of Nuclear Physics. X is the symbol for the element, such as Ca for calcium. However, once $Z$ is known, the element is known; hence, $Z$ and X are redundant. For example, $Z = 20$ is always calcium, and calcium always has $Z = 20$. $N$ is the number of neutrons in a nucleus. In the notation for a nuclide, the subscript $N$ is usually omitted. The symbol $A$ is defined as the number of nucleons or the total number of protons and neutrons,

$A = N + Z,$

where $A$ is also called the mass number. This name for $A$ is logical; the mass of an atom is nearly equal to the mass of its nucleus, since electrons have so little mass. The mass of the nucleus turns out to be nearly equal to the sum of the masses of the protons and neutrons in it, which is proportional to $A$. In this context, it is particularly convenient to express masses in units of u. Both protons and neutrons have masses close to 1 u, and so the mass of an atom is close to $A$ u. For example, in an oxygen nucleus with eight protons and eight neutrons, $A = 16$, and its mass is 16 u. As noticed, the unified atomic mass unit is defined so that a neutral carbon atom (actually a $^{12}\text{C}$ atom) has a mass of exactly 12 u. Carbon was chosen as the standard, partly because of its importance in organic chemistry (see Appendix A).
Let us look at a few examples of nuclides expressed in the $^{A}_{Z}\text{X}_{N}$ notation. The nucleus of the simplest atom, hydrogen, is a single proton, or $^{1}_{1}\text{H}_{0}$ (the zero for no neutrons is often omitted). To check this symbol, refer to the periodic table—you see that the atomic number $Z$ of hydrogen is 1. Since you are given that there are no neutrons, the mass number $A$ is also 1. Suppose you are told that the helium nucleus or $\alpha$ particle has two protons and two neutrons. You can then see that it is written $^{4}_{2}\text{He}_{2}$. There is a scarce form of hydrogen found in nature called deuterium; its nucleus has one proton and one neutron and, hence, twice the mass of common hydrogen. The symbol for deuterium is, thus, $^{2}_{1}\text{H}_{1}$ (sometimes $\text{D}$ is used, as for deuterated water $\text{D}_2\text{O}$). An even rarer—and radioactive—form of hydrogen is called tritium, since it has a single proton and two neutrons, and it is written $^{3}_{1}\text{H}_{2}$. These three varieties of hydrogen have nearly identical chemistries, but the nuclei differ greatly in mass, stability, and other characteristics. Nuclei (such as those of hydrogen) having the same $Z$ and different $N$s are defined to be isotopes of the same element.
There is some redundancy in the symbols $A$, $\text{X}$, $Z$, and $N$. If the element $\text{X}$ is known, then $Z$ can be found in a periodic table and is always the same for a given element. If both $A$ and $\text{X}$ are known, then $N$ can also be determined (first find $Z$; then, $N = A - Z$). Thus the simpler notation for nuclides is

$^{A}\text{X},$

which is sufficient and is most commonly used. For example, in this simpler notation, the three isotopes of hydrogen are $^{1}\text{H}$, $^{2}\text{H}$, and $^{3}\text{H}$, while the $\alpha$ particle is $^{4}\text{He}$. We read this backward, saying helium-4 for $^{4}\text{He}$, or uranium-238 for $^{238}\text{U}$. So for $^{238}\text{U}$, should we need to know, we can determine that $Z = 92$ for uranium from the periodic table, and, thus, $N = 238 - 92 = 146$.
A variety of experiments indicate that a nucleus behaves something like a tightly packed ball of nucleons, as illustrated in . These nucleons have large kinetic energies and, thus, move rapidly in very close contact. Nucleons can be separated by a large force, such as in a collision with another nucleus, but resist strongly being pushed closer together. The most compelling evidence that nucleons are closely packed in a nucleus is that the radius of a nucleus, $r$, is found to be given approximately by

$r = r_0 A^{1/3},$

where $r_0 = 1.2\ \text{fm}$ and $A$ is the mass number of the nucleus. Note that $r^3 \propto A$. Since many nuclei are spherical, and the volume of a sphere is $V = \frac{4}{3}\pi r^3$, we see that $V \propto A$—that is, the volume of a nucleus is proportional to the number of nucleons in it. This is what would happen if you pack nucleons so closely that there is no empty space between them.
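One consequence worth checking numerically is that, because $V \propto A$, the density of nuclear matter comes out essentially the same for all nuclei (roughly $2 \times 10^{17}\ \text{kg/m}^3$). The sketch below uses $r_0 = 1.2\ \text{fm}$ and approximates each nucleon's mass as 1 u; the chosen mass numbers are arbitrary examples.

```python
import math

r0 = 1.2e-15      # m (1.2 fm), typical value used in r = r0 * A**(1/3)
u  = 1.6605e-27   # unified atomic mass unit, kg

def nuclear_radius(A):
    return r0 * A ** (1 / 3)

def nuclear_density(A):
    volume = (4 / 3) * math.pi * nuclear_radius(A) ** 3
    return A * u / volume   # approximate each nucleon's mass as 1 u

for A in (4, 56, 238):
    print(f"A = {A:3d}: r = {nuclear_radius(A):.2e} m, "
          f"density = {nuclear_density(A):.2e} kg/m^3")
```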
Nucleons are held together by nuclear forces and resist both being pulled apart and pushed inside one another. The volume of the nucleus is the sum of the volumes of the nucleons in it, here shown in different colors to represent protons and neutrons.
### Nuclear Forces and Stability
What forces hold a nucleus together? The nucleus is very small and its protons, being positive, exert tremendous repulsive forces on one another. (The Coulomb force increases as charges get closer, since it is proportional to $1/r^2$, even at the tiny distances found in nuclei.) The answer is that two previously unknown forces hold the nucleus together and make it into a tightly packed ball of nucleons. These forces are called the weak and strong nuclear forces. Nuclear forces are so short ranged that they fall to zero strength when nucleons are separated by only a few fm. However, like glue, they are strongly attractive when the nucleons get close to one another. The strong nuclear force is about 100 times more attractive than the repulsive EM force, easily holding the nucleons together. Nuclear forces become extremely repulsive if the nucleons get too close, making nucleons strongly resist being pushed inside one another, something like ball bearings.
The fact that nuclear forces are very strong is responsible for the very large energies emitted in nuclear decay. During decay, the forces do work, and since work is force times the distance ($W = Fd$), a large force can result in a large emitted energy. In fact, we know that there are two distinct nuclear forces because of the different types of nuclear decay—the strong nuclear force is responsible for $\alpha$ decay, while the weak nuclear force is responsible for $\beta$ decay.
The many stable and unstable nuclei we have explored, and the hundreds we have not discussed, can be arranged in a table called the chart of the nuclides, a simplified version of which is shown in . Nuclides are located on a plot of $N$ versus $Z$. Examination of a detailed chart of the nuclides reveals patterns in the characteristics of nuclei, such as stability, abundance, and types of decay, analogous to but more complex than the systematics in the periodic table of the elements.
In principle, a nucleus can have any combination of protons and neutrons, but shows a definite pattern for those that are stable. For low-mass nuclei, there is a strong tendency for $N$ and $Z$ to be nearly equal. This means that the nuclear force is more attractive when $N = Z$. More detailed examination reveals greater stability when $N$ and $Z$ are even numbers—nuclear forces are more attractive when neutrons and protons are in pairs. For increasingly higher masses, there are progressively more neutrons than protons in stable nuclei. This is due to the ever-growing repulsion between protons. Since nuclear forces are short ranged, and the Coulomb force is long ranged, an excess of neutrons keeps the protons a little farther apart, reducing Coulomb repulsion. Decay modes of nuclides out of the region of stability consistently produce nuclides closer to the region of stability. There are more stable nuclei having certain numbers of protons and neutrons, called magic numbers. Magic numbers indicate a shell structure for the nucleus in which closed shells are more stable. Nuclear shell theory has been very successful in explaining nuclear energy levels, nuclear decay, and the greater stability of nuclei with closed shells. We have been producing ever-heavier transuranic elements since the early 1940s, and we have now produced the element with $Z = 118$. There are theoretical predictions of an island of relative stability for nuclei with such high $Z$s.
### Test Prep for AP Courses
### Section Summary
1. Two particles, both called nucleons, are found inside nuclei. The two types of nucleons are protons and neutrons; they are very similar, except that the proton is positively charged while the neutron is neutral. Some of their characteristics are given in and compared with those of the electron. A mass unit convenient to atomic and nuclear processes is the unified atomic mass unit (u), defined to be one-twelfth the mass of a carbon-12 atom, so that $1\ \text{u} = 1.6605 \times 10^{-27}\ \text{kg}$.
2. A nuclide is a specific combination of protons and neutrons, denoted by $^{A}_{Z}\text{X}_{N}$, where $Z$ is the number of protons or atomic number, X is the symbol for the element, $N$ is the number of neutrons, and $A$ is the mass number or the total number of protons and neutrons, $A = N + Z$.
3. Nuclides having the same $Z$ but different $N$ are isotopes of the same element.
4. The radius of a nucleus, $r$, is approximately $r = r_0 A^{1/3}$, where $r_0 = 1.2\ \text{fm}$. Nuclear volumes are proportional to $A$. There are two nuclear forces, the weak and the strong. Systematics in nuclear stability seen on the chart of the nuclides indicate that there are shell closures in nuclei for values of $Z$ and $N$ equal to the magic numbers, which correspond to highly stable nuclei.
### Conceptual Questions
### Problems & Exercises
# Radioactivity and Nuclear Physics
## Nuclear Decay and Conservation Laws
### Learning Objectives
By the end of this section, you will be able to:
1. Define and discuss nuclear decay.
2. State the conservation laws.
3. Explain parent and daughter nucleus.
4. Calculate the energy emitted during nuclear decay.
Nuclear decay has provided an amazing window into the realm of the very small. Nuclear decay gave the first indication of the connection between mass and energy, and it revealed the existence of two of the four basic forces in nature. In this section, we explore the major modes of nuclear decay; and, like those who first explored them, we will discover evidence of previously unknown particles and conservation laws.
Some nuclides are stable, apparently living forever. Unstable nuclides decay (that is, they are radioactive), eventually producing a stable nuclide after many decays. We call the original nuclide the parent and its decay products the daughters. Some radioactive nuclides decay in a single step to a stable nucleus. For example, is unstable and decays directly to , which is stable. Others, such as , decay to another unstable nuclide, resulting in a decay series in which each subsequent nuclide decays until a stable nuclide is finally produced. The decay series that starts from is of particular interest, since it produces the radioactive isotopes and , which the Curies first discovered (see ). Radon gas is also produced ( in the series), an increasingly recognized naturally occurring hazard. Since radon is a noble gas, it emanates from materials, such as soil, containing even trace amounts of and can be inhaled. The decay of radon and its daughters produces internal damage. The decay series ends with , a stable isotope of lead.
Note that the daughters of decay shown in always have two fewer protons and two fewer neutrons than the parent. This seems reasonable, since we know that decay is the emission of a nucleus, which has two protons and two neutrons. The daughters of decay have one less neutron and one more proton than their parent. Beta decay is a little more subtle, as we shall see. No decays are shown in the figure, because they do not produce a daughter that differs from the parent.
### Alpha Decay
In alpha decay, a nucleus simply breaks away from the parent nucleus, leaving a daughter with two fewer protons and two fewer neutrons than the parent (see ). One example of decay is shown in for . Another nuclide that undergoes decay is The decay equations for these two nuclides are
and
If you examine the periodic table of the elements, you will find that Th has $Z = 90$, two fewer than U, which has $Z = 92$. Similarly, in the second decay equation, we see that U has two fewer protons than Pu, which has $Z = 94$. The general rule for $\alpha$ decay is best written in the $^{A}_{Z}\text{X}_{N}$ notation. If a certain nuclide is known to $\alpha$ decay (generally this information must be looked up in a table of isotopes, such as in Appendix B), its $\alpha$ decay equation is

$$^{A}_{Z}\text{X}_{N} \rightarrow\ ^{A-4}_{Z-2}\text{Y}_{N-2} +\ ^{4}_{2}\text{He}_{2},$$
where Y is the nuclide that has two fewer protons than X, such as Th having two fewer than U. So if you were told that decays and were asked to write the complete decay equation, you would first look up which element has two fewer protons (an atomic number two lower) and find that this is uranium. Then since four nucleons have broken away from the original 239, its atomic mass would be 235.
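The bookkeeping in this rule is easy to automate. The following Python sketch (an illustration added here, not part of the original text) applies the $Z \rightarrow Z-2$, $A \rightarrow A-4$ rule and looks up the daughter's symbol in a small, hypothetical excerpt of the periodic table; a complete lookup table would of course include every element.

```python
# Minimal sketch (not from the text): apply the alpha-decay rule Z -> Z-2, A -> A-4
# and look up the daughter's chemical symbol. SYMBOLS is only a tiny excerpt of the
# periodic table, included for illustration.
SYMBOLS = {88: "Ra", 90: "Th", 92: "U", 94: "Pu"}

def alpha_daughter(Z, A):
    """Return (Z, A, symbol) of the daughter nuclide after alpha decay."""
    Zd, Ad = Z - 2, A - 4          # the alpha particle carries off 2 protons and 2 neutrons
    return Zd, Ad, SYMBOLS.get(Zd, "?")

print(alpha_daughter(94, 239))     # Pu-239 -> (92, 235, 'U'), i.e. the daughter is U-235
```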
It is instructive to examine conservation laws related to decay. You can see from the equation that total charge is conserved. Linear and angular momentum are conserved, too. Although conserved angular momentum is not of great consequence in this type of decay, conservation of linear momentum has interesting consequences. If the nucleus is at rest when it decays, its momentum is zero. In that case, the fragments must fly in opposite directions with equal-magnitude momenta so that total momentum remains zero. This results in the particle carrying away most of the energy, as a bullet from a heavy rifle carries away most of the energy of the powder burned to shoot it. Total mass–energy is also conserved: the energy produced in the decay comes from conversion of a fraction of the original mass. As discussed in Atomic Physics, the general relationship is

$$E = (\Delta m)c^2.$$

Here, $E$ is the nuclear reaction energy (the reaction can be nuclear decay or any other reaction), and $\Delta m$ is the difference in mass between initial and final products. When the final products have less total mass, $\Delta m$ is positive, and the reaction releases energy (is exothermic). When the products have greater total mass, the reaction is endothermic ($\Delta m$ is negative) and must be induced with an energy input. For decay to be spontaneous, the decay products must have smaller mass than the parent.
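As a rough numerical check of this idea, the sketch below estimates the energy released in the $\alpha$ decay of plutonium-239 to uranium-235 using $E = (\Delta m)c^2$ with $1\ \text{u} \leftrightarrow 931.5\ \text{MeV}$. The atomic masses are typical tabulated values quoted here for illustration; verify them against Appendix A before relying on the result.

```python
# Sketch: energy released in the alpha decay Pu-239 -> U-235 + He-4.
# Masses below are assumed tabulated values (check Appendix A); all in unified atomic mass units.
U_TO_MEV = 931.494        # c^2 expressed in MeV per atomic mass unit

m_parent   = 239.052163   # Pu-239 atom
m_daughter = 235.043930   # U-235 atom
m_alpha    = 4.002602     # He-4 atom

delta_m = m_parent - (m_daughter + m_alpha)   # mass destroyed in the decay
E = delta_m * U_TO_MEV
print(f"E = {E:.2f} MeV")  # about 5.2 MeV, positive, so the decay is spontaneous
```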
### Beta Decay
There are actually three types of beta decay. The first discovered was “ordinary” beta decay and is called $\beta^-$ decay or electron emission. The symbol $\beta^-$ represents an electron emitted in nuclear beta decay. Cobalt-60 is a nuclide that $\beta^-$ decays in the following manner:

$$^{60}_{27}\text{Co}_{33} \rightarrow\ ^{60}_{28}\text{Ni}_{32} + \beta^- + \bar{\nu}_e.$$
The neutrino is a particle emitted in beta decay that was unanticipated and is of fundamental importance. The neutrino was not even proposed in theory until more than 20 years after beta decay was known to involve electron emissions. Neutrinos are so difficult to detect that the first direct evidence of them was not obtained until 1953. Neutrinos are nearly massless, have no charge, and do not interact with nucleons via the strong nuclear force. Traveling approximately at the speed of light, they have little time to affect any nucleus they encounter. Because they have no charge (and they are not EM waves), they do not interact through the EM force. They do interact via the relatively weak and very short-range weak nuclear force. Consequently, neutrinos escape almost any detector and penetrate almost any shielding. However, neutrinos do carry energy, angular momentum (they are fermions with half-integral spin), and linear momentum away from a beta decay. When accurate measurements of beta decay were made, it became apparent that energy, angular momentum, and linear momentum were not accounted for by the daughter nucleus and electron alone. Either a previously unsuspected particle was carrying them away, or three conservation laws were being violated. Wolfgang Pauli made a formal proposal for the existence of neutrinos in 1930. The Italian-born American physicist Enrico Fermi (1901–1954) gave neutrinos their name, meaning little neutral ones, when he developed a sophisticated theory of beta decay (see ). Part of Fermi’s theory was the identification of the weak nuclear force as being distinct from the strong nuclear force and in fact responsible for beta decay. Chinese-born physicist Chien-Shiung Wu, who had developed a number of processes critical to the Manhattan Project and related research, set out to investigate Fermi’s theory and some experiments whose failures had cast the theory in doubt. She first identified a number of flaws in her contemporaries’ methods and materials, and then designed an experimental method that would avoid the same errors. Wu verified Fermi’s theory and went on to establish the core principles of beta decay, which would become critical to further work in nuclear physics.
The neutrino also reveals a new conservation law. There are various families of particles, one of which is the electron family. We propose that the number of members of the electron family is constant in any process or any closed system. In our example of beta decay, there are no members of the electron family present before the decay, but after, there is an electron and a neutrino. So electrons are given an electron family number of . The neutrino in decay is an electron’s antineutrino, given the symbol , where is the Greek letter nu, and the subscript e means this neutrino is related to the electron. The bar indicates this is a particle of antimatter. (All particles have antimatter counterparts that are nearly identical except that they have the opposite charge. Antimatter is almost entirely absent on Earth, but it is found in nuclear decay and other nuclear and particle reactions as well as in outer space.) The electron’s antineutrino , being antimatter, has an electron family number of . The total is zero, before and after the decay. The new conservation law, obeyed in all circumstances, states that the total electron family number is constant. An electron cannot be created without also creating an antimatter family member. This law is analogous to the conservation of charge in a situation where total charge is originally zero, and equal amounts of positive and negative charge must be created in a reaction to keep the total zero.
If a nuclide $^{A}_{Z}\text{X}_{N}$ is known to $\beta^-$ decay, then its $\beta^-$ decay equation is

$$^{A}_{Z}\text{X}_{N} \rightarrow\ ^{A}_{Z+1}\text{Y}_{N-1} + \beta^- + \bar{\nu}_e,$$
where Y is the nuclide having one more proton than X (see ). So if you know that a certain nuclide $\beta^-$ decays, you can find the daughter nucleus by first looking up $Z$ for the parent and then determining which element has atomic number $Z + 1$. In the example of the $\beta^-$ decay of $^{60}$Co given earlier, we see that $Z = 27$ for Co and $Z + 1 = 28$ is Ni. It is as if one of the neutrons in the parent nucleus decays into a proton, electron, and neutrino. In fact, neutrons outside of nuclei do just that—they live only an average of a few minutes and $\beta^-$ decay in the following manner:

$$n \rightarrow p + \beta^- + \bar{\nu}_e.$$
We see that charge is conserved in $\beta^-$ decay, since the total charge is $Z$ before and after the decay. For example, in $^{60}$Co decay, total charge is 27 before decay, since cobalt has $Z = 27$. After decay, the daughter nucleus is Ni, which has $Z = 28$, and there is an electron, so that the total charge is also $28 + (-1)$, or 27. Angular momentum is conserved, but not obviously (you have to examine the spins and angular momenta of the final products in detail to verify this). Linear momentum is also conserved, again imparting most of the decay energy to the electron and the antineutrino, since they are of low and zero mass, respectively. Another new conservation law is obeyed here and elsewhere in nature: the total number of nucleons $A$ is conserved. In $^{60}$Co decay, for example, there are 60 nucleons before and after the decay. Note that total $A$ is also conserved in $\alpha$ decay. Also note that the total number of protons changes, as does the total number of neutrons, so that total $Z$ and total $N$ are not conserved in $\beta^-$ decay, as they are in $\alpha$ decay. Energy released in $\beta^-$ decay can be calculated given the masses of the parent and products.
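As a hedged illustration of that last point, the sketch below uses the cobalt-60 decay discussed above. Because neutral-atom masses are used, the electron masses balance automatically in $\beta^-$ decay, so the released energy is simply $[m(^{60}\text{Co}) - m(^{60}\text{Ni})]c^2$. The masses are assumed tabulated values; check them against Appendix A.

```python
# Sketch: energy released in the beta-minus decay Co-60 -> Ni-60 + electron + antineutrino.
# With neutral-atom masses the electron masses balance, so E = (m_Co - m_Ni) * c^2.
U_TO_MEV = 931.494

m_Co60 = 59.933817   # u, assumed tabulated value
m_Ni60 = 59.930786   # u, assumed tabulated value

E = (m_Co60 - m_Ni60) * U_TO_MEV
print(f"E = {E:.2f} MeV")  # about 2.8 MeV, shared among the electron, antineutrino, and daughter recoil
```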
The second type of beta decay is less common than the first. It is $\beta^+$ decay. Certain nuclides decay by the emission of a positive electron. This is antielectron or positron decay (see ).
The antielectron is often represented by the symbol $e^+$, but in beta decay it is written as $\beta^+$ to indicate the antielectron was emitted in a nuclear decay. Antielectrons are the antimatter counterpart to electrons, being nearly identical, having the same mass, spin, and so on, but having a positive charge and an electron family number of $-1$. When a positron encounters an electron, there is a mutual annihilation in which all the mass of the antielectron-electron pair is converted into pure photon energy. (The reaction, $e^+ + e^- \rightarrow \gamma + \gamma$, conserves electron family number as well as all other conserved quantities.) If a nuclide $^{A}_{Z}\text{X}_{N}$ is known to $\beta^+$ decay, then its $\beta^+$ decay equation is

$$^{A}_{Z}\text{X}_{N} \rightarrow\ ^{A}_{Z-1}\text{Y}_{N+1} + \beta^+ + \nu_e,$$
where Y is the nuclide having one less proton than X (to conserve charge) and $\nu_e$ is the symbol for the electron's neutrino, which has an electron family number of $+1$. Since an antimatter member of the electron family (the $\beta^+$) is created in the decay, a matter member of the family (here the $\nu_e$) must also be created. Given, for example, that a particular sodium nuclide $\beta^+$ decays, you can write its full decay equation by first finding that $Z = 11$ for Na, so that the daughter nuclide will have $Z = 10$, the atomic number for neon. Thus the daughter in this $\beta^+$ decay is an isotope of neon.
In $\beta^+$ decay, it is as if one of the protons in the parent nucleus decays into a neutron, a positron, and a neutrino. Protons do not do this outside of the nucleus, and so the decay is due to the complexities of the nuclear force. Note again that the total number of nucleons is constant in this and any other reaction. To find the energy emitted in $\beta^+$ decay, you must again count the number of electrons in the neutral atoms, since atomic masses are used. The daughter has one less electron than the parent, and one electron mass is created in the decay. Thus, in $\beta^+$ decay, $\Delta m = m(\text{parent}) - [m(\text{daughter}) + 2m_e]$, since we use the masses of neutral atoms.
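A short numerical sketch of this accounting follows. It assumes the parent is $^{22}\text{Na}$ (a common textbook example of $\beta^+$ decay to neon; the specific nuclide is an assumption here) and uses typical tabulated atomic masses, subtracting two electron masses as described above.

```python
# Sketch: energy released in an assumed beta-plus decay Na-22 -> Ne-22 + positron + neutrino.
# With neutral-atom masses, two electron masses must be subtracted: the daughter atom is
# short one electron and a positron is created. Masses are assumed tabulated values.
U_TO_MEV = 931.494
M_ELECTRON = 0.000549   # u

m_Na22 = 21.994437      # u, assumed
m_Ne22 = 21.991385      # u, assumed

E = (m_Na22 - m_Ne22 - 2 * M_ELECTRON) * U_TO_MEV
print(f"E = {E:.2f} MeV")  # roughly 1.8 MeV
```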
Electron capture is the third type of beta decay. Here, a nucleus captures an inner-shell electron and undergoes a nuclear reaction that has the same effect as $\beta^+$ decay. Electron capture is sometimes denoted by the letters EC. We know that electrons cannot reside in the nucleus, but this is a nuclear reaction that consumes the electron and occurs spontaneously only when the products have less mass than the parent plus the electron. If a nuclide $^{A}_{Z}\text{X}_{N}$ is known to undergo electron capture, then its electron capture equation is

$$^{A}_{Z}\text{X}_{N} + e^- \rightarrow\ ^{A}_{Z-1}\text{Y}_{N+1} + \nu_e.$$
Any nuclide that can decay can also undergo electron capture (and often does both). The same conservation laws are obeyed for EC as for decay. It is good practice to confirm these for yourself.
All forms of beta decay occur because the parent nuclide is unstable and lies outside the region of stability in the chart of nuclides. Those nuclides that have relatively more neutrons than those in the region of stability will decay to produce a daughter with fewer neutrons, producing a daughter nearer the region of stability. Similarly, those nuclides having relatively more protons than those in the region of stability will decay or undergo electron capture to produce a daughter with fewer protons, nearer the region of stability.
### Gamma Decay
Gamma decay is the simplest form of nuclear decay—it is the emission of energetic photons by nuclei left in an excited state by some earlier process. Protons and neutrons in an excited nucleus are in higher orbitals, and they fall to lower levels by photon emission (analogous to electrons in excited atoms). Nuclear excited states have lifetimes typically of only about s, an indication of the great strength of the forces pulling the nucleons to lower states. The $\gamma$ decay equation is simply

$$^{A}_{Z}\text{X}_{N}^{*} \rightarrow\ ^{A}_{Z}\text{X}_{N} + \gamma,$$
where the asterisk indicates the nucleus is in an excited state. There may be one or more $\gamma$s emitted, depending on how the nuclide de-excites. In radioactive decay, $\gamma$ emission is common and is preceded by $\alpha$ or $\beta$ decay. For example, when $^{60}$Co $\beta^-$ decays, it most often leaves the daughter nucleus in an excited state, written $^{60}\text{Ni}^{*}$. Then the nickel nucleus quickly $\gamma$ decays by the emission of two penetrating $\gamma$s:
These are called cobalt $\gamma$ rays, although they come from nickel—they are used for cancer therapy, for example. It is again constructive to verify the conservation laws for gamma decay. Finally, since $\gamma$ decay does not change the nuclide to another species, it is not prominently featured in charts of decay series, such as that in .
There are other types of nuclear decay, but they occur less commonly than $\alpha$, $\beta$, and $\gamma$ decay. Spontaneous fission is the most important of the other forms of nuclear decay because of its applications in nuclear power and weapons. It is covered in the next chapter.
### Test Prep for AP Courses
### Section Summary
1. When a parent nucleus decays, it produces a daughter nucleus following rules and conservation laws. There are three major types of nuclear decay, called alpha beta and gamma . The decay equation is
2. Nuclear decay releases an amount of energy related to the mass destroyed by
3. There are three forms of beta decay. The decay equation is
4. The decay equation is
5. The electron capture equation is
6. is an electron, is an antielectron or positron, represents an electron’s neutrino, and is an electron’s antineutrino. In addition to all previously known conservation laws, two new ones arise— conservation of electron family number and conservation of the total number of nucleons. The decay equation is is a high-energy photon originating in a nucleus.
### Conceptual Questions
### Problems & Exercises
In the following eight problems, write the complete decay equation for the given nuclide in the complete $^{A}_{Z}\text{X}_{N}$ notation. Refer to the periodic table for values of $Z$.
In the following four problems, identify the parent nuclide and write the complete decay equation in the $^{A}_{Z}\text{X}_{N}$ notation. Refer to the periodic table for values of $Z$.
# Radioactivity and Nuclear Physics
## Half-Life and Activity
### Learning Objectives
By the end of this section, you will be able to:
1. Define half-life.
2. Define dating.
3. Calculate the age of old objects by radioactive dating.
Unstable nuclei decay. However, some nuclides decay faster than others. For example, radium and polonium, discovered by the Curies, decay faster than uranium. This means they have shorter lifetimes, producing a greater rate of decay. In this section we explore half-life and activity, the quantitative terms for lifetime and rate of decay.
### Half-Life
Why use a term like half-life rather than lifetime? The answer can be found by examining , which shows how the number of radioactive nuclei in a sample decreases with time. The time in which half of the original number of nuclei decay is defined as the half-life, $t_{1/2}$. Half of the remaining nuclei decay in the next half-life. Further, half of that amount decays in the following half-life. Therefore, the number of radioactive nuclei decreases from $N$ to $N/2$ in one half-life, then to $N/4$ in the next, and to $N/8$ in the next, and so on. If $N$ is a large number, then many half-lives (not just two) pass before all of the nuclei decay. Nuclear decay is an example of a purely statistical process. A more precise definition of half-life is that each nucleus has a 50% chance of living for a time equal to one half-life $t_{1/2}$. Thus, if $N$ is reasonably large, half of the original nuclei decay in a time of one half-life. If an individual nucleus makes it through that time, it still has a 50% chance of surviving through another half-life. Even if it happens to make it through hundreds of half-lives, it still has a 50% chance of surviving through one more. The probability of decay is the same no matter when you start counting. This is like random coin flipping. The chance of heads is 50%, no matter what has happened before.
There is a tremendous range in the half-lives of various nuclides, from as short as s for the most unstable, to more than y for the least unstable, or about 46 orders of magnitude. Nuclides with the shortest half-lives are those for which the nuclear forces are least attractive, an indication of the extent to which the nuclear force can depend on the particular combination of neutrons and protons. The concept of half-life is applicable to other subatomic particles, as will be discussed in Particle Physics. It is also applicable to the decay of excited states in atoms and nuclei. The following equation gives the quantitative relationship between the original number of nuclei present at time zero ($N_0$) and the number ($N$) at a later time $t$:

$$N = N_0 e^{-\lambda t},$$

where $e$ is the base of the natural logarithm, and $\lambda$ is the decay constant for the nuclide. The shorter the half-life, the larger is the value of $\lambda$, and the faster the exponential $e^{-\lambda t}$ decreases with time. The relationship between the decay constant $\lambda$ and the half-life $t_{1/2}$ is

$$\lambda = \frac{\ln 2}{t_{1/2}} = \frac{0.693}{t_{1/2}}.$$

To see how the number of nuclei declines to half its original value in one half-life, let $t = t_{1/2}$ in the exponential in the equation $N = N_0 e^{-\lambda t}$. This gives $N = N_0 e^{-\lambda t_{1/2}} = N_0 e^{-0.693} = 0.500 N_0$. For integral numbers of half-lives, you can just divide the original number by 2 over and over again, rather than using the exponential relationship. For example, if ten half-lives have passed, we divide $N_0$ by 2 ten times. This reduces it to $N_0 / 1024$. For an arbitrary time, not just a multiple of the half-life, the exponential relationship must be used.
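The two routes just described—repeated halving and the exponential law—are easy to compare numerically. The following Python sketch (an illustration, with an arbitrary starting number of nuclei) shows that $N = N_0 e^{-\lambda t}$ with $\lambda = 0.693/t_{1/2}$ reproduces the divide-by-2 result at whole numbers of half-lives and also handles arbitrary times.

```python
# Sketch of the decay law N = N0 * exp(-lambda * t), with lambda = 0.693 / t_half.
import math

def nuclei_remaining(N0, t_half, t):
    lam = 0.693 / t_half               # decay constant
    return N0 * math.exp(-lam * t)

N0 = 1.0e6                             # hypothetical starting number of nuclei
print(nuclei_remaining(N0, t_half=1.0, t=1.0))    # about N0/2 after one half-life
print(nuclei_remaining(N0, t_half=1.0, t=10.0))   # about N0/1024 after ten half-lives
print(N0 / 2**10)                      # repeated halving gives essentially the same answer
```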
Radioactive dating is a clever use of naturally occurring radioactivity. Its most famous application is carbon-14 dating. Carbon-14 has a half-life of 5730 years and is produced in a nuclear reaction induced when cosmic-ray neutrons strike $^{14}$N in the atmosphere. Radioactive carbon has the same chemistry as stable carbon, and so it mixes into the ecosphere, where it is consumed and becomes part of every living organism. Carbon-14 has an abundance of 1.3 parts per trillion of normal carbon. Thus, if you know the number of carbon nuclei in an object (perhaps determined by mass and Avogadro’s number), you multiply that number by $1.3 \times 10^{-12}$ to find the number of $^{14}$C nuclei in the object. When an organism dies, carbon exchange with the environment ceases, and $^{14}$C is not replenished as it decays. By comparing the abundance of $^{14}$C in an artifact, such as mummy wrappings, with the normal abundance in living tissue, it is possible to determine the artifact’s age (or time since death). Carbon-14 dating can be used for biological tissues as old as 50 or 60 thousand years, but is most accurate for younger samples, since the abundance of $^{14}$C nuclei in them is greater. Very old biological materials contain no $^{14}$C at all. There are instances in which the date of an artifact can be determined by other means, such as historical knowledge or tree-ring counting. These cross-references have confirmed the validity of carbon-14 dating and permitted us to calibrate the technique as well. Carbon-14 dating revolutionized parts of archaeology and is of such importance that it earned the 1960 Nobel Prize in chemistry for its developer, the American chemist Willard Libby (1908–1980).
One of the most famous cases of carbon-14 dating involves the Shroud of Turin, a long piece of fabric purported to be the burial shroud of Jesus (see ). This relic was first displayed in Turin in 1354 and was denounced as a fraud at that time by a French bishop. Its remarkable negative imprint of an apparently crucified body resembles the then-accepted image of Jesus, and so the shroud was never disregarded completely and remained controversial over the centuries. Carbon-14 dating was not performed on the shroud until 1988, when the process had been refined to the point where only a small amount of material needed to be destroyed. Samples were tested at three independent laboratories, each being given four pieces of cloth, with only one unidentified piece from the shroud, to avoid prejudice. All three laboratories found that samples of the shroud contain 92% of the $^{14}$C found in living tissues, allowing the shroud to be dated (see ).
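The arithmetic behind that dating is a direct application of the decay law. The sketch below (an added illustration, not part of the original analysis) converts the 92% abundance quoted above into an age using $t = \frac{t_{1/2}}{0.693}\ln(1/0.92)$; counting back from the 1988 measurement, an age of roughly 700 years places the cloth's origin in the thirteenth or fourteenth century, consistent with its first recorded display in 1354.

```python
# Sketch: age implied by a C-14 abundance equal to 92% of the living-tissue value.
import math

T_HALF_C14 = 5730.0                  # years
fraction_remaining = 0.92            # from the shroud measurements quoted above

age = T_HALF_C14 / 0.693 * math.log(1.0 / fraction_remaining)
print(f"age = {age:.0f} years")      # roughly 690 years before the 1988 measurement
```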
There are other forms of radioactive dating. Rocks, for example, can sometimes be dated based on the decay of $^{238}$U. The decay series for $^{238}$U ends with $^{206}$Pb, so that the ratio of these nuclides in a rock is an indication of how long it has been since the rock solidified. The original composition of the rock, such as the absence of lead, must be known with some confidence. However, as with carbon-14 dating, the technique can be verified by a consistent body of knowledge. Since $^{238}$U has a half-life of $4.5 \times 10^{9}$ y, it is useful for dating only very old materials, showing, for example, that the oldest rocks on Earth solidified billions of years ago.
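If one does assume the rock solidified with no lead, the age follows from the present $^{206}\text{Pb}/^{238}\text{U}$ ratio. The sketch below is a simplified illustration that ignores the short-lived intermediate daughters in the series and treats every $^{206}$Pb atom as a decayed $^{238}$U atom; the half-life is an assumed tabulated value.

```python
# Sketch: uranium-lead age from the present Pb-206 / U-238 number ratio, assuming the
# rock started with no lead and neither element has been gained or lost since solidifying.
import math

T_HALF_U238 = 4.47e9                 # years, assumed tabulated value

def u_pb_age(pb_to_u_ratio):
    # N_U = N0 * exp(-lambda t) and N_Pb = N0 - N_U  =>  Pb/U = exp(lambda t) - 1
    return T_HALF_U238 / math.log(2) * math.log(1.0 + pb_to_u_ratio)

print(f"{u_pb_age(1.0):.2e} years")  # equal amounts of Pb-206 and U-238 -> about 4.5e9 years
```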
### Activity, the Rate of Decay
What do we mean when we say a source is highly radioactive? Generally, this means the number of decays per unit time is very high. We define activity $R$ to be the rate of decay expressed in decays per unit time. In equation form, this is

$$R = \frac{\Delta N}{\Delta t},$$

where $\Delta N$ is the number of decays that occur in time $\Delta t$. The SI unit for activity is one decay per second and is given the name becquerel (Bq) in honor of the discoverer of radioactivity. That is,

$$1\ \text{Bq} = 1\ \text{decay/s}.$$
Activity $R$ is often expressed in other units, such as decays per minute or decays per year. One of the most common units for activity is the curie (Ci), defined to be the activity of 1 g of $^{226}$Ra, in honor of Marie Curie’s work with radium. The definition of the curie is

$$1\ \text{Ci} = 3.70 \times 10^{10}\ \text{Bq},$$

or $3.70 \times 10^{10}$ decays per second. A curie is a large unit of activity, while a becquerel is a relatively small unit. In countries like Australia and New Zealand that adhere more to SI units, most radioactive sources, such as those used in medical diagnostics or in physics laboratories, are labeled in Bq or megabecquerels (MBq).
Intuitively, you would expect the activity of a source to depend on two things: the amount of the radioactive substance present, and its half-life. The greater the number of radioactive nuclei present in the sample, the more will decay per unit of time. The shorter the half-life, the more decays per unit time, for a given number of nuclei. So activity should be proportional to the number of radioactive nuclei, $N$, and inversely proportional to their half-life, $t_{1/2}$. In fact, your intuition is correct. It can be shown that the activity of a source is

$$R = \frac{0.693\, N}{t_{1/2}},$$

where $N$ is the number of radioactive nuclei present, having half-life $t_{1/2}$. This relationship is useful in a variety of calculations, as the next two examples illustrate.
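In the spirit of those examples, here is a quick numerical check of the relationship: since the curie was defined as the activity of 1 g of $^{226}$Ra, computing $R = 0.693N/t_{1/2}$ for 1.00 g of radium-226 should give roughly $3.7\times10^{10}$ Bq. The half-life (about 1600 y) and the use of the molar mass 226 g/mol are assumed values for this sketch.

```python
# Sketch: activity of 1.00 g of Ra-226, which should come out near 1 curie by construction.
AVOGADRO = 6.022e23
T_HALF_RA226 = 1600.0 * 3.156e7      # ~1600 years, converted to seconds (assumed value)

N = 1.00 / 226.0 * AVOGADRO          # number of Ra-226 nuclei in 1.00 g
R = 0.693 * N / T_HALF_RA226         # decays per second, i.e. becquerels
print(f"R = {R:.2e} Bq")             # about 3.7e10 Bq, i.e. roughly 1 Ci
```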
Human-made (or artificial) radioactivity has been produced for decades and has many uses. Some of these include medical therapy for cancer, medical imaging and diagnostics, and food preservation by irradiation. Many applications as well as the biological effects of radiation are explored in Medical Applications of Nuclear Physics, but it is clear that radiation is hazardous. A number of tragic examples of this exist, one of the most disastrous being the meltdown and fire at the Chernobyl reactor complex in the Ukraine (see ). Several radioactive isotopes were released in huge quantities, contaminating many thousands of square kilometers and directly affecting hundreds of thousands of people. The most significant releases were of , , , , , and . Estimates are that the total amount of radiation released was about 100 million curies.
### Human and Medical Applications
Activity decreases in time, going to half its original value in one half-life, then to one-fourth its original value in the next half-life, and so on. Since $R = \frac{0.693 N}{t_{1/2}}$, the activity decreases as the number of radioactive nuclei decreases. The equation for $R$ as a function of time is found by combining the equations $N = N_0 e^{-\lambda t}$ and $R = \frac{0.693 N}{t_{1/2}}$, yielding

$$R = R_0 e^{-\lambda t},$$

where $R_0$ is the activity at $t = 0$. This equation shows exponential decay of radioactive nuclei. For example, if a source originally has a 1.00-mCi activity, it declines to 0.500 mCi in one half-life, to 0.250 mCi in two half-lives, to 0.125 mCi in three half-lives, and so on. For times other than whole half-lives, the equation $R = R_0 e^{-\lambda t}$ must be used to find $R$.
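The halving pattern and the exponential form agree, as the short sketch below illustrates for a hypothetical 1.00-mCi source (the half-life is left as a free parameter, here set to 1 in arbitrary time units).

```python
# Sketch: activity falling off from an initial 1.00 mCi, using R = R0 * exp(-0.693 t / t_half).
import math

def activity(R0, t_half, t):
    return R0 * math.exp(-0.693 * t / t_half)

R0 = 1.00                             # mCi, hypothetical source
for n in range(4):                    # 0, 1, 2, and 3 half-lives
    print(f"{n} half-lives: {activity(R0, 1.0, n):.3f} mCi")
# prints about 1.000, 0.500, 0.250, 0.125 mCi, matching the halving pattern described above
```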
### Test Prep for AP Courses
### Section Summary
1. Half-life is the time in which there is a 50% chance that a nucleus will decay. The number of nuclei as a function of time is
where is the number present at , and is the decay constant, related to the half-life by
2. One of the applications of radioactive decay is radioactive dating, in which the age of a material is determined by the amount of radioactive decay that occurs. The rate of decay is called the activity :
3. The SI unit for is the becquerel (Bq), defined by
4. is also expressed in terms of curies (Ci), where
5. The activity of a source is related to and by
6. Since has an exponential behavior as in the equation , the activity also has an exponential behavior, given by
where is the activity at .
### Conceptual Questions
### Problems & Exercises
Data from the appendices and the periodic table may be needed for these problems.
# Radioactivity and Nuclear Physics
## Binding Energy
### Learning Objectives
By the end of this section, you will be able to:
1. Define and discuss binding energy.
2. Calculate the binding energy per nucleon of a particle.
The more tightly bound a system is, the stronger the forces that hold it together and the greater the energy required to pull it apart. We can therefore learn about nuclear forces by examining how tightly bound the nuclei are. We define the binding energy (BE) of a nucleus to be the energy required to completely disassemble it into separate protons and neutrons. We can determine the BE of a nucleus from its rest mass. The two are connected through Einstein’s famous relationship $E = (\Delta m)c^2$. A bound system has a smaller mass than its separate constituents; the more tightly the nucleons are bound together, the smaller the mass of the nucleus.
Imagine pulling a nuclide apart as illustrated in . Work done to overcome the nuclear forces holding the nucleus together puts energy into the system. By definition, the energy input equals the binding energy BE. The pieces are at rest when separated, and so the energy put into them increases their total rest mass compared with what it was when they were glued together as a nucleus. That mass increase is thus $\Delta m = \text{BE}/c^2$. This difference in mass is known as the mass defect. It implies that the mass of the nucleus is less than the sum of the masses of its constituent protons and neutrons. A nuclide $^{A}\text{X}$ has $Z$ protons and $N$ neutrons, so that the difference in mass is

$$\Delta m = (Z m_p + N m_n) - m_{\text{tot}}.$$

Thus,

$$\text{BE} = (\Delta m)c^2 = \left[(Z m_p + N m_n) - m_{\text{tot}}\right]c^2,$$

where $m_{\text{tot}}$ is the mass of the nuclide $^{A}\text{X}$, $m_p$ is the mass of a proton, and $m_n$ is the mass of a neutron. Traditionally, we deal with the masses of neutral atoms. To get atomic masses into the last equation, we first add $Z$ electrons to $m_{\text{tot}}$, which gives $m(^{A}\text{X})$, the atomic mass of the nuclide. We then add $Z$ electrons to the $Z$ protons, which gives $Z m(^{1}\text{H})$, or $Z$ times the mass of a hydrogen atom. Thus the binding energy of a nuclide $^{A}\text{X}$ is

$$\text{BE} = \left\{\left[Z m(^{1}\text{H}) + N m_n\right] - m(^{A}\text{X})\right\}c^2.$$
The atomic masses can be found in Appendix A, most conveniently expressed in unified atomic mass units (u). BE is thus calculated from known atomic masses.
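As a worked illustration of the formula, the sketch below computes the binding energy and binding energy per nucleon of $^{4}$He from neutral-atom masses. The masses are typical tabulated values quoted for illustration; check them against Appendix A.

```python
# Sketch: binding energy and BE per nucleon of He-4 from neutral-atom masses.
U_TO_MEV = 931.494    # c^2 in MeV per unified atomic mass unit

m_H1  = 1.007825      # hydrogen-1 atom (proton plus electron), u, assumed value
m_n   = 1.008665      # neutron, u, assumed value
m_He4 = 4.002602      # helium-4 atom, u, assumed value

Z, N, A = 2, 2, 4
BE = (Z * m_H1 + N * m_n - m_He4) * U_TO_MEV
print(f"BE = {BE:.1f} MeV, BE/A = {BE/A:.2f} MeV per nucleon")   # about 28.3 MeV and 7.1 MeV/nucleon
```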
What patterns and insights are gained from an examination of the binding energy of various nuclides? First, we find that BE is approximately proportional to the number of nucleons in any nucleus. About twice as much energy is needed to pull apart a nucleus like compared with pulling apart , for example. To help us look at other effects, we divide BE by and consider the binding energy per nucleon, . The graph of in reveals some very interesting aspects of nuclei. We see that the binding energy per nucleon averages about 8 MeV, but is lower for both the lightest and heaviest nuclei. This overall trend, in which nuclei with equal to about 60 have the greatest and are thus the most tightly bound, is due to the combined characteristics of the attractive nuclear forces and the repulsive Coulomb force. It is especially important to note two things—the strong nuclear force is about 100 times stronger than the Coulomb force, and the nuclear forces are shorter in range compared to the Coulomb force. So, for low-mass nuclei, the nuclear attraction dominates and each added nucleon forms bonds with all others, causing progressively heavier nuclei to have progressively greater values of . This continues up to , roughly corresponding to the mass number of iron. Beyond that, new nucleons added to a nucleus will be too far from some others to feel their nuclear attraction. Added protons, however, feel the repulsion of all other protons, since the Coulomb force is longer in range. Coulomb repulsion grows for progressively heavier nuclei, but nuclear attraction remains about the same, and so becomes smaller. This is why stable nuclei heavier than have more neutrons than protons. Coulomb repulsion is reduced by having more neutrons to keep the protons farther apart (see ).
There are some noticeable spikes on the graph, which represent particularly tightly bound nuclei. These spikes reveal further details of nuclear forces, such as confirming that closed-shell nuclei (those with magic numbers of protons or neutrons or both) are more tightly bound. The spikes also indicate that some nuclei with even numbers for and , and with , are exceptionally tightly bound. This finding can be correlated with some of the cosmic abundances of the elements. The most common elements in the universe, as determined by observations of atomic spectra from outer space, are hydrogen, followed by , with much smaller amounts of and other elements. It should be noted that the heavier elements are created in supernova explosions, while the lighter ones are produced by nuclear fusion during the normal life cycles of stars, as will be discussed in subsequent chapters. The most common elements have the most tightly bound nuclei. It is also no accident that one of the most tightly bound light nuclei is , emitted in decay.
There is more to be learned from nuclear binding energies. The general trend in is fundamental to energy production in stars, and to fusion and fission energy sources on Earth, for example. This is one of the applications of nuclear physics covered in Medical Applications of Nuclear Physics. The abundance of elements on Earth, in stars, and in the universe as a whole is related to the binding energy of nuclei and has implications for the continued expansion of the universe.
### Problem-Solving Strategies
### For Reaction And Binding Energies and Activity Calculations in Nuclear Physics
1. Identify exactly what needs to be determined in the problem (identify the unknowns). This will allow you to decide whether the energy of a decay or nuclear reaction is involved, for example, or whether the problem is primarily concerned with activity (rate of decay).
2. Make a list of what is given or can be inferred from the problem as stated (identify the knowns).
3. For reaction and binding-energy problems, we use atomic rather than nuclear masses. Since the masses of neutral atoms are used, you must count the number of electrons involved. If these do not balance (such as in decay), then an energy adjustment of 0.511 MeV per electron must be made. Also note that atomic masses may not be given in a problem; they can be found in tables.
4. For problems involving activity, the relationship of activity to half-life and to the number of nuclei, given in the equation $R = \frac{0.693\, N}{t_{1/2}}$, can be very useful. Because the number of nuclei is involved, you will also need to be familiar with moles and Avogadro’s number.
5. Perform the desired calculation; keep careful track of plus and minus signs as well as powers of 10.
6. Check the answer to see if it is reasonable: Does it make sense? Compare your results with worked examples and other information in the text. (Heeding the advice in Step 5 will also help you to be certain of your result.) You must understand the problem conceptually to be able to determine whether the numerical result is reasonable.
### Test Prep for AP Courses
### Section Summary
1. The binding energy (BE) of a nucleus is the energy needed to separate it into individual protons and neutrons. In terms of atomic masses,
where is the mass of a hydrogen atom, is the atomic mass of the nuclide, and is the mass of a neutron. Patterns in the binding energy per nucleon, , reveal details of the nuclear force. The larger the , the more stable the nucleus.
### Conceptual Questions
### Problems & Exercises
# Radioactivity and Nuclear Physics
## Tunneling
### Learning Objectives
By the end of this section, you will be able to:
1. Define and discuss tunneling.
2. Define potential barrier.
3. Explain quantum tunneling.
Protons and neutrons are bound inside nuclei, which means energy must be supplied to break them away. The situation is analogous to a marble in a bowl that can roll around but lacks the energy to get over the rim. It is bound inside the bowl (see ). If the marble could get over the rim, it would gain kinetic energy by rolling down outside. However, classically, if the marble does not have enough kinetic energy to get over the rim, it remains forever trapped in its well.
In a nucleus, the attractive nuclear potential is analogous to the bowl at the top of a volcano (where the “volcano” refers only to the shape). Protons and neutrons have kinetic energy, but it is about 8 MeV less than that needed to get out (see ). That is, they are bound by an average of 8 MeV per nucleon. The slope of the hill outside the bowl is analogous to the repulsive Coulomb potential for a nucleus, such as for an particle outside a positive nucleus. In decay, two protons and two neutrons spontaneously break away as a unit. Yet the protons and neutrons do not have enough kinetic energy to get over the rim. So how does the particle get out?
The answer was supplied in 1928 by the Russian physicist George Gamow (1904–1968). The particle tunnels through a region of space it is forbidden to be in, and it comes out of the side of the nucleus. Like an electron making a transition between orbits around an atom, it travels from one point to another without ever having been in between. indicates how this works. The wave function of a quantum mechanical particle varies smoothly, going from within an atomic nucleus (on one side of a potential energy barrier) to outside the nucleus (on the other side of the potential energy barrier). Inside the barrier, the wave function does not become zero but decreases exponentially, and we do not observe the particle inside the barrier. The probability of finding a particle is related to the square of its wave function, and so there is a small probability of finding the particle outside the barrier, which implies that the particle can tunnel through the barrier. This process is called barrier penetration or quantum mechanical tunneling. This concept was developed in theory by J. Robert Oppenheimer (who led the development of the first nuclear bombs during World War II) and was used by Gamow and others to describe decay.
Good ideas explain more than one thing. In addition to qualitatively explaining how the four nucleons in an $\alpha$ particle can get out of the nucleus, the detailed theory also explains quantitatively the half-life of various nuclei that undergo $\alpha$ decay. This description is what Gamow and others devised, and it works for $\alpha$ decay half-lives that vary by 17 orders of magnitude. Experiments have shown that the more energetic the $\alpha$ decay of a particular nuclide is, the shorter is its half-life. Tunneling explains this in the following manner: For the decay to be more energetic, the nucleons must have more energy in the nucleus and should be able to ascend a little closer to the rim. The barrier is therefore not as thick for more energetic decay, and the exponential decrease of the wave function inside the barrier is not as great. Thus the probability of finding the $\alpha$ particle outside the barrier is greater, and the half-life is shorter.
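The exponential sensitivity described here can be made concrete with a deliberately simplified toy model: a rectangular barrier of height $V_0$ and width $L$, for which the transmission probability is roughly $e^{-2\kappa L}$ with $\kappa = \sqrt{2m(V_0 - E)}/\hbar$. This is not Gamow's full treatment of the Coulomb barrier, only a sketch of why small changes in barrier thickness or height change the tunneling probability, and hence the half-life, by many orders of magnitude.

```python
# Toy model, not the full Gamow theory: transmission through a rectangular barrier,
# T ~ exp(-2 * kappa * L), showing the extreme sensitivity to barrier width and height.
import math

HBAR_C = 197.33          # MeV*fm
M_ALPHA_C2 = 3727.4      # alpha-particle rest energy in MeV, assumed value

def transmission(barrier_minus_energy_MeV, width_fm):
    kappa = math.sqrt(2.0 * M_ALPHA_C2 * barrier_minus_energy_MeV) / HBAR_C   # in 1/fm
    return math.exp(-2.0 * kappa * width_fm)

print(transmission(10.0, 10.0))   # thick barrier: an astronomically small probability
print(transmission(10.0, 5.0))    # halving the width raises the probability enormously
```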
Tunneling as an effect also occurs in quantum mechanical systems other than nuclei. Electrons trapped in solids can tunnel from one object to another if the barrier between the objects is thin enough. The process is the same in principle as described for decay. It is far more likely for a thin barrier than a thick one. Scanning tunneling electron microscopes function on this principle. The current of electrons that travels between a probe and a sample tunnels through a barrier and is very sensitive to its thickness, allowing detection of individual atoms as shown in .
### Section Summary
1. Tunneling is a quantum mechanical process of potential energy barrier penetration. The concept was first applied to explain decay, but tunneling is found to occur in other quantum mechanical systems.
### Conceptual Questions
### Problems-Exercises