# Physics of Hearing ## Ultrasound ### Learning Objectives By the end of this section, you will be able to: 1. Define acoustic impedance and intensity reflection coefficient. 2. Describe medical and other uses of ultrasound technology. 3. Calculate acoustic impedance using density values and the speed of ultrasound. 4. Calculate the velocity of a moving object using Doppler-shifted ultrasound. Any sound with a frequency above 20,000 Hz (or 20 kHz)—that is, above the highest audible frequency—is defined to be ultrasound. In practice, it is possible to create ultrasound frequencies up to more than a gigahertz. (Higher frequencies are difficult to create; furthermore, they propagate poorly because they are very strongly absorbed.) Ultrasound has a tremendous number of applications, which range from burglar alarms to use in cleaning delicate objects to the guidance systems of bats. We begin our discussion of ultrasound with some of its applications in medicine, in which it is used extensively both for diagnosis and for therapy. ### Ultrasound in Medical Therapy Ultrasound, like any wave, carries energy that can be absorbed by the medium carrying it, producing effects that vary with intensity. When focused to intensities of $10^3$ to $10^5\ \mathrm{W/m^2}$, ultrasound can be used to shatter gallstones or pulverize cancerous tissue in surgical procedures. Intensities this great can damage individual cells, variously causing their protoplasm to stream inside them, altering their permeability, or rupturing their walls through cavitation. Cavitation is the creation of vapor cavities in a fluid—the longitudinal vibrations in ultrasound alternately compress and expand the medium, and at sufficient amplitudes the expansion separates molecules. Most cavitation damage is done when the cavities collapse, producing even greater shock pressures. Most of the energy carried by high-intensity ultrasound in tissue is converted to thermal energy. In fact, intensities of $10^3$ to $10^4\ \mathrm{W/m^2}$ are commonly used for deep-heat treatments called ultrasound diathermy. Frequencies of 0.8 to 1 MHz are typical. In both athletics and physical therapy, ultrasound diathermy is most often applied to injured or overworked muscles to relieve pain and improve flexibility. Skill is needed by the therapist to avoid “bone burns” and other tissue damage caused by overheating and cavitation, sometimes made worse by reflection and focusing of the ultrasound by joint and bone tissue. In some instances, you may encounter a different decibel scale, called the sound pressure level, when ultrasound travels in water or in human and other biological tissues. We shall not use the scale here, but it is notable that numbers for sound pressure levels range 60 to 70 dB higher than you would quote for $\beta$, the sound intensity level used in this text. Should you encounter a sound pressure level of 220 decibels, then, it is not an astronomically high intensity, but equivalent to about 155 dB—high enough to destroy tissue, but not as unreasonably high as it might seem at first. ### Ultrasound in Medical Diagnostics When used for imaging, ultrasonic waves are emitted from a transducer, a crystal exhibiting the piezoelectric effect (the expansion and contraction of a substance when a voltage is applied across it, causing a vibration of the crystal). These high-frequency vibrations are transmitted into any tissue in contact with the transducer. Similarly, if a pressure is applied to the crystal (in the form of a wave reflected off tissue layers), a voltage is produced which can be recorded.
The crystal therefore acts as both a transmitter and a receiver of sound. Ultrasound is also partially absorbed by tissue on its path, both on its journey away from the transducer and on its return journey. From the time between when the original signal is sent and when the reflections from various boundaries between media are received (as well as a measure of the intensity loss of the signal), the nature and position of each boundary between tissues and organs may be deduced. Reflections at boundaries between two different media occur because of differences in a characteristic known as the acoustic impedance $Z$ of each substance. Impedance is defined as $Z = \rho v$, where $\rho$ is the density of the medium (in $\mathrm{kg/m^3}$) and $v$ is the speed of sound through the medium (in m/s). The units for $Z$ are therefore $\mathrm{kg/(m^2 \cdot s)}$. The accompanying table shows the density and speed of sound through various media (including various soft tissues) and the associated acoustic impedances. Note that the acoustic impedances for soft tissue do not vary much but that there is a big difference between the acoustic impedance of soft tissue and air and also between soft tissue and bone. At the boundary between media of different acoustic impedances, some of the wave energy is reflected and some is transmitted. The greater the difference in acoustic impedance between the two media, the greater the reflection and the smaller the transmission. The intensity reflection coefficient $a$ is defined as the ratio of the intensity of the reflected wave to the intensity of the incident wave. This statement can be written mathematically as $a = \frac{\left(Z_2 - Z_1\right)^2}{\left(Z_1 + Z_2\right)^2}$, where $Z_1$ and $Z_2$ are the acoustic impedances of the two media making up the boundary. A reflection coefficient of zero (corresponding to total transmission and no reflection) occurs when the acoustic impedances of the two media are the same. An impedance “match” (no reflection) provides an efficient coupling of sound energy from one medium to another. The image formed in an ultrasound is made by tracking reflections and mapping the intensity of the reflected sound waves in a two-dimensional plane. The applications of ultrasound in medical diagnostics have produced untold benefits with no known risks. Diagnostic intensities are too low (about $10^{-2}\ \mathrm{W/m^2}$) to cause thermal damage. More significantly, ultrasound has been in use for several decades and detailed follow-up studies do not show evidence of ill effects, quite unlike the case for x-rays. The most common ultrasound applications produce an image like that shown in the accompanying figure. The speaker-microphone broadcasts a directional beam, sweeping the beam across the area of interest. This is accomplished by having multiple ultrasound sources in the probe’s head, which are phased to interfere constructively in a given, adjustable direction. Echoes are measured as a function of position as well as depth. A computer constructs an image that reveals the shape and density of internal structures. How much detail can ultrasound reveal? Low-cost systems produce images of modest detail, while more advanced systems, including 3D imaging, can show remarkable detail. Ultrasound today is commonly used in prenatal care. Such imaging can be used to see if the fetus is developing at a normal rate, and can help in the determination of serious problems early in the pregnancy. Ultrasound is also in wide use to image the chambers of the heart and the flow of blood within the beating heart, using the Doppler effect (echocardiology).
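To make these definitions concrete, the short sketch below computes acoustic impedances and intensity reflection coefficients in Python. The functions implement $Z = \rho v$ and $a = (Z_2 - Z_1)^2/(Z_1 + Z_2)^2$ exactly as defined above, but the density and sound-speed values are rough illustrative figures assumed here, not the entries of the original table.

```python
# Minimal sketch: acoustic impedance Z = rho * v and the intensity
# reflection coefficient a = (Z2 - Z1)^2 / (Z1 + Z2)^2.
# Material values below are rough illustrative figures.

def acoustic_impedance(density, speed):
    """Return Z in kg/(m^2*s) given density (kg/m^3) and sound speed (m/s)."""
    return density * speed

def intensity_reflection_coefficient(z1, z2):
    """Fraction of the incident intensity reflected at a boundary."""
    return (z2 - z1) ** 2 / (z1 + z2) ** 2

# Approximate values (assumed for illustration).
z_air = acoustic_impedance(1.29, 330)       # ~4.3e2 kg/(m^2*s)
z_tissue = acoustic_impedance(1058, 1540)   # ~1.6e6 kg/(m^2*s), muscle-like
z_bone = acoustic_impedance(1900, 4080)     # ~7.8e6 kg/(m^2*s)

print(f"air-tissue reflection:  {intensity_reflection_coefficient(z_air, z_tissue):.4f}")
print(f"tissue-bone reflection: {intensity_reflection_coefficient(z_tissue, z_bone):.4f}")
# The near-total reflection (~0.999) at an air-tissue boundary is why a
# coupling gel is applied between the transducer and the skin.
```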
Whenever a wave is used as a probe, it is very difficult to detect details smaller than its wavelength $\lambda$. Indeed, current technology cannot do quite this well. Abdominal scans may use a 7-MHz frequency, and the speed of sound in tissue is about 1540 m/s—so the wavelength limit to detail would be $\lambda = \frac{1540\ \text{m/s}}{7 \times 10^6\ \text{Hz}} = 0.22\ \text{mm}$. In practice, 1-mm detail is attainable, which is sufficient for many purposes. Higher-frequency ultrasound would allow greater detail, but it does not penetrate as well as lower frequencies do. The accepted rule of thumb is that you can effectively scan to a depth of about $500\lambda$ into tissue. For 7 MHz, this penetration limit is $500 \times 0.22\ \text{mm}$, which is 0.11 m. Higher frequencies may be employed in smaller organs, such as the eye, but are not practical for looking deep into the body. In addition to shape information, ultrasonic scans can produce density information superior to that found in x-rays, because the intensity of a reflected sound is related to changes in density. Sound is most strongly reflected at places where density changes are greatest. Another major use of ultrasound in medical diagnostics is to detect motion and determine velocity through the Doppler shift of an echo, known as Doppler-shifted ultrasound. This technique is used to monitor fetal heartbeat, measure blood velocity, and detect occlusions in blood vessels, for example. The magnitude of the Doppler shift in an echo is directly proportional to the velocity of whatever reflects the sound. Because an echo is involved, there is actually a double shift. The first occurs because the reflector (say a fetal heart) is a moving observer and receives a Doppler-shifted frequency. The reflector then acts as a moving source, producing a second Doppler shift. A clever technique is used to measure the Doppler shift in an echo. The frequency of the echoed sound is superimposed on the broadcast frequency, producing beats. The beat frequency is $f_B = \left| f_1 - f_2 \right|$, where $f_1$ is the broadcast frequency and $f_2$ is the frequency of the echo, and so it is directly proportional to the Doppler shift ($f_2 - f_1$) and hence to the reflector’s velocity. The advantage of this technique is that the Doppler shift is small (because the reflector’s velocity is small), so that great accuracy would be needed to measure the shift directly. But measuring the beat frequency is easy, and it is not affected if the broadcast frequency varies somewhat. Furthermore, the beat frequency is in the audible range and can be amplified for audio feedback to the medical observer. ### Section Summary 1. The acoustic impedance is defined as $Z = \rho v$, where $\rho$ is the density of a medium through which the sound travels and $v$ is the speed of sound through that medium. 2. The intensity reflection coefficient $a$, a measure of the ratio of the intensity of the wave reflected off a boundary between two media relative to the intensity of the incident wave, is given by $a = \frac{\left(Z_2 - Z_1\right)^2}{\left(Z_1 + Z_2\right)^2}$. 3. The intensity reflection coefficient is a unitless quantity. ### Conceptual Questions ### Problems & Exercises Unless otherwise indicated, for problems in this section, assume that the speed of sound through human tissues is 1540 m/s.
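The numbers quoted in this section follow directly from $\lambda = v/f$ and the $500\lambda$ rule of thumb. The sketch below reproduces them, assuming the 1540 m/s tissue value above, and adds the standard low-speed beat-frequency approximation $f_B \approx 2uf/v$ for Doppler-shifted ultrasound; the 10 cm/s reflector speed is a hypothetical illustration.

```python
# Sketch of the relations discussed above, assuming v = 1540 m/s in tissue.

V_TISSUE = 1540.0  # m/s, speed of sound in soft tissue

def detail_limit(frequency_hz):
    """Smallest resolvable detail ~ one wavelength, lambda = v / f."""
    return V_TISSUE / frequency_hz

def penetration_depth(frequency_hz, wavelengths=500):
    """Rule-of-thumb scan depth of about 500 wavelengths."""
    return wavelengths * detail_limit(frequency_hz)

def doppler_beat_frequency(frequency_hz, reflector_speed):
    """Beat frequency from the double Doppler shift of an echo.

    For reflector speeds much less than the speed of sound,
    f_B ~ 2 * u * f / v (standard low-speed approximation).
    """
    return 2 * reflector_speed * frequency_hz / V_TISSUE

f = 7e6  # 7 MHz abdominal probe
print(f"wavelength limit: {detail_limit(f) * 1e3:.2f} mm")            # ~0.22 mm
print(f"penetration:      {penetration_depth(f):.2f} m")              # ~0.11 m
# Hypothetical 10 cm/s blood flow observed with the same probe:
print(f"beat frequency:   {doppler_beat_frequency(f, 0.10):.0f} Hz")  # ~909 Hz, audible
```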
# Electric Charge and Electric Field ## Introduction to Electric Charge and Electric Field The image of American politician and scientist Benjamin Franklin (1706–1790) flying a kite in a thunderstorm is familiar to many schoolchildren. In this experiment, Franklin demonstrated a connection between lightning and static electricity. Sparks were drawn from a key hung on a kite string during an electrical storm. These sparks were like those produced by static electricity, such as the spark that jumps from your finger to a metal doorknob after you walk across a wool carpet. What Franklin demonstrated in his dangerous experiment was a connection between phenomena on two different scales: one the grand power of an electrical storm, the other an effect of more human proportions. Connections like this one reveal the underlying unity of the laws of nature, an aspect we humans find particularly appealing. Much has been written about Franklin. His experiments were only part of the life of a man who was a scientist, inventor, revolutionary, statesman, and writer. Franklin’s experiments were not performed in isolation, nor were they the only ones to reveal connections. For example, the Italian scientist Luigi Galvani (1737–1798) performed a series of experiments in which static electricity was used to stimulate contractions of leg muscles of dead frogs, an effect already known in humans subjected to static discharges. But Galvani also found that if he joined two metal wires (say copper and zinc) end to end and touched the other ends to muscles, he produced the same effect in frogs as static discharge. Alessandro Volta (1745–1827), partly inspired by Galvani’s work, experimented with various combinations of metals and developed the battery. During the same era, other scientists made progress in discovering fundamental connections. The periodic table was developed as the systematic properties of the elements were discovered. This influenced the development and refinement of the concept of atoms as the basis of matter. Such submicroscopic descriptions of matter also help explain a great deal more. Atomic and molecular interactions, such as the forces of friction, cohesion, and adhesion, are now known to be manifestations of the electromagnetic force. Static electricity is just one aspect of the electromagnetic force, which also includes moving electricity and magnetism. All the macroscopic forces that we experience directly, such as the sensations of touch and the tension in a rope, are due to the electromagnetic force, one of the four fundamental forces in nature. The gravitational force, another fundamental force, is actually sensed through the electromagnetic interaction of molecules, such as between those in our feet and those on the top of a bathroom scale. (The other two fundamental forces, the strong nuclear force and the weak nuclear force, cannot be sensed on the human scale.) This chapter begins the study of electromagnetic phenomena at a fundamental level. The next several chapters will cover static electricity, moving electricity, and magnetism—collectively known as electromagnetism. In this chapter, we begin with the study of electric phenomena due to charges that are at least temporarily stationary, called electrostatics, or static electricity.
# Electric Charge and Electric Field ## Static Electricity and Charge: Conservation of Charge ### Learning Objectives By the end of this section, you will be able to: 1. Define electric charge, and describe how the two types of charge interact. 2. Describe three common situations that generate static electricity. 3. State the law of conservation of charge. What makes plastic wrap cling? Static electricity. Not only are applications of static electricity common these days, but its existence has also been known since ancient times. The first record of its effects dates to the ancient Greeks, who noted more than 500 years B.C. that polishing amber temporarily enabled it to attract bits of straw. The very word electric derives from the Greek word for amber (electron). Many of the characteristics of static electricity can be explored by rubbing things together. Rubbing creates the spark you get from walking across a wool carpet, for example. Static cling generated in a clothes dryer and the attraction of straw to recently polished amber also result from rubbing. Similarly, lightning results from air movements under certain weather conditions. You can also rub a balloon on your hair, and the static electricity created can then make the balloon cling to a wall. We also have to be cautious of static electricity, especially in dry climates. When we pump gasoline, we are warned to discharge ourselves (after sliding across the seat) on a metal surface before grabbing the gas nozzle. Attendants in hospital operating rooms must wear booties with a conductive strip of aluminum foil on the bottoms to avoid creating sparks which may ignite flammable anesthesia gases combined with the oxygen being used. Some of the most basic characteristics of static electricity include: 1. The effects of static electricity are explained by a physical quantity not previously introduced, called electric charge. 2. There are only two types of charge, one called positive and the other called negative. 3. Like charges repel, whereas unlike charges attract. 4. The force between charges decreases with distance. How do we know there are two types of electric charge? When various materials are rubbed together in controlled ways, certain combinations of materials always produce one type of charge on one material and the opposite type on the other. By convention, we call one type of charge “positive” and the other type “negative.” For example, when glass is rubbed with silk, the glass becomes positively charged and the silk negatively charged. Since the glass and silk have opposite charges, they attract one another like clothes that have rubbed together in a dryer. Two glass rods rubbed with silk in this manner will repel one another, since each rod has positive charge on it. Similarly, two silk cloths so rubbed will repel, since both cloths have negative charge. The accompanying figure shows how these simple materials can be used to explore the nature of the force between charges. More sophisticated questions arise. Where do these charges come from? Can you create or destroy charge? Is there a smallest unit of charge? Exactly how does the force depend on the amount of charge and the distance between charges? Such questions obviously occurred to Benjamin Franklin and other early researchers, and they interest us even today. ### Charge Carried by Electrons and Protons Franklin wrote in his letters and books that he could see the effects of electric charge but did not understand what caused the phenomenon.
Today we have the advantage of knowing that normal matter is made of atoms, and that atoms contain positive and negative charges, usually in equal amounts. The accompanying figure shows a simple model of an atom with negative electrons orbiting its positive nucleus. The nucleus is positive due to the presence of positively charged protons. Nearly all charge in nature is due to electrons and protons, which are two of the three building blocks of most matter. (The third is the neutron, which is neutral, carrying no charge.) Other charge-carrying particles are observed in cosmic rays and nuclear decay, and are created in particle accelerators. All but the electron and proton survive only a short time and are quite rare by comparison. The charges of electrons and protons are identical in magnitude but opposite in sign. Furthermore, the charges on all objects in nature are integral multiples of this basic quantity of charge, meaning that all charges are made of combinations of a basic unit of charge. Usually, charges are formed by combinations of electrons and protons. The magnitude of this basic charge is $\left|q_e\right| = 1.60 \times 10^{-19}\ \mathrm{C}$. The symbol $q$ is commonly used for charge, and the subscript $e$ indicates the charge of a single electron (or proton). The SI unit of charge is the coulomb (C). The number of protons needed to make a charge of 1.00 C is $1.00\ \mathrm{C} \times \frac{1\ \text{proton}}{1.60 \times 10^{-19}\ \mathrm{C}} = 6.25 \times 10^{18}\ \text{protons}$. Similarly, $6.25 \times 10^{18}$ electrons have a combined charge of −1.00 coulomb. Just as there is a smallest bit of an element (an atom), there is a smallest bit of charge. There is no directly observed charge smaller than $\left|q_e\right|$ (see Things Great and Small: The Submicroscopic Origin of Charge), and all observed charges are integral multiples of $\left|q_e\right|$. The accompanying figure shows a person touching a Van de Graaff generator and receiving excess positive charge. The expanded view of a hair shows the existence of both types of charges but an excess of positive. The repulsion of these positive like charges causes the strands of hair to repel other strands of hair and to stand up. The further blowup shows an artist’s conception of an electron and a proton perhaps found in an atom in a strand of hair. The electron seems to have no substructure; in contrast, when the substructure of protons is explored by scattering extremely energetic electrons from them, it appears that there are point-like particles inside the proton. These sub-particles, named quarks, have never been directly observed, but they are believed to carry fractional charges. Charges on electrons and protons and all other directly observable particles are unitary, but these quark substructures carry charges of either $-\frac{1}{3}q_e$ or $+\frac{2}{3}q_e$. There are continuing attempts to observe fractional charge directly and to learn of the properties of quarks, which are perhaps the ultimate substructure of matter. ### Separation of Charge in Atoms Charges in atoms and molecules can be separated—for example, by rubbing materials together. Some atoms and molecules have a greater affinity for electrons than others and will become negatively charged by close contact in rubbing, leaving the other material positively charged. Positive charge can similarly be induced by rubbing. Methods other than rubbing can also separate charges. Batteries, for example, use combinations of substances that interact in such a way as to separate charges. Chemical interactions may transfer negative charge from one substance to the other, making one battery terminal negative and leaving the other positive. No charge is actually created or destroyed when charges are separated as we have been discussing.
Rather, existing charges are moved about. In fact, in all situations the total amount of charge is always constant. This universally obeyed law of nature is called the law of conservation of charge. In more exotic situations, such as in particle accelerators, mass, $m$, can be created from energy in the amount $m = \frac{E}{c^2}$. Sometimes, the created mass is charged, such as when an electron is created. Whenever a charged particle is created, another having an opposite charge is always created along with it, so that the total charge created is zero. Usually, the two particles are “matter-antimatter” counterparts. For example, an antielectron would usually be created at the same time as an electron. The antielectron has a positive charge (it is called a positron), and so the total charge created is zero. All particles have antimatter counterparts with charges of opposite sign. When matter and antimatter counterparts are brought together, they completely annihilate one another. By annihilate, we mean that the mass of the two particles is converted to energy $E$, again obeying the relationship $E = mc^2$. Since the two particles have equal and opposite charge, the total charge is zero before and after the annihilation; thus, total charge is conserved. The law of conservation of charge is absolute—it has never been observed to be violated. Charge, then, is a special physical quantity, joining a very short list of other quantities in nature that are always conserved. Other conserved quantities include energy, momentum, and angular momentum. ### Test Prep for AP Courses ### Section Summary 1. There are only two types of charge, which we call positive and negative. 2. Like charges repel, unlike charges attract, and the force between charges decreases with the square of the distance. 3. The vast majority of positive charge in nature is carried by protons, while the vast majority of negative charge is carried by electrons. 4. The electric charge of one electron is equal in magnitude and opposite in sign to the charge of one proton. 5. An ion is an atom or molecule that has nonzero total charge due to having unequal numbers of electrons and protons. 6. The SI unit for charge is the coulomb (C), with protons and electrons having charges of opposite sign but equal magnitude; the magnitude of this basic charge is $\left|q_e\right| = 1.60 \times 10^{-19}\ \mathrm{C}$. 7. Whenever charge is created or destroyed, equal amounts of positive and negative are involved. 8. Most often, existing charges are separated from neutral objects to obtain some net charge. 9. Both positive and negative charges exist in neutral objects and can be separated by rubbing one object with another. For macroscopic objects, negatively charged means an excess of electrons and positively charged means a depletion of electrons. 10. The law of conservation of charge ensures that whenever a charge is created, an equal charge of the opposite sign is created at the same time. ### Conceptual Questions ### Problems & Exercises
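As a numerical check on this section, the sketch below counts the protons needed for 1.00 C of charge and evaluates the minimum energy required to create an electron-positron pair from $E = mc^2$. The basic charge is the value quoted above; the electron mass and speed of light are standard constants not given in this section.

```python
# Charge quantization and pair creation, using standard constants.

Q_E = 1.60e-19   # C, magnitude of the charge on an electron or proton
M_E = 9.11e-31   # kg, electron (and positron) mass
C = 3.00e8       # m/s, speed of light

# Number of protons whose combined charge is 1.00 C.
protons_per_coulomb = 1.00 / Q_E
print(f"protons per coulomb: {protons_per_coulomb:.2e}")   # ~6.25e18

# Minimum energy to create an electron-positron pair (two particles,
# total charge zero, consistent with conservation of charge).
pair_energy_joules = 2 * M_E * C ** 2
pair_energy_mev = pair_energy_joules / Q_E / 1e6  # convert J -> eV -> MeV
print(f"pair-creation energy: {pair_energy_joules:.2e} J ({pair_energy_mev:.2f} MeV)")
```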
# Electric Charge and Electric Field ## Conductors and Insulators ### Learning Objectives By the end of this section, you will be able to: 1. Define conductor and insulator, explain the difference, and give examples of each. 2. Describe three methods for charging an object. 3. Explain what happens to an electric force as you move farther from the source. 4. Define polarization. Some substances, such as metals and salty water, allow charges to move through them with relative ease. Some of the electrons in metals and similar conductors are not bound to individual atoms or sites in the material. These free electrons can move through the material much as air moves through loose sand. Any substance that has free electrons and allows charge to move relatively freely through it is called a conductor. The moving electrons may collide with fixed atoms and molecules, losing some energy, but they can move in a conductor. Superconductors allow the movement of charge without any loss of energy. Salty water and other similar conducting materials contain free ions that can move through them. An ion is an atom or molecule having a positive or negative (nonzero) total charge. In other words, the total number of electrons is not equal to the total number of protons. Other substances, such as glass, do not allow charges to move through them. These are called insulators. Electrons and ions in insulators are bound in the structure and cannot move easily—as much as $10^{23}$ times more slowly than in conductors. Pure water and dry table salt are insulators, for example, whereas molten salt and salty water are conductors. ### Charging by Contact The accompanying figure shows an electroscope being charged by touching it with a positively charged glass rod. Because the glass rod is an insulator, it must actually touch the electroscope to transfer charge to or from it. (Note that the extra positive charges reside on the surface of the glass rod as a result of rubbing it with silk before starting the experiment.) Since only electrons move in metals, we see that they are attracted to the top of the electroscope. There, some are transferred to the positive rod by touch, leaving the electroscope with a net positive charge. Electrostatic repulsion in the leaves of the charged electroscope separates them. The electrostatic force has a horizontal component that results in the leaves moving apart as well as a vertical component that is balanced by the gravitational force. Similarly, the electroscope can be negatively charged by contact with a negatively charged object. ### Charging by Induction It is not necessary to transfer excess charge directly to an object in order to charge it. The accompanying figure shows a method of induction wherein a charge is induced in a nearby object, without direct contact. Here we see two neutral metal spheres in contact with one another but insulated from the rest of the world. A positively charged rod is brought near one of them, attracting negative charge to that side, leaving the other sphere positively charged. This is an example of induced polarization of neutral objects. Polarization is the separation of charges in an object that remains neutral. If the spheres are now separated (before the rod is pulled away), each sphere will have a net charge. Note that the object closest to the charged rod receives an opposite charge when charged by induction. Note also that no charge is removed from the charged rod, so that this process can be repeated without depleting the supply of excess charge. Another method of charging by induction is shown in the next figure.
The neutral metal sphere is polarized when a charged rod is brought near it. The sphere is then grounded, meaning that a conducting wire is run from the sphere to the ground. Since the earth is large and most ground is a good conductor, it can supply or accept excess charge easily. In this case, electrons are attracted to the sphere through a wire called the ground wire, because it supplies a conducting path to the ground. The ground connection is broken before the charged rod is removed, leaving the sphere with an excess charge opposite to that of the rod. Again, an opposite charge is achieved when charging by induction, and the charged rod loses none of its excess charge. Neutral objects can be attracted to any charged object. The pieces of straw attracted to polished amber are neutral, for example. If you run a plastic comb through your hair, the charged comb can pick up neutral pieces of paper. The accompanying figure shows how the polarization of atoms and molecules in neutral objects results in their attraction to a charged object. When a charged rod is brought near a neutral substance, an insulator in this case, the distribution of charge in atoms and molecules is shifted slightly. Opposite charge is attracted nearer the external charged rod, while like charge is repelled. Since the electrostatic force decreases with distance, the repulsion of like charges is weaker than the attraction of unlike charges, and so there is a net attraction. Thus a positively charged glass rod attracts neutral pieces of paper, as will a negatively charged rubber rod. Some molecules, like water, are polar molecules. Polar molecules have a natural or inherent separation of charge, although they are neutral overall. Polar molecules are particularly affected by other charged objects and show greater polarization effects than molecules with naturally uniform charge distributions. ### Test Prep for AP Courses ### Section Summary 1. Polarization is the separation of positive and negative charges in a neutral object. 2. A conductor is a substance that allows charge to flow freely through its atomic structure. 3. An insulator holds charge within its atomic structure. 4. Objects with like charges repel each other, while those with unlike charges attract each other. 5. A conducting object is said to be grounded if it is connected to the Earth through a conductor. Grounding allows transfer of charge to and from the earth’s large reservoir. 6. Objects can be charged by contact with another charged object and obtain the same sign charge. 7. If an object is temporarily grounded, it can be charged by induction, and obtains the opposite sign charge. 8. Polarized objects have their positive and negative charges concentrated in different areas, giving them a nonsymmetrical charge distribution. 9. Polar molecules have an inherent separation of charge. ### Conceptual Questions ### Problems & Exercises
# Electric Charge and Electric Field ## Coulomb’s Law ### Learning Objectives By the end of this section, you will be able to: 1. State Coulomb’s law in terms of how the electrostatic force changes with the distance between two objects. 2. Calculate the electrostatic force between two point charges, such as electrons or protons. 3. Compare the electrostatic force to the gravitational attraction for a proton and an electron; for a human and the Earth. Through the work of scientists in the late 18th century, the main features of the electrostatic force—the existence of two types of charge, the observation that like charges repel, unlike charges attract, and the decrease of force with distance—were eventually refined and expressed as a mathematical formula. The mathematical formula for the electrostatic force is called Coulomb’s law after the French physicist Charles Coulomb (1736–1806), who performed experiments and first proposed a formula to calculate it. Although the formula for Coulomb’s law is simple, it was no mean task to prove it. The experiments Coulomb did, with the primitive equipment then available, were difficult. Modern experiments have verified Coulomb’s law to great precision. For example, it has been shown that the force is inversely proportional to the square of the distance between two objects to an accuracy of 1 part in $10^{16}$. No exceptions have ever been found, even at the small distances within the atom. On a small scale, where the interactions of individual charged particles are important, the gravitational force is completely negligible. On a large scale, such as between the Earth and a person, the reverse is true. Most objects are nearly electrically neutral, and so attractive and repulsive Coulomb forces nearly cancel. Gravitational force on a large scale dominates interactions between large objects because it is always attractive, while Coulomb forces tend to cancel. ### Test Prep for AP Courses ### Section Summary 1. Frenchman Charles Coulomb was the first to publish the mathematical equation that describes the electrostatic force between two objects. 2. Coulomb’s law gives the magnitude of the force between point charges. It is $F = k\frac{\left|q_1 q_2\right|}{r^2}$, where $q_1$ and $q_2$ are two point charges separated by a distance $r$, and $k \approx 8.99 \times 10^9\ \mathrm{N \cdot m^2/C^2}$ is the Coulomb constant. 3. This Coulomb force is extremely basic, since most charges are due to point-like particles. It is responsible for all electrostatic effects and underlies most macroscopic forces. 4. The Coulomb force is extraordinarily strong compared with the gravitational force, another basic force—but unlike gravitational force it can cancel, since it can be either attractive or repulsive. 5. The electrostatic force between two subatomic particles is far greater than the gravitational force between the same two particles. ### Conceptual Questions ### Problems & Exercises
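The comparison in the learning objectives, between the electrostatic and gravitational forces on an electron and a proton, can be made explicit with a short sketch. The separation used is a hydrogen-atom-scale distance assumed purely for illustration; the ratio of the two forces is independent of the separation.

```python
# Compare Coulomb and gravitational forces between a proton and an electron,
# F_coulomb = k*|q1*q2|/r^2 versus F_gravity = G*m1*m2/r^2.

K = 8.99e9      # N*m^2/C^2, Coulomb constant
G = 6.67e-11    # N*m^2/kg^2, gravitational constant
Q_E = 1.60e-19  # C, magnitude of the electron/proton charge
M_E = 9.11e-31  # kg, electron mass
M_P = 1.67e-27  # kg, proton mass

r = 0.53e-10  # m, Bohr-radius-scale separation (assumed for illustration)

f_coulomb = K * Q_E ** 2 / r ** 2
f_gravity = G * M_E * M_P / r ** 2

print(f"Coulomb force:       {f_coulomb:.2e} N")
print(f"Gravitational force: {f_gravity:.2e} N")
print(f"ratio:               {f_coulomb / f_gravity:.1e}")  # ~2e39, independent of r
```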
# Electric Charge and Electric Field ## Electric Field: Concept of a Field Revisited ### Learning Objectives By the end of this section, you will be able to: 1. Describe a force field and calculate the strength of an electric field due to a point charge. 2. Calculate the force exerted on a test charge by an electric field. 3. Explain the relationship between electrical force (F) on a test charge and electrical field strength (E). Contact forces, such as between a baseball and a bat, are explained on the small scale by the interaction of the charges in atoms and molecules in close proximity. They interact through forces that include the Coulomb force. Action at a distance is a force between objects that are not close enough for their atoms to “touch.” That is, they are separated by more than a few atomic diameters. For example, a charged rubber comb attracts neutral bits of paper from a distance via the Coulomb force. It is very useful to think of an object being surrounded in space by a force field. The force field carries the force to another object (called a test object) some distance away. ### Concept of a Field A field is a way of conceptualizing and mapping the force that surrounds any object and acts on another object at a distance without apparent physical connection. For example, the gravitational field surrounding the earth (and all other masses) represents the gravitational force that would be experienced if another mass were placed at a given point within the field. In the same way, the Coulomb force field surrounding any charge extends throughout space. Using Coulomb’s law, $F = k\left|q_1 q_2\right|/r^2$, its magnitude is given by the equation $F = k\left|qQ\right|/r^2$, for a point charge (a particle having a charge $Q$) acting on a test charge $q$ at a distance $r$. Both the magnitude and direction of the Coulomb force field depend on $Q$ and the test charge $q$. To simplify things, we would prefer to have a field that depends only on $Q$ and not on the test charge $q$. The electric field is defined in such a manner that it represents only the charge creating it and is unique at every point in space. Specifically, the electric field is defined to be the ratio of the Coulomb force to the test charge: $E = \frac{F}{q}$, where $F$ is the electrostatic force (or Coulomb force) exerted on a positive test charge $q$. It is understood that $E$ is in the same direction as $F$. It is also assumed that $q$ is so small that it does not alter the charge distribution creating the electric field. The units of electric field are newtons per coulomb (N/C). If the electric field is known, then the electrostatic force on any charge $q$ is simply obtained by multiplying charge times electric field, or $F = qE$. Consider the electric field due to a point charge $Q$. According to Coulomb’s law, the force it exerts on a test charge $q$ is $F = k\left|qQ\right|/r^2$. Thus the magnitude of the electric field, $E$, for a point charge is $E = \frac{F}{q} = \frac{k\left|qQ\right|}{qr^2}$. Since the test charge cancels, we see that $E = \frac{k\left|Q\right|}{r^2}$. The electric field is thus seen to depend only on the charge $Q$ and the distance $r$; it is completely independent of the test charge $q$. ### Test Prep for AP Courses ### Section Summary 1. The electrostatic force field surrounding a charged object extends out into space in all directions. 2. The electrostatic force exerted by a point charge on a test charge at a distance $r$ depends on the charge of both charges, as well as the distance between the two. 3. The electric field $E$ is defined to be $E = \frac{F}{q}$, where $F$ is the Coulomb or electrostatic force exerted on a small positive test charge $q$. 4. The magnitude of the electric field created by a point charge $Q$ is $E = \frac{k\left|Q\right|}{r^2}$, where $r$ is the distance from $Q$. ### Conceptual Questions ### Problem Exercises
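A minimal sketch of the definitions just given: the field of a point charge from $E = k\left|Q\right|/r^2$, and the force $F = qE$ that this field exerts on any other charge. The source charge, test charge, and distance below are arbitrary illustrative values.

```python
# Electric field of a point charge, E = k*|Q|/r^2, and the force F = q*E
# that this field exerts on another charge placed in it.

K = 8.99e9  # N*m^2/C^2, Coulomb constant

def field_of_point_charge(q_source, r):
    """Magnitude of E (N/C) a distance r (m) from charge q_source (C)."""
    return K * abs(q_source) / r ** 2

def force_on_charge(q_test, e_field):
    """Magnitude of the force (N) on q_test in a field of magnitude e_field."""
    return abs(q_test) * e_field

# Illustrative values (assumed): a 2.0 nC source charge, field point at 5.0 cm.
e = field_of_point_charge(2.0e-9, 0.050)
print(f"E = {e:.3e} N/C")                                    # ~7.2e3 N/C
print(f"F on a 1.0 nC charge = {force_on_charge(1.0e-9, e):.3e} N")
# Note that the test charge cancels out of E itself: the field describes
# the source charge alone, at every point in space.
```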
# Electric Charge and Electric Field ## Electric Field Lines: Multiple Charges ### Learning Objectives By the end of this section, you will be able to: 1. Calculate the total force (magnitude and direction) exerted on a test charge from more than one charge 2. Describe an electric field diagram of a positive point charge; of a negative point charge with twice the magnitude of positive charge 3. Draw the electric field lines between two points of the same charge; between two points of opposite charge. Drawings using lines to represent electric fields around charged objects are very useful in visualizing field strength and direction. Since the electric field has both magnitude and direction, it is a vector. Like all vectors, the electric field can be represented by an arrow that has length proportional to its magnitude and that points in the correct direction. (We have used arrows extensively to represent force vectors, for example.) The accompanying figure shows two pictorial representations of the same electric field created by a positive point charge $Q$. Part (b) shows the standard representation using continuous lines; part (a) shows numerous individual arrows, with each arrow representing the force on a test charge $q$. Field lines are essentially a map of infinitesimal force vectors. Note that the electric field is defined for a positive test charge $q$, so that the field lines point away from a positive charge and toward a negative charge. The electric field strength is exactly proportional to the number of field lines per unit area, since the magnitude of the electric field for a point charge is $E = k\left|Q\right|/r^2$ and the area over which the lines spread is proportional to $r^2$. This pictorial representation, in which field lines represent the direction and their closeness (that is, their areal density or the number of lines crossing a unit area) represents strength, is used for all fields: electrostatic, gravitational, magnetic, and others. In many situations, there are multiple charges. The total electric field created by multiple charges is the vector sum of the individual fields created by each charge (see the numerical sketch at the end of this section). The following example shows how to add electric field vectors. The accompanying figure shows how the electric field from two point charges can be drawn by finding the total field at representative points and drawing electric field lines consistent with those points. While the electric fields from multiple charges are more complex than those of single charges, some simple features are easily noticed. For example, the field is weaker between like charges, as shown by the lines being farther apart in that region. (This is because the fields from each charge exert opposing forces on any charge placed between them.) Furthermore, at a great distance from two like charges, the field becomes identical to the field from a single, larger charge. For two unlike charges, the field is stronger between the charges. In that region, the fields from each charge are in the same direction, and so their strengths add. The field of two unlike charges is weak at large distances, because the fields of the individual charges are in opposite directions and so their strengths subtract. At very large distances, the field of two unlike charges looks like that of a smaller single charge. We use electric field lines to visualize and analyze electric fields (the lines are a pictorial tool, not a physical entity in themselves). The properties of electric field lines for any charge distribution can be summarized as follows: 1.
Field lines must begin on positive charges and terminate on negative charges, or at infinity in the hypothetical case of isolated charges. 2. The number of field lines leaving a positive charge or entering a negative charge is proportional to the magnitude of the charge. 3. The strength of the field is proportional to the closeness of the field lines—more precisely, it is proportional to the number of lines per unit area perpendicular to the lines. 4. The direction of the electric field is tangent to the field line at any point in space. 5. Field lines can never cross. The last property means that the field is unique at any point. Field lines represent the direction of the field, so if two lines crossed, the field would have two directions at that location (an impossibility if the field is unique). ### Test Prep for AP Courses ### Section Summary 1. Drawings of electric field lines are useful visual tools. The properties of electric field lines for any charge distribution are that: 2. Field lines must begin on positive charges and terminate on negative charges, or at infinity in the hypothetical case of isolated charges. 3. The number of field lines leaving a positive charge or entering a negative charge is proportional to the magnitude of the charge. 4. The strength of the field is proportional to the closeness of the field lines—more precisely, it is proportional to the number of lines per unit area perpendicular to the lines. 5. The direction of the electric field is tangent to the field line at any point in space. 6. Field lines can never cross. ### Conceptual Questions ### Problem Exercises
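The vector addition of fields described above can also be sketched numerically. The following code sums the fields of two point charges at a chosen point; the charges and geometry are arbitrary illustrative values, and the midpoint result confirms that the field is strongest between unlike charges.

```python
# Superposition: the total electric field of several point charges is the
# vector sum of the individual fields, E_i = k*q_i/r_i^2, pointing away from
# positive charges and toward negative ones.
import math

K = 8.99e9  # N*m^2/C^2, Coulomb constant

def field_at(point, charges):
    """Total (Ex, Ey) at `point` from a list of (q, x, y) point charges."""
    ex = ey = 0.0
    px, py = point
    for q, x, y in charges:
        dx, dy = px - x, py - y
        r = math.hypot(dx, dy)
        e = K * q / r ** 2   # signed magnitude
        ex += e * dx / r     # unit vector from charge to field point
        ey += e * dy / r
    return ex, ey

# Illustrative: unlike charges (+5 nC and -5 nC) 10 cm apart; field at midpoint.
charges = [(5e-9, 0.0, 0.0), (-5e-9, 0.10, 0.0)]
ex, ey = field_at((0.05, 0.0), charges)
print(f"E = ({ex:.3e}, {ey:.3e}) N/C")  # both fields point the same way here,
                                        # so their magnitudes add (~3.6e4 N/C)
```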
# Electric Charge and Electric Field ## Electric Forces in Biology ### Learning Objectives By the end of this section, you will be able to: 1. Describe how a water molecule is polar. 2. Explain electrostatic screening by a water molecule within a living cell. Classical electrostatics has an important role to play in modern molecular biology. Large molecules such as proteins, nucleic acids, and so on—so important to life—are usually electrically charged. DNA itself is highly charged; it is the electrostatic force that not only holds the molecule together but gives the molecule structure and strength. The accompanying figure is a schematic of the DNA double helix. The four nucleotide bases are given the symbols A (adenine), C (cytosine), G (guanine), and T (thymine). The order of the four bases varies in each strand, but the pairing between bases is always the same. C and G are always paired and A and T are always paired, which helps to preserve the order of bases in cell division (mitosis) so as to pass on the correct genetic information. Since the Coulomb force drops with distance ($F \propto 1/r^2$), the distances between the base pairs must be small enough that the electrostatic force is sufficient to hold them together. DNA is a highly charged molecule, with about $2q_e$ (fundamental charge) per $0.3 \times 10^{-9}$ m. The distance separating the two strands that make up the DNA structure is about 1 nm, while the distance separating the individual atoms within each base is about 0.3 nm. One might wonder why electrostatic forces do not play a larger role in biology than they do if we have so many charged molecules. The reason is that the electrostatic force is “diluted” due to screening between molecules. This is due to the presence of other charges in the cell. ### Polarity of Water Molecules The best example of this charge screening is the water molecule, represented as $\mathrm{H_2O}$. Water is a strongly polar molecule. Its 10 electrons (8 from the oxygen atom and 2 from the two hydrogen atoms) tend to remain closer to the oxygen nucleus than the hydrogen nuclei. This creates two centers of equal and opposite charges—what is called a dipole, as illustrated in the accompanying figure. The magnitude of the dipole is called the dipole moment. These two centers of charge will terminate some of the electric field lines coming from a free charge, as on a DNA molecule. This results in a reduction in the strength of the Coulomb interaction. One might say that screening makes the Coulomb force a short-range force rather than a long-range one. ### Cell Membranes Other ions of importance in biology that can reduce or screen Coulomb interactions are $\mathrm{Na^+}$, $\mathrm{K^+}$, and $\mathrm{Cl^-}$. These ions are located both inside and outside of living cells. The movement of these ions through cell membranes is crucial to the motion of nerve impulses through nerve axons. Recent studies of electrostatics in biology seem to show that electric fields in cells can be extended over larger distances, in spite of screening, by “microtubules” within the cell. These microtubules are hollow tubes composed of proteins that guide the movement of chromosomes when cells divide and the motion of other organelles within the cell, and that provide mechanisms for motion of some cells (as motors). You are likely familiar with the role of electrical signals in nerve conduction and the importance of charges in cardiac and related activity. Changes in electrical properties are also essential in core biological processes.
Ernest Everett Just, whose expertise in understanding and handling egg cells led to a number of critical experimental discoveries, investigated the role of the cell membrane in reproductive fertilization. In one key experiment, Just established that the egg membrane undergoes a depolarizing "wave of negativity" the moment it fuses with a sperm cell. This change in charge is now known as the "fast block" that ensures that only one sperm cell fuses with an egg cell and is critical for embryonic development. ### Bioelectricity and Wound Healing Just as electrical forces drive activities in healthy cells and systems, they are also critical in damaged ones. Scientists have long known that injuries or infections are managed by the body through various responses, including increased white blood cell concentrations, swelling, and tissue repair. For example, human cells damaged by wounds heal through a complex process. But what triggers it? Physicists and biologists working together at Vanderbilt University used an ultra-precise laser to uncover the processes organisms use to repair damage. Lead researchers Andrea Page-McCaw and Shane Hutson and study author Erica Shannon discovered that immediately upon damage, cells release calcium ions and eventually other molecules, driving an electrochemical response that initiates the healing process. Shannon notes that different types of damage lead to different chemical releases, demonstrating how organisms may initiate specific responses to best address the injury. While far more research is required to understand the triggering and response mechanisms, other research indicates that bioelectricity is highly involved in wound healing. Several studies have indicated that precise and low-level electrical stimulation of wounds (such as those from surgeries) leads to faster healing. While the mechanisms are not fully understood, electrical stimulation is a growing area of research and practice in medicine. ### Section Summary 1. Many molecules in living organisms, such as DNA, carry a charge. 2. An uneven distribution of the positive and negative charges within a polar molecule produces a dipole. 3. The effect of a Coulomb field generated by a charged object may be reduced or blocked by other nearby charged objects. 4. Biological systems contain water, and because water molecules are polar, they have a strong effect on other molecules in living systems. ### Conceptual Question
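For a sense of scale in the DNA discussion above, the sketch below evaluates the bare Coulomb force between two fundamental charges at roughly the 0.3-nm atomic spacing, then crudely models screening by dividing by water's relative permittivity (about 80). Treating screening as a single constant factor is an assumption made here for illustration only.

```python
# Order-of-magnitude Coulomb force at DNA scales, F = k*q1*q2/r^2,
# and the effect of screening by a polar medium such as water.

K = 8.99e9      # N*m^2/C^2, Coulomb constant
Q_E = 1.60e-19  # C, fundamental charge

r = 0.3e-9  # m, roughly the atomic spacing within a base quoted above

f_vacuum = K * Q_E ** 2 / r ** 2
print(f"unscreened force: {f_vacuum:.2e} N")   # ~2.6e-9 N

# Water's relative permittivity (~80) weakens the interaction; dividing by
# a single factor of 80 is a crude illustrative model of screening.
f_screened = f_vacuum / 80
print(f"screened force:   {f_screened:.2e} N")
```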
# Electric Charge and Electric Field ## Conductors and Electric Fields in Static Equilibrium ### Learning Objectives By the end of this section, you will be able to: 1. List the three properties of a conductor in electrostatic equilibrium. 2. Explain the effect of an electric field on free charges in a conductor. 3. Explain why no electric field may exist inside a conductor. 4. Describe the electric field surrounding Earth. 5. Explain what happens to an electric field applied to an irregular conductor. 6. Describe how a lightning rod works. 7. Explain how a metal car may protect passengers inside from the dangerous electric fields caused by a downed line touching the car. Conductors contain free charges that move easily. When excess charge is placed on a conductor or the conductor is put into a static electric field, charges in the conductor quickly respond to reach a steady state called electrostatic equilibrium. The accompanying figure shows the effect of an electric field on free charges in a conductor. The free charges move until the field is perpendicular to the conductor’s surface. There can be no component of the field parallel to the surface in electrostatic equilibrium, since, if there were, it would produce further movement of charge. A positive free charge is shown, but free charges can be either positive or negative and are, in fact, negative in metals. The motion of a positive charge is equivalent to the motion of a negative charge in the opposite direction. A conductor placed in an electric field will be polarized. The accompanying figure shows the result of placing a neutral conductor in an originally uniform electric field. The field becomes stronger near the conductor but entirely disappears inside it. The properties of a conductor are consistent with the situations already discussed and can be used to analyze any conductor in electrostatic equilibrium. This can lead to some interesting new insights, such as described below. How can a very uniform electric field be created? Consider a system of two metal plates with opposite charges on them. The properties of conductors in electrostatic equilibrium indicate that the electric field between the plates will be uniform in strength and direction. Except near the edges, the excess charges distribute themselves uniformly, producing field lines that are uniformly spaced (hence uniform in strength) and perpendicular to the surfaces (hence uniform in direction, since the plates are flat). The edge effects are less important when the plates are close together. ### Earth’s Electric Field A nearly uniform electric field of approximately 150 N/C, directed downward, surrounds Earth, with the magnitude increasing slightly as we get closer to the surface. What causes the electric field? At around 100 km above the surface of Earth we have a layer of charged particles, called the ionosphere. The ionosphere is responsible for a range of phenomena including the electric field surrounding Earth. In fair weather the ionosphere is positive and the Earth largely negative, maintaining the electric field. In storm conditions clouds form, and localized electric fields can be larger and reversed in direction. The exact charge distributions depend on the local conditions, and many variations are possible. If the electric field is sufficiently large, the insulating properties of the surrounding material break down and it becomes conducting. For air this occurs at around $3 \times 10^6$ N/C.
The air ionizes, ions and electrons recombine, and we get discharge in the form of lightning sparks and corona discharge. ### Electric Fields on Uneven Surfaces So far we have considered excess charges on a smooth, symmetrical conductor surface. What happens if a conductor has sharp corners or is pointed? Excess charges on a nonuniform conductor become concentrated at the sharpest points. Additionally, excess charge may move on or off the conductor at the sharpest points. To see how and why this happens, consider the charged conductor in the accompanying figure. The electrostatic repulsion of like charges is most effective in moving them apart on the flattest surface, and so they become least concentrated there. This is because the forces between identical pairs of charges at either end of the conductor are identical, but the components of the forces parallel to the surfaces are different. The component parallel to the surface is greatest on the flattest surface and, hence, more effective in moving the charge. The same effect is produced on a conductor by an externally applied electric field, as seen in part (c) of the figure. Since the field lines must be perpendicular to the surface, more of them are concentrated on the most curved parts. ### Applications of Conductors On a very sharply curved surface, the charges are so concentrated at the point that the resulting electric field can be great enough to remove them from the surface. This can be useful. Lightning rods work best when they are most pointed. The large charges created in storm clouds induce an opposite charge on a building that can result in a lightning bolt hitting the building. The induced charge is bled away continually by a lightning rod, preventing the more dramatic lightning strike. Of course, we sometimes wish to prevent the transfer of charge rather than to facilitate it. In that case, the conductor should be very smooth and have as large a radius of curvature as possible. Smooth surfaces are used on high-voltage transmission lines, for example, to avoid leakage of charge into the air. Another device that makes use of some of these principles is a Faraday cage. This is a metal shield that encloses a volume. All electrical charges will reside on the outside surface of this shield, and there will be no electrical field inside. A Faraday cage is used to prohibit stray electrical fields in the environment from interfering with sensitive measurements, such as the electrical signals inside a nerve cell. During electrical storms, if you are driving a car, it is best to stay inside the car, as its metal body acts as a Faraday cage with zero electrical field inside. If in the vicinity of a lightning strike, its effect is felt on the outside of the car and the inside is unaffected, provided you remain totally inside. This is also true if an active (“hot”) electrical wire was broken (in a storm or an accident) and fell on your car. ### Test Prep for AP Courses ### Section Summary 1. A conductor allows free charges to move about within it. 2. The electrical forces around a conductor will cause free charges to move around inside the conductor until static equilibrium is reached. 3. Any excess charge will collect along the surface of a conductor. 4. Conductors with sharp corners or points will collect more charge at those points. 5. A lightning rod is a conductor with sharply pointed ends that collect excess charge on the building caused by an electrical storm and allow it to dissipate back into the air. 6.
Electrical storms result when the electric field at Earth’s surface in certain locations becomes more intense, due to changes in the insulating effect of the air. 7. A Faraday cage acts like a shield around an object, preventing external electric fields from reaching the interior. ### Conceptual Questions ### Problems & Exercises
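A rough sketch tying this section's numbers together: using $E = k\left|Q\right|/r^2$ at a conductor's surface and the approximately $3 \times 10^6$ N/C breakdown field of air quoted above, one can estimate how much charge an isolated sphere of a given radius can hold before the surrounding air discharges, illustrating why charge escapes most readily from sharply curved points.

```python
# How much charge can an isolated conducting sphere hold before the air
# at its surface breaks down?  At the surface, E = k*Q/r^2, so the
# maximum charge is Q_max = E_breakdown * r^2 / k.

K = 8.99e9          # N*m^2/C^2, Coulomb constant
E_BREAKDOWN = 3e6   # N/C, approximate breakdown field of air quoted above

def max_charge(radius_m):
    return E_BREAKDOWN * radius_m ** 2 / K

# Smaller radius of curvature -> far less charge before discharge,
# which is why charge leaks away most readily from sharp points.
for r in (1e-3, 0.01, 0.1):  # 1 mm point, 1 cm knob, 10 cm sphere
    print(f"r = {r:5.3f} m -> Q_max = {max_charge(r):.2e} C")
```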
# Electric Charge and Electric Field ## Applications of Electrostatics ### Learning Objectives By the end of this section, you will be able to: 1. Name several real-world applications of the study of electrostatics. The study of electrostatics has proven useful in many areas. This module covers just a few of the many applications of electrostatics. ### The Van de Graaff Generator Van de Graaff generators (or Van de Graaffs) are not only spectacular devices used to demonstrate high voltage due to static electricity—they are also used for serious research. The first was built by Robert Van de Graaff in 1931 (based on original suggestions by Lord Kelvin) for use in nuclear physics research. The accompanying figure shows a schematic of a large research version. Van de Graaffs utilize both smooth and pointed surfaces, and conductors and insulators, to generate large static charges and, hence, large voltages. A very large excess charge can be deposited on the sphere, because it moves quickly to the outer surface. Practical limits arise because the large electric fields polarize and eventually ionize surrounding materials, creating free charges that neutralize excess charge or allow it to escape. Nevertheless, voltages of 15 million volts are well within practical limits. ### Xerography Most copy machines use an electrostatic process called xerography—a word coined from the Greek words xeros for dry and graphos for writing. The heart of the process is shown in simplified form in the accompanying figure. A selenium-coated aluminum drum is sprayed with positive charge from points on a device called a corotron. Selenium is a substance with an interesting property—it is a photoconductor. That is, selenium is an insulator when in the dark and a conductor when exposed to light. In the first stage of the xerography process, the conducting aluminum drum is grounded so that a negative charge is induced under the thin layer of uniformly positively charged selenium. In the second stage, the surface of the drum is exposed to the image of whatever is to be copied. Where the image is light, the selenium becomes conducting, and the positive charge is neutralized. In dark areas, the positive charge remains, and so the image has been transferred to the drum. The third stage takes a dry black powder, called toner, and sprays it with a negative charge so that it will be attracted to the positive regions of the drum. Next, a blank piece of paper is given a greater positive charge than on the drum so that it will pull the toner from the drum. Finally, the paper and electrostatically held toner are passed through heated pressure rollers, which melt and permanently adhere the toner within the fibers of the paper. ### Laser Printers Laser printers use the xerographic process to make high-quality images on paper, employing a laser to produce an image on the photoconducting drum, as shown in the accompanying figure. In its most common application, the laser printer receives output from a computer, and it can achieve high-quality output because of the precision with which laser light can be controlled. Many laser printers do significant information processing, such as making sophisticated letters or fonts, and may contain a computer more powerful than the one giving them the raw data to be printed. ### Ink Jet Printers and Electrostatic Painting The ink jet printer, commonly used to print computer-generated text and graphics, also employs electrostatics. A nozzle makes a fine spray of tiny ink droplets, which are then given an electrostatic charge.
Once charged, the droplets can be directed, using pairs of charged plates, with great precision to form letters and images on paper. Ink jet printers can produce color images by using a black jet and three other jets with primary colors, usually cyan, magenta, and yellow, much as a color television produces color. (This is more difficult with xerography, requiring multiple drums and toners.) Electrostatic painting employs electrostatic charge to spray paint onto odd-shaped surfaces. Mutual repulsion of like charges causes the paint to fly away from its source. Surface tension forms drops, which are then attracted by unlike charges to the surface to be painted. Electrostatic painting can reach those hard-to-reach places, applying an even coat in a controlled manner. If the object is a conductor, the electric field is perpendicular to the surface, tending to bring the drops in perpendicularly. Corners and points on conductors will receive extra paint. Felt can similarly be applied.
### Smoke Precipitators and Electrostatic Air Cleaning
Another important application of electrostatics is found in air cleaners, both large and small. The electrostatic part of the process places excess (usually positive) charge on smoke, dust, pollen, and other particles in the air and then passes the air through an oppositely charged grid that attracts and retains the charged particles. (See .) Large electrostatic precipitators are used industrially to remove over 99% of the particles from stack gas emissions associated with the burning of coal and oil. Home precipitators, often in conjunction with the home heating and air conditioning system, are very effective in removing polluting particles, irritants, and allergens.
### Integrated Concepts
The Integrated Concepts exercises for this module involve concepts such as electric charges, electric fields, and several other topics. Physics is most interesting when applied to general situations involving more than a narrow set of physical principles. The electric field exerts force on charges, for example, and hence the relevance of Dynamics: Force and Newton’s Laws of Motion. The following topics are involved in some or all of the problems labeled “Integrated Concepts”: The following worked example illustrates how this strategy is applied to an Integrated Concept problem:
### Section Summary
1. Electrostatics is the study of electric fields in static equilibrium.
2. In addition to research using equipment such as a Van de Graaff generator, many practical applications of electrostatics exist, including photocopiers, laser printers, ink-jet printers and electrostatic air filters.
### Problems & Exercises
# Electric Potential and Electric Field ## Introduction to Electric Potential and Electric Energy In Electric Charge and Electric Field, we just scratched the surface (or at least rubbed it) of electrical phenomena. Two of the most familiar aspects of electricity are its energy and voltage. We know, for example, that great amounts of electrical energy can be stored in batteries, are transmitted cross-country through power lines, and may jump from clouds to explode the sap of trees. In a similar manner, at molecular levels, ions cross cell membranes and transfer information. We also know about voltages associated with electricity. Batteries are typically a few volts, the outlets in your home produce 120 volts, and power lines can be as high as hundreds of thousands of volts. But energy and voltage are not the same thing. A motorcycle battery, for example, is small and would not be very successful in replacing the much larger car battery, yet each has the same voltage. In this chapter, we shall examine the relationship between voltage and electrical energy and begin to explore some of the many applications of electricity.
# Electric Potential and Electric Field
## Electric Potential Energy: Potential Difference
### Learning Objectives
By the end of this section, you will be able to:
1. Define electric potential and electric potential energy.
2. Describe the relationship between potential difference and electrical potential energy.
3. Explain the electron volt and its use in submicroscopic processes.
4. Determine electric potential energy given potential difference and amount of charge.

When a free positive charge $q$ is accelerated by an electric field, such as shown in , it is given kinetic energy. The process is analogous to an object being accelerated by a gravitational field. It is as if the charge is going down an electrical hill where its electric potential energy is converted to kinetic energy. Let us explore the work done on a charge $q$ by the electric field in this process, so that we may develop a definition of electric potential energy. The electrostatic or Coulomb force is conservative, which means that the work done on $q$ is independent of the path taken. This is exactly analogous to the gravitational force in the absence of dissipative forces such as friction. When a force is conservative, it is possible to define a potential energy associated with the force, and it is usually easier to deal with the potential energy (because it depends only on position) than to calculate the work directly. We use the letters PE to denote electric potential energy, which has units of joules (J). The change in potential energy, $\Delta\text{PE}$, is crucial, since the work done by a conservative force is the negative of the change in potential energy; that is, $W = -\Delta\text{PE}$. For example, work $W$ done to accelerate a positive charge from rest is positive and results from a loss in PE, or a negative $\Delta\text{PE}$. There must be a minus sign in front of $\Delta\text{PE}$ to make $W$ positive. PE can be found at any point by taking one point as a reference and calculating the work needed to move a charge to the other point. Gravitational potential energy and electric potential energy are quite analogous. Potential energy accounts for work done by a conservative force and gives added insight regarding energy and energy transformation without the necessity of dealing with the force directly. It is much more common, for example, to use the concept of voltage (related to electric potential energy) than to deal with the Coulomb force directly. Calculating the work directly is generally difficult, since $W = Fd\cos\theta$ and the direction and magnitude of $F$ can be complex for multiple charges, for odd-shaped objects, and along arbitrary paths. But we do know that, since $F = qE$, the work, and hence $\Delta\text{PE}$, is proportional to the test charge $q$. To have a physical quantity that is independent of test charge, we define electric potential $V$ (or simply potential, since electric is understood) to be the potential energy per unit charge: $V = \frac{\text{PE}}{q}.$ Since PE is proportional to $q$, the dependence on $q$ cancels. Thus $V$ does not depend on $q$. The change in potential energy is crucial, and so we are concerned with the difference in potential or potential difference $\Delta V$ between two points, where $\Delta V = V_\text{B} - V_\text{A} = \frac{\Delta\text{PE}}{q}.$ The potential difference between points A and B, $V_\text{B} - V_\text{A}$, is thus defined to be the change in potential energy of a charge $q$ moved from A to B, divided by the charge. Units of potential difference are joules per coulomb, given the name volt (V) after Alessandro Volta. The familiar term voltage is the common name for potential difference. Keep in mind that whenever a voltage is quoted, it is understood to be the potential difference between two points.
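These definitions are easy to put to work numerically. The minimal Python sketch below uses $\Delta V = \Delta\text{PE}/q$ and $\Delta\text{PE} = q\Delta V$; the specific charge and energy values are illustrative assumptions, not taken from an example in this text.

```python
def potential_difference(delta_pe_joules, q_coulombs):
    """Potential difference between two points: delta_V = delta_PE / q (volts)."""
    return delta_pe_joules / q_coulombs

def energy_change(q_coulombs, delta_v_volts):
    """Change in electric potential energy: delta_PE = q * delta_V (joules)."""
    return q_coulombs * delta_v_volts

# Illustrative: moving 2.0 C of charge changes its potential energy by 24 J.
print(potential_difference(24.0, 2.0))  # 12.0 V
# A hypothetical 12 V battery moving 5000 C of charge delivers:
print(energy_change(5000, 12.0))        # 60000.0 J
```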
For example, every battery has two terminals, and its voltage is the potential difference between them. More fundamentally, the point you choose to be zero volts is arbitrary. This is analogous to the fact that gravitational potential energy has an arbitrary zero, such as sea level or perhaps a lecture hall floor. In summary, the relationship between potential difference (or voltage) and electrical potential energy is given by $\Delta V = \frac{\Delta\text{PE}}{q}$ and $\Delta\text{PE} = q\Delta V.$ Voltage is not the same as energy. Voltage is the energy per unit charge. Thus a motorcycle battery and a car battery can both have the same voltage (more precisely, the same potential difference between battery terminals), yet one stores much more energy than the other since $\Delta\text{PE} = q\Delta V$. The car battery can move more charge than the motorcycle battery, although both are 12 V batteries. Note that the energies calculated in the previous example are absolute values. The change in potential energy for the battery is negative, since it loses energy. These batteries, like many electrical systems, actually move negative charge—electrons in particular. The batteries repel electrons from their negative terminals (A) through whatever circuitry is involved and attract them to their positive terminals (B) as shown in . The change in potential $\Delta V = V_\text{B} - V_\text{A}$ is positive, and the charge $q$ is negative, so that $\Delta\text{PE} = q\Delta V$ is negative, meaning the potential energy of the battery has decreased when $q$ has moved from A to B.
### The Electron Volt
The energy per electron is very small in macroscopic situations like that in the previous example—a tiny fraction of a joule. But on a submicroscopic scale, such energy per particle (electron, proton, or ion) can be of great importance. For example, even a tiny fraction of a joule can be great enough for these particles to destroy organic molecules and harm living tissue. The particle may do its damage by direct collision, or it may create harmful x rays, which can also inflict damage. It is useful to have an energy unit related to submicroscopic effects. shows a situation related to the definition of such an energy unit. An electron is accelerated between two charged metal plates as it might be in an old-model television tube or oscilloscope. The electron is given kinetic energy that is later converted to another form—light in the television tube, for example. (Note that downhill for the electron is uphill for a positive charge.) Since energy is related to voltage by $\Delta\text{PE} = q\Delta V$, we can think of the joule as a coulomb-volt. On the submicroscopic scale, it is more convenient to define an energy unit called the electron volt (eV), which is the energy given to a fundamental charge accelerated through a potential difference of 1 V. In equation form, $1\ \text{eV} = (1.60\times10^{-19}\ \text{C})(1\ \text{V}) = 1.60\times10^{-19}\ \text{J}.$ An electron accelerated through a potential difference of 1 V is given an energy of 1 eV. It follows that an electron accelerated through 50 V is given 50 eV. A potential difference of 100,000 V (100 kV) will give an electron an energy of 100,000 eV (100 keV), and so on. Similarly, an ion with a double positive charge accelerated through 100 V will be given 200 eV of energy. These simple relationships between accelerating voltage and particle charges make the electron volt a simple and convenient energy unit in such circumstances. The electron volt is commonly employed in submicroscopic processes—chemical valence energies and molecular and nuclear binding energies are among the quantities often expressed in electron volts. For example, about 5 eV of energy is required to break up certain organic molecules.
If a proton is accelerated from rest through a potential difference of 30 kV, it is given an energy of 30 keV (30,000 eV) and it can break up as many as 6000 of these molecules ($30{,}000\ \text{eV} \div 5\ \text{eV per molecule} = 6000\ \text{molecules}$). Nuclear decay energies are on the order of 1 MeV (1,000,000 eV) per event and can, thus, produce significant biological damage.
### Conservation of Energy
The total energy of a system is conserved if there is no net addition (or subtraction) of work or heat transfer. For conservative forces, such as the electrostatic force, conservation of energy states that mechanical energy is a constant. Mechanical energy is the sum of the kinetic energy and potential energy of a system; that is, $\text{KE} + \text{PE} = \text{constant}$. A loss of PE of a charged particle becomes an increase in its KE. Here PE is the electric potential energy. Conservation of energy is stated in equation form as $\text{KE} + \text{PE} = \text{constant}$ or $\text{KE}_i + \text{PE}_i = \text{KE}_f + \text{PE}_f,$ where i and f stand for initial and final conditions. As we have found many times before, considering energy can give us insights and facilitate problem solving.
### Test Prep for AP Courses
### Section Summary
1. Electric potential is potential energy per unit charge.
2. The potential difference between points A and B, $V_\text{B} - V_\text{A}$, defined to be the change in potential energy of a charge $q$ moved from A to B, is equal to the change in potential energy divided by the charge: $\Delta V = \frac{\Delta\text{PE}}{q}.$ Potential difference is commonly called voltage, represented by the symbol $\Delta V$.
3. An electron volt is the energy given to a fundamental charge accelerated through a potential difference of 1 V. In equation form, $1\ \text{eV} = (1.60\times10^{-19}\ \text{C})(1\ \text{V}) = 1.60\times10^{-19}\ \text{J}.$
4. Mechanical energy is the sum of the kinetic energy and potential energy of a system; that is, $\text{KE} + \text{PE} = \text{constant}.$ This sum is a constant.

### Conceptual Questions
### Problems & Exercises
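As a numerical illustration of these energy relationships, the short Python sketch below finds the speed an electron gains when accelerated from rest, using $\frac{1}{2}mv^2 = q\Delta V$. It is a nonrelativistic estimate with standard constants; the 100 V value is an assumption for illustration, not an exercise from this text.

```python
import math

M_ELECTRON = 9.11e-31  # electron mass, kg
Q_ELECTRON = 1.60e-19  # magnitude of electron charge, C

def final_speed(voltage):
    """Speed of an electron accelerated from rest through `voltage` volts.

    Conservation of energy with KE_i = 0 gives (1/2) m v^2 = q V,
    so v = sqrt(2 q V / m). Valid only while v << c.
    """
    return math.sqrt(2 * Q_ELECTRON * voltage / M_ELECTRON)

print(f"{final_speed(100):.2e} m/s")  # ~5.9e6 m/s for 100 V
```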
# Electric Potential and Electric Field
## Electric Potential in a Uniform Electric Field
### Learning Objectives
By the end of this section, you will be able to:
1. Describe the relationship between voltage and electric field.
2. Derive an expression for the electric potential and electric field.
3. Calculate electric field strength given distance and voltage.

In the previous section, we explored the relationship between voltage and energy. In this section, we will explore the relationship between voltage and electric field. For example, a uniform electric field $E$ is produced by placing a potential difference (or voltage) $\Delta V$ across two parallel metal plates, labeled A and B. (See .) Examining this will tell us what voltage is needed to produce a certain electric field strength; it will also reveal a more fundamental relationship between electric potential and electric field. From a physicist’s point of view, either $\Delta V$ or $E$ can be used to describe any charge distribution. $\Delta V$ is most closely tied to energy, whereas $E$ is most closely related to force. $\Delta V$ is a scalar quantity and has no direction, while $E$ is a vector quantity, having both magnitude and direction. (Note that the magnitude of the electric field strength, a scalar quantity, is represented by $E$ below.) The relationship between $\Delta V$ and $E$ is revealed by calculating the work done by the force in moving a charge from point A to point B. But, as noted in Electric Potential Energy: Potential Difference, this is complex for arbitrary charge distributions, requiring calculus. We therefore look at a uniform electric field as an interesting special case. The work done by the electric field in to move a positive charge $q$ from A, the positive plate, higher potential, to B, the negative plate, lower potential, is $W = -\Delta\text{PE} = -q\Delta V.$ The potential difference between points A and B is $-\Delta V = -(V_\text{B} - V_\text{A}) = V_\text{A} - V_\text{B} = V_\text{AB}.$ Entering this into the expression for work yields $W = qV_\text{AB}.$ Work is $W = Fd\cos\theta$; here $\cos\theta = 1$, since the path is parallel to the field, and so $W = Fd$. Since $F = qE$, we see that $W = qEd$. Substituting this expression for work into the previous equation gives $qEd = qV_\text{AB}.$ The charge cancels, and so the voltage between points A and B is seen to be $V_\text{AB} = Ed \quad\text{and}\quad E = \frac{V_\text{AB}}{d} \quad(\text{uniform } E\text{ field only}),$ where $d$ is the distance from A to B, or the distance between the plates in . Note that the above equation implies the units for electric field are volts per meter. We already know the units for electric field are newtons per coulomb; thus the following relation among units is valid: $1\ \text{N/C} = 1\ \text{V/m}.$ In more general situations, regardless of whether the electric field is uniform, it points in the direction of decreasing potential, because the force on a positive charge is in the direction of $E$ and also in the direction of lower potential $V$. Furthermore, the magnitude of $E$ equals the rate of decrease of $V$ with distance. The faster $V$ decreases over distance, the greater the electric field. In equation form, the general relationship between voltage and electric field is $E = -\frac{\Delta V}{\Delta s},$ where $\Delta s$ is the distance over which the change in potential, $\Delta V$, takes place. The minus sign tells us that $E$ points in the direction of decreasing potential. The electric field is said to be the gradient (as in grade or slope) of the electric potential. For continually changing potentials, $\Delta V$ and $\Delta s$ become infinitesimals and differential calculus must be employed to determine the electric field.
### Test Prep for AP Courses
### Section Summary
1. The voltage between points A and B is $V_\text{AB} = Ed$ and $E = \frac{V_\text{AB}}{d}$ (uniform $E$ field only), where $d$ is the distance from A to B, or the distance between the plates.
2. In equation form, the general relationship between voltage and electric field is $E = -\frac{\Delta V}{\Delta s},$ where $\Delta s$ is the distance over which the change in potential, $\Delta V$, takes place.
The minus sign tells us that $E$ points in the direction of decreasing potential. The electric field is said to be the gradient (as in grade or slope) of the electric potential.
### Conceptual Questions
### Problems & Exercises
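The uniform-field relation $E = V_{\text{AB}}/d$ is a one-line computation. The sketch below shows it in Python; the 120 V and 5.0 mm figures are assumed for illustration, not taken from an example above.

```python
def field_between_plates(voltage, separation_m):
    """Uniform-field magnitude E = V_AB / d between parallel plates, in V/m."""
    return voltage / separation_m

# Hypothetical: 120 V across plates 5.0 mm apart.
print(field_between_plates(120, 5.0e-3))  # 24000.0 V/m
```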
# Electric Potential and Electric Field
## Electrical Potential Due to a Point Charge
### Learning Objectives
By the end of this section, you will be able to:
1. Explain point charges and express the equation for the electric potential of a point charge.
2. Distinguish between electric potential and electric field.
3. Determine the electric potential of a point charge given charge and distance.

Point charges, such as electrons, are among the fundamental building blocks of matter. Furthermore, spherical charge distributions (like on a metal sphere) create external electric fields exactly like a point charge. The electric potential due to a point charge is, thus, a case we need to consider. Using calculus to find the work needed to move a test charge $q$ from a large distance away to a distance of $r$ from a point charge $Q$, and noting the connection between work and potential ($W = -q\Delta V$), it can be shown that the electric potential $V$ of a point charge is $V = \frac{kQ}{r} \quad(\text{point charge}),$ where $k$ is a constant equal to $8.99\times10^{9}\ \text{N}\cdot\text{m}^2/\text{C}^2$. The potential at infinity is chosen to be zero. Thus $V$ for a point charge decreases with distance, whereas $E$ for a point charge decreases with distance squared: $E = \frac{F}{q} = \frac{kQ}{r^2}.$ Recall that the electric potential $V$ is a scalar and has no direction, whereas the electric field $E$ is a vector. To find the voltage due to a combination of point charges, you add the individual voltages as numbers. To find the total electric field, you must add the individual fields as vectors, taking magnitude and direction into account. This is consistent with the fact that $V$ is closely associated with energy, a scalar, whereas $E$ is closely associated with force, a vector. The voltages in both of these examples could be measured with a meter that compares the measured potential with ground potential. Ground potential is often taken to be zero (instead of taking the potential at infinity to be zero). It is the potential difference between two points that is of importance, and very often there is a tacit assumption that some reference point, such as Earth or a very distant point, is at zero potential. As noted in Electric Potential Energy: Potential Difference, this is analogous to taking sea level as $h = 0$ when considering gravitational potential energy, $\text{PE}_g = mgh$.
### Section Summary
1. Electric potential of a point charge is $V = kQ/r$.
2. Electric potential is a scalar, and electric field is a vector. Addition of voltages as numbers gives the voltage due to a combination of point charges, whereas addition of individual fields as vectors gives the total electric field.

### Conceptual Questions
### Problems & Exercises
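Because potentials add as plain numbers, superposition for point charges needs no vector bookkeeping. A minimal Python sketch of $V = kQ/r$ and scalar superposition follows; the charges and distances are illustrative assumptions.

```python
K = 8.99e9  # Coulomb constant, N*m^2/C^2

def potential_point_charge(q, r):
    """Electric potential V = kQ/r of a point charge, with V -> 0 at infinity."""
    return K * q / r

# Scalar superposition: potentials from several charges simply add.
# Illustrative (charge in C, distance in m) pairs:
charges = [(3.0e-9, 0.10), (-2.0e-9, 0.20)]
v_total = sum(potential_point_charge(q, r) for q, r in charges)
print(f"{v_total:.1f} V")  # ~179.8 V
```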
# Electric Potential and Electric Field
## Equipotential Lines
### Learning Objectives
By the end of this section, you will be able to:
1. Explain equipotential lines and equipotential surfaces.
2. Describe the action of grounding an electrical appliance.
3. Compare electric field and equipotential lines.

We can represent electric potentials (voltages) pictorially, just as we drew pictures to illustrate electric fields. Of course, the two are related. Consider , which shows an isolated positive point charge and its electric field lines. Electric field lines radiate out from a positive charge and terminate on negative charges. While we use blue arrows to represent the magnitude and direction of the electric field, we use green lines to represent places where the electric potential is constant. These are called equipotential lines in two dimensions, or equipotential surfaces in three dimensions. The term equipotential is also used as a noun, referring to an equipotential line or surface. The potential for a point charge is the same anywhere on an imaginary sphere of radius $r$ surrounding the charge. This is true since the potential for a point charge is given by $V = kQ/r$ and, thus, has the same value at any point that is a given distance $r$ from the charge. An equipotential sphere is a circle in the two-dimensional view of . Since the electric field lines point radially away from the charge, they are perpendicular to the equipotential lines. It is important to note that equipotential lines are always perpendicular to electric field lines. No work is required to move a charge along an equipotential, since $\Delta V = 0$. Thus the work is $W = -\Delta\text{PE} = -q\Delta V = 0.$ Work is zero if force is perpendicular to motion. Force is in the same direction as $E$, so that motion along an equipotential must be perpendicular to $E$. More precisely, work is related to the electric field by $W = Fd\cos\theta = qEd\cos\theta = 0.$ Note that in the above equation, $E$ and $F$ symbolize the magnitudes of the electric field strength and force, respectively. Neither $q$ nor $E$ nor $d$ is zero, and so $\cos\theta$ must be 0, meaning $\theta$ must be $90^\circ$. In other words, motion along an equipotential is perpendicular to $E$. One of the rules for static electric fields and conductors is that the electric field must be perpendicular to the surface of any conductor. This implies that a conductor is an equipotential surface in static situations. There can be no voltage difference across the surface of a conductor, or charges will flow. One of the uses of this fact is that a conductor can be fixed at zero volts by connecting it to the earth with a good conductor—a process called grounding. Grounding can be a useful safety tool. For example, grounding the metal case of an electrical appliance ensures that it is at zero volts relative to the earth. Because a conductor is an equipotential, it can replace any equipotential surface. For example, in a charged spherical conductor can replace the point charge, and the electric field and potential surfaces outside of it will be unchanged, confirming the contention that a spherical charge distribution is equivalent to a point charge at its center. shows the electric field and equipotential lines for two equal and opposite charges. Given the electric field lines, the equipotential lines can be drawn simply by making them perpendicular to the electric field lines. Conversely, given the equipotential lines, as in (a), the electric field lines can be drawn by making them perpendicular to the equipotentials, as in (b). One of the most important cases is that of the familiar parallel conducting plates shown in .
Between the plates, the equipotentials are evenly spaced and parallel. The same field could be maintained by placing conducting plates at the equipotential lines at the potentials shown. An important application of electric fields and equipotential lines involves the heart. The heart relies on electrical signals to maintain its rhythm. The movement of electrical signals causes the chambers of the heart to contract and relax. When a person has a heart attack, the movement of these electrical signals may be disturbed. An artificial pacemaker and a defibrillator can be used to initiate the rhythm of electrical signals. The equipotential lines around the heart, the thoracic region, and the axis of the heart are useful ways of monitoring the structure and functions of the heart. An electrocardiogram (ECG) measures the small electric signals being generated during the activity of the heart. More about the relationship between electric fields and the heart is discussed in Energy Stored in Capacitors. ### Test Prep for AP Courses ### Section Summary 1. An equipotential line is a line along which the electric potential is constant. 2. An equipotential surface is a three-dimensional version of equipotential lines. 3. Equipotential lines are always perpendicular to electric field lines. 4. The process by which a conductor can be fixed at zero volts by connecting it to the earth with a good conductor is called grounding. ### Conceptual Questions ### Problems & Exercises
# Electric Potential and Electric Field
## Capacitors and Dielectrics
### Learning Objectives
By the end of this section, you will be able to:
1. Describe the action of a capacitor and define capacitance.
2. Explain parallel plate capacitors and their capacitances.
3. Discuss the process of increasing the capacitance of a dielectric.
4. Determine capacitance given charge and voltage.

A capacitor is a device used to store electric charge. Capacitors have applications ranging from filtering static out of radio reception to energy storage in heart defibrillators. Typically, commercial capacitors have two conducting parts close to one another, but not touching, such as those in . (Most of the time an insulator is used between the two plates to provide separation—see the discussion on dielectrics below.) When battery terminals are connected to an initially uncharged capacitor, equal amounts of positive and negative charge, $+Q$ and $-Q$, are separated into its two plates. The capacitor remains neutral overall, but we refer to it as storing a charge $Q$ in this circumstance. The amount of charge $Q$ a capacitor can store depends on two major factors—the voltage applied and the capacitor’s physical characteristics, such as its size. A system composed of two identical, parallel conducting plates separated by a distance, as in , is called a parallel plate capacitor. It is easy to see the relationship between the voltage and the stored charge for a parallel plate capacitor, as shown in . Each electric field line starts on an individual positive charge and ends on a negative one, so that there will be more field lines if there is more charge. (Drawing a single field line per charge is a convenience, only. We can draw many field lines for each charge, but the total number is proportional to the number of charges.) The electric field strength is, thus, directly proportional to $Q$. The field is proportional to the charge: $E \propto Q,$ where the symbol $\propto$ means “proportional to.” From the discussion in Electric Potential in a Uniform Electric Field, we know that the voltage across parallel plates is $V = Ed$. Thus, $V \propto E.$ It follows, then, that $V \propto Q$, and conversely, $Q \propto V.$ This is true in general: The greater the voltage applied to any capacitor, the greater the charge stored in it. Different capacitors will store different amounts of charge for the same applied voltage, depending on their physical characteristics. We define their capacitance $C$ to be such that the charge $Q$ stored in a capacitor is proportional to $C$. The charge stored in a capacitor is given by $Q = CV.$ This equation expresses the two major factors affecting the amount of charge stored. Those factors are the physical characteristics of the capacitor, $C$, and the voltage, $V$. Rearranging the equation, we see that capacitance $C$ is the amount of charge stored per volt, or $C = \frac{Q}{V}.$ The unit of capacitance is the farad (F), named for Michael Faraday (1791–1867), an English scientist who contributed to the fields of electromagnetism and electrochemistry. Since capacitance is charge per unit voltage, we see that a farad is a coulomb per volt, or $1\ \text{F} = \frac{1\ \text{C}}{1\ \text{V}}.$ A 1-farad capacitor would be able to store 1 coulomb (a very large amount of charge) with the application of only 1 volt. One farad is, thus, a very large capacitance. Typical capacitors range from fractions of a picofarad ($1\ \text{pF} = 10^{-12}\ \text{F}$) to millifarads ($1\ \text{mF} = 10^{-3}\ \text{F}$). shows some common capacitors. Capacitors are primarily made of ceramic, glass, or plastic, depending upon purpose and size. Insulating materials, called dielectrics, are commonly used in their construction, as discussed below.
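The defining relation $C = Q/V$ (and its rearrangement $Q = CV$) translates directly into code. A minimal Python sketch follows; the charge and voltage values are illustrative assumptions.

```python
def capacitance(charge_c, voltage_v):
    """Capacitance C = Q / V, in farads."""
    return charge_c / voltage_v

def charge_stored(c_farads, voltage_v):
    """Charge stored, Q = C * V, in coulombs."""
    return c_farads * voltage_v

# Illustrative: a capacitor holding 0.45 microcoulombs at 9.0 V.
print(capacitance(0.45e-6, 9.0))   # 5e-08 F, i.e., 50 nF
print(charge_stored(50e-9, 12.0))  # 6e-07 C stored at 12 V
```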
### Parallel Plate Capacitor
The parallel plate capacitor shown in has two identical conducting plates, each having a surface area $A$, separated by a distance $d$ (with no material between the plates). When a voltage $V$ is applied to the capacitor, it stores a charge $Q$, as shown. We can see how its capacitance depends on $A$ and $d$ by considering the characteristics of the Coulomb force. We know that like charges repel, unlike charges attract, and the force between charges decreases with distance. So it seems quite reasonable that the bigger the plates are, the more charge they can store—because the charges can spread out more. Thus $C$ should be greater for larger $A$. Similarly, the closer the plates are together, the greater the attraction of the opposite charges on them. So $C$ should be greater for smaller $d$. It can be shown that for a parallel plate capacitor there are only two factors ($A$ and $d$) that affect its capacitance $C$. The capacitance of a parallel plate capacitor in equation form is given by $C = \varepsilon_0 \frac{A}{d}.$ $A$ is the area of one plate in square meters, and $d$ is the distance between the plates in meters. The constant $\varepsilon_0$ is the permittivity of free space; its numerical value in SI units is $\varepsilon_0 = 8.85\times10^{-12}\ \text{F/m}$. The units of F/m are equivalent to $\text{C}^2/\text{N}\cdot\text{m}^2$. The small numerical value of $\varepsilon_0$ is related to the large size of the farad. A parallel plate capacitor must have a large area to have a capacitance approaching a farad. (Note that the above equation is valid when the parallel plates are separated by air or free space. When another material is placed between the plates, the equation is modified, as discussed below.) Another interesting biological example dealing with electric potential is found in the cell’s plasma membrane. The membrane sets a cell off from its surroundings and also allows ions to selectively pass in and out of the cell. There is a potential difference across the membrane of about $-70\ \text{mV}$. This is due to the mainly negatively charged ions in the cell and the predominance of positively charged sodium ($\text{Na}^{+}$) ions outside. Things change when a nerve cell is stimulated. $\text{Na}^{+}$ ions are allowed to pass through the membrane into the cell, producing a positive membrane potential—the nerve signal. The cell membrane is about 7 to 10 nm thick. An approximate value of the electric field across it is given by $E = \frac{|V|}{d} \approx \frac{70\times10^{-3}\ \text{V}}{8\times10^{-9}\ \text{m}} \approx 9\times10^{6}\ \text{V/m}.$ This electric field is enough to cause a breakdown in air.
### Dielectric
The previous example highlights the difficulty of storing a large amount of charge in capacitors. If $d$ is made smaller to produce a larger capacitance, then the maximum voltage must be reduced proportionally to avoid breakdown (since $E = V/d$). An important solution to this difficulty is to put an insulating material, called a dielectric, between the plates of a capacitor and allow $d$ to be as small as possible. Not only does the smaller $d$ make the capacitance greater, but many insulators can withstand greater electric fields than air before breaking down. There is another benefit to using a dielectric in a capacitor. Depending on the material used, the capacitance is greater than that given by the equation $C = \varepsilon_0 \frac{A}{d}$ by a factor $\kappa$, called the dielectric constant. A parallel plate capacitor with a dielectric between its plates has a capacitance given by $C = \kappa\varepsilon_0 \frac{A}{d}.$ Values of the dielectric constant $\kappa$ for various materials are given in . Note that $\kappa$ for vacuum is exactly 1, and so the above equation is valid in that case, too. If a dielectric is used, perhaps by placing Teflon between the plates of the capacitor in , then the capacitance is greater by the factor $\kappa$, which for Teflon is 2.1.
Note also that the dielectric constant for air is very close to 1, so that air-filled capacitors act much like those with vacuum between their plates except that the air can become conductive if the electric field strength becomes too great. (Recall that $E = V/d$ for a parallel plate capacitor.) Also shown in are maximum electric field strengths in V/m, called dielectric strengths, for several materials. These are the fields above which the material begins to break down and conduct. The dielectric strength imposes a limit on the voltage that can be applied for a given plate separation. For instance, in , the separation is 1.00 mm, and so the voltage limit for air is $V = E \cdot d = (3\times10^{6}\ \text{V/m})(1.00\times10^{-3}\ \text{m}) = 3000\ \text{V}.$ However, the limit for a 1.00 mm separation filled with Teflon is 60,000 V, since the dielectric strength of Teflon is $60\times10^{6}\ \text{V/m}$. So the same capacitor filled with Teflon has a greater capacitance and can be subjected to a much greater voltage. Using the capacitance we calculated in the above example for the air-filled parallel plate capacitor, we find that the Teflon-filled capacitor can store a maximum charge of $Q = CV = \kappa C_{\text{air}}(60{,}000\ \text{V}).$ This is 42 times the charge of the same air-filled capacitor: a factor of 2.1 from the dielectric constant times a factor of 20 from the higher voltage limit. Microscopically, how does a dielectric increase capacitance? Polarization of the insulator is responsible. The more easily it is polarized, the greater its dielectric constant $\kappa$. Water, for example, is a polar molecule because one end of the molecule has a slight positive charge and the other end has a slight negative charge. The polarity of water causes it to have a relatively large dielectric constant of 80. The effect of polarization can be best explained in terms of the characteristics of the Coulomb force. shows the separation of charge schematically in the molecules of a dielectric material placed between the charged plates of a capacitor. The Coulomb force between the closest ends of the molecules and the charge on the plates is attractive and very strong, since they are very close together. This attracts more charge onto the plates than if the space were empty and the opposite charges were a distance away. Another way to understand how a dielectric increases capacitance is to consider its effect on the electric field inside the capacitor. (b) shows the electric field lines with a dielectric in place. Since the field lines end on charges in the dielectric, there are fewer of them going from one side of the capacitor to the other. So the electric field strength is less than if there were a vacuum between the plates, even though the same charge is on the plates. The voltage between the plates is $V = Ed$, so it too is reduced by the dielectric. Thus there is a smaller voltage $V$ for the same charge $Q$; since $C = Q/V$, the capacitance $C$ is greater. The dielectric constant $\kappa$ is generally defined to be $\kappa = E_0/E$, or the ratio of the electric field in a vacuum to that in the dielectric material, and is intimately related to the polarizability of the material. We will find in Atomic Physics that the orbits of electrons are more properly viewed as electron clouds with the density of the cloud related to the probability of finding an electron in that location (as opposed to the definite locations and paths of planets in their orbits around the Sun). This cloud is shifted by the Coulomb force so that the atom on average has a separation of charge. Although the atom remains neutral, it can now be the source of a Coulomb force, since a charge brought near the atom will be closer to one type of charge than the other.
Some molecules, such as those of water, have an inherent separation of charge and are thus called polar molecules. illustrates the separation of charge in a water molecule, which has two hydrogen atoms and one oxygen atom ($\text{H}_2\text{O}$). The water molecule is not symmetric—the hydrogen atoms are repelled to one side, giving the molecule a boomerang shape. The electrons in a water molecule are more concentrated around the more highly charged oxygen nucleus than around the hydrogen nuclei. This makes the oxygen end of the molecule slightly negative and leaves the hydrogen ends slightly positive. The inherent separation of charge in polar molecules makes it easier to align them with external fields and charges. Polar molecules therefore exhibit greater polarization effects and have greater dielectric constants. Those who study chemistry will find that the polar nature of water has many effects. For example, water molecules gather ions much more effectively because they have an electric field and a separation of charge to attract charges of both signs. Also, as brought out in the previous chapter, polar water provides a shield or screening of the electric fields in the highly charged molecules of interest in biological systems.
### Test Prep for AP Courses
### Section Summary
1. A capacitor is a device used to store charge.
2. The amount of charge $Q$ a capacitor can store depends on two major factors—the voltage applied and the capacitor’s physical characteristics, such as its size.
3. The capacitance $C$ is the amount of charge stored per volt, or $C = \frac{Q}{V}.$
4. The capacitance of a parallel plate capacitor is $C = \varepsilon_0 \frac{A}{d}$, when the plates are separated by air or free space. $\varepsilon_0$ is called the permittivity of free space.
5. A parallel plate capacitor with a dielectric between its plates has a capacitance given by $C = \kappa\varepsilon_0 \frac{A}{d},$ where $\kappa$ is the dielectric constant of the material.
6. The maximum electric field strength above which an insulating material begins to break down and conduct is called dielectric strength.

### Conceptual Questions
### Problems & Exercises
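To tie the parallel plate and dielectric results together, here is a short Python sketch of $C = \kappa\varepsilon_0 A/d$ and the breakdown-limited voltage $V_{\max} = E_{\max} d$. The plate area, and the table-style values for air and Teflon (air: $\kappa \approx 1$, strength $\approx 3\times10^6$ V/m; Teflon: $\kappa = 2.1$, strength $= 60\times10^6$ V/m), are assumptions consistent with the discussion above, not a reproduction of the text's worked example.

```python
EPSILON_0 = 8.85e-12  # permittivity of free space, F/m

def capacitance(area_m2, separation_m, kappa=1.0):
    """Parallel plate capacitance C = kappa * epsilon_0 * A / d, in farads."""
    return kappa * EPSILON_0 * area_m2 / separation_m

def max_voltage(dielectric_strength, separation_m):
    """Breakdown-limited voltage V_max = E_max * d, in volts."""
    return dielectric_strength * separation_m

d = 1.00e-3  # 1.00 mm plate separation
c_air = capacitance(1.00, d)              # ~8.85e-9 F for 1.00 m^2 plates
c_teflon = capacitance(1.00, d, kappa=2.1)
q_air = c_air * max_voltage(3e6, d)        # maximum storable charge, air
q_teflon = c_teflon * max_voltage(60e6, d) # maximum storable charge, Teflon
print(q_teflon / q_air)  # 42.0, matching the factor quoted above
```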
# Electric Potential and Electric Field
## Capacitors in Series and Parallel
### Learning Objectives
By the end of this section, you will be able to:
1. Derive expressions for total capacitance in series and in parallel.
2. Identify series and parallel parts in a combination of capacitors.
3. Calculate the effective capacitance in series and parallel given individual capacitances.

Several capacitors may be connected together in a variety of applications. Multiple connections of capacitors act like a single equivalent capacitor. The total capacitance of this equivalent single capacitor depends both on the individual capacitors and how they are connected. There are two simple and common types of connections, called series and parallel, for which we can easily calculate the total capacitance. Certain more complicated connections can also be related to combinations of series and parallel.
### Capacitance in Series
(a) shows a series connection of three capacitors with a voltage applied. As for any capacitor, the capacitance of the combination is related to charge and voltage by $C = \frac{Q}{V}$. Note in that opposite charges of magnitude $Q$ flow to either side of the originally uncharged combination of capacitors when the voltage $V$ is applied. Conservation of charge requires that equal-magnitude charges be created on the plates of the individual capacitors, since charge is only being separated in these originally neutral devices. The end result is that the combination resembles a single capacitor with an effective plate separation greater than that of the individual capacitors alone. (See (b).) Larger plate separation means smaller capacitance. It is a general feature of series connections of capacitors that the total capacitance is less than any of the individual capacitances. We can find an expression for the total capacitance by considering the voltage across the individual capacitors shown in . Solving $C = \frac{Q}{V}$ for $V$ gives $V = \frac{Q}{C}$. The voltages across the individual capacitors are thus $V_1 = \frac{Q}{C_1}$, $V_2 = \frac{Q}{C_2}$, and $V_3 = \frac{Q}{C_3}$. The total voltage is the sum of the individual voltages: $V = V_1 + V_2 + V_3.$ Now, calling the total capacitance $C_\text{S}$ for series capacitance, consider that $V = \frac{Q}{C_\text{S}} = V_1 + V_2 + V_3.$ Entering the expressions for $V_1$, $V_2$, and $V_3$, we get $\frac{Q}{C_\text{S}} = \frac{Q}{C_1} + \frac{Q}{C_2} + \frac{Q}{C_3}.$ Canceling the $Q$s, we obtain the equation for the total capacitance in series $C_\text{S}$ to be $\frac{1}{C_\text{S}} = \frac{1}{C_1} + \frac{1}{C_2} + \frac{1}{C_3} + \dots,$ where “...” indicates that the expression is valid for any number of capacitors connected in series. An expression of this form always results in a total capacitance $C_\text{S}$ that is less than any of the individual capacitances $C_1$, $C_2$, ..., as the next example illustrates.
### Capacitors in Parallel
(a) shows a parallel connection of three capacitors with a voltage applied. Here the total capacitance is easier to find than in the series case. To find the equivalent total capacitance $C_\text{p}$, we first note that the voltage across each capacitor is $V$, the same as that of the source, since they are connected directly to it through a conductor. (Conductors are equipotentials, and so the voltage across the capacitors is the same as that across the voltage source.) Thus the capacitors have the same charges on them as they would have if connected individually to the voltage source. The total charge $Q$ is the sum of the individual charges: $Q = Q_1 + Q_2 + Q_3.$ Using the relationship $Q = CV$, we see that the total charge is $Q = C_\text{p}V$, and the individual charges are $Q_1 = C_1V$, $Q_2 = C_2V$, and $Q_3 = C_3V$. Entering these into the previous equation gives $C_\text{p}V = C_1V + C_2V + C_3V.$ Canceling $V$ from the equation, we obtain the equation for the total capacitance in parallel $C_\text{p}$: $C_\text{p} = C_1 + C_2 + C_3 + \dots.$ Total capacitance in parallel is simply the sum of the individual capacitances.
(Again the “...” indicates the expression is valid for any number of capacitors connected in parallel.) So, for example, if the capacitors in the example above were connected in parallel, their capacitance would be $C_\text{p} = C_1 + C_2 + C_3.$ The equivalent capacitor for a parallel connection has an effectively larger plate area and, thus, a larger capacitance, as illustrated in (b). More complicated connections of capacitors can sometimes be combinations of series and parallel. (See .) To find the total capacitance of such combinations, we identify series and parallel parts, compute their capacitances, and then find the total.
### Section Summary
1. Total capacitance in series: $\frac{1}{C_\text{S}} = \frac{1}{C_1} + \frac{1}{C_2} + \frac{1}{C_3} + \dots$
2. Total capacitance in parallel: $C_\text{p} = C_1 + C_2 + C_3 + \dots$
3. If a circuit contains a combination of capacitors in series and parallel, identify series and parallel parts, compute their capacitances, and then find the total.

### Conceptual Questions
### Problems & Exercises
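Both combination rules are one-liners in code. The Python sketch below computes series and parallel totals; the three capacitance values are illustrative assumptions, not necessarily those of the text's example.

```python
def series_capacitance(*caps):
    """1/C_S = 1/C_1 + 1/C_2 + ... ; always less than the smallest capacitance."""
    return 1.0 / sum(1.0 / c for c in caps)

def parallel_capacitance(*caps):
    """C_p = C_1 + C_2 + ... ; capacitances simply add in parallel."""
    return sum(caps)

# Illustrative values in microfarads:
print(series_capacitance(1.0, 5.0, 8.0))    # ~0.755 uF, less than any single one
print(parallel_capacitance(1.0, 5.0, 8.0))  # 14.0 uF
```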
# Electric Potential and Electric Field
## Energy Stored in Capacitors
### Learning Objectives
By the end of this section, you will be able to:
1. List some uses of capacitors.
2. Express in equation form the energy stored in a capacitor.
3. Explain the function of a defibrillator.

Most of us have seen dramatizations in which medical personnel use a defibrillator to pass an electric current through a patient’s heart to get it to beat normally. (Review .) Often realistic in detail, the person applying the shock directs another person to “make it 400 joules this time.” The energy delivered by the defibrillator is stored in a capacitor and can be adjusted to fit the situation. SI units of joules are often employed. Less dramatic is the use of capacitors in microelectronics, such as certain handheld calculators, to supply energy when batteries are charged. (See .) Capacitors are also used to supply energy for flash lamps on cameras. Energy stored in a capacitor is electrical potential energy, and it is thus related to the charge $Q$ and voltage $V$ on the capacitor. We must be careful when applying the equation for electrical potential energy $\Delta\text{PE} = q\Delta V$ to a capacitor. Remember that $\Delta\text{PE}$ is the potential energy of a charge $q$ going through a voltage $\Delta V$. But the capacitor starts with zero voltage and gradually comes up to its full voltage as it is charged. The first charge placed on a capacitor experiences a change in voltage $\Delta V = 0$, since the capacitor has zero voltage when uncharged. The final charge placed on a capacitor experiences $\Delta V = V$, since the capacitor now has its full voltage $V$ on it. The average voltage on the capacitor during the charging process is $V/2$, and so the average voltage experienced by the full charge $q$ is $V/2$. Thus the energy stored in a capacitor, $E_\text{cap}$, is $E_\text{cap} = \frac{QV}{2},$ where $Q$ is the charge on a capacitor with a voltage $V$ applied. (Note that the energy is not $QV$, but $QV/2$.) Charge and voltage are related to the capacitance $C$ of a capacitor by $Q = CV$, and so the expression for $E_\text{cap}$ can be algebraically manipulated into three equivalent expressions: $E_\text{cap} = \frac{QV}{2} = \frac{CV^2}{2} = \frac{Q^2}{2C},$ where $Q$ is the charge and $V$ the voltage on a capacitor $C$. The energy is in joules for a charge in coulombs, voltage in volts, and capacitance in farads. In a defibrillator, the delivery of a large charge in a short burst to a set of paddles across a person’s chest can be a lifesaver. The person’s heart attack might have arisen from the onset of fast, irregular beating of the heart—cardiac or ventricular fibrillation. The application of a large shock of electrical energy can terminate the arrhythmia and allow the body’s pacemaker to resume normal patterns. Today it is common for ambulances to carry a defibrillator, which also uses an electrocardiogram to analyze the patient’s heartbeat pattern. Automated external defibrillators (AED) are found in many public places (). These are designed to be used by lay persons. The device automatically diagnoses the patient’s heart condition and then applies the shock with appropriate energy and waveform. CPR is recommended in many cases before use of an AED.
### Test Prep for AP Courses
### Section Summary
1. Capacitors are used in a variety of devices, including defibrillators, microelectronics such as calculators, and flash lamps, to supply energy.
2. The energy stored in a capacitor can be expressed in three ways: $E_\text{cap} = \frac{QV}{2} = \frac{CV^2}{2} = \frac{Q^2}{2C},$ where $Q$ is the charge, $V$ is the voltage, and $C$ is the capacitance of the capacitor. The energy is in joules when the charge is in coulombs, voltage is in volts, and capacitance is in farads.

### Conceptual Questions
### Problems & Exercises
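The three equivalent energy expressions can be wrapped in a single helper that accepts any two of $C$, $V$, and $Q$. In the sketch below, the 26 µF and 5.55 kV figures for a defibrillator capacitor are hypothetical values chosen only because they yield roughly the 400 J quoted in the dramatization above.

```python
def energy_stored(c_farads=None, v_volts=None, q_coulombs=None):
    """Energy in a capacitor from any two of C, V, Q:
    E_cap = QV/2 = C*V^2/2 = Q^2/(2C), in joules."""
    if q_coulombs is not None and v_volts is not None:
        return q_coulombs * v_volts / 2
    if c_farads is not None and v_volts is not None:
        return c_farads * v_volts**2 / 2
    if q_coulombs is not None and c_farads is not None:
        return q_coulombs**2 / (2 * c_farads)
    raise ValueError("need two of C, V, Q")

# Hypothetical defibrillator capacitor: 26 uF charged to 5.55 kV.
print(f"{energy_stored(c_farads=26e-6, v_volts=5550):.0f} J")  # ~400 J
```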
# Electric Current, Resistance, and Ohm's Law ## Introduction to Electric Current, Resistance, and Ohm's Law The flicker of numbers on a handheld calculator, nerve impulses carrying signals of vision to the brain, an ultrasound device sending a signal to a computer screen, the brain sending a message for a baby to twitch its toes, an electric train pulling its load over a mountain pass, a hydroelectric plant sending energy to metropolitan and rural users—these and many other examples of electricity involve electric current, the movement of charge. Humankind has indeed harnessed electricity, the basis of technology, to improve our quality of life. Whereas the previous two chapters concentrated on static electricity and the fundamental force underlying its behavior, the next few chapters will be devoted to electric and magnetic phenomena involving current. In addition to exploring applications of electricity, we shall gain new insights into nature—in particular, the fact that all magnetism results from electric current.
# Electric Current, Resistance, and Ohm's Law
## Current
### Learning Objectives
By the end of this section, you will be able to:
1. Define electric current, ampere, and drift velocity.
2. Describe the direction of charge flow in conventional current.
3. Use drift velocity to calculate current and vice versa.

### Electric Current
Electric current is defined to be the rate at which charge flows. A large current, such as that used to start a truck engine, moves a large amount of charge in a small time, whereas a small current, such as that used to operate a hand-held calculator, moves a small amount of charge over a long period of time. In equation form, electric current $I$ is defined to be $I = \frac{\Delta Q}{\Delta t},$ where $\Delta Q$ is the amount of charge passing through a given area in time $\Delta t$. (As in previous chapters, initial time is often taken to be zero, in which case $\Delta t = t$.) (See .) The SI unit for current is the ampere (A), named for the French physicist André-Marie Ampère (1775–1836). Since $I = \Delta Q/\Delta t$, we see that an ampere is one coulomb per second: $1\ \text{A} = 1\ \text{C/s}.$ Not only are fuses and circuit breakers rated in amperes (or amps), so are many electrical appliances. shows a simple circuit and the standard schematic representation of a battery, conducting path, and load (a resistor). Schematics are very useful in visualizing the main features of a circuit. A single schematic can represent a wide variety of situations. The schematic in (b), for example, can represent anything from a truck battery connected to a headlight lighting the street in front of the truck to a small battery connected to a penlight lighting a keyhole in a door. Such schematics are useful because the analysis is the same for a wide variety of situations. We need to understand a few schematics to apply the concepts and analysis to many more situations. Note that the direction of current flow in is from positive to negative. The direction of conventional current is the direction that positive charge would flow. Depending on the situation, positive charges, negative charges, or both may move. In metal wires, for example, current is carried by electrons—that is, negative charges move. In ionic solutions, such as salt water, both positive and negative charges move. This is also true in nerve cells. A Van de Graaff generator used for nuclear research can produce a current of pure positive charges, such as protons. illustrates the movement of charged particles that compose a current. The fact that conventional current is taken to be in the direction that positive charge would flow can be traced back to American politician and scientist Benjamin Franklin in the 1700s. He named the type of charge associated with electrons negative, long before they were known to carry current in so many situations. Franklin, in fact, was totally unaware of the small-scale structure of electricity. It is important to realize that there is an electric field in conductors responsible for producing the current, as illustrated in . Unlike static electricity, where a conductor in equilibrium cannot have an electric field in it, conductors carrying a current have an electric field and are not in static equilibrium. An electric field is needed to supply energy to move the charges.
### Drift Velocity
Electrical signals are known to move very rapidly. Telephone conversations carried by currents in wires cover large distances without noticeable delays. Lights come on as soon as a switch is flicked.
Most electrical signals carried by currents travel at speeds on the order of $10^{8}\ \text{m/s}$, a significant fraction of the speed of light. Interestingly, the individual charges that make up the current move much more slowly on average, typically drifting at speeds on the order of $10^{-4}\ \text{m/s}$. How do we reconcile these two speeds, and what does it tell us about standard conductors? The high speed of electrical signals results from the fact that the force between charges acts rapidly at a distance. Thus, when a free charge is forced into a wire, as in , the incoming charge pushes other charges ahead of it, which in turn push on charges farther down the line. The density of charge in a system cannot easily be increased, and so the signal is passed on rapidly. The resulting electrical shock wave moves through the system at nearly the speed of light. To be precise, this rapidly moving signal or shock wave is a rapidly propagating change in electric field. Good conductors have large numbers of free charges in them. In metals, the free charges are free electrons. shows how free electrons move through an ordinary conductor. The distance that an individual electron can move between collisions with atoms or other electrons is quite small. The electron paths thus appear nearly random, like the motion of atoms in a gas. But there is an electric field in the conductor that causes the electrons to drift in the direction shown (opposite to the field, since they are negative). The drift velocity $v_\text{d}$ is the average velocity of the free charges. Drift velocity is quite small, since there are so many free charges. If we have an estimate of the density of free electrons in a conductor, we can calculate the drift velocity for a given current. The larger the density, the lower the velocity required for a given current. The free-electron collisions transfer energy to the atoms of the conductor. The electric field does work in moving the electrons through a distance, but that work does not increase the kinetic energy (nor speed, therefore) of the electrons. The work is transferred to the conductor’s atoms, possibly increasing temperature. Thus a continuous power input is required to keep a current flowing. An exception, of course, is found in superconductors, for reasons we shall explore in a later chapter. Superconductors can have a steady current without a continual supply of energy—a great energy savings. In contrast, the supply of energy can be useful, such as in a lightbulb filament. The supply of energy is necessary to increase the temperature of the tungsten filament, so that the filament glows. We can obtain an expression for the relationship between current and drift velocity by considering the number of free charges in a segment of wire, as illustrated in . The number of free charges per unit volume is given the symbol $n$ and depends on the material. The shaded segment has a volume $Ax$, so that the number of free charges in it is $nAx$. The charge $\Delta Q$ in this segment is thus $qnAx$, where $q$ is the amount of charge on each carrier. (Recall that for electrons, the magnitude of $q$ is $1.60\times10^{-19}\ \text{C}$.) Current is charge moved per unit time; thus, if all the original charges move out of this segment in time $\Delta t$, the current is $I = \frac{\Delta Q}{\Delta t} = \frac{qnAx}{\Delta t}.$ Note that $x/\Delta t$ is the magnitude of the drift velocity, $v_\text{d}$, since the charges move an average distance $x$ in a time $\Delta t$. Rearranging terms gives $I = nqAv_\text{d},$ where $I$ is the current through a wire of cross-sectional area $A$ made of a material with a free charge density $n$. The carriers of the current each have charge $q$ and move with a drift velocity of magnitude $v_\text{d}$.
Note that simple drift velocity is not the entire story. The speed of an electron is much greater than its drift velocity. In addition, not all of the electrons in a conductor can move freely, and those that do might move somewhat faster or slower than the drift velocity. So what do we mean by free electrons? Atoms in a metallic conductor are packed in the form of a lattice structure. Some electrons are far enough away from the atomic nuclei that they do not experience the attraction of the nuclei as much as the inner electrons do. These are the free electrons. They are not bound to a single atom but can instead move freely among the atoms in a “sea” of electrons. These free electrons respond by accelerating when an electric field is applied. Of course as they move they collide with the atoms in the lattice and other electrons, generating thermal energy, and the conductor gets warmer. In an insulator, the organization of the atoms and the structure do not allow for such free electrons.
### Test Prep for AP Courses
### Section Summary
1. Electric current $I$ is the rate at which charge flows, given by $I = \frac{\Delta Q}{\Delta t},$ where $\Delta Q$ is the amount of charge passing through an area in time $\Delta t$.
2. The direction of conventional current is taken as the direction in which positive charge moves.
3. The SI unit for current is the ampere (A), where $1\ \text{A} = 1\ \text{C/s}.$
4. Current is the flow of free charges, such as electrons and ions.
5. Drift velocity $v_\text{d}$ is the average speed at which these charges move.
6. Current $I$ is proportional to drift velocity $v_\text{d}$, as expressed in the relationship $I = nqAv_\text{d}$. Here, $I$ is the current through a wire of cross-sectional area $A$. The wire’s material has a free-charge density $n$, and each carrier has charge $q$ and a drift velocity $v_\text{d}$.
7. Electrical signals travel at speeds about $10^{12}$ times greater than the drift velocity of free electrons.

### Conceptual Questions
### Problems & Exercises
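Solving $I = nqAv_\text{d}$ for $v_\text{d}$ makes the tiny size of drift velocities concrete. In the Python sketch below, the 20 A current, 2.05 mm wire diameter, and the free-electron density of copper (taken here to be about $8.5\times10^{28}$ electrons per cubic meter) are assumed values for illustration.

```python
import math

Q_E = 1.60e-19  # magnitude of the electron charge, C

def drift_velocity(current_a, n_per_m3, diameter_m, q=Q_E):
    """Solve I = n*q*A*v_d for v_d, with A the circular cross-sectional area."""
    area = math.pi * (diameter_m / 2) ** 2
    return current_a / (n_per_m3 * q * area)

# Illustrative: 20 A in a 2.05-mm-diameter copper wire.
print(f"{drift_velocity(20, 8.5e28, 2.05e-3):.1e} m/s")  # ~4.5e-4 m/s
```

The result, a few tenths of a millimeter per second, is consistent with the order-of-magnitude claim made at the start of this discussion.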
# Electric Current, Resistance, and Ohm's Law
## Ohm’s Law: Resistance and Simple Circuits
### Learning Objectives
By the end of this section, you will be able to:
1. Explain the origin of Ohm’s law.
2. Calculate voltages, currents, or resistances with Ohm’s law.
3. Explain what an ohmic material is.
4. Describe a simple circuit.

What drives current? We can think of various devices—such as batteries, generators, wall outlets, and so on—which are necessary to maintain a current. All such devices create a potential difference and are loosely referred to as voltage sources. When a voltage source is connected to a conductor, it applies a potential difference $V$ that creates an electric field. The electric field in turn exerts force on charges, causing current.
### Ohm’s Law
The current that flows through most substances is directly proportional to the voltage $V$ applied to it. The German physicist Georg Simon Ohm (1787–1854) was the first to demonstrate experimentally that the current in a metal wire is directly proportional to the voltage applied: $I \propto V.$ This important relationship is known as Ohm’s law. It can be viewed as a cause-and-effect relationship, with voltage the cause and current the effect. This is an empirical law like that for friction—an experimentally observed phenomenon. Such a linear relationship doesn’t always occur.
### Resistance and Simple Circuits
If voltage drives current, what impedes it? The electric property that impedes current (crudely similar to friction and air resistance) is called resistance $R$. Collisions of moving charges with atoms and molecules in a substance transfer energy to the substance and limit current. Resistance is defined as inversely proportional to current, or $I \propto \frac{1}{R}.$ Thus, for example, current is cut in half if resistance doubles. Combining the relationships of current to voltage and current to resistance gives $I = \frac{V}{R}.$ This relationship is also called Ohm’s law. Ohm’s law in this form really defines resistance for certain materials. Ohm’s law (like Hooke’s law) is not universally valid. The many substances for which Ohm’s law holds are called ohmic. These include good conductors like copper and aluminum, and some poor conductors under certain circumstances. Ohmic materials have a resistance $R$ that is independent of voltage $V$ and current $I$. An object that has simple resistance is called a resistor, even if its resistance is small. The unit for resistance is an ohm and is given the symbol $\Omega$ (upper case Greek omega). Rearranging $I = V/R$ gives $R = V/I$, and so the units of resistance are 1 ohm = 1 volt per ampere: $1\ \Omega = 1\ \frac{\text{V}}{\text{A}}.$ shows the schematic for a simple circuit. A simple circuit has a single voltage source and a single resistor. The wires connecting the voltage source to the resistor can be assumed to have negligible resistance, or their resistance can be included in $R$. Resistances range over many orders of magnitude. Some ceramic insulators, such as those used to support power lines, have resistances of $10^{12}\ \Omega$ or more. A dry person may have a hand-to-foot resistance of $10^{5}\ \Omega$, whereas the resistance of the human heart is about $10^{3}\ \Omega$. A meter-long piece of large-diameter copper wire may have a resistance of $10^{-5}\ \Omega$, and superconductors have no resistance at all (they are non-ohmic). Resistance is related to the shape of an object and the material of which it is composed, as will be seen in Resistance and Resistivity. Additional insight is gained by solving $I = V/R$ for $V$, yielding $V = IR.$ This expression for $V$ can be interpreted as the voltage drop across a resistor produced by the flow of current $I$. The phrase $IR$ drop is often used for this voltage.
For instance, the headlight in has an $IR$ drop of 12.0 V. If voltage is measured at various points in a circuit, it will be seen to increase at the voltage source and decrease at the resistor. Voltage is similar to fluid pressure. The voltage source is like a pump, creating a pressure difference, causing current—the flow of charge. The resistor is like a pipe that reduces pressure and limits flow because of its resistance. Conservation of energy has important consequences here. The voltage source supplies energy (causing an electric field and a current), and the resistor converts it to another form (such as thermal energy). In a simple circuit (one with a single simple resistor), the voltage supplied by the source equals the voltage drop across the resistor, since $\text{PE} = q\Delta V$, and the same $q$ flows through each. Thus the energy supplied by the voltage source and the energy converted by the resistor are equal. (See .)

### Test Prep for AP Courses

### Section Summary

1. A simple circuit is one in which there is a single voltage source and a single resistance.
2. One statement of Ohm’s law gives the relationship between current $I$, voltage $V$, and resistance $R$ in a simple circuit to be $I = \frac{V}{R}$.
3. Resistance has units of ohms ($\Omega$), related to volts and amperes by $1\ \Omega = 1\ \text{V/A}$.
4. There is a voltage or $IR$ drop across a resistor, caused by the current flowing through it, given by $V = IR$.

### Conceptual Questions

### Problems & Exercises
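A minimal sketch of Ohm’s law applied to the simple circuit just described. The 12.0-V source matches the headlight example above; the 2.40-Ω resistance is an assumed value chosen for illustration.

```python
# Ohm's law I = V/R for a simple one-source, one-resistor circuit.
V = 12.0   # source voltage, V (the headlight example above)
R = 2.40   # resistance, ohms (assumed value)

I = V / R                        # current through the circuit
print(f"I = {I:.2f} A")          # 5.00 A
print(f"IR drop = {I*R:.1f} V")  # equals the 12.0-V source, as energy conservation requires
```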
# Electric Current, Resistance, and Ohm's Law

## Resistance and Resistivity

### Learning Objectives

By the end of this section, you will be able to:
1. Explain the concept of resistivity.
2. Use resistivity to calculate the resistance of specified configurations of material.
3. Use the thermal coefficient of resistivity to calculate the change of resistance with temperature.

### Material and Shape Dependence of Resistance

The resistance of an object depends on its shape and the material of which it is composed. The cylindrical resistor in is easy to analyze, and, by so doing, we can gain insight into the resistance of more complicated shapes. As you might expect, the cylinder’s electric resistance $R$ is directly proportional to its length $L$, similar to the resistance of a pipe to fluid flow. The longer the cylinder, the more collisions charges will make with its atoms. The greater the diameter of the cylinder, the more current it can carry (again similar to the flow of fluid through a pipe). In fact, $R$ is inversely proportional to the cylinder’s cross-sectional area $A$.

For a given shape, the resistance depends on the material of which the object is composed. Different materials offer different resistance to the flow of charge. We define the resistivity $\rho$ of a substance so that the resistance $R$ of an object is directly proportional to $\rho$. Resistivity is an intrinsic property of a material, independent of its shape or size. The resistance $R$ of a uniform cylinder of length $L$, of cross-sectional area $A$, and made of a material with resistivity $\rho$, is

$$R = \frac{\rho L}{A}.$$

gives representative values of $\rho$. The materials listed in the table are separated into categories of conductors, semiconductors, and insulators, based on broad groupings of resistivities. Conductors have the smallest resistivities, and insulators have the largest; semiconductors have intermediate resistivities. Conductors have varying but large free charge densities, whereas most charges in insulators are bound to atoms and are not free to move. Semiconductors are intermediate, having far fewer free charges than conductors, but having properties that make the number of free charges depend strongly on the type and amount of impurities in the semiconductor. These unique properties of semiconductors are put to use in modern electronics, as will be explored in later chapters.

### Temperature Variation of Resistance

The resistivity of all materials depends on temperature. Some even become superconductors (zero resistivity) at very low temperatures. (See .) Conversely, the resistivity of conductors increases with increasing temperature. Since the atoms vibrate more rapidly and over larger distances at higher temperatures, the electrons moving through a metal make more collisions, effectively making the resistivity higher. Over relatively small temperature changes (about $100\,^\circ\text{C}$ or less), resistivity $\rho$ varies with temperature change $\Delta T$ as expressed in the following equation

$$\rho = \rho_0 (1 + \alpha \Delta T),$$

where $\rho_0$ is the original resistivity and $\alpha$ is the temperature coefficient of resistivity. (See the values of $\alpha$ in the table below.) For larger temperature changes, $\alpha$ may vary or a nonlinear equation may be needed to find $\rho$. Note that $\alpha$ is positive for metals, meaning their resistivity increases with temperature. Some alloys have been developed specifically to have a small temperature dependence. Manganin (which is made of copper, manganese and nickel), for example, has $\alpha$ close to zero (to three digits on the scale in ), and so its resistivity varies only slightly with temperature.
This is useful for making a temperature-independent resistance standard, for example. Note also that $\alpha$ is negative for the semiconductors listed in , meaning that their resistivity decreases with increasing temperature. They become better conductors at higher temperature, because increased thermal agitation increases the number of free charges available to carry current. This property of decreasing $\rho$ with temperature is also related to the type and amount of impurities present in the semiconductors.

The resistance of an object also depends on temperature, since $R$ is directly proportional to $\rho$. For a cylinder we know $R = \frac{\rho L}{A}$, and so, if $L$ and $A$ do not change greatly with temperature, $R$ will have the same temperature dependence as $\rho$. (Examination of the coefficients of linear expansion shows them to be about two orders of magnitude less than typical temperature coefficients of resistivity, and so the effect of temperature on $L$ and $A$ is about two orders of magnitude less than on $\rho$.) Thus,

$$R = R_0 (1 + \alpha \Delta T)$$

is the temperature dependence of the resistance of an object, where $R_0$ is the original resistance and $R$ is the resistance after a temperature change $\Delta T$.

Numerous thermometers are based on the effect of temperature on resistance. (See .) One of the most common is the thermistor, a semiconductor crystal with a strong temperature dependence, the resistance of which is measured to obtain its temperature. The device is small, so that it quickly comes into thermal equilibrium with the part of a person it touches.

### Test Prep for AP Courses

### Section Summary

1. The resistance $R$ of a cylinder of length $L$ and cross-sectional area $A$ is $R = \frac{\rho L}{A}$, where $\rho$ is the resistivity of the material.
2. Values of $\rho$ in show that materials fall into three groups—conductors, semiconductors, and insulators.
3. Temperature affects resistivity; for relatively small temperature changes $\Delta T$, resistivity is $\rho = \rho_0(1 + \alpha \Delta T)$, where $\rho_0$ is the original resistivity and $\alpha$ is the temperature coefficient of resistivity.
4. gives values for $\alpha$, the temperature coefficient of resistivity.
5. The resistance $R$ of an object also varies with temperature: $R = R_0(1 + \alpha \Delta T)$, where $R_0$ is the original resistance, and $R$ is the resistance after the temperature change.

### Conceptual Questions

### Problems & Exercises
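The two results of this section, $R = \rho L / A$ and $R = R_0(1 + \alpha \Delta T)$, combine naturally in a short calculation. The sketch below uses standard handbook values for copper; the wire dimensions and temperature rise are assumptions for illustration.

```python
import math

# Resistance of a copper wire from R = rho * L / A, then its change
# with temperature from R = R0 * (1 + alpha * dT).
rho0 = 1.72e-8   # resistivity of copper near 20 C, ohm*m (handbook value)
alpha = 3.9e-3   # temperature coefficient for copper, 1/C (handbook value)
L = 1.0          # wire length, m (assumed value)
d = 2.0e-3       # wire diameter, m (assumed value)
dT = 60.0        # temperature rise, C (assumed value)

A = math.pi * (d / 2) ** 2         # cross-sectional area, m^2
R0 = rho0 * L / A                  # resistance at the reference temperature
R = R0 * (1 + alpha * dT)          # resistance after the temperature change

print(f"R0 = {R0:.2e} ohm, R = {R:.2e} ohm")  # ~5.5e-3 ohm rising to ~6.8e-3 ohm
```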
# Electric Current, Resistance, and Ohm's Law

## Electric Power and Energy

### Learning Objectives

By the end of this section, you will be able to:
1. Calculate the power dissipated by a resistor and power supplied by a power supply.
2. Calculate the cost of electricity under various circumstances.

### Power in Electric Circuits

Power is associated by many people with electricity. Knowing that power is the rate of energy use or energy conversion, what is the expression for electric power? Power transmission lines might come to mind. We also think of lightbulbs in terms of their power ratings in watts. Let us compare a 25-W bulb with a 60-W bulb. (See (a).) Since both operate on the same voltage, the 60-W bulb must draw more current to have a greater power rating. Thus the 60-W bulb’s resistance must be lower than that of a 25-W bulb. If we increase voltage, we also increase power. For example, when a 25-W bulb that is designed to operate on 120 V is connected to 240 V, it briefly glows very brightly and then burns out. Precisely how are voltage, current, and resistance related to electric power?

Electric energy depends on both the voltage involved and the charge moved. This is expressed most simply as $\text{PE} = qV$, where $q$ is the charge moved and $V$ is the voltage (or more precisely, the potential difference the charge moves through). Power is the rate at which energy is moved, and so electric power is

$$P = \frac{\text{PE}}{t} = \frac{qV}{t}.$$

Recognizing that current is $I = q/t$ (note that $\Delta t = t$ here), the expression for power becomes

$$P = IV.$$

Electric power ($P$) is simply the product of current times voltage. Power has familiar units of watts. Since the SI unit for potential energy (PE) is the joule, power has units of joules per second, or watts. Thus, $1\ \text{A} \cdot \text{V} = 1\ \text{W}$. For example, cars often have one or more auxiliary power outlets with which you can charge a cell phone or other electronic devices. These outlets may be rated at 20 A, so that the circuit can deliver a maximum power $P = IV = (20\ \text{A})(12\ \text{V}) = 240\ \text{W}$. In some applications, electric power may be expressed as volt-amperes or even kilovolt-amperes ($1\ \text{kA} \cdot \text{V} = 1\ \text{kW}$).

To see the relationship of power to resistance, we combine Ohm’s law with $P = IV$. Substituting $I = V/R$ gives $P = \frac{V^2}{R}$. Similarly, substituting $V = IR$ gives $P = I^2 R$. Three expressions for electric power are listed together here for convenience:

$$P = IV, \qquad P = \frac{V^2}{R}, \qquad P = I^2 R.$$

Note that the first equation is always valid, whereas the other two can be used only for resistors. In a simple circuit, with one voltage source and a single resistor, the power supplied by the voltage source and that dissipated by the resistor are identical. (In more complicated circuits, $P$ can be the power dissipated by a single device and not the total power in the circuit.)

Different insights can be gained from the three different expressions for electric power. For example, $P = V^2/R$ implies that the lower the resistance connected to a given voltage source, the greater the power delivered. Furthermore, since voltage is squared in $P = V^2/R$, the effect of applying a higher voltage is perhaps greater than expected. Thus, when the voltage is doubled to a 25-W bulb, its power nearly quadruples to about 100 W, burning it out. If the bulb’s resistance remained constant, its power would be exactly 100 W, but at the higher temperature its resistance is higher, too.

### The Cost of Electricity

The more electric appliances you use and the longer they are left on, the higher your electric bill. This familiar fact is based on the relationship between energy and power. You pay for the energy used. Since $P = E/t$, we see that $E = Pt$ is the energy used by a device using power $P$ for a time interval $t$.
For example, the more lightbulbs burning, the greater $P$ used; the longer they are on, the greater $t$ is. The energy unit on electric bills is the kilowatt-hour ($\text{kW} \cdot \text{h}$), consistent with the relationship $E = Pt$. It is easy to estimate the cost of operating electric appliances if you have some idea of their power consumption rate in watts or kilowatts, the time they are on in hours, and the cost per kilowatt-hour for your electric utility. Kilowatt-hours, like all other specialized energy units such as food calories, can be converted to joules. You can prove to yourself that $1\ \text{kW} \cdot \text{h} = 3.6 \times 10^6\ \text{J}$.

The electrical energy ($E$) used can be reduced either by reducing the time of use or by reducing the power consumption of that appliance or fixture. This will not only reduce the cost, but it will also result in a reduced impact on the environment. Improvements to lighting are some of the fastest ways to reduce the electrical energy used in a home or business. About 20% of a home’s use of energy goes to lighting, while the number for commercial establishments is closer to 40%. Fluorescent lights are about four times more efficient than incandescent lights—this is true for both the long tubes and the compact fluorescent lights (CFL). (See (b).) Thus, a 60-W incandescent bulb can be replaced by a 15-W CFL, which has the same brightness and color. CFLs have a bent tube inside a globe or a spiral-shaped tube, all connected to a standard screw-in base that fits standard incandescent light sockets. (Original problems with color, flicker, shape, and high initial investment for CFLs have been addressed in recent years.) The heat transfer from these CFLs is less, and they last up to 10 times longer. The significance of an investment in such bulbs is addressed in the next example. New white LED lights (which are clusters of small LED bulbs) are even more efficient (twice that of CFLs) and last 5 times longer than CFLs. However, their cost is still high.

### Test Prep for AP Courses

### Section Summary

1. Electric power $P$ is the rate (in watts) that energy is supplied by a source or dissipated by a device.
2. Three expressions for electrical power are $P = IV$, $P = \frac{V^2}{R}$, and $P = I^2R$.
3. The energy used by a device with a power $P$ over a time $t$ is $E = Pt$.

### Conceptual Questions

### Problem Exercises
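The relationship $E = Pt$ makes cost estimates one-liners. The sketch below compares the 60-W incandescent bulb and 15-W CFL mentioned above; the electricity rate and hours of use are assumed values.

```python
# Back-of-the-envelope operating-cost comparison from E = P t.
def energy_cost(power_w, hours, rate_per_kwh=0.12):
    """Cost in dollars to run a device of power_w watts for `hours` hours."""
    kwh = power_w / 1000 * hours   # E = P t, converted to kilowatt-hours
    return kwh * rate_per_kwh

hours = 1000  # total hours of use (assumed value)
print(f"60-W incandescent: ${energy_cost(60, hours):.2f}")  # $7.20
print(f"15-W CFL:          ${energy_cost(15, hours):.2f}")  # $1.80
```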
# Electric Current, Resistance, and Ohm's Law

## Alternating Current versus Direct Current

### Learning Objectives

By the end of this section, you will be able to:
1. Explain the differences and similarities between AC and DC current.
2. Calculate rms voltage, current, and average power.
3. Explain why AC current is used for power transmission.

### Alternating Current

Most of the examples dealt with so far, and particularly those utilizing batteries, have constant voltage sources. Once the current is established, it is thus also a constant. Direct current (DC) is the flow of electric charge in only one direction. It is the steady state of a constant-voltage circuit. Most well-known applications, however, use a time-varying voltage source. Alternating current (AC) is the flow of electric charge that periodically reverses direction. If the source varies periodically, particularly sinusoidally, the circuit is known as an alternating current circuit. Examples include the commercial and residential power that serves so many of our needs. shows graphs of voltage and current versus time for typical DC and AC power. The AC voltages and frequencies commonly used in homes and businesses vary around the world.

shows a schematic of a simple circuit with an AC voltage source. The voltage between the terminals fluctuates as shown, with the AC voltage given by

$$V = V_0 \sin 2\pi f t,$$

where $V$ is the voltage at time $t$, $V_0$ is the peak voltage, and $f$ is the frequency in hertz. For this simple resistance circuit, $I = V/R$, and so the AC current is

$$I = I_0 \sin 2\pi f t,$$

where $I$ is the current at time $t$, and $I_0 = V_0/R$ is the peak current. For this example, the voltage and current are said to be in phase, as seen in (b).

Current in the resistor alternates back and forth just like the driving voltage, since $I = V/R$. If the resistor is a fluorescent light bulb, for example, it brightens and dims 120 times per second as the current repeatedly goes through zero. A 120-Hz flicker is too rapid for your eyes to detect, but if you wave your hand back and forth between your face and a fluorescent light, you will see a stroboscopic effect evidencing AC. The fact that the light output fluctuates means that the power is fluctuating. The power supplied is $P = IV$. Using the expressions for $I$ and $V$ above, we see that the time dependence of power is $P = I_0 V_0 \sin^2 2\pi f t$, as shown in .

We are most often concerned with average power rather than its fluctuations—that 60-W light bulb in your desk lamp has an average power consumption of 60 W, for example. As illustrated in , the average power is

$$P_{\text{ave}} = \tfrac{1}{2} I_0 V_0.$$

This is evident from the graph, since the areas above and below the $P_{\text{ave}}$ line are equal, but it can also be proven using trigonometric identities. Similarly, we define an average or rms current $I_{\text{rms}}$ and average or rms voltage $V_{\text{rms}}$ to be, respectively,

$$I_{\text{rms}} = \frac{I_0}{\sqrt{2}} \quad \text{and} \quad V_{\text{rms}} = \frac{V_0}{\sqrt{2}},$$

where rms stands for root mean square, a particular kind of average. In general, to obtain a root mean square, the particular quantity is squared, its mean (or average) is found, and the square root is taken. This is useful for AC, since the average value is zero. Now,

$$P_{\text{ave}} = I_{\text{rms}} V_{\text{rms}} = \frac{I_0}{\sqrt{2}} \cdot \frac{V_0}{\sqrt{2}},$$

which gives $P_{\text{ave}} = \tfrac{1}{2} I_0 V_0$, as stated above. It is standard practice to quote $I_{\text{rms}}$, $V_{\text{rms}}$, and $P_{\text{ave}}$ rather than the peak values. For example, most household electricity is 120 V AC, which means that $V_{\text{rms}}$ is 120 V. The common 10-A circuit breaker will interrupt a sustained $I_{\text{rms}}$ greater than 10 A. Your 1.0-kW microwave oven consumes $P_{\text{ave}} = 1.0\ \text{kW}$, and so on. You can think of these rms and average values as the equivalent DC values for a simple resistive circuit.
To summarize, when dealing with AC, Ohm’s law and the equations for power are completely analogous to those for DC, but rms and average values are used for AC. Thus, for AC, Ohm’s law is written

$$I_{\text{rms}} = \frac{V_{\text{rms}}}{R}.$$

The various expressions for AC power $P_{\text{ave}}$ are

$$P_{\text{ave}} = I_{\text{rms}} V_{\text{rms}}, \qquad P_{\text{ave}} = \frac{V_{\text{rms}}^2}{R}, \qquad \text{and} \qquad P_{\text{ave}} = I_{\text{rms}}^2 R.$$

### Why Use AC for Power Distribution?

Most large power-distribution systems are AC. Moreover, the power is transmitted at much higher voltages than the 120-V AC (240 V in most parts of the world) we use in homes and on the job. Economies of scale make it cheaper to build a few very large electric power-generation plants than to build numerous small ones. This necessitates sending power long distances, and it is obviously important that energy losses en route be minimized. High voltages can be transmitted with much smaller power losses than low voltages, as we shall see. (See .) For safety reasons, the voltage at the user is reduced to familiar values. The crucial factor is that it is much easier to increase and decrease AC voltages than DC, so AC is used in most large power distribution systems.

It is widely recognized that high voltages pose greater hazards than low voltages. But, in fact, some high voltages, such as those associated with common static electricity, can be harmless. So it is not voltage alone that determines a hazard. It is not so widely recognized that AC shocks are often more harmful than similar DC shocks. Thomas Edison thought that AC shocks were more harmful and set up a DC power-distribution system in New York City in the late 1800s. There were bitter fights, in particular between Edison and George Westinghouse and Nikola Tesla, who were advocating the use of AC in early power-distribution systems. AC has prevailed largely due to transformers and lower power losses with high-voltage transmission.

### Section Summary

1. Direct current (DC) is the flow of electric current in only one direction. It refers to systems where the source voltage is constant.
2. The voltage source of an alternating current (AC) system puts out $V = V_0 \sin 2\pi f t$, where $V$ is the voltage at time $t$, $V_0$ is the peak voltage, and $f$ is the frequency in hertz.
3. In a simple circuit, $I = V/R$ and AC current is $I = I_0 \sin 2\pi f t$, where $I$ is the current at time $t$, and $I_0 = V_0/R$ is the peak current.
4. The average AC power is $P_{\text{ave}} = \frac{1}{2} I_0 V_0$.
5. Average (rms) current $I_{\text{rms}}$ and average (rms) voltage $V_{\text{rms}}$ are $I_{\text{rms}} = \frac{I_0}{\sqrt{2}}$ and $V_{\text{rms}} = \frac{V_0}{\sqrt{2}}$, where rms stands for root mean square.
6. Thus, $P_{\text{ave}} = I_{\text{rms}} V_{\text{rms}}$.
7. Ohm’s law for AC is $I_{\text{rms}} = \frac{V_{\text{rms}}}{R}$.
8. Expressions for the average power of an AC circuit are $P_{\text{ave}} = I_{\text{rms}} V_{\text{rms}}$, $P_{\text{ave}} = \frac{V_{\text{rms}}^2}{R}$, and $P_{\text{ave}} = I_{\text{rms}}^2 R$, analogous to the expressions for DC circuits.

### Conceptual Questions

### Problem Exercises
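A brief numerical sketch of the rms relations for household AC. The 120-V rms supply is from the text; the 144-Ω load resistance is an assumed value chosen to give a 100-W device.

```python
import math

# rms relations for a resistive AC load: V_rms = V0 / sqrt(2),
# I_rms = V_rms / R, and P_ave = V_rms^2 / R.
V_rms = 120.0              # household rms voltage, V
R = 144.0                  # load resistance, ohms (assumed value)

V0 = V_rms * math.sqrt(2)  # peak voltage, ~170 V
I_rms = V_rms / R          # Ohm's law with rms values
P_ave = V_rms ** 2 / R     # average power dissipated in the load

print(f"V0 = {V0:.0f} V, I_rms = {I_rms:.2f} A, P_ave = {P_ave:.0f} W")
```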
# Electric Current, Resistance, and Ohm's Law

## Electric Hazards and the Human Body

### Learning Objectives

By the end of this section, you will be able to:
1. Define thermal hazard, shock hazard, and short circuit.
2. Explain what effects various levels of current have on the human body.

There are two known hazards of electricity—thermal and shock. A thermal hazard is one where excessive electric power causes undesired thermal effects, such as starting a fire in the wall of a house. A shock hazard occurs when electric current passes through a person. Shocks range in severity from painful, but otherwise harmless, to heart-stopping lethality. This section considers these hazards and the various factors affecting them in a quantitative manner. Electrical Safety: Systems and Devices will consider systems and devices for preventing electrical hazards.

### Thermal Hazards

Electric power causes undesired heating effects whenever electric energy is converted to thermal energy at a rate faster than it can be safely dissipated. A classic example of this is the short circuit, a low-resistance path between terminals of a voltage source. An example of a short circuit is shown in . Insulation on wires leading to an appliance has worn through, allowing the two wires to come into contact. Such an undesired contact with a high voltage is called a short. Since the resistance of the short, $r$, is very small, the power dissipated in the short, $P = \frac{V^2}{r}$, is very large. For example, if $V$ is 120 V and $r$ is $0.100\ \Omega$, then the power is 144 kW, much greater than that used by a typical household appliance. Thermal energy delivered at this rate will very quickly raise the temperature of surrounding materials, melting or perhaps igniting them.

One particularly insidious aspect of a short circuit is that its resistance may actually be decreased due to the increase in temperature. This can happen if the short creates ionization. These charged atoms and molecules are free to move and, thus, lower the resistance $r$. Since $P = \frac{V^2}{r}$, the power dissipated in the short rises, possibly causing more ionization, more power, and so on. High voltages, such as the 480-V AC used in some industrial applications, lend themselves to this hazard, because higher voltages create higher initial power production in a short.

Another serious, but less dramatic, thermal hazard occurs when wires supplying power to a user are overloaded with too great a current. As discussed in the previous section, the power dissipated in the supply wires is $P = I^2 R$, where $R$ is the resistance of the wires and $I$ the current flowing through them. If either $I$ or $R$ is too large, the wires overheat. For example, a worn appliance cord (with some of its braided wires broken) may have $1.00\ \Omega$ rather than the $0.0100\ \Omega$ it should be. If 10.0 A of current passes through the cord, then $P = I^2R = 100\ \text{W}$ is dissipated in the cord—much more than is safe. Similarly, if a wire with a $0.100\text{-}\Omega$ resistance is meant to carry a few amps, but is instead carrying 100 A, it will severely overheat. The power dissipated in the wire will in that case be $P = I^2R = (100\ \text{A})^2(0.100\ \Omega) = 1.00\ \text{kW}$.

Fuses and circuit breakers are used to limit excessive currents. (See and .) Each device opens the circuit automatically when a sustained current exceeds safe limits. Fuses and circuit breakers for typical household voltages and currents are relatively simple to produce, but those for large voltages and currents experience special problems.
For example, when a circuit breaker tries to interrupt the flow of high-voltage electricity, a spark can jump across its points that ionizes the air in the gap and allows the current to continue flowing. Large circuit breakers found in power-distribution systems employ insulating gas and even use jets of gas to blow out such sparks. Here AC is safer than DC, since AC current goes through zero 120 times per second, giving a quick opportunity to extinguish these arcs.

### Shock Hazards

Electrical currents through people produce tremendously varied effects. An electrical current can be used to block back pain. The possibility of using electrical current to stimulate muscle action in paralyzed limbs, perhaps allowing paraplegics to walk, is under study. TV dramatizations in which electrical shocks are used to bring a heart attack victim out of ventricular fibrillation (a massively irregular, often fatal, beating of the heart) are more than common. Yet most electrical shock fatalities occur because a current put the heart into fibrillation. A pacemaker uses electrical shocks to stimulate the heart to beat properly. Some fatal shocks do not produce burns, but warts can be safely burned off with electric current (though freezing using liquid nitrogen is now more common). Of course, there are consistent explanations for these disparate effects. The major factors upon which the effects of electrical shock depend are

1. The amount of current $I$
2. The path taken by the current
3. The duration of the shock
4. The frequency of the current ($f = 0$ for DC)

gives the effects of electrical shocks as a function of current for a typical accidental shock. The effects are for a shock that passes through the trunk of the body, has a duration of 1 s, and is caused by 60-Hz power.

Our bodies are relatively good conductors due to the water in our bodies. Given that larger currents will flow through sections with lower resistance (to be further discussed in the next chapter), electric currents preferentially flow through paths in the human body that have a minimum resistance in a direct path to earth. The earth is a natural electron sink. Wearing insulating shoes, a requirement in many professions, prohibits a pathway for electrons by providing a large resistance in that path. Whenever working with high-power tools (drills), or in risky situations, ensure that you do not provide a pathway for current flow (especially through the heart).

Very small currents pass harmlessly and unfelt through the body. This happens to you regularly without your knowledge. The threshold of sensation is only 1 mA and, although unpleasant, shocks are apparently harmless for currents less than 5 mA. A great number of safety rules take the 5-mA value for the maximum allowed shock. At 10 to 20 mA and above, the current can stimulate sustained muscular contractions much as regular nerve impulses do. People sometimes say they were knocked across the room by a shock, but what really happened was that certain muscles contracted, propelling them in a manner not of their own choosing. (See (a).) More frightening, and potentially more dangerous, is the “can’t let go” effect illustrated in (b). The muscles that close the fingers are stronger than those that open them, so the hand closes involuntarily on the wire shocking it. This can prolong the shock indefinitely. It can also be a danger to a person trying to rescue the victim, because the rescuer’s hand may close about the victim’s wrist.
Usually the best way to help the victim is to give the fist a hard knock/blow/jar with an insulator or to throw an insulator at the fist. Modern electric fences, used in animal enclosures, are now pulsed on and off to allow people who touch them to get free, rendering them less lethal than in the past.

Greater currents may affect the heart. Its electrical patterns can be disrupted, so that it beats irregularly and ineffectively in a condition called “ventricular fibrillation.” This condition often lingers after the shock and is fatal due to a lack of blood circulation. The threshold for ventricular fibrillation is between 100 and 300 mA. At about 300 mA and above, the shock can cause burns, depending on the concentration of current—the more concentrated, the greater the likelihood of burns.

Very large currents cause the heart and diaphragm to contract for the duration of the shock. Both the heart and breathing stop. Interestingly, both often return to normal following the shock. The electrical patterns on the heart are completely erased in a manner that the heart can start afresh with normal beating, as opposed to the permanent disruption caused by smaller currents that can put the heart into ventricular fibrillation. The latter is something like scribbling on a blackboard, whereas the former completely erases it. TV dramatizations of electric shock used to bring a heart attack victim out of ventricular fibrillation also show large paddles. These are used to spread out current passed through the victim to reduce the likelihood of burns.

Current is the major factor determining shock severity (given that other conditions such as path, duration, and frequency are fixed, such as in the table and preceding discussion). A larger voltage is more hazardous, but since $I = V/R$, the severity of the shock depends on the combination of voltage and resistance. For example, a person with dry skin has a resistance of about $200\ \text{k}\Omega$. If he comes into contact with 120-V AC, a current $I = V/R = \frac{120\ \text{V}}{200\ \text{k}\Omega} = 0.600\ \text{mA}$ passes harmlessly through him. The same person soaking wet may have a resistance of $10.0\ \text{k}\Omega$ and the same 120 V will produce a current of 12 mA—above the “can’t let go” threshold and potentially dangerous.

Most of the body’s resistance is in its dry skin. When wet, salts go into ion form, lowering the resistance significantly. The interior of the body has a much lower resistance than dry skin because of all the ionic solutions and fluids it contains. If skin resistance is bypassed, such as by an intravenous infusion, a catheter, or exposed pacemaker leads, a person is rendered microshock sensitive. In this condition, currents about 1/1000 those listed in produce similar effects. During open-heart surgery, currents as small as $20\ \mu\text{A}$ can be used to still the heart. Stringent electrical safety requirements in hospitals, particularly in surgery and intensive care, are related to the doubly disadvantaged microshock-sensitive patient. The break in the skin has reduced his resistance, and so the same voltage causes a greater current, and a much smaller current has a greater effect.

Factors other than current that affect the severity of a shock are its path, duration, and AC frequency. Path has obvious consequences. For example, the heart is unaffected by an electric shock through the brain, such as may be used to treat manic depression. And it is a general truth that the longer the duration of a shock, the greater its effects. presents a graph that illustrates the effects of frequency on a shock.
The curves show the minimum current for two different effects, as a function of frequency. The lower the current needed, the more sensitive the body is at that frequency. Ironically, the body is most sensitive to frequencies near the 50- or 60-Hz frequencies in common use. The body is slightly less sensitive for DC ($f = 0$), mildly confirming Edison’s claims that AC presents a greater hazard. At higher and higher frequencies, the body becomes progressively less sensitive to any effects that involve nerves. This is related to the maximum rates at which nerves can fire or be stimulated. At very high frequencies, electrical current travels only on the surface of a person. Thus a wart can be burned off with very high frequency current without causing the heart to stop. (Do not try this at home with 60-Hz AC!) Some of the spectacular demonstrations of electricity, in which high-voltage arcs are passed through the air and over people’s bodies, employ high frequencies and low currents. (See .) Electrical safety devices and techniques are discussed in detail in Electrical Safety: Systems and Devices.

### Section Summary

1. The two types of electric hazards are thermal (excessive power) and shock (current through a person).
2. Shock severity is determined by current, path, duration, and AC frequency.
3. lists shock hazards as a function of current.
4. graphs the threshold current for two hazards as a function of frequency.

### Conceptual Questions

### Problem Exercises
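The dry-skin versus wet-skin estimate given earlier in this section is a direct application of $I = V/R$, as the short sketch below shows; the resistances are those quoted in the text.

```python
# Shock current I = V/R for the dry-skin and wet-skin cases from the text.
V = 120.0      # contact voltage, V (household AC)
R_dry = 200e3  # dry-skin resistance, ohms
R_wet = 10e3   # soaking-wet resistance, ohms

for label, R in (("dry", R_dry), ("wet", R_wet)):
    I_mA = V / R * 1000  # current in milliamps
    print(f"{label}: I = {I_mA:.1f} mA")  # 0.6 mA (harmless) vs 12 mA ("can't let go")
```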
# Electric Current, Resistance, and Ohm's Law

## Nerve Conduction–Electrocardiograms

### Learning Objectives

By the end of this section, you will be able to:
1. Explain the process by which electric signals are transmitted along a neuron.
2. Explain the effects myelin sheaths have on signal propagation.
3. Explain what the features of an ECG signal indicate.

### Nerve Conduction

Electric currents in the vastly complex system of billions of nerves in our body allow us to sense the world, control parts of our body, and think. These are representative of the three major functions of nerves. First, nerves carry messages from our sensory organs and others to the central nervous system, consisting of the brain and spinal cord. Second, nerves carry messages from the central nervous system to muscles and other organs. Third, nerves transmit and process signals within the central nervous system. The sheer number of nerve cells and the incredibly greater number of connections between them makes this system the subtle wonder that it is. Nerve conduction is a general term for electrical signals carried by nerve cells. It is one aspect of bioelectricity, or electrical effects in and created by biological systems.

Nerve cells, properly called neurons, look different from other cells—they have tendrils, some of them many centimeters long, connecting them with other cells. (See .) Signals arrive at the cell body across synapses or through dendrites, stimulating the neuron to generate its own signal, sent along its long axon to other nerve or muscle cells. Signals may arrive from many other locations and be transmitted to yet others, conditioning the synapses by use, giving the system its complexity and its ability to learn.

The method by which these electric currents are generated and transmitted is more complex than the simple movement of free charges in a conductor, but it can be understood with principles already discussed in this text. The most important of these are the Coulomb force and diffusion. illustrates how a voltage (potential difference) is created across the cell membrane of a neuron in its resting state. This thin membrane separates electrically neutral fluids having differing concentrations of ions, the most important varieties being $\text{Na}^+$, $\text{K}^+$, and $\text{Cl}^-$ (these are sodium, potassium, and chlorine ions with single plus or minus charges as indicated). As discussed in Molecular Transport Phenomena: Diffusion, Osmosis, and Related Processes, free ions will diffuse from a region of high concentration to one of low concentration. But the cell membrane is semipermeable, meaning that some ions may cross it while others cannot. In its resting state, the cell membrane is permeable to $\text{K}^+$ and $\text{Cl}^-$, and impermeable to $\text{Na}^+$. Diffusion of $\text{K}^+$ and $\text{Cl}^-$ thus creates the layers of positive and negative charge on the outside and inside of the membrane. The Coulomb force prevents the ions from diffusing across in their entirety. Once the charge layer has built up, the repulsion of like charges prevents more from moving across, and the attraction of unlike charges prevents more from leaving either side. The result is two layers of charge right on the membrane, with diffusion being balanced by the Coulomb force. A tiny fraction of the charges move across and the fluids remain neutral (other ions are present), while a separation of charge and a voltage have been created across the membrane. The separation of charge creates a potential difference of 70 to 90 mV across the cell membrane.
While this is a small voltage, the resulting electric field ($E = V/d$) across the only 8-nm-thick membrane is immense (on the order of 11 MV/m!) and has fundamental effects on its structure and permeability. Now, if the exterior of a neuron is taken to be at 0 V, then the interior has a resting potential of about –90 mV. Such voltages are created across the membranes of almost all types of animal cells but are largest in nerve and muscle cells. In fact, fully 25% of the energy used by cells goes toward creating and maintaining these potentials.

Electric currents along the cell membrane are created by any stimulus that changes the membrane’s permeability. The membrane thus temporarily becomes permeable to $\text{Na}^+$, which then rushes in, driven both by diffusion and the Coulomb force. This inrush of $\text{Na}^+$ first neutralizes the inside membrane, or depolarizes it, and then makes it slightly positive. The depolarization causes the membrane to again become impermeable to $\text{Na}^+$, and the movement of $\text{K}^+$ quickly returns the cell to its resting potential, or repolarizes it. This sequence of events results in a voltage pulse, called the action potential. (See .) Only small fractions of the ions move, so that the cell can fire many hundreds of times without depleting the excess concentrations of $\text{Na}^+$ and $\text{K}^+$. Eventually, the cell must replenish these ions to maintain the concentration differences that create bioelectricity. This sodium-potassium pump is an example of active transport, wherein cell energy is used to move ions across membranes against diffusion gradients and the Coulomb force.

The action potential is a voltage pulse at one location on a cell membrane. How does it get transmitted along the cell membrane, and in particular down an axon, as a nerve impulse? The answer is that the changing voltage and electric fields affect the permeability of the adjacent cell membrane, so that the same process takes place there. The adjacent membrane depolarizes, affecting the membrane further down, and so on, as illustrated in . Thus the action potential stimulated at one location triggers a nerve impulse that moves slowly (about 1 m/s) along the cell membrane.

Some axons, like that in , are sheathed with myelin, consisting of fat-containing cells. shows an enlarged view of an axon having myelin sheaths characteristically separated by unmyelinated gaps (called nodes of Ranvier). This arrangement gives the axon a number of interesting properties. Since myelin is an insulator, it prevents signals from jumping between adjacent nerves (cross talk). Additionally, the myelinated regions transmit electrical signals at a very high speed, as an ordinary conductor or resistor would. There is no action potential in the myelinated regions, so that no cell energy is used in them. There is an $IR$ signal loss in the myelin, but the signal is regenerated in the gaps, where the voltage pulse triggers the action potential at full voltage. So a myelinated axon transmits a nerve impulse faster, with less energy consumption, and is better protected from cross talk than an unmyelinated one. Not all axons are myelinated, so that cross talk and slow signal transmission are a characteristic of the normal operation of these axons, another variable in the nervous system.

The degeneration or destruction of the myelin sheaths that surround the nerve fibers impairs signal transmission and can lead to numerous neurological effects.
One of the most prominent of these diseases comes from the body’s own immune system attacking the myelin in the central nervous system—multiple sclerosis. MS symptoms include fatigue, vision problems, weakness of arms and legs, loss of balance, and tingling or numbness in one’s extremities (neuropathy). It is more apt to strike younger adults, especially females. Causes might come from infection, environmental or geographic effects, or genetics. At the moment there is no known cure for MS.

Most animal cells can fire or create their own action potential. Muscle cells contract when they fire and are often induced to do so by a nerve impulse. In fact, nerve and muscle cells are physiologically similar, and there are even hybrid cells, such as in the heart, that have characteristics of both nerves and muscles. Some animals, like the infamous electric eel (see ), use muscles ganged so that their voltages add in order to create a shock great enough to stun prey.

### Electrocardiograms

Just as nerve impulses are transmitted by depolarization and repolarization of adjacent membrane, the depolarization that causes muscle contraction can also stimulate adjacent muscle cells to depolarize (fire) and contract. Thus, a depolarization wave can be sent across the heart, coordinating its rhythmic contractions and enabling it to perform its vital function of propelling blood through the circulatory system. is a simplified graphic of a depolarization wave spreading across the heart from the sinoatrial (SA) node, the heart’s natural pacemaker.

An electrocardiogram (ECG) is a record of the voltages created by the wave of depolarization and subsequent repolarization in the heart. (They are also abbreviated EKG.) Voltages between pairs of electrodes placed on the chest are vector components of the voltage wave on the heart. Standard ECGs have 12 or more electrodes, but only three are shown in for clarity. Decades ago, three-electrode ECGs were performed by placing electrodes on the left and right arms and the left leg. The voltage between the right arm and the left leg is called the lead II potential and is the most often graphed. We shall examine the lead II potential as an indicator of heart-muscle function and see that it is coordinated with arterial blood pressure as well.

Heart function and its four-chamber action are explored in Viscosity and Laminar Flow; Poiseuille’s Law. Basically, the right and left atria receive blood from the body and lungs, respectively, and pump the blood into the ventricles. The right and left ventricles, in turn, pump blood through the lungs and the rest of the body, respectively. Depolarization of the heart muscle causes it to contract. After contraction it is repolarized to ready it for the next beat. The ECG measures components of depolarization and repolarization of the heart muscle and can yield significant information on the functioning and malfunctioning of the heart.

shows an ECG of the lead II potential and a graph of the corresponding arterial blood pressure. The major features are labeled P, Q, R, S, and T. The P wave is generated by the depolarization and contraction of the atria as they pump blood into the ventricles. The QRS complex is created by the depolarization of the ventricles as they pump blood to the lungs and body. Since the shape of the heart and the path of the depolarization wave are not simple, the QRS complex has this typical shape and time span. The lead II QRS signal also masks the repolarization of the atria, which occurs at the same time.
Finally, the T wave is generated by the repolarization of the ventricles and is followed by the next P wave in the next heartbeat. Arterial blood pressure varies with each part of the heartbeat, with systolic (maximum) pressure occurring closely after the QRS complex, which signals contraction of the ventricles.

Taken together, the 12 leads of a state-of-the-art ECG can yield a wealth of information about the heart. For example, regions of damaged heart tissue, called infarcts, reflect electrical waves and are apparent in one or more lead potentials. Subtle changes due to slight or gradual damage to the heart are most readily detected by comparing a recent ECG to an older one. This is particularly the case since individual heart shape, size, and orientation can cause variations in ECGs from one individual to another. ECG technology has advanced to the point where a portable ECG monitor can be incorporated into wearable devices and other small objects. See .

### Section Summary

1. Electric potentials in neurons and other cells are created by ionic concentration differences across semipermeable membranes.
2. Stimuli change the permeability and create action potentials that propagate along neurons.
3. Myelin sheaths speed this process and reduce the needed energy input.
4. This process in the heart can be measured with an electrocardiogram (ECG).

### Conceptual Questions

### Problems & Exercises
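The membrane electric field quoted in the nerve-conduction discussion above is easy to verify with $E = V/d$; a minimal sketch using the 90-mV potential and 8-nm thickness from the text:

```python
# Electric field across the cell membrane: E = V/d.
V = 90e-3  # magnitude of the resting potential, V
d = 8e-9   # membrane thickness, m

E = V / d  # field strength, V/m
print(f"E = {E:.1e} V/m")  # ~1.1e7 V/m, i.e. about 11 MV/m, as stated in the text
```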
# Circuits and DC Instruments

## Introduction to Circuits and DC Instruments

Electric circuits are commonplace. Some are simple, such as those in flashlights. Others, such as those used in supercomputers, are extremely complex. This collection of modules takes the topic of electric circuits a step beyond simple circuits. When the circuit is purely resistive, everything in this module applies to both DC and AC. Matters become more complex when capacitance is involved. We do consider what happens when capacitors are connected to DC voltage sources, but the interaction of capacitors and other nonresistive devices with AC is left for a later chapter. Finally, a number of important DC instruments, such as meters that measure voltage and current, are covered in this chapter.
# Circuits and DC Instruments

## Resistors in Series and Parallel

### Learning Objectives

By the end of this section, you will be able to:
1. Draw a circuit with resistors in parallel and in series.
2. Calculate the voltage drop of a current across a resistor using Ohm’s law.
3. Contrast the way total resistance is calculated for resistors in series and in parallel.
4. Explain why total resistance of a parallel circuit is less than the smallest resistance of any of the resistors in that circuit.
5. Calculate total resistance of a circuit that contains a mixture of resistors connected in series and in parallel.

Most circuits have more than one component, called a resistor, that limits the flow of charge in the circuit. A measure of this limit on charge flow is called resistance. The simplest combinations of resistors are the series and parallel connections illustrated in . The total resistance of a combination of resistors depends on both their individual values and how they are connected.

### Resistors in Series

When are resistors in series? Resistors are in series whenever the flow of charge, called the current, must flow through devices sequentially. For example, if current flows through a person holding a screwdriver and into the Earth, then $R_1$ in (a) could be the resistance of the screwdriver’s shaft, $R_2$ the resistance of its handle, $R_3$ the person’s body resistance, and $R_4$ the resistance of her shoes.

shows resistors in series connected to a voltage source. It seems reasonable that the total resistance is the sum of the individual resistances, considering that the current has to pass through each resistor in sequence. (This fact would be an advantage to a person wishing to avoid an electrical shock, who could reduce the current by wearing high-resistance rubber-soled shoes. It could be a disadvantage if one of the resistances were a faulty high-resistance cord to an appliance that would reduce the operating current.)

To verify that resistances in series do indeed add, let us consider the loss of electrical power, called a voltage drop, in each resistor in . According to Ohm’s law, the voltage drop, $V$, across a resistor when a current flows through it is calculated using the equation $V = IR$, where $I$ equals the current in amps (A) and $R$ is the resistance in ohms ($\Omega$). Another way to think of this is that $V$ is the voltage necessary to make a current $I$ flow through a resistance $R$. So the voltage drop across $R_1$ is $V_1 = IR_1$, that across $R_2$ is $V_2 = IR_2$, and that across $R_3$ is $V_3 = IR_3$. The sum of these voltages equals the voltage output of the source; that is,

$$V = V_1 + V_2 + V_3.$$

This equation is based on the conservation of energy and conservation of charge. Electrical potential energy can be described by the equation $\text{PE} = qV$, where $q$ is the electric charge and $V$ is the voltage. Thus the energy supplied by the source is $qV$, while that dissipated by the resistors is

$$qV_1 + qV_2 + qV_3.$$

These energies must be equal, because there is no other source and no other destination for energy in the circuit. Thus, $qV = qV_1 + qV_2 + qV_3$. The charge $q$ cancels, yielding $V = V_1 + V_2 + V_3$, as stated. (Note that the same amount of charge passes through the battery and each resistor in a given amount of time, since there is no capacitance to store charge, there is no place for charge to leak, and charge is conserved.)

Now substituting the values for the individual voltages gives

$$V = IR_1 + IR_2 + IR_3 = I(R_1 + R_2 + R_3).$$

Note that for the equivalent single series resistance $R_s$, we have

$$V = IR_s.$$

This implies that the total or equivalent series resistance of three resistors is $R_s = R_1 + R_2 + R_3$. This logic is valid in general for any number of resistors in series; thus, the total resistance of a series connection is

$$R_s = R_1 + R_2 + R_3 + \dots,$$

as proposed.
Since all of the current must pass through each resistor, it experiences the resistance of each, and resistances in series simply add up.

### Resistors in Parallel

shows resistors in parallel, wired to a voltage source. Resistors are in parallel when each resistor is connected directly to the voltage source by connecting wires having negligible resistance. Each resistor thus has the full voltage of the source applied to it. Each resistor draws the same current it would if it alone were connected to the voltage source (provided the voltage source is not overloaded). For example, an automobile’s headlights, radio, and so on, are wired in parallel, so that they utilize the full voltage of the source and can operate completely independently. The same is true in your house, or any building. (See (b).)

To find an expression for the equivalent parallel resistance $R_p$, let us consider the currents that flow and how they are related to resistance. Since each resistor in the circuit has the full voltage, the currents flowing through the individual resistors are $I_1 = \frac{V}{R_1}$, $I_2 = \frac{V}{R_2}$, and $I_3 = \frac{V}{R_3}$. Conservation of charge implies that the total current $I$ produced by the source is the sum of these currents:

$$I = I_1 + I_2 + I_3.$$

Substituting the expressions for the individual currents gives

$$I = \frac{V}{R_1} + \frac{V}{R_2} + \frac{V}{R_3} = V\left(\frac{1}{R_1} + \frac{1}{R_2} + \frac{1}{R_3}\right).$$

Note that Ohm’s law for the equivalent single resistance gives

$$I = \frac{V}{R_p} = V\left(\frac{1}{R_p}\right).$$

The terms inside the parentheses in the last two equations must be equal. Generalizing to any number of resistors, the total resistance $R_p$ of a parallel connection is related to the individual resistances by

$$\frac{1}{R_p} = \frac{1}{R_1} + \frac{1}{R_2} + \frac{1}{R_3} + \dots.$$

This relationship results in a total resistance $R_p$ that is less than the smallest of the individual resistances. (This is seen in the next example.) When resistors are connected in parallel, more current flows from the source than would flow for any of them individually, and so the total resistance is lower.

### Combinations of Series and Parallel

More complex connections of resistors are sometimes just combinations of series and parallel. These are commonly encountered, especially when wire resistance is considered. In that case, wire resistance is in series with other resistances that are in parallel. Combinations of series and parallel can be reduced to a single equivalent resistance using the technique illustrated in . Various parts are identified as either series or parallel, reduced to their equivalents, and further reduced until a single resistance is left. The process is more time consuming than difficult.

The simplest combination of series and parallel resistance, shown in , is also the most instructive, since it is found in many applications. For example, $R_1$ could be the resistance of wires from a car battery to its electrical devices, which are in parallel. $R_2$ and $R_3$ could be the starter motor and a passenger compartment light. We have previously assumed that wire resistance is negligible, but, when it is not, it has important effects, as the next example indicates.

### Practical Implications

One implication of this last example is that resistance in wires reduces the current and power delivered to a resistor. If wire resistance is relatively large, as in a worn (or a very long) extension cord, then this loss can be significant. If a large current is drawn, the $IR$ drop in the wires can also be significant. For example, when you are rummaging in the refrigerator and the motor comes on, the refrigerator light dims momentarily. Similarly, you can see the passenger compartment light dim when you start the engine of your car (although this may be due to resistance inside the battery itself).
What is happening in these high-current situations is illustrated in . The device represented by $R_3$ has a very low resistance, and so when it is switched on, a large current flows. This increased current causes a larger $IR$ drop in the wires represented by $R_1$, reducing the voltage across the light bulb (which is $R_2$), which then dims noticeably.

### Test Prep for AP Courses

### Section Summary

1. The total resistance of an electrical circuit with resistors wired in a series is the sum of the individual resistances: $R_s = R_1 + R_2 + R_3 + \dots$
2. Each resistor in a series circuit has the same amount of current flowing through it.
3. The voltage drop, or power dissipation, across each individual resistor in a series is different, and their combined total adds up to the power source input.
4. The total resistance of an electrical circuit with resistors wired in parallel is less than the lowest resistance of any of the components and can be determined using the formula: $\frac{1}{R_p} = \frac{1}{R_1} + \frac{1}{R_2} + \frac{1}{R_3} + \dots$
5. Each resistor in a parallel circuit has the same full voltage of the source applied to it.
6. The current flowing through each resistor in a parallel circuit is different, depending on the resistance.
7. If a more complex connection of resistors is a combination of series and parallel, it can be reduced to a single equivalent resistance by identifying its various parts as series or parallel, reducing each to its equivalent, and continuing until a single resistance is eventually reached.

### Conceptual Questions

### Problem Exercises

Note: Data taken from figures can be assumed to be accurate to three significant digits.
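The series and parallel formulas from this section translate directly into two small helper functions. The resistor values in the demonstration are assumed, chosen only to show that the parallel result falls below the smallest individual resistance.

```python
# Equivalent resistance helpers: R_s = R1 + R2 + ... for series,
# and 1/R_p = 1/R1 + 1/R2 + ... for parallel.
def series(*resistances):
    """Equivalent resistance of resistors in series."""
    return sum(resistances)

def parallel(*resistances):
    """Equivalent resistance of resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in resistances)

R1, R2, R3 = 1.0, 6.0, 13.0  # ohms (assumed values)
print(f"series:   {series(R1, R2, R3):.2f} ohm")    # 20.00 ohm
print(f"parallel: {parallel(R1, R2, R3):.3f} ohm")  # 0.804 ohm, below the smallest R
```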
# Circuits and DC Instruments

## Electromotive Force: Terminal Voltage

### Learning Objectives

By the end of this section, you will be able to:
1. Compare and contrast the voltage and the electromotive force of an electric power source.
2. Describe what happens to the terminal voltage, current, and power delivered to a load as internal resistance of the voltage source increases (due to aging of batteries, for example).
3. Explain why it is beneficial to use more than one voltage source connected in parallel.

When you forget to turn off your car lights, they slowly dim as the battery runs down. Why don’t they simply blink off when the battery’s energy is gone? Their gradual dimming implies that battery output voltage decreases as the battery is depleted. Furthermore, if you connect an excessive number of 12-V lights in parallel to a car battery, they will be dim even when the battery is fresh and even if the wires to the lights have very low resistance. This implies that the battery’s output voltage is reduced by the overload. The reason for the decrease in output voltage for depleted or overloaded batteries is that all voltage sources have two fundamental parts—a source of electrical energy and an internal resistance. Let us examine both.

### Electromotive Force

You can think of many different types of voltage sources. Batteries themselves come in many varieties. There are many types of mechanical/electrical generators, driven by many different energy sources, ranging from nuclear to wind. Solar cells create voltages directly from light, while thermoelectric devices create voltage from temperature differences. A few voltage sources are shown in . All such devices create a potential difference and can supply current if connected to a resistance. On the small scale, the potential difference creates an electric field that exerts force on charges, causing current. We thus use the name electromotive force, abbreviated emf. Emf is not a force at all; it is a special type of potential difference. To be precise, the electromotive force (emf) is the potential difference of a source when no current is flowing. Units of emf are volts.

Electromotive force is directly related to the source of potential difference, such as the particular combination of chemicals in a battery. However, emf differs from the voltage output of the device when current flows. The voltage across the terminals of a battery, for example, is less than the emf when the battery supplies current, and it declines further as the battery is depleted or loaded down. However, if the device’s output voltage can be measured without drawing current, then output voltage will equal emf (even for a very depleted battery).

### Internal Resistance

As noted before, a 12-V truck battery is physically larger, contains more charge and energy, and can deliver a larger current than a 12-V motorcycle battery. Both are lead-acid batteries with identical emf, but, because of its size, the truck battery has a smaller internal resistance $r$. Internal resistance is the inherent resistance to the flow of current within the source itself. is a schematic representation of the two fundamental parts of any voltage source. The emf (represented by a script E in the figure) and internal resistance $r$ are in series. The smaller the internal resistance for a given emf, the more current and the more power the source can supply.

The internal resistance $r$ can behave in complex ways. As noted, $r$ increases as a battery is depleted.
But internal resistance may also depend on the magnitude and direction of the current through a voltage source, its temperature, and even its history. The internal resistance of rechargeable nickel-cadmium cells, for example, depends on how many times and how deeply they have been depleted.

Why are the chemicals able to produce a unique potential difference? Quantum mechanical descriptions of molecules, which take into account the types of atoms and numbers of electrons in them, are able to predict the energy states they can have and the energies of reactions between them. In the case of a lead-acid battery, an energy of 2 eV is given to each electron sent to the anode. Voltage is defined as the electrical potential energy divided by charge: $V = \frac{\text{PE}}{q}$. An electron volt is the energy given to a single electron by a voltage of 1 V. So the voltage here is 2 V, since 2 eV is given to each electron. It is the energy produced in each molecular reaction that produces the voltage. A different reaction produces a different energy and, hence, a different voltage.

### Terminal Voltage

The voltage output of a device is measured across its terminals and, thus, is called its terminal voltage $V$. Terminal voltage is given by

$$V = \text{emf} - Ir,$$

where $r$ is the internal resistance and $I$ is the current flowing at the time of the measurement. $I$ is positive if current flows away from the positive terminal, as shown in . You can see that the larger the current, the smaller the terminal voltage. And it is likewise true that the larger the internal resistance, the smaller the terminal voltage.

Suppose a load resistance $R_{\text{load}}$ is connected to a voltage source, as in . Since the resistances are in series, the total resistance in the circuit is $R_{\text{load}} + r$. Thus the current is given by Ohm’s law to be

$$I = \frac{\text{emf}}{R_{\text{load}} + r}.$$

We see from this expression that the smaller the internal resistance $r$, the greater the current the voltage source supplies to its load $R_{\text{load}}$. As batteries are depleted, $r$ increases. If $r$ becomes a significant fraction of the load resistance, then the current is significantly reduced, as the following example illustrates.

Battery testers, such as those in , use small load resistors to intentionally draw current to determine whether the terminal voltage drops below an acceptable level. They really test the internal resistance of the battery. If internal resistance is high, the battery is weak, as evidenced by its low terminal voltage.

Some batteries can be recharged by passing a current through them in the direction opposite to the current they supply to a resistance. This is done routinely in cars and batteries for small electrical appliances and electronic devices, and is represented pictorially in . The voltage output of the battery charger must be greater than the emf of the battery to reverse current through it. This will cause the terminal voltage of the battery to be greater than the emf, since $V = \text{emf} - Ir$, and $I$ is now negative.

### Multiple Voltage Sources

There are two voltage sources when a battery charger is used. Voltage sources connected in series are relatively simple. When voltage sources are in series, their internal resistances add and their emfs add algebraically. (See .) Series connections of voltage sources are common—for example, in flashlights, toys, and other appliances. Usually, the cells are in series in order to produce a larger total emf. But if the cells oppose one another, such as when one is put into an appliance backward, the total emf is less, since it is the algebraic sum of the individual emfs. A battery is a multiple connection of voltaic cells, as shown in .
The disadvantage of series connections of cells is that their internal resistances add. One of the authors once owned a 1957 MGA that had two 6-V batteries in series, rather than a single 12-V battery. This arrangement produced a large internal resistance that caused him many problems in starting the engine.

If the series connection of two voltage sources is made into a complete circuit with the emfs in opposition, then a current of magnitude

$$I = \frac{\text{emf}_1 - \text{emf}_2}{r_1 + r_2}$$

flows. See , for example, which shows a circuit exactly analogous to the battery charger discussed above. If two voltage sources in series with emfs in the same sense are connected to a load $R_{\text{load}}$, as in , then

$$I = \frac{\text{emf}_1 + \text{emf}_2}{r_1 + r_2 + R_{\text{load}}}$$

flows.

 shows two voltage sources with identical emfs in parallel and connected to a load resistance. In this simple case, the total emf is the same as the individual emfs. But the total internal resistance is reduced, since the internal resistances are in parallel. The parallel connection thus can produce a larger current. Here, $I = \frac{\text{emf}}{r_{\text{tot}} + R_{\text{load}}}$ flows through the load, and the total internal resistance $r_{\text{tot}}$ is less than those of the individual batteries. For example, some diesel-powered cars use two 12-V batteries in parallel; they produce a total emf of 12 V but can deliver the larger current needed to start a diesel engine.

### Animals as Electrical Detectors

A number of animals both produce and detect electrical signals. Fish, sharks, platypuses, and echidnas (spiny anteaters) all detect electric fields generated by nerve activity in prey. Electric eels produce their own emf through biological cells (electric organs) called electroplaques, which are arranged in both series and parallel as a set of batteries. Electroplaques are flat, disk-like cells; those of the electric eel have a voltage of 0.15 V across each one. These cells are usually located toward the head or tail of the animal, although in the case of the electric eel, they are found along the entire body. The electroplaques in the South American eel are arranged in 140 rows, with each row stretching horizontally along the body and containing 5,000 electroplaques. This can yield an emf of approximately 600 V, and a current of 1 A—deadly.

The mechanism for detection of external electric fields is similar to that for producing nerve signals in the cell through depolarization and repolarization—the movement of ions across the cell membrane. Within the fish, weak electric fields in the water produce a current in a gel-filled canal that runs from the skin to sensing cells, producing a nerve signal. The Australian platypus, one of the very few mammals that lay eggs, can detect fields of 30 , while sharks have been found to be able to sense a field in their snouts as small as 100 (). Electric eels use their own electric fields produced by the electroplaques to stun their prey or enemies.

### Solar Cell Arrays

Another example dealing with multiple voltage sources is that of combinations of solar cells—wired in both series and parallel combinations to yield a desired voltage and current. Photovoltaic generation (PV), the conversion of sunlight directly into electricity, is based upon the photoelectric effect, in which photons hitting the surface of a solar cell create an electric current in the cell. Most solar cells are made from pure silicon—either as single-crystal silicon, or as a thin film of silicon deposited upon a glass or metal backing. Most single cells have a voltage output of about 0.5 V, while the current output is a function of the amount of sunlight upon the cell (the incident solar radiation—the insolation).
Under bright noon sunlight, a current of about of cell surface area is produced by typical single-crystal cells. Individual solar cells are connected electrically in modules to meet electrical-energy needs. They can be wired together in series or in parallel—connected like the batteries discussed earlier. A solar-cell array or module usually consists of between 36 and 72 cells, with a power output of 50 W to 140 W. The output of the solar cells is direct current. For most uses in a home, AC is required, so a device called an inverter must be used to convert the DC to AC. Any extra output can then be passed on to the outside electrical grid for sale to the utility. ### Test Prep for AP Courses ### Section Summary 1. All voltage sources have two fundamental parts—a source of electrical energy that has a characteristic electromotive force (emf), and an internal resistance . 2. The emf is the potential difference of a source when no current is flowing. 3. The numerical value of the emf depends on the source of potential difference. 4. The internal resistance of a voltage source affects the output voltage when a current flows. 5. The voltage output of a device is called its terminal voltage and is given by , where is the electric current and is positive when flowing away from the positive terminal of the voltage source. 6. When multiple voltage sources are in series, their internal resistances add and their emfs add algebraically. 7. Solar cells can be wired in series or parallel to provide increased voltage or current, respectively. ### Conceptual Questions ### Problem Exercises
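The series/parallel arithmetic for solar modules lends itself to a short sketch. The per-cell voltage and typical cell count come from the text above; the per-cell current and the number of parallel strings are assumed for illustration.

```python
# Sketch of solar-module sizing: series cells add voltage,
# parallel strings add current (values partly assumed).

cell_voltage = 0.5    # volts per cell (from the text)
cell_current = 3.0    # amperes per cell in bright sun (assumed)

n_series = 36         # cells per string, a typical module (from the text)
n_parallel = 2        # parallel strings (assumed)

module_voltage = n_series * cell_voltage      # series: voltages add
module_current = n_parallel * cell_current    # parallel: currents add
module_power = module_voltage * module_current
print(f"module: {module_voltage:.1f} V, {module_current:.1f} A, "
      f"{module_power:.0f} W")
```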
# Circuits and DC Instruments

## Kirchhoff’s Rules

### Learning Objectives

By the end of this section, you will be able to:
1. Analyze a complex circuit using Kirchhoff’s rules, using the conventions for determining the correct signs of various terms.

Many complex circuits, such as the one in , cannot be analyzed with the series-parallel techniques developed in Resistors in Series and Parallel and Electromotive Force: Terminal Voltage. There are, however, two circuit analysis rules that can be used to analyze any circuit, simple or complex. These rules are special cases of the laws of conservation of charge and conservation of energy. The rules are known as Kirchhoff’s rules, after their inventor Gustav Kirchhoff (1824–1887). Explanations of the two rules will now be given, followed by problem-solving hints for applying Kirchhoff’s rules, and a worked example that uses them.

### Kirchhoff’s First Rule

Kirchhoff’s first rule (the junction rule) is an application of the conservation of charge to a junction; it is illustrated in . Current is the flow of charge, and charge is conserved; thus, whatever charge flows into the junction must flow out. Kirchhoff’s first rule requires that

$$I_1 = I_2 + I_3$$

(see figure). Equations like this can and will be used to analyze circuits and to solve circuit problems.

### Kirchhoff’s Second Rule

Kirchhoff’s second rule (the loop rule) is an application of conservation of energy. The loop rule is stated in terms of potential, $V$, rather than potential energy, but the two are related since $\text{PE}_{\text{elec}} = qV$. Recall that emf is the potential difference of a source when no current is flowing. In a closed loop, whatever energy is supplied by emf must be transferred into other forms by devices in the loop, since there are no other ways in which energy can be transferred into or out of the circuit. illustrates the changes in potential in a simple series circuit loop. Kirchhoff’s second rule requires

$$\text{emf} - Ir - IR_1 - IR_2 = 0.$$

Rearranged, this is $\text{emf} = Ir + IR_1 + IR_2$, which means the emf equals the sum of the $IR$ (voltage) drops in the loop.

### Applying Kirchhoff’s Rules

By applying Kirchhoff’s rules, we generate equations that allow us to find the unknowns in circuits. The unknowns may be currents, emfs, or resistances. Each time a rule is applied, an equation is produced. If there are as many independent equations as unknowns, then the problem can be solved. There are two decisions you must make when applying Kirchhoff’s rules. These decisions determine the signs of various quantities in the equations you obtain from applying the rules.

1. When applying Kirchhoff’s first rule, the junction rule, you must label the current in each branch and decide in what direction it is going. For example, in , , and , currents are labeled $I_1$, $I_2$, $I_3$, and $I$, and arrows indicate their directions. There is no risk here, for if you choose the wrong direction, the current will be of the correct magnitude but negative.
2. When applying Kirchhoff’s second rule, the loop rule, you must identify a closed loop and decide in which direction to go around it, clockwise or counterclockwise. For example, in the loop was traversed in the same direction as the current (clockwise). Again, there is no risk; going around the circuit in the opposite direction reverses the sign of every term in the equation, which is like multiplying both sides of the equation by $-1$. The following points will help you get the plus or minus signs right when applying the loop rule. Note that the resistors and emfs are traversed by going from a to b.
In many circuits, it will be necessary to construct more than one loop. In traversing each loop, one needs to be consistent for the sign of the change in potential. (See .)

1. When a resistor is traversed in the same direction as the current, the change in potential is $-IR$. (See .)
2. When a resistor is traversed in the direction opposite to the current, the change in potential is $+IR$. (See .)
3. When an emf is traversed from $-$ to $+$ (the same direction it moves positive charge), the change in potential is $+\text{emf}$. (See .)
4. When an emf is traversed from $+$ to $-$ (opposite to the direction it moves positive charge), the change in potential is $-\text{emf}$. (See .)

The material in this section is correct in theory. We should be able to verify it by making measurements of current and voltage. In fact, some of the devices used to make such measurements are straightforward applications of the principles covered so far and are explored in the next modules. As we shall see, a very basic, even profound, fact results—making a measurement alters the quantity being measured.

### Test Prep for AP Courses

### Section Summary

1. Kirchhoff’s rules can be used to analyze any circuit, simple or complex.
2. Kirchhoff’s first rule—the junction rule: The sum of all currents entering a junction must equal the sum of all currents leaving the junction.
3. Kirchhoff’s second rule—the loop rule: The algebraic sum of changes in potential around any closed circuit path (loop) must be zero.
4. The two rules are based, respectively, on the laws of conservation of charge and energy.
5. When calculating potential and current using Kirchhoff’s rules, a set of conventions must be followed for determining the correct signs of various terms.
6. The simpler series and parallel rules are special cases of Kirchhoff’s rules.

### Conceptual Questions

### Problem Exercises
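Because each application of a rule produces one linear equation, Kirchhoff analysis reduces to solving a small linear system, which is easy to check numerically. The sketch below applies the junction rule and two loop equations to a hypothetical two-loop circuit; every component value and the circuit layout are assumptions made for this illustration, not an example from this text.

```python
# Sketch: Kirchhoff's rules as linear algebra for an assumed circuit.
# Layout: three branches join the same top and bottom nodes.
#   Branch 1: source 1 (emf1, r1) in series with R1; I1 defined upward
#             into the top node (emf1's + terminal faces the top node).
#   Branch 2: resistor R2; I2 defined downward.
#   Branch 3: source 2 (emf2, r2) in series with R3; I3 defined downward
#             (emf2's + terminal also faces the top node, opposing I3).
import numpy as np

emf1, emf2 = 18.0, 45.0        # volts (assumed)
r1, r2 = 0.5, 0.25             # internal resistances, ohms (assumed)
R1, R2, R3 = 6.0, 2.5, 1.5     # load resistances, ohms (assumed)

A = np.array([
    [1.0, -1.0, -1.0],          # junction rule: I1 - I2 - I3 = 0
    [-(r1 + R1), -R2, 0.0],     # loop 1: emf1 - I1*(r1+R1) - I2*R2 = 0
    [0.0, R2, -(R3 + r2)],      # loop 2: I2*R2 - I3*(R3+r2) - emf2 = 0
])
b = np.array([0.0, -emf1, emf2])

I1, I2, I3 = np.linalg.solve(A, b)
print(I1, I2, I3)  # a negative result means the true current direction
                   # is opposite to the direction assumed in the labels
```

Running this gives $I_1 = -1.125$ A and $I_3 = -11.25$ A: both labeled directions were guessed "wrong," and the negative signs say so, exactly as point 1 above promises.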
# Circuits and DC Instruments ## DC Voltmeters and Ammeters ### Learning Objectives By the end of this section, you will be able to: 1. Explain why a voltmeter must be connected in parallel with the circuit. 2. Draw a diagram showing an ammeter correctly connected in a circuit. 3. Describe how a galvanometer can be used as either a voltmeter or an ammeter. 4. Find the resistance that must be placed in series with a galvanometer to allow it to be used as a voltmeter with a given reading. 5. Explain why measuring the voltage or current in a circuit can never be exact. Voltmeters measure voltage, whereas ammeters measure current. Some of the meters in automobile dashboards, digital cameras, cell phones, and tuner-amplifiers are voltmeters or ammeters. (See .) The internal construction of the simplest of these meters and how they are connected to the system they monitor give further insight into applications of series and parallel connections. Voltmeters are connected in parallel with whatever device’s voltage is to be measured. A parallel connection is used because objects in parallel experience the same potential difference. (See , where the voltmeter is represented by the symbol V.) Ammeters are connected in series with whatever device’s current is to be measured. A series connection is used because objects in series have the same current passing through them. (See , where the ammeter is represented by the symbol A.) ### Analog Meters: Galvanometers Analog meters have a needle that swivels to point at numbers on a scale, as opposed to digital meters, which have numerical readouts similar to a hand-held calculator. The heart of most analog meters is a device called a galvanometer, denoted by G. Current flow through a galvanometer, , produces a proportional needle deflection. (This deflection is due to the force of a magnetic field upon a current-carrying wire.) The two crucial characteristics of a given galvanometer are its resistance and current sensitivity. Current sensitivity is the current that gives a full-scale deflection of the galvanometer’s needle, the maximum current that the instrument can measure. For example, a galvanometer with a current sensitivity of has a maximum deflection of its needle when flows through it, reads half-scale when flows through it, and so on. If such a galvanometer has a resistance, then a voltage of only produces a full-scale reading. By connecting resistors to this galvanometer in different ways, you can use it as either a voltmeter or ammeter that can measure a broad range of voltages or currents. ### Galvanometer as Voltmeter shows how a galvanometer can be used as a voltmeter by connecting it in series with a large resistance, . The value of the resistance is determined by the maximum voltage to be measured. Suppose you want 10 V to produce a full-scale deflection of a voltmeter containing a galvanometer with a sensitivity. Then 10 V applied to the meter must produce a current of . The total resistance must be ( is so large that the galvanometer resistance, , is nearly negligible.) Note that 5 V applied to this voltmeter produces a half-scale deflection by producing a current through the meter, and so the voltmeter’s reading is proportional to voltage as desired. This voltmeter would not be useful for voltages less than about half a volt, because the meter deflection would be small and difficult to read accurately. For other voltage ranges, other resistances are placed in series with the galvanometer. Many meters have a choice of scales. 
That choice involves switching an appropriate resistance into series with the galvanometer.

### Galvanometer as Ammeter

The same galvanometer can also be made into an ammeter by placing it in parallel with a small resistance $R$, often called the shunt resistance, as shown in . Since the shunt resistance is small, most of the current passes through it, allowing an ammeter to measure currents much greater than those producing a full-scale deflection of the galvanometer.

Suppose, for example, an ammeter is needed that gives a full-scale deflection for 1.0 A, and contains the same galvanometer with its sensitivity. Since the shunt resistance $R$ and the galvanometer resistance $r$ are in parallel, the voltage across them is the same. These $IR$ drops are $I_G r = I_R R$, so that $\frac{I_G}{I_R} = \frac{R}{r}$. Solving for $R$, and noting that $I_G$ is the galvanometer’s full-scale sensitivity and $I_R$ is 0.999950 A, we have

$$R = r\,\frac{I_G}{I_R}.$$

### Taking Measurements Alters the Circuit

When you use a voltmeter or ammeter, you are connecting another resistor to an existing circuit and, thus, altering the circuit. Ideally, voltmeters and ammeters do not appreciably affect the circuit, but it is instructive to examine the circumstances under which they do or do not interfere. First, consider the voltmeter, which is always placed in parallel with the device being measured. Very little current flows through the voltmeter if its resistance is a few orders of magnitude greater than the device’s, and so the circuit is not appreciably affected. (See (a).) (A large resistance in parallel with a small one has a combined resistance essentially equal to the small one.) If, however, the voltmeter’s resistance is comparable to that of the device being measured, then the two in parallel have a smaller resistance, appreciably affecting the circuit. (See (b).) The voltage across the device is not the same as when the voltmeter is out of the circuit.

An ammeter is placed in series in the branch of the circuit being measured, so that its resistance adds to that branch. Normally, the ammeter’s resistance is very small compared with the resistances of the devices in the circuit, and so the extra resistance is negligible. (See (a).) However, if very small load resistances are involved, or if the ammeter is not as low in resistance as it should be, then the total series resistance is significantly greater, and the current in the branch being measured is reduced. (See (b).)

A practical problem can occur if the ammeter is connected incorrectly. If it were put in parallel with the resistor to measure the current in it, you could possibly damage the meter; the low resistance of the ammeter would allow most of the current in the circuit to go through the galvanometer, and this current would be larger since the effective resistance is smaller.

One solution to the problem of voltmeters and ammeters interfering with the circuits being measured is to use galvanometers with greater sensitivity. This allows construction of voltmeters with greater resistance and ammeters with smaller resistance than when less sensitive galvanometers are used. There are practical limits to galvanometer sensitivity, but it is possible to get analog meters that make measurements accurate to a few percent. Note that the inaccuracy comes from altering the circuit, not from a fault in the meter.

### Section Summary

1. Voltmeters measure voltage, and ammeters measure current.
2. A voltmeter is placed in parallel with the voltage source to receive full voltage and must have a large resistance to limit its effect on the circuit.
3. An ammeter is placed in series to get the full current flowing through a branch and must have a small resistance to limit its effect on the circuit.
4. Both can be based on the combination of a resistor and a galvanometer, a device that gives an analog reading of current.
5. Standard voltmeters and ammeters alter the circuit being measured and are thus limited in accuracy.

### Conceptual Questions

### Problem Exercises
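Both meter designs described above reduce to one-line calculations, sketched here. The galvanometer’s coil resistance and current sensitivity are assumed values (the text does not specify them); only the 1.0-A full-scale figure and the resulting 0.999950-A shunt current come from the text.

```python
# Sketch: turning one galvanometer into a voltmeter and an ammeter.

r_gal = 25.0     # galvanometer coil resistance, ohms (assumed)
I_sens = 50e-6   # full-scale current sensitivity, amperes (assumed)

# Voltmeter: a series resistance so that V_fullscale drives exactly I_sens.
V_fullscale = 10.0
R_series = V_fullscale / I_sens - r_gal
print(f"series resistance for a 10-V voltmeter: {R_series:.4e} ohm")

# Ammeter: a parallel shunt so that at I_fullscale only I_sens passes
# through the coil.  Equal voltages: I_sens * r_gal = (I_fullscale - I_sens) * R.
I_fullscale = 1.0
R_shunt = I_sens * r_gal / (I_fullscale - I_sens)
print(f"shunt resistance for a 1.0-A ammeter: {R_shunt:.4e} ohm")
```

With these assumed figures the voltmeter needs roughly $2\times 10^5\ \Omega$ in series while the ammeter needs about $1.25\times 10^{-3}\ \Omega$ in parallel, illustrating why voltmeters are high-resistance devices and ammeters low-resistance ones.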
# Circuits and DC Instruments

## Null Measurements

### Learning Objectives

By the end of this section, you will be able to:
1. Explain why a null measurement device is more accurate than a standard voltmeter or ammeter.
2. Demonstrate how a Wheatstone bridge can be used to accurately calculate the resistance in a circuit.

Standard measurements of voltage and current alter the circuit being measured, introducing uncertainties in the measurements. Voltmeters draw some extra current, whereas ammeters reduce current flow. Null measurements balance voltages so that there is no current flowing through the measuring device and, therefore, no alteration of the circuit being measured. Null measurements are generally more accurate but are also more complex than the use of standard voltmeters and ammeters, and they still have limits to their precision. In this module, we shall consider a few specific types of null measurements, because they are common and interesting, and they further illuminate principles of electric circuits.

### The Potentiometer

Suppose you wish to measure the emf of a battery. Consider what happens if you connect the battery directly to a standard voltmeter as shown in . (Once we note the problems with this measurement, we will examine a null measurement that improves accuracy.) As discussed before, the actual quantity measured is the terminal voltage $V$, which is related to the emf of the battery by $V = \text{emf} - Ir$, where $I$ is the current that flows and $r$ is the internal resistance of the battery.

The emf could be accurately calculated if $r$ were very accurately known, but it is usually not. If the current $I$ could be made zero, then $V = \text{emf}$, and so emf could be directly measured. However, standard voltmeters need a current to operate; thus, another technique is needed.

A potentiometer is a null measurement device for measuring potentials (voltages). (See .) A voltage source is connected to a resistor $R$, say a long wire, and passes a constant current through it. There is a steady drop in potential (an $IR$ drop) along the wire, so that a variable potential can be obtained by making contact at varying locations along the wire.

(b) shows an unknown $\text{emf}_x$ (represented by script $E_x$ in the figure) connected in series with a galvanometer. Note that $\text{emf}_x$ opposes the other voltage source. The location of the contact point (see the arrow on the drawing) is adjusted until the galvanometer reads zero. When the galvanometer reads zero, $\text{emf}_x = IR_x$, where $R_x$ is the resistance of the section of wire up to the contact point. Since no current flows through the galvanometer, none flows through the unknown emf, and so $\text{emf}_x$ is directly sensed.

Now, a very precisely known standard $\text{emf}_s$ is substituted for $\text{emf}_x$, and the contact point is adjusted until the galvanometer again reads zero, so that $\text{emf}_s = IR_s$. In both cases, no current passes through the galvanometer, and so the current $I$ through the long wire is the same. Upon taking the ratio $\frac{\text{emf}_x}{\text{emf}_s}$, $I$ cancels, giving

$$\frac{\text{emf}_x}{\text{emf}_s} = \frac{IR_x}{IR_s} = \frac{R_x}{R_s}.$$

Solving for $\text{emf}_x$ gives

$$\text{emf}_x = \text{emf}_s\,\frac{R_x}{R_s}.$$

Because a long uniform wire is used for $R$, the ratio of resistances $R_x/R_s$ is the same as the ratio of the lengths of wire that zero the galvanometer for each emf. The three quantities on the right-hand side of the equation are now known or measured, and $\text{emf}_x$ can be calculated. The uncertainty in this calculation can be considerably smaller than when using a voltmeter directly, but it is not zero. There is always some uncertainty in the ratio of resistances $R_x/R_s$ and in the standard $\text{emf}_s$. Furthermore, it is not possible to tell when the galvanometer reads exactly zero, which introduces error into both $R_x$ and $R_s$, and may also affect the current $I$.
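As a minimal sketch of the potentiometer method, the calculation below uses the length ratio in place of the resistance ratio, which is valid because the wire is uniform; the standard-cell emf and the two balancing lengths are assumed values.

```python
# Sketch of the potentiometer ratio method: emf_x = emf_s * (R_x / R_s),
# with wire lengths standing in for resistances on a uniform wire.

emf_s = 1.0186   # standard emf in volts (assumed value)
L_x = 47.5       # wire length (cm) that nulls the unknown emf (assumed)
L_s = 31.9       # wire length (cm) that nulls the standard (assumed)

emf_x = emf_s * (L_x / L_s)
print(f"unknown emf ~ {emf_x:.4f} V")
```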
### Resistance Measurements and the Wheatstone Bridge

There are a variety of so-called ohmmeters that purport to measure resistance. What the most common ohmmeters actually do is to apply a voltage to a resistance, measure the current, and calculate the resistance using Ohm’s law. Their readout is this calculated resistance. Two configurations for ohmmeters using standard voltmeters and ammeters are shown in . Such configurations are limited in accuracy, because the meters alter both the voltage applied to the resistor and the current that flows through it.

The Wheatstone bridge is a null measurement device for calculating resistance by balancing potential drops in a circuit. (See .) The device is called a bridge because the galvanometer forms a bridge between two branches. A variety of bridge devices are used to make null measurements in circuits. Resistors $R_1$ and $R_2$ are precisely known, while the arrow through $R_3$ indicates that it is a variable resistance. The value of $R_3$ can be precisely read. With the unknown resistance $R_x$ in the circuit, $R_3$ is adjusted until the galvanometer reads zero. The potential difference between points b and d is then zero, meaning that b and d are at the same potential. With no current running through the galvanometer, it has no effect on the rest of the circuit. So the branches abc and adc are in parallel, and each branch has the full voltage of the source. That is, the $IR$ drops along abc and adc are the same. Since b and d are at the same potential, the $IR$ drop along ad must equal the $IR$ drop along ab. Thus,

$$I_2 R_3 = I_1 R_1.$$

Again, since b and d are at the same potential, the $IR$ drop along dc must equal the $IR$ drop along bc. Thus,

$$I_2 R_x = I_1 R_2.$$

Taking the ratio of these last two expressions gives

$$\frac{I_2 R_3}{I_2 R_x} = \frac{I_1 R_1}{I_1 R_2}.$$

Canceling the currents and solving for $R_x$ yields

$$R_x = R_3\,\frac{R_2}{R_1}.$$

This equation is used to calculate the unknown resistance when current through the galvanometer is zero. This method can be very accurate (often to four significant digits), but it is limited by two factors. First, it is not possible to get the current through the galvanometer to be exactly zero. Second, there are always uncertainties in $R_1$, $R_2$, and $R_3$, which contribute to the uncertainty in $R_x$.

### Section Summary

1. Null measurement techniques achieve greater accuracy by balancing a circuit so that no current flows through the measuring device.
2. One such device, for determining voltage, is a potentiometer.
3. Another null measurement device, for determining resistance, is the Wheatstone bridge.
4. Other physical quantities can also be measured with null measurement techniques.

### Conceptual Questions

### Problem Exercises
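The balanced-bridge relation is itself a one-line calculation; this sketch uses assumed values for the known and variable resistors.

```python
# Sketch of the balanced Wheatstone bridge: R_x = R3 * (R2 / R1).

R1, R2 = 100.0, 1000.0   # precisely known resistors, ohms (assumed)
R3 = 252.7               # variable-resistor reading at balance, ohms (assumed)

R_x = R3 * (R2 / R1)
print(f"unknown resistance: {R_x:.1f} ohm")
```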
# Circuits and DC Instruments

## DC Circuits Containing Resistors and Capacitors

When you use a flash camera, it takes a few seconds to charge the capacitor that powers the flash. The light flash discharges the capacitor in a tiny fraction of a second. Why does charging take longer than discharging? This question and a number of other phenomena that involve charging and discharging capacitors are discussed in this module.

### RC Circuits

An RC circuit is one containing a resistor $R$ and a capacitor $C$. The capacitor is an electrical component that stores electric charge. shows a simple RC circuit that employs a DC (direct current) voltage source. The capacitor is initially uncharged. As soon as the switch is closed, current flows to and from the initially uncharged capacitor. As charge increases on the capacitor plates, there is increasing opposition to the flow of charge by the repulsion of like charges on each plate.

In terms of voltage, this is because voltage across the capacitor is given by $V = Q/C$, where $Q$ is the amount of charge stored on each plate and $C$ is the capacitance. This voltage opposes the battery, growing from zero to the maximum emf when fully charged. The current thus decreases from its initial value of $I_0 = \frac{\text{emf}}{R}$ to zero as the voltage on the capacitor reaches the same value as the emf. When there is no current, there is no $IR$ drop, and so the voltage on the capacitor must then equal the emf of the voltage source. This can also be explained with Kirchhoff’s second rule (the loop rule), discussed in Kirchhoff’s Rules, which says that the algebraic sum of changes in potential around any closed loop must be zero.

The initial current is $I_0 = \frac{\text{emf}}{R}$, because all of the $IR$ drop is in the resistance. Therefore, the smaller the resistance, the faster a given capacitor will be charged. Note that the internal resistance of the voltage source is included in $R$, as are the resistances of the capacitor and the connecting wires. In the flash camera scenario above, when the batteries powering the camera begin to wear out, their internal resistance rises, reducing the current and lengthening the time it takes to get ready for the next flash.

Voltage on the capacitor is initially zero and rises rapidly at first, since the initial current is a maximum. (b) shows a graph of capacitor voltage versus time ($t$) starting when the switch is closed at $t = 0$. The voltage approaches emf asymptotically, since the closer it gets to emf the less current flows. The equation for voltage versus time when charging a capacitor $C$ through a resistor $R$, derived using calculus, is

$$V = \text{emf}\left(1 - e^{-t/RC}\right)\quad\text{(charging)},$$

where $V$ is the voltage across the capacitor, emf is equal to the emf of the DC voltage source, and the exponential e = 2.718 … is the base of the natural logarithm. Note that the units of $RC$ are seconds. We define

$$\tau = RC,$$

where $\tau$ (the Greek letter tau) is called the time constant for an RC circuit. As noted before, a small resistance allows the capacitor to charge faster. This is reasonable, since a larger current flows through a smaller resistance. It is also reasonable that the smaller the capacitor $C$, the less time needed to charge it. Both factors are contained in $\tau = RC$.

More quantitatively, consider what happens when $t = \tau = RC$. Then the voltage on the capacitor is

$$V = \text{emf}\left(1 - e^{-1}\right) = 0.632\,\text{emf}.$$

This means that in the time $\tau = RC$, the voltage rises to 0.632 of its final value. The voltage will rise 0.632 of the remainder in the next time $\tau$. It is a characteristic of the exponential function that the final value is never reached, but 0.632 of the remainder to that value is achieved in every time, $\tau$.
In just a few multiples of the time constant $\tau$, then, the final value is very nearly achieved, as the graph in (b) illustrates.

### Discharging a Capacitor

Discharging a capacitor through a resistor proceeds in a similar fashion, as illustrates. Initially, the current is $I_0 = \frac{V_0}{R}$, driven by the initial voltage $V_0$ on the capacitor. As the voltage decreases, the current and hence the rate of discharge decreases, implying another exponential formula for $V$. Using calculus, the voltage $V$ on a capacitor being discharged through a resistor $R$ is found to be

$$V = V_0 e^{-t/RC}\quad\text{(discharging)}.$$

The graph in (b) is an example of this exponential decay. Again, the time constant is $\tau = RC$. A small resistance allows the capacitor to discharge in a small time, since the current is larger. Similarly, a small capacitance requires less time to discharge, since less charge is stored. In the first time interval $\tau = RC$ after the switch is closed, the voltage falls to 0.368 of its initial value, since $V = V_0 e^{-1} = 0.368\,V_0$. During each successive time $\tau$, the voltage falls to 0.368 of its preceding value. In a few multiples of $\tau$, the voltage becomes very close to zero, as indicated by the graph in (b).

Now we can explain why the flash camera in our scenario takes so much longer to charge than discharge; the resistance while charging is significantly greater than while discharging. The internal resistance of the battery accounts for most of the resistance while charging. As the battery ages, the increasing internal resistance makes the charging process even slower. (You may have noticed this.)

The flash discharge is through a low-resistance ionized gas in the flash tube and proceeds very rapidly. Flash photographs, such as in , can capture a brief instant of a rapid motion because the flash can be less than a microsecond in duration. Such flashes can be made extremely intense. During World War II, nighttime reconnaissance photographs were made from the air with a single flash illuminating more than a square kilometer of enemy territory. The brevity of the flash eliminated blurring due to the surveillance aircraft’s motion. Today, an important use of intense flash lamps is to pump energy into a laser. The short intense flash can rapidly energize a laser and allow it to reemit the energy in another form.

### RC Circuits for Timing

RC circuits are commonly used for timing purposes. A mundane example of this is found in the ubiquitous intermittent wiper systems of modern cars. The time between wipes is varied by adjusting the resistance in an RC circuit. Another example of an RC circuit is found in novelty jewelry, Halloween costumes, and various toys that have battery-powered flashing lights. (See for a timing circuit.)

A more crucial use of RC circuits for timing purposes is in the artificial pacemaker, used to control heart rate. The heart rate is normally controlled by electrical signals generated by the sino-atrial (SA) node, which is on the wall of the right atrium chamber. This causes the muscles to contract and pump blood. Sometimes the heart rhythm is abnormal and the heartbeat is too high or too low. The artificial pacemaker is inserted near the heart to provide electrical signals to the heart when needed with the appropriate time constant. Pacemakers have sensors that detect body motion and breathing to increase the heart rate during exercise to meet the body’s increased needs for blood and oxygen.

### Test Prep for AP Courses

### Section Summary

1. An RC circuit is one that has both a resistor and a capacitor.
2. The time constant $\tau$ for an RC circuit is $\tau = RC$.
3. When an initially uncharged ($V_0 = 0$ at $t = 0$) capacitor in series with a resistor is charged by a DC voltage source, the voltage rises, asymptotically approaching the emf of the voltage source; as a function of time, $V = \text{emf}\left(1 - e^{-t/RC}\right)$ (charging).
4. Within the span of each time constant $\tau$, the voltage rises by 0.632 of the remaining value, approaching the final voltage asymptotically.
5. If a capacitor with an initial voltage $V_0$ is discharged through a resistor starting at $t = 0$, then its voltage decreases exponentially as given by $V = V_0 e^{-t/RC}$ (discharging).
6. In each time constant $\tau$, the voltage falls by 0.368 of its remaining initial value, approaching zero asymptotically.

### Conceptual Questions

### Problem Exercises
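A short sketch of the charging and discharging formulas, with assumed component values, shows the role of the time constant directly:

```python
# Sketch: RC charging and discharging curves (component values assumed).
import math

emf, R, C = 9.0, 10e3, 100e-6   # volts, ohms, farads (assumed); tau = 1 s
tau = R * C

for t in (0.0, tau, 2 * tau, 5 * tau):
    V_charge = emf * (1 - math.exp(-t / (R * C)))   # rises toward emf
    V_discharge = emf * math.exp(-t / (R * C))      # falls toward zero
    print(f"t = {t:4.1f} s: charging {V_charge:5.2f} V, "
          f"discharging {V_discharge:5.2f} V")
```

At $t = \tau$ the charging voltage reaches 0.632 of the emf and the discharging voltage falls to 0.368 of its starting value, matching items 4 and 6 above; by $t = 5\tau$ both are within 1% of their final values.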
# Magnetism ## Introduction to Magnetism One evening, an Alaskan sticks a note to his refrigerator with a small magnet. Through the kitchen window, the Aurora Borealis glows in the night sky. This grand spectacle is shaped by the same force that holds the note to the refrigerator. People have been aware of magnets and magnetism for thousands of years. The earliest records date to well before the time of Christ, particularly in a region of Asia Minor called Magnesia (the name of this region is the source of words like magnetic). Magnetic rocks found in Magnesia, which is now part of western Turkey, stimulated interest during ancient times. A practical application for magnets was found later, when they were employed as navigational compasses. The use of magnets in compasses resulted not only in improved long-distance sailing, but also in the names of “north” and “south” being given to the two types of magnetic poles. Today magnetism plays many important roles in our lives. Physicists’ understanding of magnetism has enabled the development of technologies that affect our everyday lives. The iPod in your purse or backpack, for example, wouldn’t have been possible without the applications of magnetism and electricity on a small scale. The discovery that weak changes in a magnetic field in a thin film of iron and chromium could bring about much larger changes in electrical resistance was one of the first large successes of nanotechnology. The 2007 Nobel Prize in Physics went to Albert Fert from France and Peter Grunberg from Germany for this discovery of giant magnetoresistance and its applications to computer memory. All electric motors, with uses as diverse as powering refrigerators, starting cars, and moving elevators, contain magnets. Generators, whether producing hydroelectric power or running bicycle lights, use magnetic fields. Recycling facilities employ magnets to separate iron from other refuse. Hundreds of millions of dollars are spent annually on magnetic containment of fusion as a future energy source. Magnetic resonance imaging (MRI) has become an important diagnostic tool in the field of medicine, and the use of magnetism to explore brain activity is a subject of contemporary research and development. The list of applications also includes computer hard drives, tape recording, detection of inhaled asbestos, and levitation of high-speed trains. Magnetism is used to explain atomic energy levels, cosmic rays, and charged particles trapped in the Van Allen belts. Once again, we will find all these disparate phenomena are linked by a small number of underlying physical principles.
# Magnetism ## Magnets ### Learning Objectives By the end of this section, you will be able to: 1. Describe the difference between the north and south poles of a magnet. 2. Describe how magnetic poles interact with each other. All magnets attract iron, such as that in a refrigerator door. However, magnets may attract or repel other magnets. Experimentation shows that all magnets have two poles. If freely suspended, one pole will point toward the north. The two poles are thus named the north magnetic pole and the south magnetic pole (or more properly, north-seeking and south-seeking poles, for the attractions in those directions). The fact that magnetic poles always occur in pairs of north and south is true from the very large scale—for example, sunspots always occur in pairs that are north and south magnetic poles—all the way down to the very small scale. Magnetic atoms have both a north pole and a south pole, as do many types of subatomic particles, such as electrons, protons, and neutrons. ### Section Summary 1. Magnetism is a subject that includes the properties of magnets, the effect of the magnetic force on moving charges and currents, and the creation of magnetic fields by currents. 2. There are two types of magnetic poles, called the north magnetic pole and south magnetic pole. 3. North magnetic poles are those that are attracted toward the Earth’s geographic north pole. 4. Like poles repel and unlike poles attract. 5. Magnetic poles always occur in pairs of north and south—it is not possible to isolate north and south poles. ### Conceptual Questions
# Magnetism

## Ferromagnets and Electromagnets

### Learning Objectives

By the end of this section, you will be able to:
1. Define ferromagnet.
2. Describe the role of magnetic domains in magnetization.
3. Explain the significance of the Curie temperature.
4. Describe the relationship between electricity and magnetism.

### Ferromagnets

Only certain materials, such as iron, cobalt, nickel, and gadolinium, exhibit strong magnetic effects. Such materials are called ferromagnetic, after the Latin word for iron, ferrum. A group of materials made from the alloys of the rare earth elements is also used as strong and permanent magnets; a popular one is neodymium. Other materials exhibit weak magnetic effects, which are detectable only with sensitive instruments. Not only do ferromagnetic materials respond strongly to magnets (the way iron is attracted to magnets), they can also be magnetized themselves—that is, they can be induced to be magnetic or made into permanent magnets.

When a magnet is brought near a previously unmagnetized ferromagnetic material, it causes local magnetization of the material with unlike poles closest, as in . (This results in the attraction of the previously unmagnetized material to the magnet.) What happens on a microscopic scale is illustrated in . The regions within the material called domains act like small bar magnets. Within domains, the poles of individual atoms are aligned. Each atom acts like a tiny bar magnet. Domains are small and randomly oriented in an unmagnetized ferromagnetic object. In response to an external magnetic field, the domains may grow to millimeter size, aligning themselves as shown in (b). This induced magnetization can be made permanent if the material is heated and then cooled, or simply tapped in the presence of other magnets.

Conversely, a permanent magnet can be demagnetized by hard blows or by heating it in the absence of another magnet. Increased thermal motion at higher temperature can disrupt and randomize the orientation and the size of the domains. There is a well-defined temperature for ferromagnetic materials, which is called the Curie temperature, above which they cannot be magnetized. The Curie temperature for iron is 1043 K, which is well above room temperature. There are several elements and alloys that have Curie temperatures much lower than room temperature and are ferromagnetic only below those temperatures.

### Electromagnets

Early in the 19th century, it was discovered that electrical currents cause magnetic effects. The first significant observation was by the Danish scientist Hans Christian Oersted (1777–1851), who found that a compass needle was deflected by a current-carrying wire. This was the first evidence that the movement of charges had any connection with magnets. Electromagnetism is the use of electric current to make magnets. These temporarily induced magnets are called electromagnets. Electromagnets are employed for everything from a wrecking yard crane that lifts scrapped cars to controlling the beam of a 90-km-circumference particle accelerator to the magnets in medical imaging machines. (See .)

 shows the response of iron filings to a current-carrying coil and to a permanent bar magnet. The patterns are similar. In fact, electromagnets and ferromagnets have the same basic characteristics—for example, they have north and south poles that cannot be separated and for which like poles repel and unlike poles attract.
Combining a ferromagnet with an electromagnet can produce particularly strong magnetic effects. (See .) Whenever strong magnetic effects are needed, such as lifting scrap metal, or in particle accelerators, electromagnets are enhanced by ferromagnetic materials. Limits to how strong the magnets can be made are imposed by coil resistance (it will overheat and melt at sufficiently high current), and so superconducting magnets may be employed. These are still limited, because superconducting properties are destroyed by too great a magnetic field.

 shows a few uses of combinations of electromagnets and ferromagnets. Ferromagnetic materials can act as memory devices, because the orientation of the magnetic fields of small domains can be reversed or erased. Magnetic information storage on videotapes and computer hard drives is among the most common applications. This property is vital in our digital world.

### Current: The Source of All Magnetism

An electromagnet creates magnetism with an electric current. In later sections we explore this more quantitatively, finding the strength and direction of magnetic fields created by various currents. But what about ferromagnets? shows models of how electric currents create magnetism at the submicroscopic level. (Note that we cannot directly observe the paths of individual electrons about atoms, and so a model or visual image, consistent with all direct observations, is made. We can directly observe the electron’s orbital angular momentum, its spin momentum, and subsequent magnetic moments, all of which are explained with electric-current-creating subatomic magnetism.) Currents, including those associated with other submicroscopic particles like protons, allow us to explain ferromagnetism and all other magnetic effects. Ferromagnetism, for example, results from an internal cooperative alignment of electron spins, possible in some materials but not in others.

Crucial to the statement that electric current is the source of all magnetism is the fact that it is impossible to separate north and south magnetic poles. (This is far different from the case of positive and negative charges, which are easily separated.) A current loop always produces a magnetic dipole—that is, a magnetic field that acts like a north pole and south pole pair. Since isolated north and south magnetic poles, called magnetic monopoles, are not observed, currents are used to explain all magnetic effects. If magnetic monopoles did exist, then we would have to modify this underlying connection that all magnetism is due to electrical current. There is no known reason that magnetic monopoles should not exist—they are simply never observed—and so searches at the subnuclear level continue. If they do not exist, we would like to find out why not. If they do exist, we would like to see evidence of them.

### Test Prep for AP Courses

### Section Summary

1. Magnetic poles always occur in pairs of north and south—it is not possible to isolate north and south poles.
2. All magnetism is created by electric current.
3. Ferromagnetic materials, such as iron, are those that exhibit strong magnetic effects.
4. The atoms in ferromagnetic materials act like small magnets (due to currents within the atoms) and can be aligned, usually in millimeter-sized regions called domains.
5. Domains can grow and align on a larger scale, producing permanent magnets. Such a material is magnetized, or induced to be magnetic.
6. Above a material’s Curie temperature, thermal agitation destroys the alignment of atoms, and ferromagnetism disappears.
7. Electromagnets employ electric currents to make magnetic fields, often aided by induced fields in ferromagnetic materials.
# Magnetism

## Magnetic Fields and Magnetic Field Lines

### Learning Objectives

By the end of this section, you will be able to:
1. Define magnetic field and describe the magnetic field lines of various magnetic fields.

Einstein is said to have been fascinated by a compass as a child, perhaps musing on how the needle felt a force without direct physical contact. His ability to think deeply and clearly about action at a distance, particularly for gravitational, electric, and magnetic forces, later enabled him to create his revolutionary theory of relativity. Since magnetic forces act at a distance, we define a magnetic field to represent magnetic forces. The pictorial representation of magnetic field lines is very useful in visualizing the strength and direction of the magnetic field. As shown in , the direction of magnetic field lines is defined to be the direction in which the north end of a compass needle points. The magnetic field is traditionally called the $B$-field.

Small compasses used to test a magnetic field will not disturb it. (This is analogous to the way we tested electric fields with a small test charge. In both cases, the fields represent only the object creating them and not the probe testing them.) shows how the magnetic field appears for a current loop and a long straight wire, as could be explored with small compasses. A small compass placed in these fields will align itself parallel to the field line at its location, with its north pole pointing in the direction of B. Note the symbols used for field into and out of the paper.

Extensive exploration of magnetic fields has revealed a number of hard-and-fast rules. We use magnetic field lines to represent the field (the lines are a pictorial tool, not a physical entity in and of themselves). The properties of magnetic field lines can be summarized by these rules:

1. The direction of the magnetic field is tangent to the field line at any point in space. A small compass will point in the direction of the field line.
2. The strength of the field is proportional to the closeness of the lines. It is exactly proportional to the number of lines per unit area perpendicular to the lines (called the areal density).
3. Magnetic field lines can never cross, meaning that the field is unique at any point in space.
4. Magnetic field lines are continuous, forming closed loops without beginning or end. They go from the north pole to the south pole.

The last property is related to the fact that the north and south poles cannot be separated. It is a distinct difference from electric field lines, which begin and end on the positive and negative charges. If magnetic monopoles existed, then magnetic field lines would begin and end on them.

### Section Summary

1. Magnetic fields can be pictorially represented by magnetic field lines, the properties of which are as follows:
   1. The field is tangent to the magnetic field line.
   2. Field strength is proportional to the line density.
   3. Field lines cannot cross.
   4. Field lines are continuous loops.

### Conceptual Questions
# Magnetism

## Magnetic Field Strength: Force on a Moving Charge in a Magnetic Field

### Learning Objectives

By the end of this section, you will be able to:
1. Describe the effects of magnetic fields on moving charges.
2. Use the right hand rule 1 to relate the velocity of a charge, the direction of the magnetic field, and the direction of the magnetic force on a moving charge.
3. Calculate the magnetic force on a moving charge.

What is the mechanism by which one magnet exerts a force on another? The answer is related to the fact that all magnetism is caused by current, the flow of charge. Magnetic fields exert forces on moving charges, and so they exert forces on other magnets, all of which have moving charges.

### Right Hand Rule 1

The magnetic force on a moving charge is one of the most fundamental known. Magnetic force is as important as the electrostatic or Coulomb force. Yet the magnetic force is more complex, in both the number of factors that affect it and in its direction, than the relatively simple Coulomb force. The magnitude of the magnetic force $F$ on a charge $q$ moving at a speed $v$ in a magnetic field of strength $B$ is given by

$$F = qvB\sin\theta,$$

where $\theta$ is the angle between the directions of $v$ and $B$. This force is often called the Lorentz force. In fact, this is how we define the magnetic field strength $B$—in terms of the force on a charged particle moving in a magnetic field. The SI unit for magnetic field strength is called the tesla (T) after the eccentric but brilliant inventor Nikola Tesla (1856–1943). To determine how the tesla relates to other SI units, we solve $F = qvB\sin\theta$ for $B$. Because $\sin\theta$ is unitless, the tesla is

$$1\ \text{T} = \frac{1\ \text{N}}{\text{C}\cdot\text{m/s}} = \frac{1\ \text{N}}{\text{A}\cdot\text{m}}$$

(note that C/s = A). Another smaller unit, called the gauss (G), where $1\ \text{G} = 10^{-4}\ \text{T}$, is sometimes used. The strongest permanent magnets have fields near 2 T; superconducting electromagnets may attain 10 T or more. The Earth’s magnetic field on its surface is only about $5\times 10^{-5}\ \text{T}$, or 0.5 G.

The direction of the magnetic force $F$ is perpendicular to the plane formed by $v$ and $B$, as determined by the right hand rule 1 (or RHR-1), which is illustrated in . RHR-1 states that, to determine the direction of the magnetic force on a positive moving charge, you point the thumb of the right hand in the direction of $v$, the fingers in the direction of $B$, and a perpendicular to the palm points in the direction of $F$. One way to remember this is that there is one velocity, and so the thumb represents it. There are many field lines, and so the fingers represent them. The force is in the direction you would push with your palm. The force on a negative charge is in exactly the opposite direction to that on a positive charge.

### Test Prep for AP Courses

### Section Summary

1. Magnetic fields exert a force on a moving charge $q$, the magnitude of which is $F = qvB\sin\theta$, where $\theta$ is the angle between the directions of $v$ and $B$.
2. The SI unit for magnetic field strength $B$ is the tesla (T), which is related to other units by $1\ \text{T} = 1\ \text{N}/(\text{C}\cdot\text{m/s}) = 1\ \text{N}/(\text{A}\cdot\text{m})$.
3. The direction of the force on a moving charge is given by right hand rule 1 (RHR-1): Point the thumb of the right hand in the direction of $v$, the fingers in the direction of $B$, and a perpendicular to the palm points in the direction of $F$.
4. The force is perpendicular to the plane formed by $v$ and $B$. Since the force is zero if $v$ is parallel to $B$, charged particles often follow magnetic field lines rather than cross them.

### Conceptual Questions

### Problems & Exercises
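As a quick numerical sketch of $F = qvB\sin\theta$, consider a proton moving in a field comparable to the Earth's surface field quoted above; the speed is an assumed value chosen for illustration.

```python
# Sketch: Lorentz force magnitude F = q*v*B*sin(theta) on a proton.
import math

q = 1.60e-19   # proton charge, C
v = 1.0e7      # speed, m/s (assumed)
B = 5.0e-5     # Earth's surface field, T (about 0.5 G, per the text)

for theta_deg in (0, 30, 90):
    F = q * v * B * math.sin(math.radians(theta_deg))
    print(f"theta = {theta_deg:3d} deg: F = {F:.2e} N")
```

The force vanishes for motion along the field and is largest for motion perpendicular to it, which is why charges tend to spiral along field lines rather than cross them.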
# Magnetism

## Force on a Moving Charge in a Magnetic Field: Examples and Applications

### Learning Objectives

By the end of this section, you will be able to:
1. Describe the effects of a magnetic field on a moving charge.
2. Calculate the radius of curvature of the path of a charge that is moving in a magnetic field.

Magnetic force can cause a charged particle to move in a circular or spiral path. Cosmic rays are energetic charged particles in outer space, some of which approach the Earth. They can be forced into spiral paths by the Earth’s magnetic field. Protons in giant accelerators are kept in a circular path by magnetic force. The bubble chamber photograph in shows charged particles moving in such curved paths. The curved paths of charged particles in magnetic fields are the basis of a number of phenomena and can even be used analytically, such as in a mass spectrometer.

So, does the magnetic force cause circular motion? Magnetic force is always perpendicular to velocity, so that it does no work on the charged particle. The particle’s kinetic energy and speed thus remain constant. The direction of motion is affected, but not the speed. This is typical of uniform circular motion. The simplest case occurs when a charged particle moves perpendicular to a uniform $B$-field, such as shown in . (If this takes place in a vacuum, the magnetic field is the dominant factor determining the motion.) Here, the magnetic force supplies the centripetal force $F_c = \frac{mv^2}{r}$. Noting that $\sin\theta = 1$, we see that $F = qvB$. Because the magnetic force $F$ supplies the centripetal force $F_c$, we have

$$qvB = \frac{mv^2}{r}.$$

Solving for $r$ yields

$$r = \frac{mv}{qB}.$$

Here, $r$ is the radius of curvature of the path of a charged particle with mass $m$ and charge $q$, moving at a speed $v$ perpendicular to a magnetic field of strength $B$. If the velocity is not perpendicular to the magnetic field, then $v$ is the component of the velocity perpendicular to the field. The component of the velocity parallel to the field is unaffected, since the magnetic force is zero for motion parallel to the field. This produces a spiral motion rather than a circular one.

 shows how electrons not moving perpendicular to magnetic field lines follow the field lines. The component of velocity parallel to the lines is unaffected, and so the charges spiral along the field lines. If field strength increases in the direction of motion, the field will exert a force to slow the charges, forming a kind of magnetic mirror, as shown below.

The properties of charged particles in magnetic fields are related to such different things as the Aurora Australis or Aurora Borealis and particle accelerators. Charged particles approaching magnetic field lines may get trapped in spiral orbits about the lines rather than crossing them, as seen above. Some cosmic rays, for example, follow the Earth’s magnetic field lines, entering the atmosphere near the magnetic poles and causing the southern or northern lights through their ionization of molecules in the atmosphere. This glow of energized atoms and molecules is seen in Introduction to Magnetism. Those particles that approach middle latitudes must cross magnetic field lines, and many are prevented from penetrating the atmosphere. Cosmic rays are a component of background radiation; consequently, they give a higher radiation dose at the poles than at the equator. Some incoming charged particles become trapped in the Earth’s magnetic field, forming two belts above the atmosphere known as the Van Allen radiation belts after the discoverer James A. Van Allen, an American astrophysicist. (See .)
Particles trapped in these belts form radiation fields (similar to nuclear radiation) so intense that piloted space flights avoid them and satellites with sensitive electronics are kept out of them. In the few minutes it took lunar missions to cross the Van Allen radiation belts, astronauts received radiation doses more than twice the allowed annual exposure for radiation workers. Other planets have similar belts, especially those having strong magnetic fields like Jupiter. Back on Earth, we have devices that employ magnetic fields to contain charged particles. Among them are the giant particle accelerators that have been used to explore the substructure of matter. (See .) Magnetic fields not only control the direction of the charged particles, they also are used to focus particles into beams and overcome the repulsion of like charges in these beams. Thermonuclear fusion (like that occurring in the Sun) is a hope for a future clean energy source. One of the most promising devices is the tokamak, which uses magnetic fields to contain (or trap) and direct the reactive charged particles. (See .) Less exotic, but more immediately practical, amplifiers in microwave ovens use a magnetic field to contain oscillating electrons. These oscillating electrons generate the microwaves sent into the oven. Mass spectrometers have a variety of designs, and many use magnetic fields to measure mass. The curvature of a charged particle’s path in the field is related to its mass and is measured to obtain mass information. (See More Applications of Magnetism.) Historically, such techniques were employed in the first direct observations of electron charge and mass. Today, mass spectrometers (sometimes coupled with gas chromatographs) are used to determine the make-up and sequencing of large biological molecules. ### Test Prep for AP Courses ### Section Summary 1. Magnetic force can supply centripetal force and cause a charged particle to move in a circular path of radius where is the component of the velocity perpendicular to for a charged particle with mass and charge . ### Conceptual Questions ### Problems & Exercises If you need additional support for these problems, see More Applications of Magnetism.
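A minimal sketch of the radius formula $r = mv/(qB)$ for an electron, with an assumed speed and field strength:

```python
# Sketch: radius of the circular path of an electron moving perpendicular
# to a uniform magnetic field, r = m*v/(q*B).

m_e = 9.11e-31   # electron mass, kg
q_e = 1.60e-19   # electron charge magnitude, C
v = 6.0e7        # speed, m/s (assumed)
B = 0.50         # field strength, T (assumed)

r = m_e * v / (q_e * B)
print(f"radius of curvature: {r:.2e} m")   # under a millimeter here
```

The same relation, rearranged, is what a mass spectrometer exploits: with $v$, $q$, and $B$ fixed, a measured radius gives the particle's mass.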
# Magnetism

## The Hall Effect

### Learning Objectives

By the end of this section, you will be able to:
1. Describe the Hall effect.
2. Calculate the Hall emf across a current-carrying conductor.

We have seen effects of a magnetic field on free-moving charges. The magnetic field also affects charges moving in a conductor. One result is the Hall effect, which has important implications and applications.

 shows what happens to charges moving through a conductor in a magnetic field. The field is perpendicular to the electron drift velocity and to the width of the conductor. Note that conventional current is to the right in both parts of the figure. In part (a), electrons carry the current and move to the left. In part (b), positive charges carry the current and move to the right. Moving electrons feel a magnetic force toward one side of the conductor, leaving a net positive charge on the other side. This separation of charge creates a voltage $\varepsilon$, known as the Hall emf, across the conductor. The creation of a voltage across a current-carrying conductor by a magnetic field is known as the Hall effect, after Edwin Hall, the American physicist who discovered it in 1879.

One very important use of the Hall effect is to determine whether positive or negative charges carry the current. Note that in (b), where positive charges carry the current, the Hall emf has the sign opposite to when negative charges carry the current. Historically, the Hall effect was used to show that electrons carry current in metals, and it also shows that positive charges carry current in some semiconductors. The Hall effect is used today as a research tool to probe the movement of charges, their drift velocities and densities, and so on, in materials. In 1980, it was discovered that the Hall effect is quantized, an example of quantum behavior in a macroscopic object.

The Hall effect has other uses that range from the determination of blood flow rate to precision measurement of magnetic field strength. To examine these quantitatively, we need an expression for the Hall emf, $\varepsilon$, across a conductor. Consider the balance of forces on a moving charge in a situation where $B$, $E$, and $v$ are mutually perpendicular, such as shown in . Although the magnetic force moves negative charges to one side, they cannot build up without limit. The electric field caused by their separation opposes the magnetic force, $F = qvB$, and the electric force, $F = qE$, eventually grows to equal it. That is,

$$qE = qvB$$

or

$$E = vB.$$

Note that the electric field $E$ is uniform across the conductor because the magnetic field $B$ is uniform, as is the conductor. For a uniform electric field, the relationship between electric field and voltage is $E = \varepsilon/l$, where $l$ is the width of the conductor and $\varepsilon$ is the Hall emf. Entering this into the last expression gives

$$\frac{\varepsilon}{l} = vB.$$

Solving this for the Hall emf yields

$$\varepsilon = Blv\quad(B,\ v,\ \text{and}\ l\ \text{mutually perpendicular}),$$

where $\varepsilon$ is the Hall effect voltage across a conductor of width $l$ through which charges move at a speed $v$.

One of the most common uses of the Hall effect is in the measurement of magnetic field strength $B$. Such devices, called Hall probes, can be made very small, allowing fine position mapping. Hall probes can also be made very accurate, usually accomplished by careful calibration. Another application of the Hall effect is to measure fluid flow in any fluid that has free charges (most do). (See .) A magnetic field applied perpendicular to the flow direction produces a Hall emf $\varepsilon$ as shown. Note that the sign of $\varepsilon$ depends not on the sign of the charges, but only on the directions of $B$ and $v$.
The magnitude of the Hall emf is $\varepsilon = Bvd$, where $d$ is the pipe diameter, so that the average velocity $v$ can be determined from $v = \frac{\varepsilon}{Bd},$ providing the other factors are known. ### Test Prep for AP Courses ### Section Summary 1. The Hall effect is the creation of voltage $\varepsilon$, known as the Hall emf, across a current-carrying conductor by a magnetic field. 2. The Hall emf is given by $\varepsilon = Blv \quad (B, v, \text{ and } l \text{ mutually perpendicular})$ for a conductor of width $l$ through which charges move at a speed $v$. ### Conceptual Questions ### Problems & Exercises
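As a numerical companion for the flow-measurement idea above, here is a minimal Python sketch that inverts $\varepsilon = Bvd$ to find the average flow speed. The field strength, vessel diameter, and measured emf are assumed values chosen for illustration.

```python
# Hall-effect flow meter: epsilon = B*v*d, so v = epsilon / (B*d).
B = 0.100        # applied magnetic field (T), assumed
d = 4.00e-3      # pipe (vessel) diameter (m), assumed
emf = 60.0e-6    # measured Hall emf (V), assumed

v = emf / (B * d)
print(f"Average flow speed: {v:.3f} m/s")   # 0.150 m/s
```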
# Magnetism ## Magnetic Force on a Current-Carrying Conductor ### Learning Objectives By the end of this section, you will be able to: 1. Describe the effects of a magnetic force on a current-carrying conductor. 2. Calculate the magnetic force on a current-carrying conductor. Because charges ordinarily cannot escape a conductor, the magnetic force on charges moving in a conductor is transmitted to the conductor itself. We can derive an expression for the magnetic force on a current by taking a sum of the magnetic forces on individual charges. (The forces add because they are in the same direction.) The force on an individual charge moving at the drift velocity $v_d$ is given by $F = qv_dB\sin\theta$. Taking $B$ to be uniform over a length of wire $l$ and zero elsewhere, the total magnetic force on the wire is then $F = (qv_dB\sin\theta)(N)$, where $N$ is the number of charge carriers in the section of wire of length $l$. Now, $N = nV$, where $n$ is the number of charge carriers per unit volume and $V$ is the volume of wire in the field. Noting that $V = Al$, where $A$ is the cross-sectional area of the wire, then the force on the wire is $F = (qv_dB\sin\theta)(nAl)$. Gathering terms, $F = (nqAv_d)lB\sin\theta.$ Because $nqAv_d = I$ (see Current), $F = IlB\sin\theta$ is the equation for magnetic force on a length $l$ of wire carrying a current $I$ in a uniform magnetic field $B$, as shown in . If we divide both sides of this expression by $l$, we find that the magnetic force per unit length of wire in a uniform field is $\frac{F}{l} = IB\sin\theta$. The direction of this force is given by RHR-1, with the thumb in the direction of the current $I$. Then, with the fingers in the direction of $B$, a perpendicular to the palm points in the direction of $F$, as in . Magnetic force on current-carrying conductors is used to convert electric energy to work. (Motors are a prime example—they employ loops of wire and are considered in the next section.) Magnetohydrodynamics (MHD) is the technical name given to a clever application where magnetic force pumps fluids without moving mechanical parts. (See .) A strong magnetic field is applied across a tube and a current is passed through the fluid at right angles to the field, resulting in a force on the fluid parallel to the tube axis as shown. The absence of moving parts makes this attractive for moving a hot, chemically active substance, such as the liquid sodium employed in some nuclear reactors. Experimental artificial hearts are being tested with this technique for pumping blood, perhaps circumventing the adverse effects of mechanical pumps. (Cell membranes, however, are affected by the large fields needed in MHD, delaying its practical application in humans.) MHD propulsion for nuclear submarines has been proposed, because it could be considerably quieter than conventional propeller drives. The deterrent value of nuclear submarines is based on their ability to hide and survive a first or second nuclear strike. As we slowly disassemble our nuclear weapons arsenals, the submarine branch will be the last to be decommissioned because of this ability. (See .) Existing MHD drives are heavy and inefficient—much development work is needed. ### Section Summary 1. The magnetic force on current-carrying conductors is given by $F = IlB\sin\theta,$ where $I$ is the current, $l$ is the length of a straight conductor in a uniform magnetic field $B$, and $\theta$ is the angle between $I$ and $B$. The force follows RHR-1 with the thumb in the direction of $I$. ### Conceptual Questions ### Problems & Exercises
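For a quick check on problems of this type, the following Python sketch evaluates $F = IlB\sin\theta$; all of the numerical inputs are assumed values chosen for illustration.

```python
import math

# Force on a straight current-carrying wire: F = I * l * B * sin(theta).
I = 20.0                  # current (A), assumed
l = 0.500                 # length of wire in the field (m), assumed
B = 1.50                  # field strength (T), assumed
theta = math.radians(90)  # angle between current and field

F = I * l * B * math.sin(theta)
print(f"Force on the wire:     {F:.1f} N")     # 15.0 N
print(f"Force per unit length: {F/l:.1f} N/m") # 30.0 N/m
```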
# Magnetism ## Torque on a Current Loop: Motors and Meters ### Learning Objectives By the end of this section, you will be able to: 1. Describe how motors and meters work in terms of torque on a current loop. 2. Calculate the torque on a current-carrying loop in a magnetic field. Motors are the most common application of magnetic force on current-carrying wires. Motors have loops of wire in a magnetic field. When current is passed through the loops, the magnetic field exerts torque on the loops, which rotates a shaft. Electrical energy is converted to mechanical work in the process. (See .) Let us examine the force on each segment of the loop in to find the torques produced about the axis of the vertical shaft. (This will lead to a useful equation for the torque on the loop.) We take the magnetic field to be uniform over the rectangular loop, which has width $w$ and height $l$. First, we note that the forces on the top and bottom segments are vertical and, therefore, parallel to the shaft, producing no torque. Those vertical forces are equal in magnitude and opposite in direction, so that they also produce no net force on the loop. shows views of the loop from above. Torque is defined as $\tau = rF\sin\theta$, where $F$ is the force, $r$ is the distance from the pivot that the force is applied, and $\theta$ is the angle between $r$ and $F$. As seen in (a), right hand rule 1 gives the forces on the sides to be equal in magnitude and opposite in direction, so that the net force is again zero. However, each force produces a clockwise torque. Since $r = w/2$, the torque on each vertical segment is $(w/2)F\sin\theta$, and the two add to give a total torque: $\tau = \frac{w}{2}F\sin\theta + \frac{w}{2}F\sin\theta = wF\sin\theta.$ Now, each vertical segment has a length $l$ that is perpendicular to $B$, so that the force on each is $F = IlB$. Entering $F$ into the expression for torque yields $\tau = wIlB\sin\theta.$ If we have a multiple loop of $N$ turns, we get $N$ times the torque of one loop. Finally, note that the area of the loop is $A = wl$; the expression for the torque becomes $\tau = NIAB\sin\theta.$ This is the torque on a current-carrying loop in a uniform magnetic field. This equation can be shown to be valid for a loop of any shape. The loop carries a current $I$, has $N$ turns, each of area $A$, and the perpendicular to the loop makes an angle $\theta$ with the field $B$. The net force on the loop is zero. The torque found in the preceding example is the maximum. As the coil rotates, the torque decreases to zero at $\theta = 0$. The torque then reverses its direction once the coil rotates past $\theta = 0$. (See (d).) This means that, unless we do something, the coil will oscillate back and forth about equilibrium at $\theta = 0$. To get the coil to continue rotating in the same direction, we can reverse the current as it passes through $\theta = 0$ with automatic switches called brushes. (See .) Meters, such as those in analog fuel gauges on a car, are another common application of magnetic torque on a current-carrying loop. shows that a meter is very similar in construction to a motor. The meter in the figure has its magnets shaped to limit the effect of $\theta$ by making $B$ perpendicular to the loop over a large angular range. Thus the torque is proportional to $I$ and not $\theta$. A linear spring exerts a counter-torque that balances the current-produced torque. This makes the needle deflection proportional to $I$. If an exact proportionality cannot be achieved, the gauge reading can be calibrated. To produce a galvanometer for use in analog voltmeters and ammeters that have a low resistance and respond to small currents, we use a large loop area $A$, high magnetic field $B$, and low-resistance coils. ### Section Summary 1. The torque on a current-carrying loop of any shape in a uniform magnetic field.
is $\tau = NIAB\sin\theta,$ where $N$ is the number of turns, $I$ is the current, $A$ is the area of the loop, $B$ is the magnetic field strength, and $\theta$ is the angle between the perpendicular to the loop and the magnetic field. ### Conceptual Questions ### Problems & Exercises
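Here is a minimal Python sketch of $\tau = NIAB\sin\theta$, showing how the torque falls from its maximum at $\theta = 90°$ to zero at $\theta = 0$; the turns, current, area, and field are assumed values, not numbers from the text.

```python
import math

# Torque on a current loop: tau = N * I * A * B * sin(theta).
N = 100     # number of turns, assumed
I = 15.0    # current (A), assumed
A = 0.100   # loop area (m^2), assumed
B = 2.00    # field strength (T), assumed

tau_max = N * I * A * B                    # sin(90 deg) = 1
print(f"Maximum torque: {tau_max:.0f} N·m")  # 300 N·m
for deg in (90, 60, 30, 0):                # torque as the coil rotates
    tau = tau_max * math.sin(math.radians(deg))
    print(f"  theta = {deg:2d} deg: tau = {tau:6.1f} N·m")
```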
# Magnetism ## Magnetic Fields Produced by Currents: Ampere’s Law ### Learning Objectives By the end of this section, you will be able to: 1. Calculate current that produces a magnetic field. 2. Use the right hand rule 2 to determine the direction of current or the direction of magnetic field loops. How much current is needed to produce a significant magnetic field, perhaps as strong as the Earth’s field? Surveyors will tell you that overhead electric power lines create magnetic fields that interfere with their compass readings. Indeed, when Oersted discovered in 1820 that a current in a wire affected a compass needle, he was not dealing with extremely large currents. How does the shape of wires carrying current affect the shape of the magnetic field created? We noted earlier that a current loop created a magnetic field similar to that of a bar magnet, but what about a straight wire or a toroid (doughnut)? How is the direction of a current-created field related to the direction of the current? Answers to these questions are explored in this section, together with a brief discussion of the law governing the fields created by currents. ### Magnetic Field Created by a Long Straight Current-Carrying Wire: Right Hand Rule 2 Magnetic fields have both direction and magnitude. As noted before, one way to explore the direction of a magnetic field is with compasses, as shown for a long straight current-carrying wire in . Hall probes can determine the magnitude of the field. The field around a long straight wire is found to be in circular loops. The right hand rule 2 (RHR-2) emerges from this exploration and is valid for any current segment—point the thumb in the direction of the current, and the fingers curl in the direction of the magnetic field loops created by it. The magnetic field strength (magnitude) produced by a long straight current-carrying wire is found by experiment to be $B = \frac{\mu_0 I}{2\pi r} \quad (\text{long straight wire}),$ where $I$ is the current, $r$ is the shortest distance to the wire, and the constant $\mu_0 = 4\pi\times10^{-7}\ \text{T}\cdot\text{m/A}$ is the permeability of free space. ($\mu_0$ is one of the basic constants in nature. We will see later that $\mu_0$ is related to the speed of light.) Since the wire is very long, the magnitude of the field depends only on distance from the wire $r$, not on position along the wire. ### Ampere’s Law and Others The magnetic field of a long straight wire has more implications than you might at first suspect. Each segment of current produces a magnetic field like that of a long straight wire, and the total field of any shape current is the vector sum of the fields due to each segment. The formal statement of the direction and magnitude of the field due to each segment is called the Biot-Savart law. Integral calculus is needed to sum the field for an arbitrary shape current. This results in a more complete law, called Ampere’s law, which relates magnetic field and current in a general way. Ampere’s law in turn is a part of Maxwell’s equations, which give a complete theory of all electromagnetic phenomena. Considerations of how Maxwell’s equations appear to different observers led to the modern theory of relativity, and the realization that electric and magnetic fields are different manifestations of the same thing. Most of this is beyond the scope of this text in both mathematical level, requiring calculus, and in the amount of space that can be devoted to it.
But for the interested student, and particularly for those who continue in physics, engineering, or similar pursuits, delving into these matters further will reveal descriptions of nature that are elegant as well as profound. In this text, we shall keep the general features in mind, such as RHR-2 and the rules for magnetic field lines listed in Magnetic Fields and Magnetic Field Lines, while concentrating on the fields created in certain important situations. ### Magnetic Field Produced by a Current-Carrying Circular Loop The magnetic field near a current-carrying loop of wire is shown in . Both the direction and the magnitude of the magnetic field produced by a current-carrying loop are complex. RHR-2 can be used to give the direction of the field near the loop, but mapping with compasses and the rules about field lines given in Magnetic Fields and Magnetic Field Lines are needed for more detail. There is a simple formula for the magnetic field strength at the center of a circular loop. It is $B = \frac{\mu_0 I}{2R} \quad (\text{at center of loop}),$ where $R$ is the radius of the loop. This equation is very similar to that for a straight wire, but it is valid only at the center of a circular loop of wire. The similarity of the equations does indicate that similar field strength can be obtained at the center of a loop. One way to get a larger field is to have $N$ loops; then, the field is $B = N\mu_0 I/(2R)$. Note that the larger the loop, the smaller the field at its center, because the current is farther away. ### Magnetic Field Produced by a Current-Carrying Solenoid A solenoid is a long coil of wire (with many turns or loops, as opposed to a flat loop). Because of its shape, the field inside a solenoid can be very uniform, and also very strong. The field just outside the coils is nearly zero. shows how the field looks and how its direction is given by RHR-2. The magnetic field inside of a current-carrying solenoid is very uniform in direction and magnitude. Only near the ends does it begin to weaken and change direction. The field outside has similar complexities to flat loops and bar magnets, but the magnetic field strength inside a solenoid is simply $B = \mu_0 n I \quad (\text{inside a solenoid}),$ where $n$ is the number of loops per unit length of the solenoid ($n = N/l$, with $N$ being the number of loops and $l$ the length). Note that $B$ is the field strength anywhere in the uniform region of the interior and not just at the center. Large uniform fields spread over a large volume are possible with solenoids, as $B = \mu_0 n I$ implies. There are interesting variations of the flat coil and solenoid. For example, the toroidal coil used to confine the reactive particles in tokamaks is much like a solenoid bent into a circle. The field inside a toroid is very strong but circular. Charged particles travel in circles, following the field lines, and collide with one another, perhaps inducing fusion. But the charged particles do not cross field lines and escape the toroid. A whole range of coil shapes are used to produce all sorts of magnetic field shapes. Adding ferromagnetic materials produces greater field strengths and can have a significant effect on the shape of the field. Ferromagnetic materials tend to trap magnetic fields (the field lines bend into the ferromagnetic material, leaving weaker fields outside it) and are used as shields for devices that are adversely affected by magnetic fields, including the Earth’s magnetic field. ### Test Prep for AP Courses ### Section Summary 1.
The strength of the magnetic field created by current in a long straight wire is given by $B = \frac{\mu_0 I}{2\pi r} \quad (\text{long straight wire}),$ where $I$ is the current, $r$ is the shortest distance to the wire, and the constant $\mu_0 = 4\pi\times10^{-7}\ \text{T}\cdot\text{m/A}$ is the permeability of free space. 2. The direction of the magnetic field created by a long straight wire is given by right hand rule 2 (RHR-2): Point the thumb of the right hand in the direction of current, and the fingers curl in the direction of the magnetic field loops created by it. 3. The magnetic field created by current following any path is the sum (or integral) of the fields due to segments along the path (magnitude and direction as for a straight wire), resulting in a general relationship between current and field known as Ampere’s law. 4. The magnetic field strength at the center of a circular loop is given by $B = \frac{\mu_0 I}{2R} \quad (\text{at center of loop}),$ where $R$ is the radius of the loop. This equation becomes $B = N\mu_0 I/(2R)$ for a flat coil of $N$ loops. RHR-2 gives the direction of the field about the loop. A long coil is called a solenoid. 5. The magnetic field strength inside a solenoid is $B = \mu_0 n I \quad (\text{inside a solenoid}),$ where $n$ is the number of loops per unit length of the solenoid. The field inside is very uniform in magnitude and direction. ### Conceptual Questions
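The three field formulas of this section are easy to compare side by side. Here is a minimal Python sketch doing so; the current, distances, and turn density are assumed values chosen for illustration.

```python
import math

mu0 = 4 * math.pi * 1e-7   # permeability of free space (T·m/A)
I = 10.0                   # current (A), assumed

# Long straight wire, at distance r from the wire:
r = 0.010                  # 1.0 cm, assumed
B_wire = mu0 * I / (2 * math.pi * r)
# Center of a flat circular loop of radius R with N turns:
R, N = 0.050, 1            # assumed
B_loop = N * mu0 * I / (2 * R)
# Inside a solenoid with n turns per unit length:
n = 1000                   # turns per meter, assumed
B_sol = mu0 * n * I

print(f"Wire (1 cm away): {B_wire:.2e} T")   # 2.00e-4 T
print(f"Loop center:      {B_loop:.2e} T")   # 1.26e-4 T
print(f"Solenoid:         {B_sol:.2e} T")    # 1.26e-2 T
```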
# Magnetism ## Magnetic Force between Two Parallel Conductors ### Learning Objectives By the end of this section, you will be able to: 1. Describe the effects of the magnetic force between two conductors. 2. Calculate the force between two parallel conductors. You might expect that there are significant forces between current-carrying wires, since ordinary currents produce significant magnetic fields and these fields exert significant forces on ordinary currents. But you might not expect that the force between wires is used to define the ampere. It might also surprise you to learn that this force has something to do with why large circuit breakers burn up when they attempt to interrupt large currents. The force between two long straight and parallel conductors separated by a distance $r$ can be found by applying what we have developed in preceding sections. shows the wires, their currents, the fields they create, and the subsequent forces they exert on one another. Let us consider the field produced by wire 1 and the force it exerts on wire 2 (call the force $F_2$). The field due to $I_1$ at a distance $r$ is given to be $B_1 = \frac{\mu_0 I_1}{2\pi r}.$ This field is uniform along wire 2 and perpendicular to it, and so the force $F_2$ it exerts on wire 2 is given by $F = IlB\sin\theta$ with $\sin\theta = 1$: $F_2 = I_2 l B_1.$ By Newton’s third law, the forces on the wires are equal in magnitude, and so we just write $F$ for the magnitude of $F_2$. (Note that $F_1 = -F_2$.) Since the wires are very long, it is convenient to think in terms of $F/l$, the force per unit length. Substituting the expression for $B_1$ into the last equation and rearranging terms gives $\frac{F}{l} = \frac{\mu_0 I_1 I_2}{2\pi r}.$ $F/l$ is the force per unit length between two parallel currents $I_1$ and $I_2$ separated by a distance $r$. The force is attractive if the currents are in the same direction and repulsive if they are in opposite directions. This force is responsible for the pinch effect in electric arcs and plasmas. The force exists whether the currents are in wires or not. In an electric arc, where currents are moving parallel to one another, there is an attraction that squeezes currents into a smaller tube. In large circuit breakers, like those used in neighborhood power distribution systems, the pinch effect can concentrate an arc between plates of a switch trying to break a large current, burn holes, and even ignite the equipment. Another example of the pinch effect is found in the solar plasma, where jets of ionized material, such as solar flares, are shaped by magnetic forces. The operational definition of the ampere is based on the force between current-carrying wires. Note that for parallel wires separated by 1 meter with each carrying 1 ampere, the force per meter is $\frac{F}{l} = \frac{(4\pi\times10^{-7}\ \text{T}\cdot\text{m/A})(1\ \text{A})^2}{(2\pi)(1\ \text{m})} = 2\times10^{-7}\ \text{N/m}.$ Since $\mu_0$ is exactly $4\pi\times10^{-7}\ \text{T}\cdot\text{m/A}$ by definition, and because $1\ \text{T} = 1\ \text{N/(A}\cdot\text{m)}$, the force per meter is exactly $2\times10^{-7}\ \text{N/m}$. This is the basis of the operational definition of the ampere. Infinite-length straight wires are impractical and so, in practice, a current balance is constructed with coils of wire separated by a few centimeters. Force is measured to determine current. This also provides us with a method for measuring the coulomb. We measure the charge that flows for a current of one ampere in one second. That is, $1\ \text{C} = 1\ \text{A}\cdot\text{s}$. For both the ampere and the coulomb, the method of measuring force between conductors is the most accurate in practice. ### Test Prep for AP Courses ### Section Summary 1. The force between two parallel currents $I_1$ and $I_2$, separated by a distance $r$, has a magnitude per unit length given by $\frac{F}{l} = \frac{\mu_0 I_1 I_2}{2\pi r}.$ 2. The force is attractive if the currents are in the same direction, repulsive if they are in opposite directions. ### Conceptual Questions ### Problems & Exercises
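A minimal Python sketch of the force-per-length relation reproduces the exact $2\times10^{-7}$ N/m of the operational definition; the second, household-scale case uses assumed values for illustration.

```python
import math

mu0 = 4 * math.pi * 1e-7   # T·m/A (exact under the operational definition)

def force_per_length(I1, I2, r):
    """F/l = mu0 * I1 * I2 / (2*pi*r) between long parallel wires."""
    return mu0 * I1 * I2 / (2 * math.pi * r)

# The defining case: 1 A in each wire, separated by 1 m.
print(f"{force_per_length(1.0, 1.0, 1.0):.1e} N/m")   # 2.0e-7 N/m
# A household-scale case (currents and spacing assumed):
print(f"{force_per_length(5.0, 5.0, 0.05):.1e} N/m")  # 1.0e-4 N/m
```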
# Magnetism ## More Applications of Magnetism ### Learning Objectives By the end of this section, you will be able to: 1. Describe some applications of magnetism. ### Mass Spectrometry The curved paths followed by charged particles in magnetic fields can be put to use. A charged particle moving perpendicular to a magnetic field travels in a circular path having a radius $r = \frac{mv}{qB}$. It was noted that this relationship could be used to measure the mass of charged particles such as ions. A mass spectrometer is a device that measures such masses. Most mass spectrometers use magnetic fields for this purpose, although some of them have extremely sophisticated designs. Since there are five variables in the relationship, there are many possibilities. However, if $v$, $q$, and $B$ can be fixed, then the radius of the path $r$ is simply proportional to the mass $m$ of the charged particle. Let us examine one such mass spectrometer that has a relatively simple design. (See .) The process begins with an ion source, a device like an electron gun. The ion source gives ions their charge, accelerates them to some velocity $v$, and directs a beam of them into the next stage of the spectrometer. This next region is a velocity selector that only allows particles with a particular value of $v$ to get through. The velocity selector has both an electric field and a magnetic field, perpendicular to one another, producing forces in opposite directions on the ions. Only those ions for which the forces balance travel in a straight line into the next region. If the forces balance, then the electric force $F = qE$ equals the magnetic force $F = qvB$, so that $qE = qvB$. Noting that $q$ cancels, we see that $v = \frac{E}{B}$ is the velocity particles must have to make it through the velocity selector, and further, that $v$ can be selected by varying $E$ and $B$. In the final region, there is only a uniform magnetic field, and so the charged particles move in circular arcs with radii proportional to particle mass. The paths also depend on charge $q$, but since $q$ is in multiples of electron charges, it is easy to determine and to discriminate between ions in different charge states. Mass spectrometry today is used extensively in chemistry and biology laboratories to identify chemical and biological substances according to their mass-to-charge ratios. In medicine, mass spectrometers are used to measure the concentration of isotopes used as tracers. Usually, biological molecules such as proteins are very large, so they are broken down into smaller fragments before being analyzed. Recently, large virus particles have been analyzed as a whole on mass spectrometers. Sometimes a gas chromatograph or high-performance liquid chromatograph provides an initial separation of the large molecules, which are then input into the mass spectrometer. ### Cathode Ray Tubes—CRTs—and the Like What do non-flat-screen TVs, old computer monitors, x-ray machines, and the 2-mile-long Stanford Linear Accelerator have in common? All of them accelerate electrons, making them different versions of the electron gun. Many of these devices use magnetic fields to steer the accelerated electrons. shows the construction of the type of cathode ray tube (CRT) found in some TVs, oscilloscopes, and old computer monitors. Two pairs of coils are used to steer the electrons, one vertically and the other horizontally, to their desired destination. ### Magnetic Resonance Imaging Magnetic resonance imaging (MRI) is one of the most useful and rapidly growing medical imaging tools.
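The two-stage logic of the simple spectrometer above (select a speed, then infer mass from the path radius) is easy to sketch in Python. All field strengths and the measured radius below are assumed values, not numbers from the text.

```python
# Velocity selector: forces balance when q*E = q*v*B1, so v = E / B1.
# Analysis region: r = m*v / (q*B2), so the ion mass is m = q*B2*r / v.
E = 1.0e5      # selector electric field (V/m), assumed
B1 = 1.00      # selector magnetic field (T), assumed
B2 = 1.00      # analysis-region magnetic field (T), assumed
q = 1.60e-19   # singly ionized: one elementary charge (C)
r = 0.0125     # measured path radius (m), assumed

v = E / B1                 # the only speed the selector passes
m = q * B2 * r / v
print(f"Selected speed: {v:.2e} m/s")
print(f"Ion mass:       {m:.2e} kg")   # ~2.0e-26 kg, roughly a carbon ion
```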
It non-invasively produces two-dimensional and three-dimensional images of the body that provide important medical information with none of the hazards of x-rays. MRI is based on an effect called nuclear magnetic resonance (NMR) in which an externally applied magnetic field interacts with the nuclei of certain atoms, particularly those of hydrogen (protons). These nuclei possess their own small magnetic fields, similar to those of electrons and the current loops discussed earlier in this chapter. When placed in an external magnetic field, such nuclei experience a torque that pushes or aligns the nuclei into one of two new energy states—depending on the orientation of their spins (analogous to the N pole and S pole in a bar magnet). Transitions from the lower to higher energy state can be achieved by using an external radio frequency signal to “flip” the orientation of the small magnets. (This is actually a quantum mechanical process. The direction of the nuclear magnetic field is quantized as is energy in the radio waves. We will return to these topics in later chapters.) The specific frequency of the radio waves that are absorbed and reemitted depends sensitively on the type of nucleus, the chemical environment, and the external magnetic field strength. Therefore, this is a resonance phenomenon in which nuclei in a magnetic field act like resonators (analogous to those discussed in the treatment of sound in Oscillatory Motion and Waves) that absorb and reemit only certain frequencies. Hence, the phenomenon is named nuclear magnetic resonance (NMR). NMR has been used for more than 50 years as an analytical tool. It was formulated in 1946 by F. Bloch and E. Purcell, with the 1952 Nobel Prize in Physics going to them for their work. Over the past two decades, NMR has been developed to produce detailed images in a process now called magnetic resonance imaging (MRI), a name coined to avoid the use of the word “nuclear” and the concomitant implication that nuclear radiation is involved. (It is not.) The 2003 Nobel Prize in Medicine went to P. Lauterbur and P. Mansfield for their work with MRI applications. The largest part of the MRI unit is a superconducting magnet that creates a magnetic field, typically between 1 and 2 T in strength, over a relatively large volume. MRI images can be both highly detailed and informative about structures and organ functions. It is helpful that normal and non-normal tissues respond differently to slight changes in the magnetic field. In most medical images, the protons that are hydrogen nuclei are imaged. (About 2/3 of the atoms in the body are hydrogen.) Their location and density give a variety of medically useful information, such as organ function, the condition of tissue (as in the brain), and the shape of structures, such as vertebral disks and knee-joint surfaces. MRI can also be used to follow the movement of certain ions across membranes, yielding information on active transport, osmosis, dialysis, and other phenomena. With excellent spatial resolution, MRI can provide information about tumors, strokes, shoulder injuries, infections, etc. An image requires position information as well as the density of a nuclear type (usually protons). By varying the magnetic field slightly over the volume to be imaged, the resonant frequency of the protons is made to vary with position. Broadcast radio frequencies are swept over an appropriate range and nuclei absorb and reemit them only if the nuclei are in a magnetic field with the correct strength.
The imaging receiver gathers information through the body almost point by point, building up a tissue map. The reception of reemitted radio waves as a function of frequency thus gives position information. These “slices” or cross sections through the body are only several mm thick. The intensity of the reemitted radio waves is proportional to the concentration of the nuclear type being flipped, and it also carries information on the chemical environment in that area of the body. Various techniques are available for enhancing contrast in images and for obtaining more information. Scans called T1, T2, or proton density scans rely on different relaxation mechanisms of nuclei. Relaxation refers to the time it takes for the protons to return to equilibrium after the external field is turned off. This time depends upon tissue type and status (such as inflammation). While MRI images are superior to x rays for certain types of tissue and have none of the hazards of x rays, they do not completely supplant x-ray images. MRI is less effective than x rays for detecting breaks in bone, for example, and in imaging breast tissue, so the two diagnostic tools complement each other. MRI images are also expensive compared to simple x-ray images and tend to be used most often where they supply information not readily obtained from x rays. Another disadvantage of MRI is that the patient is totally enclosed with detectors close to the body for about 30 minutes or more, leading to claustrophobia. It is also difficult for the obese patient to be in the magnet tunnel. New “open-MRI” machines are now available in which the magnet does not completely surround the patient. Over the last decade, the development of much faster scans, called “functional MRI” (fMRI), has allowed us to map the functioning of various regions in the brain responsible for thought and motor control. This technique measures the change in blood flow for activities (thought, experiences, action) in the brain. The nerve cells increase their consumption of oxygen when active. Blood hemoglobin releases oxygen to active nerve cells and has somewhat different magnetic properties when oxygenated than when deoxygenated. With MRI, we can measure this and detect a blood oxygen-dependent signal. Most of the brain scans today use fMRI. ### Other Medical Uses of Magnetic Fields Currents in nerve cells and the heart create magnetic fields like any other currents. These can be measured but with some difficulty since their strengths are about $10^{-6}$ to $10^{-8}$ times less than the Earth’s magnetic field. Recording of the heart’s magnetic field as it beats is called a magnetocardiogram (MCG), while measurement of the brain’s magnetic field is called a magnetoencephalogram (MEG). Both give information that differs from that obtained by measuring the electric fields of these organs (ECGs and EEGs), but they are not yet of sufficient importance to make these difficult measurements common. In both of these techniques, the sensors do not touch the body. MCG can be used in fetal studies, and is probably more sensitive than echocardiography. MCG also looks at the heart’s electrical activity whose voltage output is too small to be recorded by surface electrodes as in EKG. It has the potential of being a rapid scan for early diagnosis of cardiac ischemia (obstruction of blood flow to the heart) or problems with the fetus. MEG can be used to identify abnormal electrical discharges in the brain that produce weak magnetic signals. Therefore, it looks at brain activity, not just brain structure.
It has been used for studies of Alzheimer’s disease and epilepsy. Advances in instrumentation to measure very small magnetic fields have allowed these two techniques to be used more in recent years. What is used is a sensor called a SQUID, for superconducting quantum interference device. This operates at liquid helium temperatures and can measure magnetic fields thousands of times smaller than the Earth’s. Finally, there is a burgeoning market for magnetic cures in which magnets are applied in a variety of ways to the body, from magnetic bracelets to magnetic mattresses. The best that can be said for such practices is that they are apparently harmless, unless the magnets get close to the patient’s computer or magnetic storage disks. Claims are made for a broad spectrum of benefits from cleansing the blood to giving the patient more energy, but clinical studies have not verified these claims, nor is there an identifiable mechanism by which such benefits might occur. ### Section Summary 1. Crossed (perpendicular) electric and magnetic fields act as a velocity filter, giving equal and opposite forces on any charge with velocity perpendicular to the fields and of magnitude $v = \frac{E}{B}$. ### Conceptual Questions ### Problems & Exercises
# Electromagnetic Induction, AC Circuits, and Electrical Technologies ## Introduction to Electromagnetic Induction, AC Circuits and Electrical Technologies Nature’s displays of symmetry are beautiful and alluring. A butterfly’s wings exhibit an appealing symmetry in a complex system. (See .) The laws of physics display symmetries at the most basic level—these symmetries are a source of wonder and imply deeper meaning. Since we place a high value on symmetry, we look for it when we explore nature. The remarkable thing is that we find it. The hint of symmetry between electricity and magnetism found in the preceding chapter will be elaborated upon in this chapter. Specifically, we know that a current creates a magnetic field. If nature is symmetric here, then perhaps a magnetic field can create a current. The Hall effect is a voltage caused by a magnetic force. That voltage could drive a current. Historically, it was very shortly after Oersted discovered currents cause magnetic fields that other scientists asked the following question: Can magnetic fields cause currents? The answer was soon found by experiment to be yes. In 1831, some 12 years after Oersted’s discovery, the English scientist Michael Faraday (1791–1867) and the American scientist Joseph Henry (1797–1878) independently demonstrated that magnetic fields can produce currents. The basic process of generating emfs (electromotive force) and, hence, currents with magnetic fields is known as induction; this process is also called magnetic induction to distinguish it from charging by induction, which utilizes the Coulomb force. Today, currents induced by magnetic fields are essential to our technological society. The ubiquitous generator—found in automobiles, on bicycles, in nuclear power plants, and so on—uses magnetism to generate current. Other devices that use magnetism to induce currents include pickup coils in electric guitars, transformers of every size, certain microphones, airport security gates, and damping mechanisms on sensitive chemical balances. Not so familiar perhaps, but important nevertheless, is that the behavior of AC circuits depends strongly on the effect of magnetic fields on currents.
# Electromagnetic Induction, AC Circuits, and Electrical Technologies ## Induced Emf and Magnetic Flux ### Learning Objectives By the end of this section, you will be able to: 1. Calculate the flux of a uniform magnetic field through a loop of arbitrary orientation. 2. Describe methods to produce an electromotive force (emf) with a magnetic field or magnet and a loop of wire. The apparatus used by Faraday to demonstrate that magnetic fields can create currents is illustrated in . When the switch is closed, a magnetic field is produced in the coil on the top part of the iron ring and transmitted to the coil on the bottom part of the ring. The galvanometer is used to detect any current induced in the coil on the bottom. It was found that each time the switch is closed, the galvanometer detects a current in one direction in the coil on the bottom. (You can also observe this in a physics lab.) Each time the switch is opened, the galvanometer detects a current in the opposite direction. Interestingly, if the switch remains closed or open for any length of time, there is no current through the galvanometer. Closing and opening the switch induces the current. It is the change in magnetic field that creates the current. More basic than the current that flows is the emf that causes it. The current is a result of an emf induced by a changing magnetic field, whether or not there is a path for current to flow. An experiment easily performed and often done in physics labs is illustrated in . An emf is induced in the coil when a bar magnet is pushed in and out of it. Emfs of opposite signs are produced by motion in opposite directions, and the emfs are also reversed by reversing poles. The same results are produced if the coil is moved rather than the magnet—it is the relative motion that is important. The faster the motion, the greater the emf, and there is no emf when the magnet is stationary relative to the coil. The method of inducing an emf used in most electric generators is shown in . A coil is rotated in a magnetic field, producing an alternating current emf, which depends on rotation rate and other factors that will be explored in later sections. Note that the generator is remarkably similar in construction to a motor (another symmetry). So we see that changing the magnitude or direction of a magnetic field produces an emf. Experiments revealed that there is a crucial quantity called the magnetic flux, $\Phi$, given by $\Phi = BA\cos\theta,$ where $B$ is the magnetic field strength over an area $A$, at an angle $\theta$ with the perpendicular to the area as shown in . Any change in magnetic flux $\Phi$ induces an emf. This process is defined to be electromagnetic induction. Units of magnetic flux are $\text{T}\cdot\text{m}^2$. As seen in , $B\cos\theta = B_\perp$, which is the component of $B$ perpendicular to the area $A$. Thus magnetic flux is $\Phi = B_\perp A$, the product of the area and the component of the magnetic field perpendicular to it. All induction, including the examples given so far, arises from some change in magnetic flux $\Phi$. For example, Faraday changed $B$ and hence $\Phi$ when opening and closing the switch in his apparatus (shown in ). This is also true for the bar magnet and coil shown in . When rotating the coil of a generator, the angle $\theta$ and, hence, $\Phi$ is changed. Just how great an emf and what direction it takes depend on the change in $\Phi$ and how rapidly the change is made, as examined in the next section. ### Test Prep for AP Courses ### Section Summary 1. The crucial quantity in induction is magnetic flux $\Phi$, defined to be $\Phi = BA\cos\theta$, where $B$ is the magnetic field strength over an area $A$ at an angle $\theta$ with the perpendicular to the area. 2.
Units of magnetic flux are $\text{T}\cdot\text{m}^2$. 3. Any change in magnetic flux $\Phi$ induces an emf—the process is defined to be electromagnetic induction. ### Conceptual Questions ### Problems & Exercises
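A minimal Python sketch of $\Phi = BA\cos\theta$ makes the angle dependence concrete; the field strength and loop area are assumed values chosen for illustration.

```python
import math

# Magnetic flux: Phi = B * A * cos(theta), with theta measured from the
# perpendicular (normal) to the area.
B = 0.500     # field strength (T), assumed
A = 0.0100    # loop area (m^2), assumed
for deg in (0, 30, 60):
    phi = B * A * math.cos(math.radians(deg))
    print(f"theta = {deg:2d} deg: Phi = {phi:.2e} T·m²")
# At theta = 90 deg the field lies in the plane of the loop and Phi = 0.
```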
# Electromagnetic Induction, AC Circuits, and Electrical Technologies ## Faraday’s Law of Induction: Lenz’s Law ### Learning Objectives By the end of this section, you will be able to: 1. Calculate emf, current, and magnetic fields using Faraday’s Law. 2. Explain the physical results of Lenz’s Law ### Faraday’s and Lenz’s Law Faraday’s experiments showed that the emf induced by a change in magnetic flux depends on only a few factors. First, emf is directly proportional to the change in flux $\Delta\Phi$. Second, emf is greatest when the change in time $\Delta t$ is smallest—that is, emf is inversely proportional to $\Delta t$. Finally, if a coil has $N$ turns, an emf will be produced that is $N$ times greater than for a single coil, so that emf is directly proportional to $N$. The equation for the emf induced by a change in magnetic flux is $\text{emf} = -N\frac{\Delta\Phi}{\Delta t}.$ This relationship is known as Faraday’s law of induction. The units for emf are volts, as is usual. The minus sign in Faraday’s law of induction is very important. The minus means that the emf creates a current $I$ and magnetic field $B$ that oppose the change in flux $\Delta\Phi$. The direction (given by the minus sign) of the emf is so important that it is called Lenz’s law after the Russian Heinrich Lenz (1804–1865), who, like Faraday and Henry, independently investigated aspects of induction. Faraday was aware of the direction, but Lenz stated it so clearly that he is credited for its discovery. (See .) For practice, apply Lenz’s law to the situations shown in and to others that are part of the following text material. ### Applications of Electromagnetic Induction There are many applications of Faraday’s Law of induction, as we will explore in this chapter and others. At this juncture, let us mention several that have to do with data storage and magnetic fields. A very important application has to do with audio and video recording tapes. A plastic tape, coated with iron oxide, moves past a recording head. This recording head is basically a round iron ring about which is wrapped a coil of wire—an electromagnet (). A signal in the form of a varying input current from a microphone or camera goes to the recording head. These signals (which are a function of the signal amplitude and frequency) produce varying magnetic fields at the recording head. As the tape moves past the recording head, the magnetic field orientations of the iron oxide molecules on the tape are changed, thus recording the signal. In the playback mode, the magnetized tape is run past another head, similar in structure to the recording head. The different magnetic field orientations of the iron oxide molecules on the tape induce an emf in the coil of wire in the playback head. This signal then is sent to a loudspeaker or video player. Similar principles apply to computer hard drives, except at a much faster rate. Here recordings are on a coated, spinning disk. Read heads historically were made to work on the principle of induction. However, the input information is carried in digital rather than analog form – a series of 0s and 1s is written upon the spinning hard drive. Today, most hard drive readout devices do not work on the principle of induction, but use a technique known as giant magnetoresistance. (The discovery that weak changes in a magnetic field in a thin film of iron and chromium could bring about much larger changes in electrical resistance was one of the first large successes of nanotechnology.)
Another application of induction is found on the magnetic stripe on the back of your personal credit card as used at the grocery store or the ATM. This works on the same principle as the audio or video tape mentioned in the last paragraph, in which a head reads personal information from your card. Another application of electromagnetic induction is when electrical signals need to be transmitted across a barrier. Consider the cochlear implant shown below. Sound is picked up by a microphone on the outside of the skull and is used to set up a varying magnetic field. A current is induced in a receiver secured in the bone beneath the skin and transmitted to electrodes in the inner ear. Electromagnetic induction can be used in other instances where electric signals need to be conveyed across various media. Another contemporary area of research in which electromagnetic induction is being successfully implemented (and with substantial potential) is transcranial magnetic stimulation. A host of disorders, including depression and hallucinations, can be traced to irregular localized electrical activity in the brain. In transcranial magnetic stimulation, a rapidly varying and very localized magnetic field is placed close to certain sites identified in the brain. Weak electric currents are induced in the identified sites and can result in recovery of electrical functioning in the brain tissue. Sleep apnea (“the cessation of breath”) affects both adults and infants (especially premature babies, and it may be a cause of sudden infant death syndrome [SIDS]). In such individuals, breath can stop repeatedly during their sleep. A cessation of more than 20 seconds can be very dangerous. Stroke, heart failure, and tiredness are just some of the possible consequences for a person having sleep apnea. The concern in infants is the stopping of breath for these longer times. One type of monitor to alert parents when a child is not breathing uses electromagnetic induction. A wire wrapped around the infant’s chest has an alternating current running through it. The expansion and contraction of the infant’s chest as the infant breathes changes the area enclosed by the coil. A pickup coil located nearby has an alternating current induced in it due to the changing magnetic field of the initial wire. If the child stops breathing, there will be a change in the induced current, and so a parent can be alerted. ### Section Summary 1. Faraday’s law of induction states that the emf induced by a change in magnetic flux is $\text{emf} = -N\frac{\Delta\Phi}{\Delta t}$ when flux changes by $\Delta\Phi$ in a time $\Delta t$. 2. If emf is induced in a coil, $N$ is its number of turns. 3. The minus sign means that the emf creates a current $I$ and magnetic field $B$ that oppose the change in flux $\Delta\Phi$—this opposition is known as Lenz’s law. ### Conceptual Questions ### Problems & Exercises
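Faraday's law is easy to evaluate numerically with finite changes; here is a minimal Python sketch in which the coil turns, area, field change, and time interval are all assumed values for illustration.

```python
# Faraday's law with finite changes: emf = -N * (dPhi / dt).
N = 500          # turns in the coil, assumed
A = 0.0400       # coil area (m^2), assumed
dB = 0.200       # change in field through the coil (T), assumed
dt = 0.100       # time over which the change happens (s), assumed

dPhi = dB * A                 # field taken perpendicular to the coil
emf = -N * dPhi / dt
print(f"Induced emf: {emf:.1f} V")   # -40.0 V; the sign encodes Lenz's law
```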
# Electromagnetic Induction, AC Circuits, and Electrical Technologies ## Motional Emf ### Learning Objectives By the end of this section, you will be able to: 1. Calculate emf, force, magnetic field, and work due to the motion of an object in a magnetic field. As we have seen, any change in magnetic flux induces an emf opposing that change—a process known as induction. Motion is one of the major causes of induction. For example, a magnet moved toward a coil induces an emf, and a coil moved toward a magnet produces a similar emf. In this section, we concentrate on motion in a magnetic field that is stationary relative to the Earth, producing what is loosely called motional emf. One situation where motional emf occurs is known as the Hall effect and has already been examined. Charges moving in a magnetic field experience the magnetic force $F = qvB$, which moves opposite charges in opposite directions and produces an emf $\varepsilon = Blv$. We saw that the Hall effect has applications, including measurements of $B$ and $v$. We will now see that the Hall effect is one aspect of the broader phenomenon of induction, and we will find that motional emf can be used as a power source. Consider the situation shown in . A rod is moved at a speed $v$ along a pair of conducting rails separated by a distance $l$ in a uniform magnetic field $B$. The rails are stationary relative to $B$ and are connected to a stationary resistor $R$. The resistor could be anything from a light bulb to a voltmeter. Consider the area enclosed by the moving rod, rails, and resistor. $B$ is perpendicular to this area, and the area is increasing as the rod moves. Thus the magnetic flux enclosed by the rails, rod, and resistor is increasing. When flux changes, an emf is induced according to Faraday’s law of induction. To find the magnitude of emf induced along the moving rod, we use Faraday’s law of induction without the sign: $\text{emf} = N\frac{\Delta\Phi}{\Delta t}.$ Here and below, “emf” implies the magnitude of the emf. In this equation, $N = 1$ and the flux $\Phi = BA\cos\theta$. We have $\theta = 0$ and $\cos\theta = 1$, since $B$ is perpendicular to $A$. Now $\Delta\Phi = \Delta(BA) = B\Delta A$, since $B$ is uniform. Note that the area swept out by the rod is $\Delta A = l\Delta x$. Entering these quantities into the expression for emf yields $\text{emf} = \frac{B\Delta A}{\Delta t} = B\frac{l\Delta x}{\Delta t}.$ Finally, note that $\Delta x/\Delta t = v$, the velocity of the rod. Entering this into the last expression shows that $\text{emf} = Blv \quad (B, l, \text{ and } v \text{ perpendicular})$ is the motional emf. This is the same expression given for the Hall effect previously. To find the direction of the induced field, the direction of the current, and the polarity of the induced emf, we apply Lenz’s law as explained in Faraday's Law of Induction: Lenz's Law. (See (b).) Flux is increasing, since the area enclosed is increasing. Thus the induced field must oppose the existing one and be out of the page. And so the RHR-2 requires that $I$ be counterclockwise, which in turn means the top of the rod is positive as shown. Motional emf also occurs if the magnetic field moves and the rod (or other object) is stationary relative to the Earth (or some observer). We have seen an example of this in the situation where a moving magnet induces an emf in a stationary coil. It is the relative motion that is important. What is emerging in these observations is a connection between magnetic and electric fields. A moving magnetic field produces an electric field through its induced emf. We already have seen that a moving electric field produces a magnetic field—moving charge implies moving electric field and moving charge produces a magnetic field. Motional emfs in the Earth’s weak magnetic field are not ordinarily very large, or we would notice voltage along metal rods, such as a screwdriver, during ordinary motions.
For example, a simple calculation of the motional emf of a 1 m rod moving at 3.0 m/s perpendicular to the Earth’s field gives $\text{emf} = Blv = (5.0\times10^{-5}\ \text{T})(1.0\ \text{m})(3.0\ \text{m/s}) = 150\ \mu\text{V}$. This small value is consistent with experience. There is a spectacular exception, however. In 1992 and 1996, attempts were made with the space shuttle to create large motional emfs. The Tethered Satellite was to be let out on a 20 km length of wire as shown in , to create a 5 kV emf by moving at orbital speed through the Earth’s field. This emf could be used to convert some of the shuttle’s kinetic and potential energy into electrical energy if a complete circuit could be made. To complete the circuit, the stationary ionosphere was to supply a return path for the current to flow. (The ionosphere is the rarefied and partially ionized atmosphere at orbital altitudes. It conducts because of the ionization. The ionosphere serves the same function as the stationary rails and connecting resistor in , without which there would not be a complete circuit.) Drag on the current in the cable due to the magnetic force does the work that reduces the shuttle’s kinetic and potential energy and allows it to be converted to electrical energy. The tests were both unsuccessful. In the first, the cable hung up and could only be extended a couple of hundred meters; in the second, the cable broke when almost fully extended. The relationship $\text{emf} = Blv$ nonetheless indicates feasibility in principle. ### Section Summary 1. An emf induced by motion relative to a magnetic field is called a motional emf and is given by $\text{emf} = Blv \quad (B, l, \text{ and } v \text{ perpendicular}),$ where $l$ is the length of the object moving at speed $v$ relative to the field. ### Conceptual Questions ### Problems & Exercises
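Both scales of motional emf mentioned above can be checked with a short Python sketch. The Earth's field magnitude is a typical round value; the orbital speed is an assumed round number, which is why the tether result below comes out somewhat above the 5 kV the mission targeted (the useful emf depends on the perpendicular field component along the orbit).

```python
# Motional emf: emf = B * l * v, with B, l, and v mutually perpendicular.
B_earth = 5.0e-5   # typical magnitude of Earth's field (T)

# Screwdriver-scale case from the text: 1 m rod at 3.0 m/s.
print(f"1 m rod at 3 m/s: {B_earth * 1.0 * 3.0:.1e} V")   # 1.5e-4 V

# Tethered-satellite scale: 20 km of wire at an assumed orbital speed.
l_tether = 20.0e3    # tether length (m), from the text
v_orbit = 7.8e3      # low-orbit speed (m/s), assumed round value
emf = B_earth * l_tether * v_orbit
print(f"20 km tether:     {emf/1e3:.1f} kV")              # ~7.8 kV
```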
# Electromagnetic Induction, AC Circuits, and Electrical Technologies ## Eddy Currents and Magnetic Damping ### Learning Objectives By the end of this section, you will be able to: 1. Explain the magnitude and direction of an induced eddy current, and the effect this will have on the object it is induced in. 2. Describe several applications of magnetic damping. ### Eddy Currents and Magnetic Damping As discussed in Motional Emf, motional emf is induced when a conductor moves in a magnetic field or when a magnetic field moves relative to a conductor. If motional emf can cause a current loop in the conductor, we refer to that current as an eddy current. Eddy currents can produce significant drag, called magnetic damping, on the motion involved. Consider the apparatus shown in , which swings a pendulum bob between the poles of a strong magnet. (This is another favorite physics lab activity.) If the bob is metal, there is significant drag on the bob as it enters and leaves the field, quickly damping the motion. If, however, the bob is a slotted metal plate, as shown in (b), there is a much smaller effect due to the magnet. There is no discernible effect on a bob made of an insulator. Why is there drag in both directions, and are there any uses for magnetic drag? shows what happens to the metal plate as it enters and leaves the magnetic field. In both cases, it experiences a force opposing its motion. As it enters from the left, flux increases, and so an eddy current is set up (Faraday’s law) in the counterclockwise direction (Lenz’s law), as shown. Only the right-hand side of the current loop is in the field, so that there is an unopposed force on it to the left (RHR-1). When the metal plate is completely inside the field, there is no eddy current if the field is uniform, since the flux remains constant in this region. But when the plate leaves the field on the right, flux decreases, causing an eddy current in the clockwise direction that, again, experiences a force to the left, further slowing the motion. A similar analysis of what happens when the plate swings from the right toward the left shows that its motion is also damped when entering and leaving the field. When a slotted metal plate enters the field, as shown in , an emf is induced by the change in flux, but it is less effective because the slots limit the size of the current loops. Moreover, adjacent loops have currents in opposite directions, and their effects cancel. When an insulating material is used, the eddy current is extremely small, and so magnetic damping on insulators is negligible. If eddy currents are to be avoided in conductors, then they can be slotted or constructed of thin layers of conducting material separated by insulating sheets. ### Applications of Magnetic Damping One use of magnetic damping is found in sensitive laboratory balances. To have maximum sensitivity and accuracy, the balance must be as friction-free as possible. But if it is friction-free, then it will oscillate for a very long time. Magnetic damping is a simple and ideal solution. With magnetic damping, drag is proportional to speed and becomes zero at zero velocity. Thus the oscillations are quickly damped, after which the damping force disappears, allowing the balance to be very sensitive. (See .) In most balances, magnetic damping is accomplished with a conducting disc that rotates in a fixed field. Since eddy currents and magnetic damping occur only in conductors, recycling centers can use magnets to separate metals from other materials. 
Trash is dumped in batches down a ramp, beneath which lies a powerful magnet. Conductors in the trash are slowed by magnetic damping while nonmetals in the trash move on, separating from the metals. (See .) This works for all metals, not just ferromagnetic ones. A magnet can separate out the ferromagnetic materials alone by acting on stationary trash. Other major applications of eddy currents are in metal detectors and braking systems in trains and roller coasters. Portable metal detectors () consist of a primary coil carrying an alternating current and a secondary coil in which a current is induced. An eddy current will be induced in a piece of metal close to the detector, which will cause a change in the induced current within the secondary coil, leading to some sort of signal like a shrill noise. Braking using eddy currents is safer because factors such as rain do not affect the braking and the braking is smoother. However, eddy currents cannot bring the motion to a complete stop, since the force produced decreases with speed. Thus, speed can be reduced from, say, 20 m/s to 5 m/s, but another form of braking is needed to completely stop the vehicle. Generally, powerful rare earth magnets such as neodymium magnets are used in roller coasters. shows rows of magnets in such an application. The vehicle has metal fins (normally containing copper) which pass through the magnetic field, slowing the vehicle down in much the same way as with the pendulum bob shown in . Induction cooktops have electromagnets under their surface. The magnetic field is varied rapidly, producing eddy currents in the base of the pot, causing the pot and its contents to increase in temperature. Induction cooktops have high efficiencies and good response times, but the base of the pot needs to be ferromagnetic (iron or steel) for induction to work. ### Section Summary 1. Current loops induced in moving conductors are called eddy currents. 2. They can create significant drag, called magnetic damping. ### Conceptual Questions ### Problems & Exercises
# Electromagnetic Induction, AC Circuits, and Electrical Technologies ## Electric Generators ### Learning Objectives By the end of this section, you will be able to: 1. Calculate the emf induced in a generator. 2. Calculate the peak emf which can be induced in a particular generator system. Electric generators induce an emf by rotating a coil in a magnetic field, as briefly discussed in Induced Emf and Magnetic Flux. We will now explore generators in more detail. Consider the following example. The emf calculated in is the average over one-fourth of a revolution. What is the emf at any given instant? It varies with the angle $\theta$ between the magnetic field and a perpendicular to the coil. We can get an expression for emf as a function of time by considering the motional emf on a rotating rectangular coil of width $w$ and height $l$ in a uniform magnetic field, as illustrated in . Charges in the wires of the loop experience the magnetic force, because they are moving in a magnetic field. Charges in the vertical wires experience forces parallel to the wire, causing currents. But those in the top and bottom segments feel a force perpendicular to the wire, which does not cause a current. We can thus find the induced emf by considering only the side wires. Motional emf is given to be $\text{emf} = Blv$, where the velocity $v$ is perpendicular to the magnetic field $B$. Here the velocity is at an angle $\theta$ with $B$, so that its component perpendicular to $B$ is $v\sin\theta$ (see ). Thus in this case the emf induced on each side is $\text{emf} = Blv\sin\theta$, and they are in the same direction. The total emf around the loop is then $\text{emf} = 2Blv\sin\theta.$ This expression is valid, but it does not give emf as a function of time. To find the time dependence of emf, we assume the coil rotates at a constant angular velocity $\omega$. The angle $\theta$ is related to angular velocity by $\theta = \omega t$, so that $\text{emf} = 2Blv\sin\omega t.$ Now, linear velocity $v$ is related to angular velocity $\omega$ by $v = r\omega$. Here $r = w/2$, so that $v = (w/2)\omega$, and $\text{emf} = 2Bl\frac{w}{2}\omega\sin\omega t = (lw)B\omega\sin\omega t.$ Noting that the area of the loop is $A = lw$, and allowing for $N$ loops, we find that $\text{emf} = NAB\omega\sin\omega t$ is the emf induced in a generator coil of $N$ turns and area $A$ rotating at a constant angular velocity $\omega$ in a uniform magnetic field $B$. This can also be expressed as $\text{emf} = \text{emf}_0\sin\omega t,$ where $\text{emf}_0 = NAB\omega$ is the maximum (peak) emf. Note that the frequency of the oscillation is $f = \omega/2\pi$, and the period is $T = 1/f = 2\pi/\omega$. shows a graph of emf as a function of time, and it now seems reasonable that AC voltage is sinusoidal. The fact that the peak emf is $\text{emf}_0 = NAB\omega$ makes good sense. The greater the number of coils, the larger their area, and the stronger the field, the greater the output voltage. It is interesting that the faster the generator is spun (greater $\omega$), the greater the emf. This is noticeable on bicycle generators—at least the cheaper varieties. One of the authors as a juvenile found it amusing to ride his bicycle fast enough to burn out his lights, until he had to ride home lightless one dark night. shows a scheme by which a generator can be made to produce pulsed DC. More elaborate arrangements of multiple coils and split rings can produce smoother DC, although electronic rather than mechanical means are usually used to make ripple-free DC. In real life, electric generators look a lot different than the figures in this section, but the principles are the same. The source of mechanical energy that turns the coil can be falling water (hydropower), steam produced by the burning of fossil fuels, or the kinetic energy of wind. shows a cutaway view of a steam turbine; steam moves over the blades connected to the shaft, which rotates the coil within the generator. Generators illustrated in this section look very much like the motors illustrated previously.
This is not coincidental. In fact, a motor becomes a generator when its shaft rotates. Certain early automobiles used their starter motor as a generator. In Back Emf, we shall further explore the action of a motor as a generator.

### Test Prep for AP Courses

### Section Summary

1. An electric generator rotates a coil in a magnetic field, inducing an emf given as a function of time by $\text{emf} = NAB\omega\sin\omega t$, where $A$ is the area of an $N$-turn coil rotated at a constant angular velocity $\omega$ in a uniform magnetic field $B$.
2. The peak emf of a generator is $\text{emf}_0 = NAB\omega$.

### Conceptual Questions

### Problems & Exercises
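The peak-emf relation is easy to check numerically. The following Python sketch evaluates $\text{emf}_0 = NAB\omega$ and the sinusoidal emf for a hypothetical generator; the coil parameters are illustrative assumptions, not values from the text.

```python
import math

# Illustrative generator parameters (assumed values, not from the text)
N = 200                    # number of turns
A = 0.030                  # coil area in m^2
B = 1.25                   # uniform magnetic field in T
omega = 2 * math.pi * 60.0 # angular velocity for 60 Hz rotation, rad/s

emf0 = N * A * B * omega   # peak emf, emf0 = N*A*B*omega
print(f"peak emf = {emf0:.0f} V")  # ~2827 V for these values

# emf = emf0 sin(omega t), sampled over one period T = 2*pi/omega
T = 2 * math.pi / omega
for k in range(5):
    t = k * T / 4
    print(f"t = {t*1e3:5.2f} ms, emf = {emf0 * math.sin(omega * t):8.1f} V")
```

As expected, the emf passes through zero twice per revolution and reaches its peak when the coil's plane is parallel to the field.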
# Electromagnetic Induction, AC Circuits, and Electrical Technologies

## Back Emf

### Learning Objectives

By the end of this section, you will be able to:

1. Explain what back emf is and how it is induced.

It has been noted that motors and generators are very similar. Generators convert mechanical energy into electrical energy, whereas motors convert electrical energy into mechanical energy. Furthermore, motors and generators have the same construction. When the coil of a motor is turned, magnetic flux changes, and an emf (consistent with Faraday’s law of induction) is induced. The motor thus acts as a generator whenever its coil rotates. This will happen whether the shaft is turned by an external input, like a belt drive, or by the action of the motor itself. That is, when a motor is doing work and its shaft is turning, an emf is generated. Lenz’s law tells us the emf opposes any change, so that the input emf that powers the motor will be opposed by the motor’s self-generated emf, called the back emf of the motor. (See the figure.)

Back emf is the generator output of a motor, and so it is proportional to the motor’s angular velocity $\omega$. It is zero when the motor is first turned on, meaning that the coil receives the full driving voltage and the motor draws maximum current when it is on but not turning. As the motor turns faster and faster, the back emf grows, always opposing the driving emf, and reduces the voltage across the coil and the amount of current it draws. This effect is noticeable in a number of situations. When a vacuum cleaner, refrigerator, or washing machine is first turned on, lights in the same circuit dim briefly due to the $IR$ drop produced in feeder lines by the large current drawn by the motor. When a motor first comes on, it draws more current than when it runs at its normal operating speed. When a mechanical load is placed on the motor, like an electric wheelchair going up a hill, the motor slows, the back emf drops, more current flows, and more work can be done. If the motor runs at too low a speed, the larger current can overheat it (via resistive power in the coil, $P = I^2R$), perhaps even burning it out. On the other hand, if there is no mechanical load on the motor, it will increase its angular velocity $\omega$ until the back emf is nearly equal to the driving emf. Then the motor uses only enough energy to overcome friction.

Consider, for example, the motor coils represented in the figure. The coils have an equivalent resistance of $0.400\ \Omega$ and are driven by a 48.0 V emf. Shortly after being turned on, they draw a current $I = V/R = (48.0\ \text{V})/(0.400\ \Omega) = 120\ \text{A}$ and, thus, dissipate $P = I^2R = 5.76\ \text{kW}$ of energy as heat transfer. Under normal operating conditions for this motor, suppose the back emf is 40.0 V. Then at operating speed, the total voltage across the coils is 8.0 V (48.0 V minus the 40.0 V back emf), and the current drawn is $I = V/R = (8.0\ \text{V})/(0.400\ \Omega) = 20.0\ \text{A}$. Under normal load, then, the power dissipated is $P = IV = (20.0\ \text{A})(8.0\ \text{V}) = 160\ \text{W}$. The latter will not cause a problem for this motor, whereas the former 5.76 kW would burn out the coils if sustained.

### Section Summary

1. Any rotating coil will have an induced emf—in motors, this is called back emf, since it opposes the emf input to the motor.

### Conceptual Questions

### Problems & Exercises
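The worked numbers above can be reproduced in a few lines of Python. Note that the $0.400\ \Omega$ coil resistance is reconstructed here from the stated 5.76 kW startup dissipation ($48\ \text{V}/0.400\ \Omega = 120\ \text{A}$, and $120^2 \times 0.400 = 5760\ \text{W}$), so treat it as an inferred value.

```python
R = 0.400        # equivalent coil resistance, ohms (inferred from the 5.76 kW figure)
V_drive = 48.0   # driving emf, V
V_back = 40.0    # back emf at normal operating speed, V

I_start = V_drive / R            # startup current, back emf = 0 -> 120 A
P_start = I_start**2 * R         # startup dissipation -> 5760 W = 5.76 kW
I_run = (V_drive - V_back) / R   # running current -> 20.0 A
P_run = I_run**2 * R             # running dissipation -> 160 W

print(f"startup: I = {I_start:.0f} A, P = {P_start/1000:.2f} kW")
print(f"running: I = {I_run:.1f} A, P = {P_run:.0f} W")
```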
# Electromagnetic Induction, AC Circuits, and Electrical Technologies

## Transformers

### Learning Objectives

By the end of this section, you will be able to:

1. Explain how a transformer works.
2. Calculate voltage, current, and/or number of turns given the other quantities.

Transformers do what their name implies—they transform voltages from one value to another. (The term voltage is used rather than emf, because transformers have internal resistance.) For example, many cell phones, laptops, video games, and power tools and small appliances have a transformer built into their plug-in unit (like that in the figure) that changes 120 V or 240 V AC into whatever voltage the device uses. Transformers are also used at several points in the power distribution systems, as illustrated in the figure. Power is sent long distances at high voltages, because less current is required for a given amount of power, and this means less line loss, as was discussed previously. But high voltages pose greater hazards, so that transformers are employed to produce lower voltage at the user’s location.

The type of transformer considered in this text (see the figure) is based on Faraday’s law of induction and is very similar in construction to the apparatus Faraday used to demonstrate that magnetic fields could cause currents. The two coils are called the primary and secondary coils. In normal use, the input voltage is placed on the primary, and the secondary produces the transformed output voltage. Not only does the iron core trap the magnetic field created by the primary coil, its magnetization increases the field strength. Since the input voltage is AC, a time-varying magnetic flux is sent to the secondary, inducing its AC output voltage.

For the simple transformer shown in the figure, the output voltage $V_s$ depends almost entirely on the input voltage $V_p$ and the ratio of the number of loops in the primary and secondary coils. Faraday’s law of induction for the secondary coil gives its induced output voltage $V_s$ to be

$$V_s = -N_s\frac{\Delta\Phi}{\Delta t},$$

where $N_s$ is the number of loops in the secondary coil and $\Delta\Phi/\Delta t$ is the rate of change of magnetic flux. Note that the output voltage equals the induced emf ($V_s = \text{emf}_s$), provided coil resistance is small (a reasonable assumption for transformers). The cross-sectional area of the coils is the same on either side, as is the magnetic field strength, and so $\Delta\Phi/\Delta t$ is the same on either side. The input primary voltage $V_p$ is also related to changing flux by

$$V_p = -N_p\frac{\Delta\Phi}{\Delta t}.$$

The reason for this is a little more subtle. Lenz’s law tells us that the primary coil opposes the change in flux caused by the input voltage $V_p$, hence the minus sign. (This is an example of self-inductance, a topic to be explored in some detail in later sections.) Assuming negligible coil resistance, Kirchhoff’s loop rule tells us that the induced emf exactly equals the input voltage. Taking the ratio of these last two equations yields a useful relationship:

$$\frac{V_s}{V_p} = \frac{N_s}{N_p}.$$

This is known as the transformer equation, and it simply states that the ratio of the secondary to primary voltages in a transformer equals the ratio of the number of loops in their coils.

The output voltage of a transformer can be less than, greater than, or equal to the input voltage, depending on the ratio of the number of loops in their coils. Some transformers even provide a variable output by allowing connection to be made at different points on the secondary coil. A step-up transformer is one that increases voltage, whereas a step-down transformer decreases voltage.
Assuming, as we have, that resistance is negligible, the electrical power output of a transformer equals its input. This is nearly true in practice—transformer efficiency often exceeds 99%. Equating the power input and output,

$$P_p = I_pV_p = I_sV_s = P_s.$$

Rearranging terms gives

$$\frac{V_s}{V_p} = \frac{I_p}{I_s}.$$

Combining this with $\frac{V_s}{V_p} = \frac{N_s}{N_p}$, we find that

$$\frac{I_s}{I_p} = \frac{N_p}{N_s}$$

is the relationship between the output and input currents of a transformer. So if voltage increases, current decreases. Conversely, if voltage decreases, current increases.

The fact that transformers are based on Faraday’s law of induction makes it clear why we cannot use transformers to change DC voltages. If there is no change in primary voltage, there is no voltage induced in the secondary. One possibility is to connect DC to the primary coil through a switch. As the switch is opened and closed, the secondary produces a voltage like that shown in the figure. This is not really a practical alternative, and AC is in common use wherever it is necessary to increase or decrease voltages.

Transformers have many applications in electrical safety systems, which are discussed in Electrical Safety: Systems and Devices.

### Test Prep for AP Courses

### Section Summary

1. Transformers use induction to transform voltages from one value to another.
2. For a transformer, the voltages across the primary and secondary coils are related by $\frac{V_s}{V_p} = \frac{N_s}{N_p}$, where $V_p$ and $V_s$ are the voltages across primary and secondary coils having $N_p$ and $N_s$ turns.
3. The currents $I_p$ and $I_s$ in the primary and secondary coils are related by $\frac{I_s}{I_p} = \frac{N_p}{N_s}$.
4. A step-up transformer increases voltage and decreases current, whereas a step-down transformer decreases voltage and increases current.

### Conceptual Questions

### Problems & Exercises
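The two ideal-transformer relations above are straightforward to apply in code. Here is a minimal Python sketch; the turn counts and currents are illustrative assumptions, chosen to show a step-down case.

```python
def transformer(V_p, N_p, N_s, I_p):
    """Ideal-transformer relations: V_s/V_p = N_s/N_p and I_s/I_p = N_p/N_s."""
    V_s = V_p * N_s / N_p   # secondary voltage
    I_s = I_p * N_p / N_s   # secondary current (power is conserved)
    return V_s, I_s

# Step-down example: 120 V mains to 12 V, drawing 1.0 A from the mains
V_s, I_s = transformer(V_p=120.0, N_p=500, N_s=50, I_p=1.0)
print(V_s, I_s)                     # 12.0 V, 10.0 A
print(120.0 * 1.0, V_s * I_s)       # input and output power both 120 W
```

Voltage drops by the turns ratio while current rises by the same factor, so the (ideal) power in equals the power out.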
# Electromagnetic Induction, AC Circuits, and Electrical Technologies

## Electrical Safety: Systems and Devices

### Learning Objectives

By the end of this section, you will be able to:

1. Explain how various modern safety features in electric circuits work, with an emphasis on how induction is employed.

Electricity has two hazards. A thermal hazard occurs when there is electrical overheating. A shock hazard occurs when electric current passes through a person. Both hazards have already been discussed. Here we will concentrate on systems and devices that prevent electrical hazards.

The figure shows the schematic for a simple AC circuit with no safety features. This is not how power is distributed in practice. Modern household and industrial wiring requires the three-wire system, shown schematically in the figure, which has several safety features. First is the familiar circuit breaker (or fuse) to prevent thermal overload. Second, there is a protective case around the appliance, such as a toaster or refrigerator. The case’s safety feature is that it prevents a person from touching exposed wires and coming into electrical contact with the circuit, helping prevent shocks.

There are three connections to earth or ground (hereafter referred to as “earth/ground”) shown in the figure. Recall that an earth/ground connection is a low-resistance path directly to the earth. The two earth/ground connections on the neutral wire force it to be at zero volts relative to the earth, giving the wire its name. This wire is therefore safe to touch even if its insulation, usually white, is missing. The neutral wire is the return path for the current to follow to complete the circuit. Furthermore, the two earth/ground connections supply an alternative path through the earth, a good conductor, to complete the circuit. The earth/ground connection closest to the power source could be at the generating plant, while the other is at the user’s location. The third earth/ground is to the case of the appliance, through the green earth/ground wire, forcing the case, too, to be at zero volts. The live or hot wire (hereafter referred to as “live/hot”) supplies voltage and current to operate the appliance. The figure shows a more pictorial version of how the three-wire system is connected through a three-prong plug to an appliance.

A note on insulation color-coding: Insulating plastic is color-coded to identify live/hot, neutral, and ground wires, but these codes vary around the world. Live/hot wires may be brown, red, black, blue, or grey. Neutral wire may be blue, black, or white. Since the same color may be used for live/hot or neutral in different parts of the world, it is essential to determine the color code in your region. The only exception is the earth/ground wire, which is often green but may be yellow or just bare wire. Striped coatings are sometimes used for the benefit of those who are colorblind.

The three-wire system replaced the older two-wire system, which lacks an earth/ground wire. Under ordinary circumstances, insulation on the live/hot and neutral wires prevents the case from being directly in the circuit, so that the earth/ground wire may seem like double protection. Grounding the case solves more than one problem, however. The simplest problem is worn insulation on the live/hot wire that allows it to contact the case, as shown in the figure. Lacking an earth/ground connection (some people cut the third prong off the plug because they only have outdated two-hole receptacles), a severe shock is possible.
This is particularly dangerous in the kitchen, where a good connection to earth/ground is available through water on the floor or a water faucet. With the earth/ground connection intact, the circuit breaker will trip, forcing repair of the appliance. Why are some appliances still sold with two-prong plugs? These have nonconducting cases, such as power tools with impact-resistant plastic cases, and are called doubly insulated. Modern two-prong plugs can be inserted into the asymmetric standard outlet in only one way, to ensure proper connection of live/hot and neutral wires.

Electromagnetic induction causes a more subtle problem that is solved by grounding the case. The AC current in appliances can induce an emf on the case. If grounded, the case voltage is kept near zero, but if the case is not grounded, a shock can occur as pictured in the figure. Current driven by the induced case emf is called a leakage current, although current does not necessarily pass from the resistor to the case.

A ground fault interrupter (GFI) is a safety device found in updated kitchen and bathroom wiring that works based on electromagnetic induction. GFIs compare the currents in the live/hot and neutral wires. When live/hot and neutral currents are not equal, it is almost always because current in the neutral is less than in the live/hot wire. Then some of the current, again called a leakage current, is returning to the voltage source by a path other than through the neutral wire. It is assumed that this path presents a hazard, such as shown in the figure. GFIs are usually set to interrupt the circuit if the leakage current is greater than 5 mA, the accepted maximum harmless shock. Even if the leakage current goes safely to earth/ground through an intact earth/ground wire, the GFI will trip, forcing repair of the leakage.

The figure shows how a GFI works. If the currents in the live/hot and neutral wires are equal, then they induce equal and opposite emfs in the coil. If not, then the circuit breaker will trip.

Another induction-based safety device is the isolation transformer, shown in the figure. Most isolation transformers have equal input and output voltages. Their function is to put a large resistance between the original voltage source and the device being operated. This prevents a complete circuit between them, even in the circumstance shown. There is a complete circuit through the appliance. But there is not a complete circuit for current to flow through the person in the figure, who is touching only one of the transformer’s output wires, and neither output wire is grounded. The appliance is isolated from the original voltage source by the high resistance of the material between the transformer coils, hence the name isolation transformer. For current to flow through the person, it must pass through the high-resistance material between the coils, through the wire, the person, and back through the earth—a path with such a large resistance that the current is negligible.

The basics of electrical safety presented here help prevent many electrical hazards. Electrical safety can be pursued to greater depths. There are, for example, problems related to different earth/ground connections for appliances in close proximity. Many other examples are found in hospitals. Microshock-sensitive patients, for instance, require special protection. For these people, currents as low as 0.1 mA may cause ventricular fibrillation. The interested reader can use the material presented here as a basis for further study.
### Test Prep for AP Courses

### Section Summary

1. Electrical safety systems and devices are employed to prevent thermal and shock hazards.
2. Circuit breakers and fuses interrupt excessive currents to prevent thermal hazards.
3. The three-wire system guards against thermal and shock hazards, utilizing live/hot, neutral, and earth/ground wires, and grounding the neutral wire and case of the appliance.
4. A ground fault interrupter (GFI) prevents shock by detecting the loss of current to unintentional paths.
5. An isolation transformer insulates the device being powered from the original source, also to prevent shock.
6. Many of these devices use induction to perform their basic function.

### Conceptual Questions

### Problems & Exercises
# Electromagnetic Induction, AC Circuits, and Electrical Technologies

## Inductance

### Learning Objectives

By the end of this section, you will be able to:

1. Calculate the inductance of an inductor.
2. Calculate the energy stored in an inductor.
3. Calculate the emf generated in an inductor.

### Inductors

Induction is the process in which an emf is induced by changing magnetic flux. Many examples have been discussed so far, some more effective than others. Transformers, for example, are designed to be particularly effective at inducing a desired voltage and current with very little loss of energy to other forms. Is there a useful physical quantity related to how “effective” a given device is? The answer is yes, and that physical quantity is called inductance.

Mutual inductance is the effect of Faraday’s law of induction for one device upon another, such as the primary coil of a transformer transmitting energy to the secondary. See the figure, where simple coils induce emfs in one another.

In the many cases where the geometry of the devices is fixed, flux is changed by varying current. We therefore concentrate on the rate of change of current, $\Delta I/\Delta t$, as the cause of induction. A change in the current $I_1$ in one device, coil 1 in the figure, induces an $\text{emf}_2$ in the other. We express this in equation form as

$$\text{emf}_2 = -M\frac{\Delta I_1}{\Delta t},$$

where $M$ is defined to be the mutual inductance between the two devices. The minus sign is an expression of Lenz’s law. The larger the mutual inductance $M$, the more effective the coupling. For example, loosely coupled coils like those in the figure have a small $M$ compared with transformer coils. Units for $M$ are $(\text{V}\cdot\text{s})/\text{A} = \Omega\cdot\text{s}$, which is named a henry (H), after Joseph Henry. That is, $1\ \text{H} = 1\ \Omega\cdot\text{s}$.

Nature is symmetric here. If we change the current $I_2$ in coil 2, we induce an $\text{emf}_1$ in coil 1, which is given by

$$\text{emf}_1 = -M\frac{\Delta I_2}{\Delta t},$$

where $M$ is the same as for the reverse process. Transformers run backward with the same effectiveness, or mutual inductance $M$.

A large mutual inductance $M$ may or may not be desirable. We want a transformer to have a large mutual inductance. But an appliance, such as an electric clothes dryer, can induce a dangerous emf on its case if the mutual inductance between its coils and the case is large. One way to reduce mutual inductance $M$ is to counterwind coils to cancel the magnetic field produced. (See the figure.)

Self-inductance, the effect of Faraday’s law of induction of a device on itself, also exists. When, for example, current through a coil is increased, the magnetic field and flux also increase, inducing a counter emf, as required by Lenz’s law. Conversely, if the current is decreased, an emf is induced that opposes the decrease. Most devices have a fixed geometry, and so the change in flux is due entirely to the change in current through the device. The induced emf is related to the physical geometry of the device and the rate of change of current. It is given by

$$\text{emf} = -L\frac{\Delta I}{\Delta t},$$

where $L$ is the self-inductance of the device. A device that exhibits significant self-inductance is called an inductor, and given the symbol shown in the figure. The minus sign is an expression of Lenz’s law, indicating that emf opposes the change in current. Units of self-inductance are henries (H) just as for mutual inductance. The larger the self-inductance $L$ of a device, the greater its opposition to any change in current through it. For example, a large coil with many turns and an iron core has a large $L$ and will not allow current to change quickly. To avoid this effect, a small $L$ must be achieved, such as by counterwinding coils as in the figure. A 1 H inductor is a large inductor.
To illustrate this, consider a device with $L = 1.0\ \text{H}$ that has a 10 A current flowing through it. What happens if we try to shut off the current rapidly, perhaps in only 1.0 ms? An emf, given by $\text{emf} = -L(\Delta I/\Delta t)$, will oppose the change. Thus an emf will be induced given by

$$\text{emf} = -L\frac{\Delta I}{\Delta t} = -(1.0\ \text{H})\frac{(0 - 10\ \text{A})}{1.0\ \text{ms}} = 10{,}000\ \text{V}.$$

The positive sign means this large voltage is in the same direction as the current, opposing its decrease. Such large emfs can cause arcs, damaging switching equipment, and so it may be necessary to change current more slowly.

There are uses for such a large induced voltage. Camera flashes use a battery, two inductors that function as a transformer, and a switching system or oscillator to induce large voltages. (Remember that we need a changing magnetic field, brought about by a changing current, to induce a voltage in another coil.) The oscillator system will do this many times as the battery voltage is boosted to over one thousand volts. (You may hear the high-pitched whine from the transformer as the capacitor is being charged.) A capacitor stores the high voltage for later use in powering the flash. (See the figure.)

It is possible to calculate $L$ for an inductor given its geometry (size and shape) and knowing the magnetic field that it produces. This is difficult in most cases, because of the complexity of the field created. So in this text the inductance $L$ is usually a given quantity. One exception is the solenoid, because it has a very uniform field inside, a nearly zero field outside, and a simple shape. It is instructive to derive an equation for its inductance. We start by noting that the induced emf is given by Faraday’s law of induction as $\text{emf} = -N\frac{\Delta\Phi}{\Delta t}$ and, by the definition of self-inductance, as $\text{emf} = -L\frac{\Delta I}{\Delta t}$. Equating these yields

$$-N\frac{\Delta\Phi}{\Delta t} = -L\frac{\Delta I}{\Delta t}.$$

Solving for $L$ gives

$$L = N\frac{\Delta\Phi}{\Delta I}.$$

This equation for the self-inductance $L$ of a device is always valid. It means that self-inductance $L$ depends on how effective the current is in creating flux; the more effective, the greater $\Delta\Phi/\Delta I$ is.

Let us use this last equation to find an expression for the inductance of a solenoid. Since the area $A$ of a solenoid is fixed, the change in flux is $\Delta\Phi = \Delta(BA) = A\Delta B$. To find $\Delta B$, we note that the magnetic field of a solenoid is given by $B = \mu_0 nI = \mu_0\frac{NI}{\ell}$. (Here $n = N/\ell$, where $N$ is the number of coils and $\ell$ is the solenoid’s length.) Only the current changes, so that $\Delta\Phi = A\Delta B = \mu_0 NA\frac{\Delta I}{\ell}$. Substituting $\Delta\Phi$ into $L = N\frac{\Delta\Phi}{\Delta I}$ gives

$$L = N\frac{\Delta\Phi}{\Delta I} = N\frac{\mu_0 NA\frac{\Delta I}{\ell}}{\Delta I}.$$

This simplifies to

$$L = \frac{\mu_0 N^2 A}{\ell}.$$

This is the self-inductance of a solenoid of cross-sectional area $A$ and length $\ell$. Note that the inductance depends only on the physical characteristics of the solenoid, consistent with its definition.

One common application of inductance is used in traffic lights that can tell when vehicles are waiting at the intersection. An electrical circuit with an inductor is placed in the road beneath the spot where a waiting car will stop. The body of the car increases the inductance, and the circuit changes, sending a signal to the traffic lights to change colors. Similarly, metal detectors used for airport security employ the same technique. A coil or inductor in the metal detector frame acts as both a transmitter and a receiver. The pulsed signal in the transmitter coil induces a signal in the receiver. The self-inductance of the circuit is affected by any metal object in the path. Such detectors can be adjusted for sensitivity and also can indicate the approximate location of metal found on a person. See the figure.

### Energy Stored in an Inductor

We know from Lenz’s law that inductances oppose changes in current. There is an alternative way to look at this opposition that is based on energy. Energy is stored in a magnetic field.
It takes time to build up energy, and it also takes time to deplete energy; hence, there is an opposition to rapid change. In an inductor, the magnetic field is directly proportional to current and to the inductance of the device. It can be shown that the energy stored in an inductor $E_{\text{ind}}$ is given by

$$E_{\text{ind}} = \frac{1}{2}LI^2.$$

This expression is similar to that for the energy stored in a capacitor.

### Section Summary

1. Inductance is the property of a device that tells how effectively it induces an emf in another device.
2. Mutual inductance is the effect of two devices in inducing emfs in each other.
3. A change in current $\Delta I_1/\Delta t$ in one induces an emf $\text{emf}_2$ in the second: $\text{emf}_2 = -M\frac{\Delta I_1}{\Delta t}$, where $M$ is defined to be the mutual inductance between the two devices, and the minus sign is due to Lenz’s law.
4. Symmetrically, a change in current $\Delta I_2/\Delta t$ through the second device induces an emf $\text{emf}_1$ in the first: $\text{emf}_1 = -M\frac{\Delta I_2}{\Delta t}$, where $M$ is the same mutual inductance as in the reverse process.
5. Current changes in a device induce an emf in the device itself.
6. Self-inductance is the effect of the device inducing emf in itself.
7. The device is called an inductor, and the emf induced in it by a change in current through it is $\text{emf} = -L\frac{\Delta I}{\Delta t}$, where $L$ is the self-inductance of the inductor, and $\Delta I/\Delta t$ is the rate of change of current through it. The minus sign indicates that emf opposes the change in current, as required by Lenz’s law.
8. The unit of self- and mutual inductance is the henry (H), where $1\ \text{H} = 1\ \Omega\cdot\text{s}$.
9. The self-inductance $L$ of an inductor is proportional to how much flux changes with current. For an $N$-turn inductor, $L = N\frac{\Delta\Phi}{\Delta I}$.
10. The self-inductance of a solenoid is $L = \frac{\mu_0 N^2 A}{\ell}$, where $N$ is the number of turns in the solenoid, $A$ is its cross-sectional area, $\ell$ is its length, and $\mu_0 = 4\pi\times10^{-7}\ \text{T}\cdot\text{m/A}$ is the permeability of free space.
11. The energy stored in an inductor is $E_{\text{ind}} = \frac{1}{2}LI^2$.

### Conceptual Questions

### Problems & Exercises
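The solenoid and energy formulas are easy to exercise numerically. The following Python sketch uses illustrative solenoid dimensions (assumed, not from the text) to evaluate $L = \mu_0 N^2 A/\ell$ and $E_{\text{ind}} = \frac{1}{2}LI^2$.

```python
import math

mu0 = 4e-7 * math.pi   # permeability of free space, T·m/A

def solenoid_inductance(N, A, ell):
    """Self-inductance of a long solenoid: L = mu0 * N^2 * A / ell."""
    return mu0 * N**2 * A / ell

def inductor_energy(L, I):
    """Energy stored in an inductor: E = (1/2) * L * I^2."""
    return 0.5 * L * I**2

# Illustrative solenoid: 1000 turns, 10 cm^2 cross-section, 10 cm long
L = solenoid_inductance(N=1000, A=1.0e-3, ell=0.10)
print(f"L = {L*1e3:.2f} mH")                    # ~12.57 mH
print(f"E at 5.0 A = {inductor_energy(L, 5.0):.3f} J")  # ~0.157 J
```

Doubling the turn count quadruples $L$ (the $N^2$ dependence), while doubling the length halves it, consistent with the derivation above.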
# Electromagnetic Induction, AC Circuits, and Electrical Technologies

## RL Circuits

### Learning Objectives

By the end of this section, you will be able to:

1. Calculate the current in an RL circuit after a specified number of characteristic time steps.
2. Calculate the characteristic time of an RL circuit.
3. Sketch the current in an RL circuit over time.

We know that the current through an inductor $L$ cannot be turned on or off instantaneously. The change in current changes flux, inducing an emf opposing the change (Lenz’s law). How long does the opposition last? Current will flow and can be turned off, but how long does it take? The figure shows a switching circuit that can be used to examine current through an inductor as a function of time.

When the switch is first moved to position 1 (at $t = 0$), the current is zero and it eventually rises to $I_0 = V/R$, where $R$ is the total resistance of the circuit. The opposition of the inductor $L$ is greatest at the beginning, because the amount of change is greatest. The opposition it poses is in the form of an induced emf, which decreases to zero as the current approaches its final value. The opposing emf is proportional to the amount of change left. This is the hallmark of an exponential behavior, and it can be shown with calculus that

$$I = I_0(1 - e^{-t/\tau})$$

is the current in an RL circuit when switched on. (Note the similarity to the exponential behavior of the voltage on a charging capacitor.) The initial current is zero and approaches $I_0 = V/R$ with a characteristic time constant $\tau$ for an RL circuit, given by

$$\tau = \frac{L}{R},$$

where $\tau$ has units of seconds, since $1\ \text{H}/\Omega = 1\ \text{s}$. In the first period of time $\tau$, the current rises from zero to $0.632I_0$, since $I = I_0(1 - e^{-1}) = 0.632I_0$. The current will go 0.632 of the remainder in the next time $\tau$. A well-known property of the exponential is that the final value is never exactly reached, but 0.632 of the remainder to that value is achieved in every characteristic time $\tau$. In just a few multiples of the time $\tau$, the final value is very nearly achieved, as the graph in part (b) of the figure illustrates.

The characteristic time $\tau$ depends on only two factors, the inductance $L$ and the resistance $R$. The greater the inductance $L$, the greater $\tau$ is, which makes sense since a large inductance is very effective in opposing change. The smaller the resistance $R$, the greater $\tau$ is. Again this makes sense, since a small resistance means a large final current and a greater change to get there. In both cases—large $L$ and small $R$—more energy is stored in the inductor and more time is required to get it in and out.

When the switch in part (a) of the figure is moved to position 2 and cuts the battery out of the circuit, the current drops because of energy dissipation by the resistor. But this is also not instantaneous, since the inductor opposes the decrease in current by inducing an emf in the same direction as the battery that drove the current. Furthermore, there is a certain amount of energy, $(1/2)LI_0^2$, stored in the inductor, and it is dissipated at a finite rate. As the current approaches zero, the rate of decrease slows, since the energy dissipation rate is $I^2R$. Once again the behavior is exponential, and $I$ is found to be

$$I = I_0e^{-t/\tau}.$$

(See part (c) of the figure.) In the first period of time $\tau$ after the switch is closed, the current falls to 0.368 of its initial value, since $I = I_0e^{-1} = 0.368I_0$. In each successive time $\tau$, the current falls to 0.368 of the preceding value, and in a few multiples of $\tau$, the current becomes very close to zero, as seen in the graph in part (c) of the figure.

In summary, when the voltage applied to an inductor is changed, the current also changes, but the change in current lags the change in voltage in an RL circuit.
In Reactance, Inductive and Capacitive, we explore how an RL circuit behaves when a sinusoidal AC voltage is applied.

### Section Summary

1. When a series connection of a resistor and an inductor—an RL circuit—is connected to a voltage source, the time variation of the current is $I = I_0(1 - e^{-t/\tau})$, where $I_0 = V/R$ is the final current.
2. The characteristic time constant $\tau$ is $\tau = L/R$, where $L$ is the inductance and $R$ is the resistance.
3. In the first time constant $\tau$, the current rises from zero to $0.632I_0$, and to 0.632 of the remainder in every subsequent time interval $\tau$.
4. When the inductor is shorted through a resistor, current decreases as $I = I_0e^{-t/\tau}$. Here $I_0$ is the initial current.
5. Current falls to $0.368I_0$ in the first time interval $\tau$, and to 0.368 of the remainder toward zero in each subsequent time $\tau$.

### Problems & Exercises
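A short Python sketch makes the exponential rise concrete. The source voltage, resistance, and inductance below are illustrative assumptions; the printed fractions (0.632, 0.865, 0.950 of the final current) depend only on the number of time constants elapsed.

```python
import math

def rl_current_on(t, V, R, L):
    """Current after switching on: I = (V/R) * (1 - e^(-t/tau)), tau = L/R."""
    tau = L / R
    return (V / R) * (1 - math.exp(-t / tau))

def rl_current_off(t, I0, R, L):
    """Current while decaying through R: I = I0 * e^(-t/tau)."""
    return I0 * math.exp(-t / (L / R))

V, R, L = 12.0, 4.0, 0.020      # assumed: 12 V source, 4.0 ohm, 20 mH
tau = L / R                     # characteristic time, 5.0 ms
I0 = V / R                      # final current, 3.0 A
for n in (1, 2, 3):
    I = rl_current_on(n * tau, V, R, L)
    print(f"t = {n} tau: I = {I:.3f} A ({I/I0:.3f} of I0)")
```

After one $\tau$ the current has reached 63.2% of $I_0$; after three, about 95%, matching the graph described above.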
# Electromagnetic Induction, AC Circuits, and Electrical Technologies

## Reactance, Inductive and Capacitive

### Learning Objectives

By the end of this section, you will be able to:

1. Sketch voltage and current versus time in simple inductive, capacitive, and resistive circuits.
2. Calculate inductive and capacitive reactance.
3. Calculate current and/or voltage in simple inductive, capacitive, and resistive circuits.

Many circuits also contain capacitors and inductors, in addition to resistors and an AC voltage source. We have seen how capacitors and inductors respond to DC voltage when it is switched on and off. We will now explore how inductors and capacitors react to sinusoidal AC voltage.

### Inductors and Inductive Reactance

Suppose an inductor is connected directly to an AC voltage source, as shown in the figure. It is reasonable to assume negligible resistance, since in practice we can make the resistance of an inductor so small that it has a negligible effect on the circuit. Also shown is a graph of voltage and current as functions of time.

The graph in part (b) of the figure starts with voltage at a maximum. Note that the current starts at zero and rises to its peak after the voltage that drives it, just as was the case when DC voltage was switched on in the preceding section. When the voltage becomes negative at point a, the current begins to decrease; it becomes zero at point b, where voltage is its most negative. The current then becomes negative, again following the voltage. The voltage becomes positive at point c and begins to make the current less negative. At point d, the current goes through zero just as the voltage reaches its positive peak to start another cycle. This behavior is summarized as follows: when a sinusoidal voltage is applied to an inductor, the voltage leads the current by one-fourth of a cycle, or by a $90^\circ$ phase angle.

Current lags behind voltage, since inductors oppose change in current. Changing current induces a back emf $-L(\Delta I/\Delta t)$. This is considered to be an effective resistance of the inductor to AC. The rms current $I$ through an inductor $L$ is given by a version of Ohm’s law:

$$I = \frac{V}{X_L},$$

where $V$ is the rms voltage across the inductor and $X_L$ is defined to be

$$X_L = 2\pi fL,$$

with $f$ the frequency of the AC voltage source in hertz. (An analysis of the circuit using Kirchhoff’s loop rule and calculus actually produces this expression.) $X_L$ is called the inductive reactance, because the inductor reacts to impede the current. $X_L$ has units of ohms ($1\ \text{H} = 1\ \Omega\cdot\text{s}$, so that frequency times inductance has units of $(\text{cycles/s})(\Omega\cdot\text{s}) = \Omega$), consistent with its role as an effective resistance. It makes sense that $X_L$ is proportional to $L$, since the greater the induction the greater its resistance to change. It is also reasonable that $X_L$ is proportional to frequency $f$, since greater frequency means greater change in current. That is, $\Delta I/\Delta t$ is large for large frequencies (large $f$, small $\Delta t$). The greater the change, the greater the opposition of an inductor.

Note that although the resistance in the circuit considered is negligible, the AC current is not extremely large because inductive reactance impedes its flow. With AC, there is no time for the current to become extremely large.

### Capacitors and Capacitive Reactance

Consider the capacitor connected directly to an AC voltage source as shown in the figure. The resistance of a circuit like this can be made so small that it has a negligible effect compared with the capacitor, and so we can assume negligible resistance. Voltage across the capacitor and current are graphed as functions of time in the figure.

The graph starts with voltage across the capacitor at a maximum. The current is zero at this point, because the capacitor is fully charged and halts the flow.
Then voltage drops and the current becomes negative as the capacitor discharges. At point a, the capacitor has fully discharged ($Q = 0$ on it) and the voltage across it is zero. The current remains negative between points a and b, causing the voltage on the capacitor to reverse. This is complete at point b, where the current is zero and the voltage has its most negative value. The current becomes positive after point b, neutralizing the charge on the capacitor and bringing the voltage to zero at point c, which allows the current to reach its maximum. Between points c and d, the current drops to zero as the voltage rises to its peak, and the process starts to repeat. Throughout the cycle, the voltage follows what the current is doing by one-fourth of a cycle: when a sinusoidal voltage is applied to a capacitor, the voltage follows the current by one-fourth of a cycle, or by a $90^\circ$ phase angle.

The capacitor is affecting the current, having the ability to stop it altogether when fully charged. Since an AC voltage is applied, there is an rms current, but it is limited by the capacitor. This is considered to be an effective resistance of the capacitor to AC, and so the rms current $I$ in the circuit containing only a capacitor $C$ is given by another version of Ohm’s law to be

$$I = \frac{V}{X_C},$$

where $V$ is the rms voltage and $X_C$ is defined (as with $X_L$, this expression for $X_C$ results from an analysis of the circuit using Kirchhoff’s rules and calculus) to be

$$X_C = \frac{1}{2\pi fC},$$

where $X_C$ is called the capacitive reactance, because the capacitor reacts to impede the current. $X_C$ has units of ohms (verification left as an exercise for the reader). $X_C$ is inversely proportional to the capacitance $C$; the larger the capacitor, the greater the charge it can store and the greater the current that can flow. It is also inversely proportional to the frequency $f$; the greater the frequency, the less time there is to fully charge the capacitor, and so it impedes current less.

Although a capacitor is basically an open circuit, there is an rms current in a circuit with an AC voltage applied to a capacitor. This is because the voltage is continually reversing, charging and discharging the capacitor. If the frequency goes to zero (DC), $X_C$ tends to infinity, and the current is zero once the capacitor is charged. At very high frequencies, the capacitor’s reactance tends to zero—it has a negligible reactance and does not impede the current (it acts like a simple wire). Capacitors have the opposite effect on AC circuits that inductors have.

### Resistors in an AC Circuit

Just as a reminder, consider the figure, which shows an AC voltage applied to a resistor and a graph of voltage and current versus time. The voltage and current are exactly in phase in a resistor. There is no frequency dependence to the behavior of plain resistance in a circuit:

$$I = \frac{V}{R}.$$

### Section Summary

1. For inductors in AC circuits, we find that when a sinusoidal voltage is applied to an inductor, the voltage leads the current by one-fourth of a cycle, or by a $90^\circ$ phase angle.
2. The opposition of an inductor to a change in current is expressed as a type of AC resistance.
3. Ohm’s law for an inductor is $I = \frac{V}{X_L}$, where $V$ is the rms voltage across the inductor.
4. $X_L$ is defined to be the inductive reactance, given by $X_L = 2\pi fL$, with $f$ the frequency of the AC voltage source in hertz.
5. Inductive reactance $X_L$ has units of ohms and is greatest at high frequencies.
6. For capacitors, we find that when a sinusoidal voltage is applied to a capacitor, the voltage follows the current by one-fourth of a cycle, or by a $90^\circ$ phase angle.
7. Since a capacitor can stop current when fully charged, it limits current and offers another form of AC resistance; Ohm’s law for a capacitor is $I = \frac{V}{X_C}$, where $V$ is the rms voltage across the capacitor.
8. $X_C$ is defined to be the capacitive reactance, given by $X_C = \frac{1}{2\pi fC}$.
9. $X_C$ has units of ohms and is greatest at low frequencies.

### Conceptual Questions

### Problems & Exercises
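The opposite frequency dependence of the two reactances is easy to see numerically. The component values below (a 3.0 mH inductor and a 5.0 μF capacitor) are illustrative assumptions.

```python
import math

def X_L(f, L):
    """Inductive reactance in ohms: X_L = 2*pi*f*L."""
    return 2 * math.pi * f * L

def X_C(f, C):
    """Capacitive reactance in ohms: X_C = 1 / (2*pi*f*C)."""
    return 1 / (2 * math.pi * f * C)

L, C = 3.0e-3, 5.0e-6   # assumed component values
for f in (60.0, 10e3):
    print(f"f = {f:8.0f} Hz: X_L = {X_L(f, L):7.2f} ohm, X_C = {X_C(f, C):7.2f} ohm")
# 60 Hz:  X_L ~ 1.13 ohm,  X_C ~ 530.5 ohm  (capacitor dominates)
# 10 kHz: X_L ~ 188.5 ohm, X_C ~ 3.18 ohm   (inductor dominates)
```

The inductor impedes high frequencies while the capacitor impedes low ones, which is exactly the behavior the RLC resonance analysis in the next section exploits.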
# Electromagnetic Induction, AC Circuits, and Electrical Technologies

## RLC Series AC Circuits

### Learning Objectives

By the end of this section, you will be able to:

1. Calculate the impedance, phase angle, resonant frequency, power, power factor, voltage, and/or current in a RLC series circuit.
2. Draw the circuit diagram for an RLC series circuit.
3. Explain the significance of the resonant frequency.

### Impedance

When alone in an AC circuit, inductors, capacitors, and resistors all impede current. How do they behave when all three occur together? Interestingly, their individual resistances in ohms do not simply add. Because inductors and capacitors behave in opposite ways, they partially to totally cancel each other’s effect. The figure shows an RLC series circuit with an AC voltage source, the behavior of which is the subject of this section. The crux of the analysis of an RLC circuit is the frequency dependence of $X_L$ and $X_C$, and the effect they have on the phase of voltage versus current (established in the preceding section). These give rise to the frequency dependence of the circuit, with important “resonance” features that are the basis of many applications, such as radio tuners.

The combined effect of resistance $R$, inductive reactance $X_L$, and capacitive reactance $X_C$ is defined to be impedance, an AC analogue to resistance in a DC circuit. Current, voltage, and impedance in an RLC circuit are related by an AC version of Ohm’s law:

$$I_0 = \frac{V_0}{Z}.$$

Here $I_0$ is the peak current, $V_0$ the peak source voltage, and $Z$ is the impedance of the circuit. The units of impedance are ohms, and its effect on the circuit is as you might expect: the greater the impedance, the smaller the current. To get an expression for $Z$ in terms of $R$, $X_L$, and $X_C$, we will now examine how the voltages across the various components are related to the source voltage. Those voltages are labeled $V_R$, $V_L$, and $V_C$ in the figure.

Conservation of charge requires current to be the same in each part of the circuit at all times, so that we can say the currents in $R$, $L$, and $C$ are equal and in phase. But we know from the preceding section that the voltage across the inductor leads the current by one-fourth of a cycle, the voltage across the capacitor follows the current by one-fourth of a cycle, and the voltage across the resistor is exactly in phase with the current. The figure shows these relationships in one graph, as well as the total voltage around the circuit $V = V_R + V_L + V_C$, where all four voltages are the instantaneous values. According to Kirchhoff’s loop rule, the total voltage around the circuit is also the voltage of the source.

You can see from the figure that while $V_R$ is in phase with the current, $V_L$ leads by $90^\circ$, and $V_C$ follows by $90^\circ$. Thus $V_L$ and $V_C$ are $180^\circ$ out of phase (crest to trough) and tend to cancel, although not completely unless they have the same magnitude. Since the peak voltages are not aligned (not in phase), the peak voltage $V_0$ of the source does not equal the sum of the peak voltages across $R$, $L$, and $C$. The actual relationship is

$$V_0 = \sqrt{V_{0R}^2 + (V_{0L} - V_{0C})^2},$$

where $V_{0R}$, $V_{0L}$, and $V_{0C}$ are the peak voltages across $R$, $L$, and $C$, respectively. Now, using Ohm’s law and definitions from Reactance, Inductive and Capacitive, we substitute $V_0 = I_0Z$ into the above, as well as $V_{0R} = I_0R$, $V_{0L} = I_0X_L$, and $V_{0C} = I_0X_C$, yielding

$$I_0Z = \sqrt{I_0^2R^2 + (I_0X_L - I_0X_C)^2}.$$

$I_0$ cancels to yield an expression for $Z$:

$$Z = \sqrt{R^2 + (X_L - X_C)^2},$$

which is the impedance of an RLC series AC circuit. For circuits without a resistor, take $R = 0$; for those without an inductor, take $X_L = 0$; and for those without a capacitor, take $X_C = 0$.

### Resonance in RLC Series AC Circuits

How does an RLC circuit behave as a function of the frequency of the driving voltage source?
Combining Ohm’s law, $I_{\text{rms}} = V_{\text{rms}}/Z$, and the expression for impedance $Z$ from above gives

$$I_{\text{rms}} = \frac{V_{\text{rms}}}{\sqrt{R^2 + (X_L - X_C)^2}}.$$

The reactances vary with frequency, with $X_L$ large at high frequencies and $X_C$ large at low frequencies, as we have seen in three previous examples. At some intermediate frequency $f_0$, the reactances will be equal and cancel, giving $Z = R$—this is a minimum value for impedance, and a maximum value for $I_{\text{rms}}$ results. We can get an expression for $f_0$ by taking

$$X_L = X_C.$$

Substituting the definitions of $X_L$ and $X_C$,

$$2\pi f_0L = \frac{1}{2\pi f_0C}.$$

Solving this expression for $f_0$ yields

$$f_0 = \frac{1}{2\pi\sqrt{LC}},$$

where $f_0$ is the resonant frequency of an RLC series circuit. This is also the natural frequency at which the circuit would oscillate if not driven by the voltage source. At $f_0$, the effects of the inductor and capacitor cancel, so that $Z = R$, and $I_{\text{rms}}$ is a maximum.

Resonance in AC circuits is analogous to mechanical resonance, where resonance is defined to be a forced oscillation—in this case, forced by the voltage source—at the natural frequency of the system. The receiver in a radio is an RLC circuit that oscillates best at its $f_0$. A variable capacitor is often used to adjust $f_0$ to receive a desired frequency and to reject others. The figure is a graph of current as a function of frequency, illustrating a resonant peak in $I_{\text{rms}}$ at $f_0$. The two curves are for two different circuits, which differ only in the amount of resistance in them. The peak is lower and broader for the higher-resistance circuit. Thus the higher-resistance circuit does not resonate as strongly and would not be as selective in a radio receiver, for example.

### Power in RLC Series AC Circuits

If current varies with frequency in an RLC circuit, then the power delivered to it also varies with frequency. But the average power is not simply current times voltage, as it is in purely resistive circuits. As was seen in the figure, voltage and current are out of phase in an RLC circuit. There is a phase angle $\phi$ between the source voltage $V$ and the current $I$, which can be found from

$$\cos\phi = \frac{R}{Z}.$$

For example, at the resonant frequency or in a purely resistive circuit $Z = R$, so that $\cos\phi = 1$. This implies that $\phi = 0^\circ$ and that voltage and current are in phase, as expected for resistors. At other frequencies, average power is less than at resonance. This is both because voltage and current are out of phase and because $I_{\text{rms}}$ is lower. The fact that source voltage and current are out of phase affects the power delivered to the circuit. It can be shown that the average power is

$$P_{\text{ave}} = I_{\text{rms}}V_{\text{rms}}\cos\phi.$$

Thus $\cos\phi$ is called the power factor, which can range from 0 to 1. Power factors near 1 are desirable when designing an efficient motor, for example. At the resonant frequency, $\cos\phi = 1$.

Power delivered to an RLC series AC circuit is dissipated by the resistance alone. The inductor and capacitor have energy input and output but do not dissipate it out of the circuit. Rather they transfer energy back and forth to one another, with the resistor dissipating exactly what the voltage source puts into the circuit. This assumes no significant electromagnetic radiation from the inductor and capacitor, such as radio waves. Such radiation can happen and may even be desired, as we will see in the next chapter on electromagnetic radiation, but it can also be suppressed as is the case in this chapter. The circuit is analogous to the wheel of a car driven over a corrugated road as shown in the figure. The regularly spaced bumps in the road are analogous to the voltage source, driving the wheel up and down. The shock absorber is analogous to the resistance damping and limiting the amplitude of the oscillation.
Energy within the system goes back and forth between kinetic (analogous to maximum current, and energy stored in an inductor) and potential energy stored in the car spring (analogous to no current, and energy stored in the electric field of a capacitor). The amplitude of the wheels’ motion is a maximum if the bumps in the road are hit at the resonant frequency.

A pure LC circuit with negligible resistance oscillates at $f_0$, the same resonant frequency as an RLC circuit. It can serve as a frequency standard or clock circuit—for example, in a digital wristwatch. With a very small resistance, only a very small energy input is necessary to maintain the oscillations. The circuit is analogous to a car with no shock absorbers. Once it starts oscillating, it continues at its natural frequency for some time. The figure shows the analogy between an LC circuit and a mass on a spring.

### Section Summary

1. The AC analogy to resistance is impedance $Z$, the combined effect of resistors, inductors, and capacitors, defined by the AC version of Ohm’s law: $I_0 = \frac{V_0}{Z}$, where $I_0$ is the peak current and $V_0$ is the peak source voltage.
2. Impedance has units of ohms and is given by $Z = \sqrt{R^2 + (X_L - X_C)^2}$.
3. The resonant frequency $f_0$, at which $X_L = X_C$, is $f_0 = \frac{1}{2\pi\sqrt{LC}}$.
4. In an AC circuit, there is a phase angle $\phi$ between source voltage $V$ and the current $I$, which can be found from $\cos\phi = \frac{R}{Z}$.
5. $\phi = 0^\circ$ for a purely resistive circuit or an RLC circuit at resonance.
6. The average power delivered to an RLC circuit is affected by the phase angle and is given by $P_{\text{ave}} = I_{\text{rms}}V_{\text{rms}}\cos\phi$, where $\cos\phi$ is called the power factor, which ranges from 0 to 1.

### Conceptual Questions

### Problems & Exercises
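The impedance and resonance formulas combine naturally into a short Python sketch. The component values are illustrative assumptions; the point is that $Z$ collapses to $R$ at the resonant frequency and grows away from it.

```python
import math

def impedance(R, f, L, C):
    """Series RLC impedance: Z = sqrt(R^2 + (X_L - X_C)^2)."""
    XL = 2 * math.pi * f * L
    XC = 1 / (2 * math.pi * f * C)
    return math.sqrt(R**2 + (XL - XC)**2)

def resonant_frequency(L, C):
    """f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1 / (2 * math.pi * math.sqrt(L * C))

R, L, C = 40.0, 3.00e-3, 5.00e-6          # assumed component values
f0 = resonant_frequency(L, C)             # ~1.30 kHz
print(f"f0 = {f0:.0f} Hz")
print(f"Z at f0    = {impedance(R, f0, L, C):.1f} ohm")   # collapses to R = 40 ohm
print(f"Z at 60 Hz = {impedance(R, 60.0, L, C):.1f} ohm") # X_C dominates, ~531 ohm
```

The power factor follows immediately as `cos_phi = R / impedance(R, f, L, C)`, equal to 1 exactly at `f0`.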
# Electromagnetic Waves

## Introduction to Electromagnetic Waves

The beauty of a coral reef, the warm radiance of sunshine, the sting of sunburn, the X-ray revealing a broken bone, even microwave popcorn—all are brought to us by electromagnetic waves. The list of the various types of electromagnetic waves, ranging from radio transmission waves to nuclear gamma-ray ($\gamma$-ray) emissions, is interesting in itself. Even more intriguing is that all of these widely varied phenomena are different manifestations of the same thing—electromagnetic waves. (See the figure.)

What are electromagnetic waves? How are they created, and how do they travel? How can we understand and organize their widely varying properties? What is their relationship to electric and magnetic effects? These and other questions will be explored.

### Discovering a New Phenomenon

It is worth noting at the outset that the general phenomenon of electromagnetic waves was predicted by theory before it was realized that light is a form of electromagnetic wave. The prediction was made by James Clerk Maxwell in the mid-19th century when he formulated a single theory combining all the electric and magnetic effects known by scientists at that time. “Electromagnetic waves” was the name he gave to the phenomena his theory predicted.

Such a theoretical prediction followed by experimental verification is an indication of the power of science in general, and physics in particular. The underlying connections and unity of physics allow certain great minds to solve puzzles without having all the pieces. The prediction of electromagnetic waves is one of the most spectacular examples of this power. Certain others, such as the prediction of antimatter, will be discussed in later modules.
# Electromagnetic Waves

## Maxwell’s Equations: Electromagnetic Waves Predicted and Observed

### Learning Objectives

By the end of this section, you will be able to:

1. Restate Maxwell’s equations.

The Scotsman James Clerk Maxwell (1831–1879) is regarded as the greatest theoretical physicist of the 19th century. (See the figure.) Although he died young, Maxwell not only formulated a complete electromagnetic theory, represented by Maxwell’s equations, he also developed the kinetic theory of gases and made significant contributions to the understanding of color vision and the nature of Saturn’s rings.

Maxwell brought together all the work that had been done by brilliant physicists such as Oersted, Coulomb, Gauss, and Faraday, and added his own insights to develop the overarching theory of electromagnetism. Maxwell’s equations are paraphrased here in words because their mathematical statement is beyond the level of this text. However, the equations illustrate how apparently simple mathematical statements can elegantly unite and express a multitude of concepts—why mathematics is the language of science.

Maxwell’s equations encompass the major laws of electricity and magnetism. What is not so apparent is the symmetry that Maxwell introduced in his mathematical framework. Especially important is his addition of the hypothesis that changing electric fields create magnetic fields. This is exactly analogous (and symmetric) to Faraday’s law of induction and had been suspected for some time, but fits beautifully into Maxwell’s equations.

Symmetry is apparent in nature in a wide range of situations. In contemporary research, symmetry plays a major part in the search for sub-atomic particles using massive multinational particle accelerators such as the new Large Hadron Collider at CERN.

Since changing electric fields create relatively weak magnetic fields, they could not be easily detected at the time of Maxwell’s hypothesis. Maxwell realized, however, that oscillating charges, like those in AC circuits, produce changing electric fields. He predicted that these changing fields would propagate from the source like waves generated on a lake by a jumping fish.

The waves predicted by Maxwell would consist of oscillating electric and magnetic fields—defined to be an electromagnetic wave (EM wave). Electromagnetic waves would be capable of exerting forces on charges great distances from their source, and they might thus be detectable. Maxwell calculated that electromagnetic waves would propagate at a speed given by the equation

$$c = \frac{1}{\sqrt{\mu_0\epsilon_0}}.$$

When the values for $\mu_0$ and $\epsilon_0$ are entered into the equation for $c$, we find that

$$c = \frac{1}{\sqrt{(8.85\times10^{-12}\ \frac{\text{C}^2}{\text{N}\cdot\text{m}^2})(4\pi\times10^{-7}\ \frac{\text{T}\cdot\text{m}}{\text{A}})}} = 3.00\times10^8\ \text{m/s},$$

which is the speed of light. In fact, Maxwell concluded that light is an electromagnetic wave having such wavelengths that it can be detected by the eye.

Other wavelengths should exist—it remained to be seen if they did. If so, Maxwell’s theory and remarkable predictions would be verified, the greatest triumph of physics since Newton. Experimental verification came within a few years, but not before Maxwell’s death.

### Hertz’s Observations

The German physicist Heinrich Hertz (1857–1894) was the first to generate and detect certain types of electromagnetic waves in the laboratory. Starting in 1887, he performed a series of experiments that not only confirmed the existence of electromagnetic waves, but also verified that they travel at the speed of light. Hertz used an AC RLC (resistor-inductor-capacitor) circuit that resonates at a known frequency $f_0 = \frac{1}{2\pi\sqrt{LC}}$ and connected it to a loop of wire as shown in the figure.
High voltages induced across the gap in the loop produced sparks that were visible evidence of the current in the circuit and that helped generate electromagnetic waves. Across the laboratory, Hertz had another loop attached to another RLC circuit, which could be tuned (as the dial on a radio) to the same resonant frequency as the first and could, thus, be made to receive electromagnetic waves. This loop also had a gap across which sparks were generated, giving solid evidence that electromagnetic waves had been received.

Hertz also studied the reflection, refraction, and interference patterns of the electromagnetic waves he generated, verifying their wave character. He was able to determine wavelength from the interference patterns, and knowing their frequency, he could calculate the propagation speed using the equation $v = f\lambda$ (velocity, or speed, equals frequency times wavelength). Hertz was thus able to prove that electromagnetic waves travel at the speed of light. The SI unit for frequency, the hertz ($1\ \text{Hz} = 1\ \text{cycle/s}$), is named in his honor.

### Section Summary

1. Electromagnetic waves consist of oscillating electric and magnetic fields and propagate at the speed of light $c$. They were predicted by Maxwell, who also showed that $c = \frac{1}{\sqrt{\mu_0\epsilon_0}},$ where $\mu_0$ is the permeability of free space and $\epsilon_0$ is the permittivity of free space.
2. Maxwell’s prediction of electromagnetic waves resulted from his formulation of a complete and symmetric theory of electricity and magnetism, known as Maxwell’s equations.
3. These four equations are paraphrased in this text, rather than presented numerically, and encompass the major laws of electricity and magnetism. First is Gauss’s law for electricity, second is Gauss’s law for magnetism, third is Faraday’s law of induction, including Lenz’s law, and fourth is Ampere’s law in a symmetric formulation that adds another source of magnetism—changing electric fields.

### Problems & Exercises
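Maxwell's numerical result is a one-line calculation, reproduced here in Python with the standard SI values of the two constants.

```python
import math

mu0 = 4e-7 * math.pi   # permeability of free space, T·m/A
eps0 = 8.85e-12        # permittivity of free space, C^2/(N·m^2)

c = 1 / math.sqrt(mu0 * eps0)   # c = 1 / sqrt(mu0 * eps0)
print(f"c = {c:.3e} m/s")       # ~3.00e8 m/s, the measured speed of light
```

That two constants measured in purely electric and magnetic experiments combine to give the speed of light is the numerical heart of Maxwell's conclusion that light is an electromagnetic wave.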
# Electromagnetic Waves

## Production of Electromagnetic Waves

### Learning Objectives

By the end of this section, you will be able to:

1. Describe the electric and magnetic waves as they move out from a source, such as an AC generator.
2. Explain the mathematical relationship between the magnetic field strength and the electrical field strength.
3. Calculate the maximum strength of the magnetic field in an electromagnetic wave, given the maximum electric field strength.

We can get a good understanding of electromagnetic waves (EM) by considering how they are produced. Whenever a current varies, associated electric and magnetic fields vary, moving out from the source like waves. Perhaps the easiest situation to visualize is a varying current in a long straight wire, produced by an AC generator at its center, as illustrated in the figure.

The electric field ($\mathbf{E}$) shown surrounding the wire is produced by the charge distribution on the wire. Both the $\mathbf{E}$ and the charge distribution vary as the current changes. The changing field propagates outward at the speed of light. There is an associated magnetic field ($\mathbf{B}$) which propagates outward as well (see the figure). The electric and magnetic fields are closely related and propagate as an electromagnetic wave. This is what happens in broadcast antennae such as those in radio and TV stations.

Closer examination of the one complete cycle shown in the figure reveals the periodic nature of the generator-driven charges oscillating up and down in the antenna and the electric field produced. At time $t = 0$, there is the maximum separation of charge, with negative charges at the top and positive charges at the bottom, producing the maximum magnitude of the electric field (or $E$-field) in the upward direction. One-fourth of a cycle later, there is no charge separation and the field next to the antenna is zero, while the maximum $E$-field has moved away at speed $c$.

As the process continues, the charge separation reverses and the field reaches its maximum downward value, returns to zero, and rises to its maximum upward value at the end of one complete cycle. The outgoing wave has an amplitude proportional to the maximum separation of charge. Its wavelength $\lambda$ is proportional to the period of the oscillation and, hence, is smaller for short periods or high frequencies. (As usual, wavelength and frequency are inversely proportional.)

### Electric and Magnetic Waves: Moving Together

Following Ampere’s law, current in the antenna produces a magnetic field, as shown in the figure. The relationship between $\mathbf{E}$ and $\mathbf{B}$ is shown at one instant in part (a) of the figure. As the current varies, the magnetic field varies in magnitude and direction. The magnetic field lines also propagate away from the antenna at the speed of light, forming the other part of the electromagnetic wave, as seen in part (b) of the figure. The magnetic part of the wave has the same period and wavelength as the electric part, since they are both produced by the same movement and separation of charges in the antenna.

The electric and magnetic waves are shown together at one instant in time in the figure. The electric and magnetic fields produced by a long straight wire antenna are exactly in phase. Note that they are perpendicular to one another and to the direction of propagation, making this a transverse wave.

Electromagnetic waves generally propagate out from a source in all directions, sometimes forming a complex radiation pattern. A linear antenna like this one will not radiate parallel to its length, for example.
The wave is shown in one direction from the antenna in the figure to illustrate its basic characteristics.

Instead of the AC generator, the antenna can also be driven by an AC circuit. In fact, charges radiate whenever they are accelerated. But while a current in a circuit needs a complete path, an antenna has a varying charge distribution forming a standing wave, driven by the AC. The dimensions of the antenna are critical for determining the frequency of the radiated electromagnetic waves. This is a resonant phenomenon and when we tune radios or TV, we vary electrical properties to achieve appropriate resonant conditions in the antenna.

### Receiving Electromagnetic Waves

Electromagnetic waves carry energy away from their source, similar to a sound wave carrying energy away from a standing wave on a guitar string. An antenna for receiving EM signals works in reverse. And like antennas that produce EM waves, receiver antennas are specially designed to resonate at particular frequencies. An incoming electromagnetic wave accelerates electrons in the antenna, setting up a standing wave. If the radio or TV is switched on, electrical components pick up and amplify the signal formed by the accelerating electrons. The signal is then converted to audio and/or video format. Sometimes big receiver dishes are used to focus the signal onto an antenna.

In fact, charges radiate whenever they are accelerated. When designing circuits, we often assume that energy does not quickly escape AC circuits, and mostly this is true. A broadcast antenna is specially designed to enhance the rate of electromagnetic radiation, and shielding is necessary to keep the radiation close to zero. Some familiar phenomena are based on the production of electromagnetic waves by varying currents. Your microwave oven, for example, sends electromagnetic waves, called microwaves, from a concealed antenna that has an oscillating current imposed on it.

### Relating $E$-Field and $B$-Field Strengths

There is a relationship between the $E$- and $B$-field strengths in an electromagnetic wave. This can be understood by again considering the antenna just described. The stronger the $E$-field created by a separation of charge, the greater the current and, hence, the greater the $B$-field created. Since current is directly proportional to voltage (Ohm’s law) and voltage is directly proportional to $E$-field strength, the two should be directly proportional. It can be shown that the magnitudes of the fields do have a constant ratio, equal to the speed of light. That is,

$$\frac{E}{B} = c$$

is the ratio of $E$-field strength to $B$-field strength in any electromagnetic wave. This is true at all times and at all locations in space. A simple and elegant result.

The result of this example is consistent with the statement made in the module Maxwell’s Equations: Electromagnetic Waves Predicted and Observed that changing electric fields create relatively weak magnetic fields. They can be detected in electromagnetic waves, however, by taking advantage of the phenomenon of resonance, as Hertz did. A system with the same natural frequency as the electromagnetic wave can be made to oscillate. All radio and TV receivers use this principle to pick up and then amplify weak electromagnetic waves, while rejecting all others not at their resonant frequency.

### Test Prep for AP Courses

### Section Summary

1. Electromagnetic waves are created by oscillating charges (which radiate whenever accelerated) and have the same frequency as the oscillation.
The result of this example is consistent with the statement made in the module Maxwell’s Equations: Electromagnetic Waves Predicted and Observed that changing electric fields create relatively weak magnetic fields. They can be detected in electromagnetic waves, however, by taking advantage of the phenomenon of resonance, as Hertz did. A system with the same natural frequency as the electromagnetic wave can be made to oscillate. All radio and TV receivers use this principle to pick up and then amplify weak electromagnetic waves, while rejecting all others not at their resonant frequency.

### Test Prep for AP Courses
### Section Summary
1. Electromagnetic waves are created by oscillating charges (which radiate whenever accelerated) and have the same frequency as the oscillation.
2. Since the electric and magnetic fields in most electromagnetic waves are perpendicular to the direction in which the wave moves, it is ordinarily a transverse wave.
3. The strengths of the electric and magnetic parts of the wave are related by E/B = c, which implies that the magnetic field B is very weak relative to the electric field E.
### Conceptual Questions
### Problems & Exercises
# Electromagnetic Waves
## The Electromagnetic Spectrum
### Learning Objectives
By the end of this section, you will be able to:
1. List three “rules of thumb” that apply to the different frequencies along the electromagnetic spectrum.
2. Explain why the higher the frequency, the shorter the wavelength of an electromagnetic wave.
3. Draw a simplified electromagnetic spectrum, indicating the relative positions, frequencies, and spacing of the different types of radiation bands.
4. List and explain the different methods by which electromagnetic waves are produced across the spectrum.

In this module we examine how electromagnetic waves are classified into categories such as radio, infrared, ultraviolet, and so on, so that we can understand some of their similarities as well as some of their differences. We will also find that there are many connections with previously discussed topics, such as wavelength and resonance. A brief overview of the production and utilization of electromagnetic waves is found in .

As noted before, an electromagnetic wave has a frequency and a wavelength associated with it and travels at the speed of light, c. The relationship among these wave characteristics can be described by v = fλ, where v is the propagation speed of the wave, f is the frequency, and λ is the wavelength. Here v = c, so that for all electromagnetic waves,

c = fλ.

Thus, for all electromagnetic waves, the greater the frequency, the smaller the wavelength.

shows how the various types of electromagnetic waves are categorized according to their wavelengths and frequencies—that is, it shows the electromagnetic spectrum. Many of the characteristics of the various types of electromagnetic waves are related to their frequencies and wavelengths, as we shall see.

### Transmission, Reflection, and Absorption
What happens when an electromagnetic wave impinges on a material? If the material is transparent to the particular frequency, then the wave can largely be transmitted. If the material is opaque to the frequency, then the wave can be totally reflected. The wave can also be absorbed by the material, indicating that there is some interaction between the wave and the material, such as the thermal agitation of molecules. Of course it is possible to have partial transmission, reflection, and absorption. We normally associate these properties with visible light, but they do apply to all electromagnetic waves. What is not obvious is that something that is transparent to light may be opaque at other frequencies. For example, ordinary glass is transparent to visible light but largely opaque to ultraviolet radiation. Human skin is opaque to visible light—we cannot see through people—but transparent to X-rays.

### Radio and TV Waves
The broad category of radio waves is defined to contain any electromagnetic wave produced by currents in wires and circuits. The name derives from their most common use as a carrier of audio information (i.e., radio). The name is applied to electromagnetic waves of similar frequencies regardless of source. Radio waves from outer space, for example, do not come from alien radio stations. They are created by many astronomical phenomena, and their study has revealed much about nature on the largest scales. There are many uses for radio waves, and so the category is divided into many subcategories, including microwaves and those electromagnetic waves used for AM and FM radio, cellular telephones, and TV.
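To get a feel for c = fλ across these subcategories, here is a small sketch (the band-representative frequencies are typical values, chosen for illustration):

```python
# c = f * lambda holds for every electromagnetic wave.
c = 3.00e8  # speed of light (m/s)

bands = {
    "AM radio (1 MHz)": 1.0e6,
    "FM radio (100 MHz)": 100.0e6,
    "cell phone (1.9 GHz)": 1.9e9,
    "green light (5.45e14 Hz)": 5.45e14,
}

for name, f in bands.items():
    print(f"{name}: lambda = {c / f:.3g} m")
# AM ~300 m, FM ~3 m, cell ~0.16 m, green light ~5.5e-7 m (550 nm)
```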
The lowest commonly encountered radio frequencies are produced by high-voltage AC power transmission lines at frequencies of 50 or 60 Hz. (See .) These extremely long wavelength electromagnetic waves (about 6000 km!) are one means of energy loss in long-distance power transmission. There has been concern regarding potential health hazards associated with exposure to these electromagnetic fields (E-fields). Some people suspect that living near such transmission lines may cause a variety of illnesses, including cancer. But these power lines produce non-ionizing radiation, which government environmental organizations, medical researchers, and cancer organizations indicate is not a risk factor for illness. Recent reports that have looked at many European and American epidemiological studies have found no increase in risk for cancer due to exposure to E-fields.

Extremely low frequency (ELF) radio waves of about 1 kHz are used to communicate with submerged submarines. The ability of radio waves to penetrate salt water is related to their wavelength (much like ultrasound penetrating tissue)—the longer the wavelength, the farther they penetrate. Since salt water is a good conductor, radio waves are strongly absorbed by it, and very long wavelengths are needed to reach a submarine under the surface. (See .)

AM radio waves are used to carry commercial radio signals in the frequency range from 540 to 1600 kHz. The abbreviation AM stands for amplitude modulation, which is the method for placing information on these waves. (See .) A carrier wave having the basic frequency of the radio station, say 1530 kHz, is varied or modulated in amplitude by an audio signal. The resulting wave has a constant frequency, but a varying amplitude. A radio receiver tuned to have the same resonant frequency as the carrier wave can pick up the signal, while rejecting the many other frequencies impinging on its antenna. The receiver’s circuitry is designed to respond to variations in amplitude of the carrier wave to replicate the original audio signal. That audio signal is amplified to drive a speaker or perhaps to be recorded.

### FM Radio Waves
FM radio waves are also used for commercial radio transmission, but in the frequency range of 88 to 108 MHz. FM stands for frequency modulation, another method of carrying information. (See .) Here a carrier wave having the basic frequency of the radio station, perhaps 105.1 MHz, is modulated in frequency by the audio signal, producing a wave of constant amplitude but varying frequency.

Since audible frequencies range up to 20 kHz (or 0.020 MHz) at most, the frequency of the FM radio wave can vary from the carrier by as much as 0.020 MHz. Thus the carrier frequencies of two different radio stations cannot be closer than 0.020 MHz. An FM receiver is tuned to resonate at the carrier frequency and has circuitry that responds to variations in frequency, reproducing the audio information.

FM radio is inherently less subject to noise from stray radio sources than AM radio. The reason is that amplitudes of waves add. So an AM receiver would interpret noise added onto the amplitude of its carrier wave as part of the information. An FM receiver can be made to reject amplitudes other than that of the basic carrier wave and only look for variations in frequency. It is thus easier to reject noise from FM, since noise produces a variation in amplitude.
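A toy numerical sketch of the two modulation schemes just described (frequencies are scaled far below the real 1530 kHz and 105.1 MHz carriers so the arrays stay small; all values are illustrative):

```python
import numpy as np

fs = 100_000                          # samples per second
t = np.arange(0, 0.05, 1 / fs)        # 50 ms of signal
audio = np.sin(2 * np.pi * 200 * t)   # a 200 Hz "audio" tone
f_c = 5_000                           # toy carrier frequency (Hz)

# AM: constant frequency, amplitude tracks the audio signal.
am = (1.0 + 0.5 * audio) * np.sin(2 * np.pi * f_c * t)

# FM: constant amplitude, instantaneous frequency tracks the audio,
# deviating here by up to 500 Hz around the carrier.
dev = 500
phase = 2 * np.pi * np.cumsum(f_c + dev * audio) / fs  # integrate frequency to get phase
fm = np.sin(phase)
```

Plotting `am` and `fm` over a few carrier cycles shows the difference at a glance: the AM envelope wiggles while its zero crossings stay evenly spaced, and the FM trace does the opposite.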
Television is also broadcast on electromagnetic waves. Since the waves must carry a great deal of visual as well as audio information, each channel requires a larger range of frequencies than simple radio transmission. TV channels utilize frequencies in the range of 54 to 88 MHz and 174 to 222 MHz. (The entire FM radio band lies between these two ranges of channels, in the gap from 88 MHz to 174 MHz.) These TV channels are called VHF (for very high frequency). Other channels called UHF (for ultra high frequency) utilize an even higher frequency range of 470 to 1000 MHz. The TV video signal is AM, while the TV audio is FM. Note that these frequencies are those of free transmission with the user utilizing an old-fashioned roof antenna. Satellite dishes and cable transmission of TV occur at significantly higher frequencies and are rapidly evolving with the use of the high-definition or HD format.

The wavelengths found in the preceding example are representative of AM, FM, and cell phones, and account for some of the differences in how they are broadcast and how well they travel. The most efficient length for a linear antenna, such as discussed in Production of Electromagnetic Waves, is λ/2, half the wavelength of the electromagnetic wave. Thus a very large antenna is needed to efficiently broadcast typical AM radio with its carrier wavelengths on the order of hundreds of meters. One benefit to these long AM wavelengths is that they can go over and around rather large obstacles (like buildings and hills), just as ocean waves can go around large rocks. FM and TV are best received when there is a line of sight between the broadcast antenna and receiver, and they are often sent from very tall structures. FM, TV, and mobile phone antennas themselves are much smaller than those used for AM, but they are elevated to achieve an unobstructed line of sight. (See .)
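A quick check of the λ/2 rule for the carrier frequencies mentioned above:

```python
# The most efficient linear antenna length is about half the carrier wavelength.
c = 3.00e8  # m/s

for name, f in [("AM station, 1530 kHz", 1530e3),
                ("FM station, 105.1 MHz", 105.1e6),
                ("cell phone, 1.9 GHz", 1.9e9)]:
    half_wave = c / f / 2
    print(f"{name}: lambda/2 = {half_wave:.3g} m")
# AM needs ~98 m; FM needs ~1.4 m; a cell phone needs only ~0.08 m
```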
### Radio Wave Interference
Astronomers and astrophysicists collect signals from outer space using electromagnetic waves. A common problem for astrophysicists is the “pollution” from electromagnetic radiation pervading our surroundings from communication systems in general. Even everyday gadgets like our car keyless entry devices, remote starters, and TV remote controls involve radio-wave frequencies. In order to prevent interference between all these electromagnetic signals, strict regulations are drawn up for different organizations to utilize different radio frequency bands.

One reason why we are sometimes asked to switch off our mobile phones (operating in the range of 1.9 GHz) or put them into a noncommunicative mode on airplanes and in hospitals is that important communications or medical equipment often uses similar radio frequencies, and its operation can be affected by frequencies used in the communication devices. For example, radio waves used in magnetic resonance imaging (MRI) have frequencies on the order of 100 MHz, although this varies significantly depending on the strength of the magnetic field used and the nuclear type being scanned. MRI is an important medical imaging and research tool, producing highly detailed two- and three-dimensional images. Radio waves are broadcast, absorbed, and reemitted in a resonance process that is sensitive to the density of nuclei (usually protons or hydrogen nuclei). The wavelength of 100-MHz radio waves is 3 m, yet using the sensitivity of the resonant frequency to the magnetic field strength, details smaller than a millimeter can be imaged. This is a good example of an exception to a rule of thumb (in this case, the rubric that details much smaller than the probe’s wavelength cannot be detected). The intensity of the radio waves used in MRI presents little or no hazard to human health.

### Microwaves
Microwaves are the highest-frequency electromagnetic waves that can be produced by currents in macroscopic circuits and devices. Microwave frequencies range from about 10⁹ Hz to the highest practical resonance at nearly 10¹² Hz. Since they have high frequencies, their wavelengths are short compared with those of other radio waves—hence the name “microwave.”

Microwaves can also be produced by atoms and molecules. They are, for example, a component of electromagnetic radiation generated by thermal agitation. The thermal motion of atoms and molecules in any object at a temperature above absolute zero causes them to emit and absorb radiation.

Since it is possible to carry more information per unit time on high frequencies, microwaves are quite suitable for communications. Most satellite-transmitted information is carried on microwaves, as are land-based long-distance transmissions. A clear line of sight between transmitter and receiver is needed because of the short wavelengths involved.

Radar is a common application of microwaves that was first developed in World War II. By detecting and timing microwave echoes, radar systems can determine the distance to objects as diverse as clouds and aircraft. A Doppler shift in the radar echo can be used to determine the speed of a car or the intensity of a rainstorm. Sophisticated radar systems are used to map the Earth and other planets, with a resolution limited by wavelength. (See .) The shorter the wavelength of any probe, the smaller the detail it is possible to observe.

### Heating with Microwaves
How does the ubiquitous microwave oven produce microwaves electronically, and why does food absorb them preferentially? Microwaves at a frequency of 2.45 GHz are produced by accelerating electrons. The microwaves are then used to induce an alternating electric field in the oven. Water and some other constituents of food have a slightly negative charge at one end and a slightly positive charge at the other end (such molecules are called polar molecules). The range of microwave frequencies is specially selected so that the polar molecules, in trying to keep orienting themselves with the electric field, absorb these energies and increase their temperatures—a process called dielectric heating. The energy thereby absorbed results in thermal agitation that heats the food and not the plate, which does not contain water. Hot spots in the food are related to constructive and destructive interference patterns. Rotating antennas and food turntables help spread out the hot spots.

Another use of microwaves for heating is within the human body. Microwaves will penetrate more than shorter wavelengths into tissue and so can accomplish “deep heating” (called microwave diathermy). This is used for treating muscular pains, spasms, tendonitis, and rheumatoid arthritis.

Microwaves generated by atoms and molecules far away in time and space can be received and detected by electronic circuits. Deep space acts like a blackbody with a 2.7 K temperature, radiating most of its energy in the microwave frequency range. In 1964, Penzias and Wilson detected this radiation and eventually recognized that it was the radiation of the Big Bang’s cooled remnants.
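The ranging principle described above reduces to d = c·t/2, with the factor of 2 accounting for the round trip; a minimal sketch with an illustrative echo time:

```python
# Radar ranging: distance = (speed of light * round-trip echo time) / 2.
c = 3.00e8  # m/s

echo_time = 2.0e-4                 # s, illustrative round-trip time
distance = c * echo_time / 2       # divide by 2 for the one-way distance
print(f"echo after {echo_time:.1e} s -> target {distance / 1000:.0f} km away")  # 30 km
```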
### Infrared Radiation
The microwave and infrared regions of the electromagnetic spectrum overlap (see ). Infrared radiation is generally produced by thermal motion and the vibration and rotation of atoms and molecules. Electronic transitions in atoms and molecules can also produce infrared radiation. The range of infrared frequencies extends up to the lower limit of visible light, just below red. In fact, infrared means “below red.” Frequencies at its upper limit are too high to be produced by accelerating electrons in circuits, but small systems, such as atoms and molecules, can vibrate fast enough to produce these waves.

Water molecules rotate and vibrate particularly well at infrared frequencies, emitting and absorbing them so efficiently that the emissivity of skin is 0.97 in the infrared. Night-vision scopes can detect the infrared emitted by various warm objects, including humans, and convert it to visible light. We can examine radiant heat transfer from a house by using a camera capable of detecting infrared radiation. Reconnaissance satellites can detect buildings, vehicles, and even individual humans by their infrared emissions, whose radiated power is proportional to the fourth power of the absolute temperature. More mundanely, we use infrared lamps, some of which are called quartz heaters, to preferentially warm us because we absorb infrared better than our surroundings.

The Sun radiates like a nearly perfect blackbody (that is, it has an emissivity of 1), with a 6000 K surface temperature. About half of the solar energy arriving at the Earth is in the infrared region, with most of the rest in the visible part of the spectrum, and a relatively small amount in the ultraviolet. On average, 50 percent of the incident solar energy is absorbed by the Earth.

The relatively constant temperature of the Earth is a result of the energy balance between the incoming solar radiation and the energy radiated from the Earth. Most of the infrared radiation emitted from the Earth is absorbed by CO₂ and H₂O in the atmosphere and then radiated back to Earth or into outer space. This radiation back to Earth is known as the greenhouse effect, and it maintains the surface temperature of the Earth about 40°C higher than it would be if there were no absorption. Some scientists think that the increased concentration of CO₂ and other greenhouse gases in the atmosphere, resulting from increases in fossil fuel burning, has increased global average temperatures.

### Visible Light
Visible light is the narrow segment of the electromagnetic spectrum to which the normal human eye responds. Visible light is produced by vibrations and rotations of atoms and molecules, as well as by electronic transitions within atoms and molecules. The receivers or detectors of light largely utilize electronic transitions. We say the atoms and molecules are excited when they absorb and relax when they emit through electronic transitions.

shows this part of the spectrum, together with the colors associated with particular pure wavelengths. We usually refer to visible light as having wavelengths of between 400 nm and 750 nm. (The retina of the eye actually responds to the lowest ultraviolet frequencies, but these do not normally reach the retina because they are absorbed by the cornea and lens of the eye.)

Red light has the lowest frequencies and longest wavelengths, while violet has the highest frequencies and shortest wavelengths. Blackbody radiation from the Sun peaks in the visible part of the spectrum but is more intense in the red than in the violet, making the Sun yellowish in appearance.
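Wien’s displacement law (λ_peak = b/T, from the study of blackbody radiation; it is not derived in this section) lets us check these blackbody claims numerically:

```python
# Wien's displacement law for a blackbody: lambda_peak = b / T.
b = 2.898e-3  # Wien's constant (m*K)

for name, T in [("Sun's surface", 6000.0), ("human skin", 310.0), ("deep space", 2.7)]:
    peak_nm = b / T * 1e9
    print(f"{name} ({T} K): spectrum peaks near {peak_nm:.3g} nm")
# Sun ~483 nm (visible), skin ~9350 nm (infrared), deep space ~1.07e6 nm (microwave)
```

The three outputs line up with three claims in this section: the Sun’s output peaks in the visible, warm bodies show up in infrared night vision, and deep space radiates mostly in the microwave range.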
Living things—plants and animals—have evolved to utilize and respond to parts of the electromagnetic spectrum they are embedded in. Visible light is the most predominant, and we enjoy the beauty of nature through visible light. Plants are more selective. Photosynthesis makes use of parts of the visible spectrum to make sugars.

Optics is the study of the behavior of visible light and other forms of electromagnetic waves. Optics falls into two distinct categories. When electromagnetic radiation, such as visible light, interacts with objects that are large compared with its wavelength, its motion can be represented by straight lines like rays. Ray optics is the study of such situations and includes lenses and mirrors. When electromagnetic radiation interacts with objects about the same size as the wavelength or smaller, its wave nature becomes apparent. For example, observable detail is limited by the wavelength, and so visible light can never detect individual atoms, because they are so much smaller than its wavelength. Physical or wave optics is the study of such situations and includes all wave characteristics.

### Ultraviolet Radiation
Ultraviolet means “above violet.” The electromagnetic frequencies of ultraviolet radiation (UV) extend upward from violet, the highest-frequency visible light. Ultraviolet is also produced by atomic and molecular motions and electronic transitions. The wavelengths of ultraviolet extend from 400 nm down to about 10 nm at its highest frequencies, which overlap with the lowest X-ray frequencies. It was recognized as early as 1801 by Johann Ritter that the solar spectrum had an invisible component beyond the violet range.

Solar UV radiation is broadly subdivided into three regions: UV-A (320–400 nm), UV-B (290–320 nm), and UV-C (220–290 nm), ranked from longer to shorter wavelengths (from smaller to larger energies). Most UV-B and all UV-C is absorbed by ozone (O₃) molecules in the upper atmosphere. Consequently, 99% of the solar UV radiation reaching the Earth’s surface is UV-A.

One of the first illustrations of UV light’s impact on Earth occurred during the Apollo 16 mission in 1972. The mission included the first astronomical images taken from the moon, using a compact and resilient Far Ultraviolet Camera/Spectrograph designed for moon use by scientist and inventor George Robert Carruthers. Designed to capture UV images without the obscuring effects of the Earth’s atmosphere, its most famous image was of the planet itself. Carruthers, who also trained the astronauts on the device’s use, mentioned afterward that “the most immediately obvious and spectacular results were really for the Earth observations, because this was the first time that the Earth had been photographed from a distance in ultraviolet (UV) light, so that you could see the full extent of the hydrogen atmosphere, the polar auroris and what we call the tropical airglow belt.”

### Human Exposure to UV Radiation
It is largely exposure to UV-B that causes skin cancer. It is estimated that as many as 20% of adults will develop skin cancer over the course of their lifetime. Treatment is often successful if the cancer is caught early. Despite very little UV-B reaching the Earth’s surface, there are substantial increases in skin-cancer rates in countries such as Australia, indicating how important it is that UV-B and UV-C continue to be absorbed by the upper atmosphere.

All UV radiation can damage collagen fibers, resulting in an acceleration of the aging process of skin and the formation of wrinkles.
Because there is so little UV-B and UV-C reaching the Earth’s surface, sunburn is caused by large exposures, and skin cancer from repeated exposure. Some studies indicate a link between overexposure to the Sun when young and melanoma later in life. The tanning response is a defense mechanism in which the body produces pigments to absorb future exposures in inert skin layers above living cells. Basically UV-B radiation excites DNA molecules, distorting the DNA helix, leading to mutations and the possible formation of cancerous cells.

Repeated exposure to UV-B may also lead to the formation of cataracts in the eyes—a cause of blindness among people living in the equatorial belt where medical treatment is limited. Cataracts, a clouding of the eye’s lens that causes a loss of vision, are age related; 60% of those between the ages of 65 and 74 will develop cataracts. However, treatment is easy and successful, as one replaces the lens of the eye with a plastic lens. Prevention is important. Eye protection from UV is more effective with plastic sunglasses than those made of glass.

A major acute effect of extreme UV exposure is the suppression of the immune system, both locally and throughout the body.

Low-intensity ultraviolet is used to sterilize haircutting implements, implying that the energy associated with ultraviolet is deposited in a manner different from lower-frequency electromagnetic waves. (Actually this is true for all electromagnetic waves with frequencies greater than visible light.)

Flash photography of precious artworks and colored prints is generally not allowed because the UV radiation from the flash can cause photo-degradation in the artworks. Often artworks will have an extra-thick layer of glass in front of them, which is especially designed to absorb UV radiation.

### UV Light and the Ozone Layer
If all of the Sun’s ultraviolet radiation reached the Earth’s surface, there would be extremely grave effects on the biosphere from the severe cell damage it causes. However, the layer of ozone (O₃) in our upper atmosphere (10 to 50 km above the Earth) protects life by absorbing most of the dangerous UV radiation.

Unfortunately, today we are observing a depletion in ozone concentrations in the upper atmosphere. This depletion has led to the formation of an “ozone hole” in the upper atmosphere. The hole is more centered over the southern hemisphere, and changes with the seasons, being largest in the spring. This depletion is attributed to the breakdown of ozone molecules by refrigerant gases called chlorofluorocarbons (CFCs).

The UV radiation helps dissociate the CFCs, releasing highly reactive chlorine (Cl) atoms, which catalyze the destruction of the ozone layer. For example, the reaction of CFCl₃ with a photon of light (hν) can be written as:

CFCl₃ + hν → CFCl₂ + Cl.

The Cl atom then catalyzes the breakdown of ozone as follows:

Cl + O₃ → ClO + O₂ and ClO + O₃ → Cl + 2O₂.

A single chlorine atom could destroy ozone molecules for up to two years before being transported down to the surface. The CFCs are relatively stable and will contribute to ozone depletion for years to come. CFCs are found in refrigerants, air conditioning systems, foams, and aerosols.

International concern over this problem led to the establishment of the “Montreal Protocol” agreement (1987) to phase out CFC production in most countries. However, developing-country participation is needed if worldwide production and elimination of CFCs is to be achieved. Probably the largest contributor to CFC emissions today is China.
And while there are indicators that the Protocol has been a success, there is still substantial risk and variability in the ozone layer. The 2019 Antarctic ozone hole was small and short-lived, continuing the general trend toward recovery. But the 2020 Antarctic ozone hole was the largest and longest-lasting on record, partially due to atmospheric conditions. Furthermore, emissions are not the only concern. Susan Solomon and her colleagues at MIT have uncovered the substantial impact of CFC “banks” in certain regions, where outdated and deteriorating equipment (such as air conditioners) or materials can release enough CFCs to be detectable in the atmosphere and deplete the ozone layer. (See .)

### Benefits of UV Light
Besides the adverse effects of ultraviolet radiation, there are also benefits of exposure in nature and uses in technology. Vitamin D production in the skin (epidermis) results from exposure to UVB radiation, generally from sunlight. A number of studies indicate lack of vitamin D can result in the development of a range of cancers (prostate, breast, colon), so a certain amount of UV exposure is helpful. Lack of vitamin D is also linked to osteoporosis. Exposures (with no sunscreen) of 10 minutes a day to arms, face, and legs might be sufficient to provide the accepted dietary level. However, in the winter time north of about 37° latitude, most UVB gets blocked by the atmosphere.

UV radiation is used in the treatment of infantile jaundice and in some skin conditions. It is also used in sterilizing workspaces and tools, and killing germs in a wide range of applications. It is also used as an analytical tool to identify substances.

When exposed to ultraviolet, some substances, such as minerals, glow in characteristic visible wavelengths, a process called fluorescence. So-called black lights emit ultraviolet to cause posters and clothing to fluoresce in the visible. Ultraviolet is also used in special microscopes to detect details smaller than those observable with longer-wavelength visible-light microscopes.

### X-Rays
In the 1850s, scientists (such as Faraday) began experimenting with high-voltage electrical discharges in tubes filled with rarefied gases. It was later found that these discharges created an invisible, penetrating form of very high frequency electromagnetic radiation. This radiation was called an X-ray, because its identity and nature were unknown. As described in Things Great and Small, there are two methods by which X-rays are created—both are submicroscopic processes and can be caused by high-voltage discharges. While the low-frequency end of the X-ray range overlaps with the ultraviolet, X-rays extend to much higher frequencies (and energies).

X-rays have adverse effects on living cells similar to those of ultraviolet radiation, and they have the additional liability of being more penetrating, affecting more than the surface layers of cells. Cancer and genetic defects can be induced by exposure to X-rays. Because of their effect on rapidly dividing cells, X-rays can also be used to treat and even cure cancer.

The widest use of X-rays is for imaging objects that are opaque to visible light, such as the human body or aircraft parts. In humans, the risk of cell damage is weighed carefully against the benefit of the diagnostic information obtained. However, questions have arisen in recent years as to accidental overexposure of some people during CT scans—a mistake at least in part due to poor monitoring of radiation dose.
The ability of X-rays to penetrate matter depends on density, and so an X-ray image can reveal very detailed density information. shows an example of the simplest type of X-ray image, an X-ray shadow on film. The amount of information in a simple X-ray image is impressive, but more sophisticated techniques, such as CT scans, can reveal three-dimensional information with details smaller than a millimeter.

The use of X-ray technology in medicine is called radiology—an established and relatively cheap tool in comparison to more sophisticated technologies. Consequently, X-rays are widely available and used extensively in medical diagnostics. During World War I, mobile X-ray units, advocated by Marie Curie, were used to diagnose soldiers.

Because they can have wavelengths less than 0.01 nm, X-rays can be scattered (a process called X-ray diffraction) to detect the shape of molecules and the structure of crystals. X-ray diffraction was crucial to Crick, Watson, and Wilkins in the determination of the shape of the double-helix DNA molecule. X-rays are also used as a precise tool for trace-metal analysis in X-ray induced fluorescence, in which the energy of the X-ray emissions is related to the specific types of elements and amounts of materials present.

### Gamma Rays
Soon after nuclear radioactivity was first detected in 1896, it was found that at least three distinct types of radiation were being emitted. The most penetrating nuclear radiation was called a gamma ray (γ ray), again a name given because its identity and character were unknown, and it was later found to be an extremely high frequency electromagnetic wave. In fact, γ rays are any electromagnetic radiation emitted by a nucleus. This can be from natural nuclear decay or induced nuclear processes in nuclear reactors and weapons. The lower end of the γ-ray frequency range overlaps the upper end of the X-ray range, but γ rays can have the highest frequency of any electromagnetic radiation.

Gamma rays have characteristics identical to X-rays of the same frequency—they differ only in source. At higher frequencies, γ rays are more penetrating and more damaging to living tissue. They have many of the same uses as X-rays, including cancer therapy. Gamma radiation from radioactive materials is used in nuclear medicine. shows a medical image based on γ rays.

Food spoilage can be greatly inhibited by exposing it to large doses of γ radiation, thereby obliterating responsible microorganisms. Damage to food cells through irradiation occurs as well, and the long-term hazards of consuming radiation-preserved food are unknown and controversial for some groups. Both X-ray and γ-ray technologies are also used in scanning luggage at airports.

### Detecting Electromagnetic Waves from Space
The entire electromagnetic spectrum is used by researchers for investigating stars, space, and time. Arthur B. C. Walker was a pioneer in X-ray and ultraviolet observations, and designed specialized telescopes and instruments to observe the Sun’s atmosphere and corona. His developments significantly advanced our understanding of stars, and some of his developments are currently in use in space telescopes as well as in microchip manufacturing.

As noted earlier, Penzias and Wilson detected microwaves to identify the background radiation originating from the Big Bang. Radio telescopes such as the Arecibo Radio Telescope in Puerto Rico and Parkes Observatory in Australia were designed to detect radio waves.
Infrared telescopes need to have their detectors cooled by liquid nitrogen to be able to gather useful signals. Since infrared radiation is predominantly from thermal agitation, if the detectors were not cooled, the vibrations of the molecules in the antenna would be stronger than the signal being collected. The most famous of these infrared-sensitive telescopes is the James Clerk Maxwell Telescope in Hawaii.

The earliest telescopes, developed in the seventeenth century, were optical telescopes, collecting visible light. Telescopes in the ultraviolet, X-ray, and γ-ray regions are placed outside the atmosphere on satellites orbiting the Earth. The Hubble Space Telescope (launched in 1990) gathers ultraviolet radiation as well as visible light. In the X-ray region, there is the Chandra X-ray Observatory (launched in 1999), and in the γ-ray region, there is the Fermi Gamma-ray Space Telescope (launched in 2008, taking the place of the Compton Gamma Ray Observatory, 1991–2000).

The James Webb Space Telescope, launched in late 2021, observes in a lower-frequency portion of the spectrum compared to Hubble. The JWST observes in long-wavelength visible light (red) through infrared, enabling it to detect objects that are farther away, older, and fainter than previous telescopes could detect.

### Test Prep for AP Courses
### Section Summary
1. The relationship among the speed of propagation, wavelength, and frequency for any wave is given by v = fλ, so that for electromagnetic waves, c = fλ, where f is the frequency, λ is the wavelength, and c is the speed of light.
2. The electromagnetic spectrum is separated into many categories and subcategories, based on the frequency and wavelength, source, and uses of the electromagnetic waves.
3. Any electromagnetic wave produced by currents in wires is classified as a radio wave, the lowest frequency electromagnetic waves. Radio waves are divided into many types, depending on their applications, ranging up to microwaves at their highest frequencies.
4. Infrared radiation lies below visible light in frequency and is produced by thermal motion and the vibration and rotation of atoms and molecules. Infrared’s lower frequencies overlap with the highest-frequency microwaves.
5. Visible light is largely produced by electronic transitions in atoms and molecules, and is defined as being detectable by the human eye. Its colors vary with frequency, from red at the lowest to violet at the highest.
6. Ultraviolet radiation starts with frequencies just above violet in the visible range and is produced primarily by electronic transitions in atoms and molecules.
7. X-rays are created in high-voltage discharges and by electron bombardment of metal targets. Their lowest frequencies overlap the ultraviolet range but extend to much higher values, overlapping at the high end with gamma rays.
8. Gamma rays are nuclear in origin and are defined to include the highest-frequency electromagnetic radiation of any type.
### Conceptual Questions
### Problems & Exercises
# Electromagnetic Waves
## Energy in Electromagnetic Waves
### Learning Objectives
By the end of this section, you will be able to:
1. Explain how the energy and amplitude of an electromagnetic wave are related.
2. Given its power output and the heating area, calculate the intensity of a microwave oven’s electromagnetic field, as well as its peak electric and magnetic field strengths.

Anyone who has used a microwave oven knows there is energy in electromagnetic waves. Sometimes this energy is obvious, such as in the warmth of the summer sun. Other times it is subtle, such as the unfelt energy of gamma rays, which can destroy living cells.

Electromagnetic waves can bring energy into a system by virtue of their electric and magnetic fields. These fields can exert forces and move charges in the system and, thus, do work on them. If the frequency of the electromagnetic wave is the same as the natural frequencies of the system (such as microwaves at the resonant frequency of water molecules), the transfer of energy is much more efficient. But there is energy in an electromagnetic wave, whether it is absorbed or not. Once created, the fields carry energy away from a source. If absorbed, the field strengths are diminished and anything left travels on.

Clearly, the larger the strength of the electric and magnetic fields, the more work they can do and the greater the energy the electromagnetic wave carries. A wave’s energy is proportional to its amplitude squared (E² or B²). This is true for waves on guitar strings, for water waves, and for sound waves, where amplitude is proportional to pressure. In electromagnetic waves, the amplitude is the maximum field strength of the electric and magnetic fields. (See .) Thus the energy carried and the intensity of an electromagnetic wave is proportional to E² and B².

In fact, for a continuous sinusoidal electromagnetic wave, the average intensity is given by

I_ave = (c ε₀ E₀²) / 2,

where c is the speed of light, ε₀ is the permittivity of free space, and E₀ is the maximum electric field strength; intensity, as always, is power per unit area (here in W/m²).

The average intensity of an electromagnetic wave can also be expressed in terms of the magnetic field strength by using the relationship B = E/c, and the fact that ε₀ = 1/(μ₀c²), where μ₀ is the permeability of free space. Algebraic manipulation produces the relationship

I_ave = (c B₀²) / (2μ₀),

where B₀ is the maximum magnetic field strength.

One more expression for I_ave in terms of both electric and magnetic field strengths is useful. Substituting the fact that c B₀ = E₀, the previous expression becomes

I_ave = (E₀ B₀) / (2μ₀).

Whichever of the three preceding equations is most convenient can be used, since they are really just different versions of the same principle: Energy in a wave is related to amplitude squared. Furthermore, since these equations are based on the assumption that the electromagnetic waves are sinusoidal, peak intensity is twice the average; that is, I₀ = 2I_ave.
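A worked numeric sketch of the second learning objective (the 800 W output and 300 cm² heating area are assumed, illustrative values, not figures from the text):

```python
import math

c = 3.00e8            # speed of light (m/s)
eps0 = 8.85e-12       # permittivity of free space (F/m)
mu0 = 4e-7 * math.pi  # permeability of free space (T*m/A)

power = 800.0   # W, assumed microwave oven output
area = 0.0300   # m^2, assumed heating area (300 cm^2)

I_ave = power / area                    # intensity is power per unit area
E0 = math.sqrt(2 * I_ave / (c * eps0))  # invert I_ave = c*eps0*E0^2 / 2
B0 = E0 / c                             # from E0 / B0 = c

print(f"I_ave = {I_ave:.3g} W/m^2")           # ~2.67e4 W/m^2
print(f"E0 = {E0:.3g} V/m, B0 = {B0:.3g} T")  # ~4.48e3 V/m, ~1.49e-5 T
print(f"check: c*B0^2/(2*mu0) = {c * B0**2 / (2 * mu0):.3g} W/m^2")  # matches I_ave
```

The final line cross-checks the magnetic-field expression against the electric-field one, confirming that the three intensity formulas are equivalent.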
### Test Prep for AP Courses
### Section Summary
1. The energy carried by any wave is proportional to its amplitude squared. For electromagnetic waves, this means intensity can be expressed as I_ave = (c ε₀ E₀²)/2, where I_ave is the average intensity in W/m² and E₀ is the maximum electric field strength of a continuous sinusoidal wave.
2. This can also be expressed in terms of the maximum magnetic field strength B₀ as I_ave = (c B₀²)/(2μ₀) and in terms of both electric and magnetic fields as I_ave = (E₀ B₀)/(2μ₀).
3. The three expressions for I_ave are all equivalent.
### Problems & Exercises

# Geometric Optics
## Introduction to Geometric Optics
Light from this page or screen is formed into an image by the lens of your eye, much as it is by the lens of the camera that made this photograph. Mirrors, like lenses, can also form images that in turn are captured by your eye.

Our lives are filled with light. Through vision, the most valued of our senses, light can evoke spiritual emotions, such as when we view a magnificent sunset or glimpse a rainbow breaking through the clouds. Light can also simply amuse us in a theater, or warn us to stop at an intersection. It has innumerable uses beyond vision. Light can carry telephone signals through glass fibers or cook a meal in a solar oven. Life itself could not exist without light’s energy. From photosynthesis in plants to the sun warming a cold-blooded animal, its supply of energy is vital.

We already know that visible light is the type of electromagnetic waves to which our eyes respond. That knowledge still leaves many questions regarding the nature of light and vision. What is color, and how do our eyes detect it? Why do diamonds sparkle? How does light travel? How do lenses and mirrors form images? These are but a few of the questions that are answered by the study of optics.

Optics is the branch of physics that deals with the behavior of visible light and other electromagnetic waves. In particular, optics is concerned with the generation and propagation of light and its interaction with matter. What we have already learned about the generation of light in our study of heat transfer by radiation will be expanded upon in later topics, especially those on atomic physics. Now, we will concentrate on the propagation of light and its interaction with matter.

It is convenient to divide optics into two major parts based on the size of objects that light encounters. When light interacts with an object that is several times as large as the light’s wavelength, its observable behavior is like that of a ray; it does not prominently display its wave characteristics. We call this part of optics “geometric optics.” This chapter will concentrate on such situations. When light interacts with smaller objects, it has very prominent wave characteristics, such as constructive and destructive interference. Wave Optics will concentrate on such situations.
# Geometric Optics ## The Ray Aspect of Light ### Learning Objectives By the end of this section, you will be able to: 1. List the ways by which light travels from a source to another location. There are three ways in which light can travel from a source to another location. (See .) It can come directly from the source through empty space, such as from the Sun to Earth. Or light can travel through various media, such as air and glass, to the person. Light can also arrive after being reflected, such as by a mirror. In all of these cases, light is modeled as traveling in straight lines called rays. Light may change direction when it encounters objects (such as a mirror) or in passing from one material to another (such as in passing from air to glass), but it then continues in a straight line or as a ray. The word ray comes from mathematics and here means a straight line that originates at some point. It is acceptable to visualize light rays as laser rays (or even science fiction depictions of ray guns). Experiments, as well as our own experiences, show that when light interacts with objects several times as large as its wavelength, it travels in straight lines and acts like a ray. Its wave characteristics are not pronounced in such situations. Since the wavelength of light is less than a micron (a thousandth of a millimeter), it acts like a ray in the many common situations in which it encounters objects larger than a micron. For example, when light encounters anything we can observe with unaided eyes, such as a mirror, it acts like a ray, with only subtle wave characteristics. We will concentrate on the ray characteristics in this chapter. Since light moves in straight lines, changing directions when it interacts with materials, it is described by geometry and simple trigonometry. This part of optics, where the ray aspect of light dominates, is therefore called geometric optics. There are two laws that govern how light changes direction when it interacts with matter. These are the law of reflection, for situations in which light bounces off matter, and the law of refraction, for situations in which light passes through matter. ### Test Prep for AP Courses ### Section Summary 1. A straight line that originates at some point is called a ray. 2. The part of optics dealing with the ray aspect of light is called geometric optics. 3. Light can travel in three ways from a source to another location: (1) directly from the source through empty space; (2) through various media; (3) after being reflected from a mirror. ### Problems & Exercises
# Geometric Optics ## The Law of Reflection ### Learning Objectives By the end of this section, you will be able to: 1. Explain reflection of light from polished and rough surfaces. Whenever we look into a mirror, or squint at sunlight glinting from a lake, we are seeing a reflection. When you look at this page, too, you are seeing light reflected from it. Large telescopes use reflection to form an image of stars and other astronomical objects. The law of reflection is illustrated in , which also shows how the angles are measured relative to the perpendicular to the surface at the point where the light ray strikes. We expect to see reflections from smooth surfaces, but illustrates how a rough surface reflects light. Since the light strikes different parts of the surface at different angles, it is reflected in many different directions, or diffused. Diffused light is what allows us to see a sheet of paper from any angle, as illustrated in . Many objects, such as people, clothing, leaves, and walls, have rough surfaces and can be seen from all sides. A mirror, on the other hand, has a smooth surface (compared with the wavelength of light) and reflects light at specific angles, as illustrated in . When the moon reflects from a lake, as shown in , a combination of these effects takes place. The law of reflection is very simple: The angle of reflection equals the angle of incidence. When we see ourselves in a mirror, it appears that our image is actually behind the mirror. This is illustrated in . We see the light coming from a direction determined by the law of reflection. The angles are such that our image is exactly the same distance behind the mirror as we stand away from the mirror. If the mirror is on the wall of a room, the images in it are all behind the mirror, which can make the room seem bigger. Although these mirror images make objects appear to be where they cannot be (like behind a solid wall), the images are not figments of our imagination. Mirror images can be photographed and videotaped by instruments and look just as they do with our eyes (optical instruments themselves). The precise manner in which images are formed by mirrors and lenses will be treated in later sections of this chapter. ### Test Prep for AP Courses ### Section Summary 1. The angle of reflection equals the angle of incidence. 2. A mirror has a smooth surface and reflects light at specific angles. 3. Light is diffused when it reflects from a rough surface. 4. Mirror images can be photographed and videotaped by instruments. ### Conceptual Questions ### Problems & Exercises
# Geometric Optics
## The Law of Refraction
### Learning Objectives
By the end of this section, you will be able to:
1. Determine the index of refraction, given the speed of light in a medium.

It is easy to notice some odd things when looking into a fish tank. For example, you may see the same fish appearing to be in two different places. (See .) This is because light coming from the fish to us changes direction when it leaves the tank, and in this case, it can travel two different paths to get to our eyes. The changing of a light ray’s direction (loosely called bending) when it passes through variations in matter is called refraction. Refraction is responsible for a tremendous range of optical phenomena, from the action of lenses to voice transmission through optical fibers.

Why does light change direction when passing from one material (medium) to another? It is because light changes speed when going from one material to another. So before we study the law of refraction, it is useful to discuss the speed of light and how it varies in different media.

### The Speed of Light
Early attempts to measure the speed of light, such as those made by Galileo, determined that light moved extremely fast, perhaps instantaneously. The first real evidence that light traveled at a finite speed came from the Danish astronomer Ole Roemer in the late 17th century. Roemer had noted that the average orbital period of one of Jupiter’s moons, as measured from Earth, varied depending on whether Earth was moving toward or away from Jupiter. He correctly concluded that the apparent change in period was due to the change in distance between Earth and Jupiter and the time it took light to travel this distance. From his 1676 data, a value of the speed of light was calculated to be 2.26 × 10⁸ m/s (only 25% different from today’s accepted value).

In more recent times, physicists have measured the speed of light in numerous ways and with increasing accuracy. One particularly direct method, used in 1887 by the American physicist Albert Michelson (1852–1931), is illustrated in . Light reflected from a rotating set of mirrors was reflected from a stationary mirror 35 km away and returned to the rotating mirrors. The time for the light to travel can be determined by how fast the mirrors must rotate for the light to be returned to the observer’s eye.

The speed of light is now known to great precision. In fact, the speed of light in a vacuum, c, is so important that it is accepted as one of the basic physical quantities and has the fixed value

c = 2.99792458 × 10⁸ m/s ≈ 3.00 × 10⁸ m/s,

where the approximate value of 3.00 × 10⁸ m/s is used whenever three-digit accuracy is sufficient. The speed of light through matter is less than it is in a vacuum, because light interacts with atoms in a material. The speed of light depends strongly on the type of material, since its interaction with different atoms, crystal lattices, and other substructures varies. We define the index of refraction n of a material to be

n = c/v,

where v is the observed speed of light in the material. Since the speed of light is always less than c in matter and equals c only in a vacuum, the index of refraction is always greater than or equal to one. That is, n ≥ 1.

gives the indices of refraction for some representative substances. The values are listed for a particular wavelength of light, because they vary slightly with wavelength. (This can have important effects, such as colors produced by a prism.) Note that for gases, n is close to 1.0. This seems reasonable, since atoms in gases are widely separated and light travels at c in the vacuum between atoms.
It is common to take n = 1 for gases unless great precision is needed. Although the speed of light in a medium varies considerably from its value c in a vacuum, it is still a large speed.

### Law of Refraction
shows how a ray of light changes direction when it passes from one medium to another. As before, the angles are measured relative to a perpendicular to the surface at the point where the light ray crosses it. (Some of the incident light will be reflected from the surface, but for now we will concentrate on the light that is transmitted.) The change in direction of the light ray depends on how the speed of light changes. The change in the speed of light is related to the indices of refraction of the media involved. In the situations shown in , medium 2 has a greater index of refraction than medium 1. This means that the speed of light is less in medium 2 than in medium 1. Note that as shown in (a), the direction of the ray moves closer to the perpendicular when it slows down. Conversely, as shown in (b), the direction of the ray moves away from the perpendicular when it speeds up. The path is exactly reversible. In both cases, you can imagine what happens by thinking about pushing a lawn mower from a footpath onto grass, and vice versa. Going from the footpath to grass, the front wheels are slowed and pulled to the side as shown. This is the same change in direction as for light when it goes from a fast medium to a slow one. When going from the grass to the footpath, the front wheels can move faster and the mower changes direction as shown. This, too, is the same change in direction as for light going from slow to fast.

The amount that a light ray changes its direction depends both on the incident angle and the amount that the speed changes. For a ray at a given incident angle, a large change in speed causes a large change in direction, and thus a large change in angle. The exact mathematical relationship is the law of refraction, or “Snell’s Law,” which is stated in equation form as

n₁ sin θ₁ = n₂ sin θ₂.

Here n₁ and n₂ are the indices of refraction for medium 1 and medium 2, and θ₁ and θ₂ are the angles between the rays and the perpendicular in medium 1 and medium 2, as shown in . The incoming ray is called the incident ray and the outgoing ray the refracted ray, and the associated angles the incident angle and the refracted angle. The law of refraction is also called Snell’s law after the Dutch mathematician Willebrord Snell (1591–1626). While the law has been named after Snell, the Arabian physicist Ibn Sahl found the law of refraction in 984 and used it in his work On Burning Mirrors and Lenses. Snell’s experiments showed that the law of refraction was obeyed and that a characteristic index of refraction could be assigned to a given medium. Snell was not aware that the speed of light varied in different media, but through experiments he was able to determine indices of refraction from the way light rays changed direction.
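A minimal sketch of Snell’s law for a ray passing from air into water (the index values are standard tabulated ones):

```python
import math

# Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
n1, n2 = 1.000, 1.333         # indices of refraction for air and water
theta1 = math.radians(30.0)   # incident angle, measured from the perpendicular

theta2 = math.asin(n1 * math.sin(theta1) / n2)
print(f"refracted angle = {math.degrees(theta2):.1f} deg")  # ~22.0 deg
```

The ray bends toward the perpendicular (22.0° < 30°), as expected for light slowing down on entering the denser medium.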
### Test Prep for AP Courses
### Section Summary
1. The changing of a light ray’s direction when it passes through variations in matter is called refraction.
2. The speed of light in vacuum is c = 2.99792458 × 10⁸ m/s ≈ 3.00 × 10⁸ m/s.
3. The index of refraction is n = c/v, where v is the speed of light in the material, c is the speed of light in vacuum, and n is the index of refraction.
4. Snell’s law, the law of refraction, is stated in equation form as n₁ sin θ₁ = n₂ sin θ₂.
### Conceptual Questions
### Problems & Exercises

# Geometric Optics
## Total Internal Reflection
### Learning Objectives
By the end of this section, you will be able to:
1. Explain the phenomenon of total internal reflection.
2. Describe the workings and uses of fiber optics.
3. Analyze the reason for the sparkle of diamonds.

A good-quality mirror may reflect more than 90% of the light that falls on it, absorbing the rest. But it would be useful to have a mirror that reflects all of the light that falls on it. Interestingly, we can produce total reflection using an aspect of refraction.

Consider what happens when a ray of light strikes the surface between two materials, such as is shown in (a). Part of the light crosses the boundary and is refracted; the rest is reflected. If, as shown in the figure, the index of refraction for the second medium is less than for the first, the ray bends away from the perpendicular. (Since n₁ > n₂, the angle of refraction is greater than the angle of incidence; that is, θ₂ > θ₁.) Now imagine what happens as the incident angle is increased. This causes θ₂ to increase also. The largest the angle of refraction θ₂ can be is 90°, as shown in (b). The critical angle θc for a combination of materials is defined to be the incident angle θ₁ that produces an angle of refraction of 90°. That is, θc is the incident angle for which θ₂ = 90°. If the incident angle θ₁ is greater than the critical angle, as shown in (c), then all of the light is reflected back into medium 1, a condition called total internal reflection.

Snell’s law states the relationship between angles and indices of refraction. It is given by

n₁ sin θ₁ = n₂ sin θ₂.

When the incident angle equals the critical angle (θ₁ = θc), the angle of refraction is 90° (θ₂ = 90°). Noting that sin 90° = 1, Snell’s law in this case becomes

n₁ sin θc = n₂.

The critical angle θc for a given combination of materials is thus

θc = sin⁻¹(n₂/n₁) for n₁ > n₂.

Total internal reflection occurs for any incident angle greater than the critical angle θc, and it can only occur when the second medium has an index of refraction less than the first. Note the above equation is written for a light ray that travels in medium 1 and reflects from medium 2, as shown in the figure.
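A minimal sketch of the critical-angle formula, using standard tabulated indices:

```python
import math

# Critical angle: theta_c = asin(n2 / n1), defined only when n1 > n2.
def critical_angle(n1, n2):
    if n1 <= n2:
        raise ValueError("total internal reflection requires n1 > n2")
    return math.degrees(math.asin(n2 / n1))

print(f"water to air:   {critical_angle(1.333, 1.000):.1f} deg")  # ~48.6 deg
print(f"diamond to air: {critical_angle(2.419, 1.000):.1f} deg")  # ~24.4 deg
```

The small diamond-to-air value is the one that matters for the sparkle of diamonds discussed later in this section.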
### Fiber Optics: Endoscopes to Telephones
Fiber optics is one application of total internal reflection that is in wide use. In communications, it is used to transmit telephone, internet, and cable TV signals. Fiber optics employs the transmission of light down fibers of plastic or glass. Because the fibers are thin, light entering one is likely to strike the inside surface at an angle greater than the critical angle and, thus, be totally reflected. (See .) The index of refraction outside the fiber must be smaller than inside, a condition that is easily satisfied by coating the outside of the fiber with a material having an appropriate refractive index. In fact, most fibers have a varying refractive index to allow more light to be guided along the fiber through total internal reflection. Rays are reflected around corners as shown, making the fibers into tiny light pipes.

Bundles of fibers can be used to transmit an image without a lens, as illustrated in . The output of a device called an endoscope is shown in (b). Endoscopes are used to explore the body through various orifices or minor incisions. Light is transmitted down one fiber bundle to illuminate internal parts, and the reflected light is transmitted back out through another to be observed. Surgery can be performed, such as arthroscopic surgery on the knee joint, employing cutting tools attached to and observed with the endoscope. Samples can also be obtained, such as by lassoing an intestinal polyp for external examination.

Fiber optics has revolutionized surgical techniques and observations within the body. There are a host of medical diagnostic and therapeutic uses. The flexibility of the fiber optic bundle allows it to navigate around difficult and small regions in the body, such as the intestines, the heart, blood vessels, and joints. Transmission of an intense laser beam to burn away obstructing plaques in major arteries as well as delivering light to activate chemotherapy drugs are becoming commonplace. Optical fibers have in fact enabled microsurgery and remote surgery where the incisions are small and the surgeon’s fingers do not need to touch the diseased tissue.

Fibers in bundles are surrounded by a cladding material that has a lower index of refraction than the core. (See .) The cladding prevents light from being transmitted between fibers in a bundle. Without cladding, light could pass between fibers in contact, since their indices of refraction are identical. Since no light gets into the cladding (there is total internal reflection back into the core), none can be transmitted between clad fibers that are in contact with one another. The cladding prevents light from escaping out of the fiber; instead most of the light is propagated along the length of the fiber, minimizing the loss of signal and ensuring that a quality image is formed at the other end. The cladding and an additional protective layer make optical fibers flexible and durable.

Special tiny lenses that can be attached to the ends of bundles of fibers are being designed and fabricated. Light emerging from a fiber bundle can be focused and a tiny spot can be imaged. In some cases the spot can be scanned, allowing quality imaging of a region inside the body. Special minute optical filters inserted at the end of the fiber bundle have the capacity to image tens of microns below the surface without cutting the surface—non-intrusive diagnostics. This is particularly useful for determining the extent of cancers in the stomach and bowel.

Most telephone conversations and Internet communications are now carried by laser signals along optical fibers. Extensive optical fiber cables have been placed on the ocean floor and underground to enable optical communications. Optical fiber communication systems offer several advantages over electrical (copper) based systems, particularly for long distances. The fibers can be made so transparent that light can travel many kilometers before it becomes dim enough to require amplification—much superior to copper conductors. This property of optical fibers is called low loss. Lasers emit light with characteristics that allow far more conversations in one fiber than are possible with electric signals on a single conductor. This property of optical fibers is called high bandwidth. Optical signals in one fiber do not produce undesirable effects in other adjacent fibers. This property of optical fibers is called reduced crosstalk. We shall explore the unique characteristics of laser radiation in a later chapter.

### Corner Reflectors and Diamonds
A light ray that strikes an object consisting of two mutually perpendicular reflecting surfaces is reflected back exactly parallel to the direction from which it came. This is true whenever the reflecting surfaces are perpendicular, and it is independent of the angle of incidence.
Such an object, shown in , is called a corner reflector, since the light bounces from its inside corner. Many inexpensive reflector buttons on bicycles, cars, and warning signs have corner reflectors designed to return light in the direction from which it originated. It was more expensive for astronauts to place one on the moon. Laser signals can be bounced from that corner reflector to measure the gradually increasing distance to the moon with great precision.

Corner reflectors are perfectly efficient when the conditions for total internal reflection are satisfied. With common materials, it is easy to obtain a critical angle that is less than $45^\circ$. One use of these perfect mirrors is in binoculars, as shown in . Another use is in periscopes found in submarines.

### The Sparkle of Diamonds
Total internal reflection, coupled with a large index of refraction, explains why diamonds sparkle more than other materials. The critical angle for a diamond-to-air surface is only $24.4^\circ$, and so when light enters a diamond, it has trouble getting back out. (See .) Although light freely enters the diamond, it can exit only if it makes an angle less than $24.4^\circ$. Facets on diamonds are specifically intended to make this unlikely, so that the light can exit only in certain places. Good diamonds are very clear, so that the light makes many internal reflections and is concentrated at the few places it can exit—hence the sparkle. (Zircon is a natural gemstone that has an exceptionally large index of refraction, but not as large as diamond, so it is not as highly prized. Cubic zirconia is manufactured and has an even higher index of refraction (about 2.17), but still less than that of diamond.) The colors you see emerging from a sparkling diamond are not due to the diamond’s color, which is usually nearly colorless. Those colors result from dispersion, the topic of Dispersion: The Rainbow and Prisms. Colored diamonds get their color from structural defects of the crystal lattice and the inclusion of minute quantities of graphite and other materials. The Argyle Mine in Western Australia produces around 90% of the world’s pink, red, champagne, and cognac diamonds, while around 50% of the world’s clear diamonds come from central and southern Africa.

### Test Prep for AP Courses

### Section Summary
1. The incident angle that produces an angle of refraction of $90^\circ$ is called the critical angle, $\theta_c$.
2. Total internal reflection is a phenomenon that occurs at the boundary between two media, such that if the incident angle in the first medium is greater than the critical angle, then all the light is reflected back into that medium.
3. Fiber optics involves the transmission of light down fibers of plastic or glass, applying the principle of total internal reflection.
4. Endoscopes are used to explore the body through various orifices or minor incisions, based on the transmission of light through optical fibers.
5. Cladding prevents light from being transmitted between fibers in a bundle.
6. Diamonds sparkle due to total internal reflection coupled with a large index of refraction.

### Conceptual Questions

### Problems & Exercises
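The critical-angle relation above is straightforward to evaluate numerically. Here is a minimal Python sketch that reproduces the angles quoted in this section; the index values are representative figures assumed for illustration (for example, 2.419 for diamond):

```python
import math

def critical_angle_deg(n1, n2=1.000):
    """Critical angle, in degrees, for light traveling from medium 1 toward
    medium 2. Total internal reflection requires n1 > n2."""
    if n1 <= n2:
        raise ValueError("total internal reflection requires n1 > n2")
    return math.degrees(math.asin(n2 / n1))

# Representative indices of refraction (assumed for illustration)
for name, n in [("crown glass", 1.52), ("zircon", 1.92),
                ("cubic zirconia", 2.17), ("diamond", 2.419)]:
    print(f"{name}: critical angle to air = {critical_angle_deg(n):.1f} degrees")
```

The output shows glass comfortably under $45^\circ$ (which is why glass corner reflectors work) and diamond at about $24.4^\circ$: the larger the index, the smaller the critical angle and the harder it is for light to escape, which is the source of the sparkle.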
# Geometric Optics
## Dispersion: The Rainbow and Prisms
### Learning Objectives
By the end of this section, you will be able to:
1. Explain the phenomenon of dispersion and discuss its advantages and disadvantages.

Everyone enjoys the spectacle and surprise of rainbows. They’ve been hailed as symbols of hope and spirituality and are the subject of stories and myths across the world’s cultures. Just how does sunlight falling on water droplets cause the multicolored image we see, and what else does this phenomenon tell us about light, color, and radiation?

Working in his native Persia (now Iran), Kamal al-Din Hasan ibn Ali ibn Hasan al-Farisi (1267–1319) designed a series of innovative experiments to answer this question and clarify the explanations of many earlier scientists. At that time, there were no microscopes to examine tiny drops of water similar to those in the atmosphere, so Farisi created an enormous drop of water. He filled a large glass vessel with water and placed it inside a camera obscura, in which he could carefully control the entry of light. Using a series of careful observations on the resulting multicolored spectra of light, he deduced and confirmed that the droplets split—or decompose—white light into the colors of the rainbow. Farisi’s contemporary, Theodoric of Freiberg (in Germany), performed similar experiments using other equipment. Both relied on the prior work of Ibn al-Haytham, often known as the founder of optics and among the first to formalize a scientific method.

We see about six colors in a rainbow—red, orange, yellow, green, blue, and violet; sometimes indigo is listed, too. Those colors are associated with different wavelengths of light, as shown in . When our eye receives pure-wavelength light, we tend to see only one of the six colors, depending on wavelength. The thousands of other hues we can sense in other situations are our eye’s response to various mixtures of wavelengths. White light, in particular, is a fairly uniform mixture of all visible wavelengths. Sunlight, considered to be white, actually appears to be a bit yellow because of its mixture of wavelengths, but it does contain all visible wavelengths. The sequence of colors in rainbows is the same sequence as the colors plotted versus wavelength in . What this implies is that white light is spread out according to wavelength in a rainbow. Dispersion is defined as the spreading of white light into its full spectrum of wavelengths. More technically, dispersion occurs whenever there is a process that changes the direction of light in a manner that depends on wavelength. Dispersion, as a general phenomenon, can occur for any type of wave and always involves wavelength-dependent processes.

Refraction is responsible for dispersion in rainbows and many other situations. The angle of refraction depends on the index of refraction, as we saw in The Law of Refraction. We know that the index of refraction $n$ depends on the medium. But for a given medium, $n$ also depends on wavelength. (See .) Note that, for a given medium, $n$ increases as wavelength decreases and is greatest for violet light. Thus violet light is bent more than red light, as shown for a prism in (b), and the light is dispersed into the same sequence of wavelengths as seen in and .

Rainbows are produced by a combination of refraction and reflection. You may have noticed that you see a rainbow only when you look away from the sun. Light enters a drop of water and is reflected from the back of the drop, as shown in .
The light is refracted both as it enters and as it leaves the drop. Since the index of refraction of water varies with wavelength, the light is dispersed, and a rainbow is observed, as shown in (a). (There is no dispersion caused by reflection at the back surface, since the law of reflection does not depend on wavelength.) The actual rainbow of colors seen by an observer depends on the myriad of rays being refracted and reflected toward the observer’s eyes from numerous drops of water. The effect is most spectacular when the background is dark, as in stormy weather, but can also be observed in waterfalls and lawn sprinklers. The arc of a rainbow comes from the need to be looking at a specific angle relative to the direction of the sun, as illustrated in (b). (If there are two reflections of light within the water drop, another “secondary” rainbow is produced. This rare event produces an arc that lies above the primary rainbow arc—see (c).)

Dispersion may produce beautiful rainbows, but it can cause problems in optical systems. White light used to transmit messages in a fiber is dispersed, spreading out in time and eventually overlapping with other messages. Since a laser produces a nearly pure wavelength, its light experiences little dispersion, an advantage over white light for transmission of information. In contrast, dispersion of electromagnetic waves coming to us from outer space can be used to determine the amount of matter they pass through. As with many phenomena, dispersion can be useful or a nuisance, depending on the situation and our human goals.

### Section Summary
1. The spreading of white light into its full spectrum of wavelengths is called dispersion.
2. Rainbows are produced by a combination of refraction and reflection and involve the dispersion of sunlight into a continuous distribution of colors.
3. Dispersion produces beautiful rainbows but also causes problems in certain optical systems.

### Problems & Exercises
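Because $n$ depends on wavelength, Snell’s law sends red and violet rays in slightly different directions. The short sketch below quantifies this for a ray entering glass at $45^\circ$; the two index values are representative numbers assumed for crown glass at red and violet wavelengths:

```python
import math

def refraction_angle_deg(theta1_deg, n1, n2):
    """Snell's law: n1 sin(theta1) = n2 sin(theta2); returns theta2 in degrees."""
    return math.degrees(math.asin(n1 * math.sin(math.radians(theta1_deg)) / n2))

theta1 = 45.0                    # incident angle in air (n = 1.000)
n_red, n_violet = 1.512, 1.530   # assumed crown-glass indices at red and violet

theta_red = refraction_angle_deg(theta1, 1.000, n_red)
theta_violet = refraction_angle_deg(theta1, 1.000, n_violet)
print(f"red: {theta_red:.2f} deg, violet: {theta_violet:.2f} deg, "
      f"spread: {theta_red - theta_violet:.2f} deg")
```

Even this fraction-of-a-degree spread, compounded at the second surface of a prism or over refraction and reflection in a water drop, is enough to separate white light into a visible spectrum.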
# Geometric Optics
## Image Formation by Lenses
### Learning Objectives
By the end of this section, you will be able to:
1. List the rules for ray tracing for thin lenses.
2. Illustrate the formation of images using the technique of ray tracing.
3. Determine the power of a lens given the focal length.

Lenses are found in a huge array of optical instruments, ranging from a simple magnifying glass to the eye to a camera’s zoom lens. In this section, we will use the law of refraction to explore the properties of lenses and how they form images.

The word lens derives from the Latin word for a lentil bean, the shape of which is similar to the convex lens in . The convex lens shown has been shaped so that all light rays that enter it parallel to its axis cross one another at a single point on the opposite side of the lens. (The axis is defined to be a line normal to the lens at its center, as shown in .) Such a lens is called a converging (or convex) lens for the converging effect it has on light rays. An expanded view of the path of one ray through the lens is shown, to illustrate how the ray changes direction both as it enters and as it leaves the lens. Since the index of refraction of the lens is greater than that of air, the ray moves towards the perpendicular as it enters and away from the perpendicular as it leaves. (This is in accordance with the law of refraction.) Due to the lens’s shape, light is thus bent toward the axis at both surfaces. The point at which the rays cross is defined to be the focal point F of the lens. The distance from the center of the lens to its focal point is defined to be the focal length $f$ of the lens. shows how a converging lens, such as that in a magnifying glass, can converge the nearly parallel light rays from the sun to a small spot.

The greater effect a lens has on light rays, the more powerful it is said to be. For example, a powerful converging lens will focus parallel light rays closer to itself and will have a smaller focal length than a weak lens. The light will also focus into a smaller and more intense spot for a more powerful lens. The power $P$ of a lens is defined to be the inverse of its focal length. In equation form, this is
$$P = \frac{1}{f} .$$

 shows a concave lens and the effect it has on rays of light that enter it parallel to its axis (the path taken by ray 2 in the figure is the axis of the lens). The concave lens is a diverging lens, because it causes the light rays to bend away (diverge) from its axis. In this case, the lens has been shaped so that all light rays entering it parallel to its axis appear to originate from the same point, F, defined to be the focal point of a diverging lens. The distance from the center of the lens to the focal point is again called the focal length $f$ of the lens. Note that the focal length and power of a diverging lens are defined to be negative. For example, if the distance to F in is 5.00 cm, then the focal length is $f = -5.00$ cm and the power of the lens is $P = -20.0$ D. An expanded view of the path of one ray through the lens is shown in the figure to illustrate how the shape of the lens, together with the law of refraction, causes the ray to follow its particular path and be diverged.

As noted in the initial discussion of the law of refraction in The Law of Refraction, the paths of light rays are exactly reversible. This means that the direction of the arrows could be reversed for all of the rays in and . For example, if a point light source is placed at the focal point of a convex lens, as shown in , parallel light rays emerge from the other side.
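Since power is simply the reciprocal of the focal length expressed in meters, it takes only a couple of lines to check such numbers. This sketch reproduces the diverging-lens example above (the +10.0 cm focal length in the first call is an arbitrary illustrative value):

```python
def lens_power_diopters(focal_length_m):
    """Power of a lens in diopters (D): P = 1/f, with f in meters."""
    return 1.0 / focal_length_m

print(lens_power_diopters(0.100))    # converging lens, f = +10.0 cm: +10.0 D
print(lens_power_diopters(-0.0500))  # diverging lens,  f = -5.00 cm: -20.0 D
```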
### Ray Tracing and Thin Lenses
Ray tracing is the technique of determining or following (tracing) the paths that light rays take. For rays passing through matter, the law of refraction is used to trace the paths. Here we use ray tracing to help us understand the action of lenses in situations ranging from forming images on film to magnifying small print to correcting nearsightedness. While ray tracing for complicated lenses, such as those found in sophisticated cameras, may require computer techniques, there is a set of simple rules for tracing rays through thin lenses. A thin lens is defined to be one whose thickness allows rays to refract, as illustrated in , but does not allow properties such as dispersion and aberrations. An ideal thin lens has two refracting surfaces but the lens is thin enough to assume that light rays bend only once. A thin symmetrical lens has two focal points, one on either side and both at the same distance from the lens. (See .) Another important characteristic of a thin lens is that light rays through its center are deflected by a negligible amount, as seen in .

Using paper, pencil, and a straight edge, ray tracing can accurately describe the operation of a lens. The rules for ray tracing for thin lenses are based on the illustrations already discussed:
1. A ray entering a converging lens parallel to its axis passes through the focal point F of the lens on the other side. (See rays 1 and 3 in .)
2. A ray entering a diverging lens parallel to its axis seems to come from the focal point F. (See rays 1 and 3 in .)
3. A ray passing through the center of either a converging or a diverging lens does not change direction. (See , and see ray 2 in and .)
4. A ray entering a converging lens through its focal point exits parallel to its axis. (The reverse of rays 1 and 3 in .)
5. A ray that enters a diverging lens by heading toward the focal point on the opposite side exits parallel to the axis. (The reverse of rays 1 and 3 in .)

### Image Formation by Thin Lenses
In some circumstances, a lens forms an obvious image, such as when a movie projector casts an image onto a screen. In other cases, the image is less obvious. Where, for example, is the image formed by eyeglasses? We will use ray tracing for thin lenses to illustrate how they form images, and we will develop equations to describe the image formation quantitatively.

Consider an object some distance away from a converging lens, as shown in . To find the location and size of the image formed, we trace the paths of selected light rays originating from one point on the object, in this case the top of the person’s head. The figure shows three rays from the top of the object that can be traced using the ray tracing rules given above. (Rays leave this point going in many directions, but we concentrate on only a few with paths that are easy to trace.) The first ray is one that enters the lens parallel to its axis and passes through the focal point on the other side (rule 1). The second ray passes through the center of the lens without changing direction (rule 3). The third ray passes through the nearer focal point on its way into the lens and leaves the lens parallel to its axis (rule 4). The three rays cross at the same point on the other side of the lens. The image of the top of the person’s head is located at this point. All rays that come from the same point on the top of the person’s head are refracted in such a way as to cross at the point shown.
Rays from another point on the object, such as her belt buckle, will also cross at another common point, forming a complete image, as shown. Although three rays are traced in , only two are necessary to locate the image. It is best to trace rays for which there are simple ray tracing rules. Before applying ray tracing to other situations, let us consider the example shown in in more detail.

The image formed in is a real image, meaning that it can be projected. That is, light rays from one point on the object actually cross at the location of the image and can be projected onto a screen, a piece of film, or the retina of an eye, for example. shows how such an image would be projected onto film by a camera lens. This figure also shows how a real image is projected onto the retina by the lens of an eye. Note that the image is there whether it is projected onto a screen or not.

Several important distances appear in . We define $d_o$ to be the object distance, the distance of an object from the center of a lens. Image distance $d_i$ is defined to be the distance of the image from the center of a lens. The height of the object and height of the image are given the symbols $h_o$ and $h_i$, respectively. Images that appear upright relative to the object have heights that are positive and those that are inverted have negative heights. Using the rules of ray tracing and making a scale drawing with paper and pencil, like that in , we can accurately describe the location and size of an image. But the real benefit of ray tracing is in visualizing how images are formed in a variety of situations. To obtain numerical information, we use a pair of equations that can be derived from a geometric analysis of ray tracing for thin lenses. The thin lens equations are
$$\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}$$
and
$$\frac{h_i}{h_o} = -\frac{d_i}{d_o} = m .$$
We define the ratio of image height to object height ($h_i/h_o$) to be the magnification $m$. (The minus sign in the equation above will be discussed shortly.) The thin lens equations are broadly applicable to all situations involving thin lenses (and “thin” mirrors, as we will see later). We will explore many features of image formation in the following worked examples.

Real images, such as the one considered in the previous example, are formed by converging lenses whenever an object is farther from the lens than its focal length. This is true for movie projectors, cameras, and the eye. We shall refer to these as case 1 images. A case 1 image is formed when $d_o > f$ and $f$ is positive, as in (a). (A summary of the three cases or types of image formation appears at the end of this section.)

A different type of image is formed when an object, such as a person's face, is held close to a convex lens. The image is upright and larger than the object, as seen in (b), and so the lens is called a magnifier. If you slowly pull the magnifier away from the face, you will see that the magnification steadily increases until the image begins to blur. Pulling the magnifier even farther away produces an inverted image as seen in (a). The distance at which the image blurs, and beyond which it inverts, is the focal length of the lens. To use a convex lens as a magnifier, the object must be closer to the converging lens than its focal length. This is called a case 2 image. A case 2 image is formed when $d_o < f$ and $f$ is positive.

 uses ray tracing to show how an image is formed when an object is held closer to a converging lens than its focal length. Rays coming from a common point on the object continue to diverge after passing through the lens, but all appear to originate from a point at the location of the image.
The image is on the same side of the lens as the object and is farther away from the lens than the object. This image, like all case 2 images, cannot be projected and, hence, is called a virtual image. Light rays only appear to originate at a virtual image; they do not actually pass through that location in space. A screen placed at the location of a virtual image will receive only diffuse light from the object, not focused rays from the lens. Additionally, a screen placed on the opposite side of the lens will receive rays that are still diverging, and so no image will be projected on it. We can see the magnified image with our eyes, because the lens of the eye converges the rays into a real image projected on our retina. Finally, we note that a virtual image is upright and larger than the object, meaning that the magnification is positive and greater than 1.

A third type of image is formed by a diverging or concave lens. Try looking through eyeglasses meant to correct nearsightedness. (See .) You will see an image that is upright but smaller than the object. This means that the magnification is positive but less than 1. The ray diagram in shows that the image is on the same side of the lens as the object and, hence, cannot be projected—it is a virtual image. Note that the image is closer to the lens than the object. This is a case 3 image, formed for any object by a negative focal length or diverging lens.

 summarizes the three types of images formed by single thin lenses. These are referred to as case 1, 2, and 3 images. Convex (converging) lenses can form either real or virtual images (cases 1 and 2, respectively), whereas concave (diverging) lenses can form only virtual images (always case 3). Real images are always inverted, but they can be either larger or smaller than the object. For example, a slide projector forms an image larger than the slide, whereas a camera makes an image smaller than the object being photographed. Virtual images are always upright and cannot be projected. Virtual images are larger than the object only in case 2, where a convex lens is used. The virtual image produced by a concave lens is always smaller than the object—a case 3 image. We can see and photograph virtual images only by using an additional lens to form a real image. In Image Formation by Mirrors, we shall see that mirrors can form exactly the same types of images as lenses.

### Problem-Solving Strategies for Lenses
Step 1. Examine the situation to determine that image formation by a lens is involved.
Step 2. Determine whether ray tracing, the thin lens equations, or both are to be employed. A sketch is very useful even if ray tracing is not specifically required by the problem. Write symbols and values on the sketch.
Step 3. Identify exactly what needs to be determined in the problem (identify the unknowns).
Step 4. Make a list of what is given or can be inferred from the problem as stated (identify the knowns). It is helpful to determine whether the situation involves a case 1, 2, or 3 image. While these are just names for types of images, they have certain characteristics (given in ) that can be of great use in solving problems.
Step 5. If ray tracing is required, use the ray tracing rules listed near the beginning of this section.
Step 6. Most quantitative problems require the use of the thin lens equations. These are solved in the usual manner by substituting knowns and solving for unknowns. Several worked examples serve as guides.
Step 7.
Check to see if the answer is reasonable: Does it make sense? If you have identified the type of image (case 1, 2, or 3), you should assess whether your answer is consistent with the type of image, magnification, and so on.

### Test Prep for AP Courses

### Section Summary
1. Light rays entering a converging lens parallel to its axis cross one another at a single point on the opposite side.
2. For a converging lens, the focal point is the point at which converging light rays cross; for a diverging lens, the focal point is the point from which diverging light rays appear to originate.
3. The distance from the center of the lens to its focal point is called the focal length $f$.
4. Power $P$ of a lens is defined to be the inverse of its focal length, $P = \frac{1}{f}$.
5. A lens that causes the light rays to bend away from its axis is called a diverging lens.
6. Ray tracing is the technique of graphically determining the paths that light rays take.
7. The image in which light rays from one point on the object actually cross at the location of the image and can be projected onto a screen, a piece of film, or the retina of an eye is called a real image.
8. Thin lens equations are $\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}$ and $\frac{h_i}{h_o} = -\frac{d_i}{d_o} = m$ (magnification).
9. The distance of the image from the center of the lens is called image distance $d_i$.
10. An image that is on the same side of the lens as the object and cannot be projected on a screen is called a virtual image.

### Conceptual Questions

### Problems & Exercises
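The three cases summarized above follow directly from the thin lens equations. A minimal sketch, with arbitrary illustrative focal lengths and object distances:

```python
def thin_lens(f_cm, do_cm):
    """Solve 1/do + 1/di = 1/f for the image distance di,
    and return the magnification m = -di/do as well."""
    di = 1.0 / (1.0 / f_cm - 1.0 / do_cm)
    return di, -di / do_cm

print(thin_lens(10.0, 30.0))   # case 1: di = +15.0 cm, m = -0.50 (real, inverted)
print(thin_lens(10.0, 5.0))    # case 2: di = -10.0 cm, m = +2.00 (virtual, upright, enlarged)
print(thin_lens(-10.0, 20.0))  # case 3: di ~ -6.67 cm, m ~ +0.33 (virtual, upright, smaller)
```

A negative $d_i$ signals a virtual image on the same side of the lens as the object, and the sign of $m$ distinguishes upright from inverted, matching the case descriptions above.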
# Geometric Optics
## Image Formation by Mirrors
### Learning Objectives
By the end of this section, you will be able to:
1. Illustrate image formation in a flat mirror.
2. Explain with ray diagrams the formation of an image using spherical mirrors.
3. Determine focal length and magnification given radius of curvature, distance of object and image.

We only have to look as far as the nearest bathroom to find an example of an image formed by a mirror. Images in flat mirrors are the same size as the object and are located behind the mirror. Like lenses, mirrors can form a variety of images. For example, dental mirrors may produce a magnified image, just as makeup mirrors do. Security mirrors in shops, on the other hand, form images that are smaller than the object. We will use the law of reflection to understand how mirrors form images, and we will find that mirror images are analogous to those formed by lenses.

 helps illustrate how a flat mirror forms an image. Two rays are shown emerging from the same point, striking the mirror, and being reflected into the observer’s eye. The rays can diverge slightly, and both still get into the eye. If the rays are extrapolated backward, they seem to originate from a common point behind the mirror, locating the image. (The paths of the reflected rays into the eye are the same as if they had come directly from that point behind the mirror.) Using the law of reflection—the angle of reflection equals the angle of incidence—we can see that the image and object are the same distance from the mirror. This is a virtual image, since it cannot be projected—the rays only appear to originate from a common point behind the mirror. Obviously, if you walk behind the mirror, you cannot see the image, since the rays do not go there. But in front of the mirror, the rays behave exactly as if they had come from behind the mirror, so that is where the image is situated.

Now let us consider the focal length of a mirror—for example, the concave spherical mirrors in . Rays of light that strike the surface follow the law of reflection. For a mirror that is large compared with its radius of curvature, as in (a), we see that the reflected rays do not cross at the same point, and the mirror does not have a well-defined focal point. If the mirror had the shape of a parabola, the rays would all cross at a single point, and the mirror would have a well-defined focal point. But parabolic mirrors are much more expensive to make than spherical mirrors. The solution is to use a mirror that is small compared with its radius of curvature, as shown in (b). (This is the mirror equivalent of the thin lens approximation.) To a very good approximation, this mirror has a well-defined focal point at F that is the focal distance $f$ from the center of the mirror. The focal length $f$ of a concave mirror is positive, since it is a converging mirror.

Just as for lenses, the shorter the focal length, the more powerful the mirror; thus, $P = \frac{1}{f}$ for a mirror, too. A more strongly curved mirror has a shorter focal length and a greater power. Using the law of reflection and some simple trigonometry, it can be shown that the focal length is half the radius of curvature, or
$$f = \frac{R}{2},$$
where $R$ is the radius of curvature of a spherical mirror. The smaller the radius of curvature, the smaller the focal length and, thus, the more powerful the mirror.

The convex mirror shown in also has a focal point. Parallel rays of light reflected from the mirror seem to originate from the point F at the focal distance $f$ behind the mirror.
The focal length and power of a convex mirror are negative, since it is a diverging mirror.

Ray tracing is as useful for mirrors as for lenses. The rules for ray tracing for mirrors are based on the illustrations just discussed:
1. A ray approaching a concave converging mirror parallel to its axis is reflected through the focal point F of the mirror on the same side. (See rays 1 and 3 in (b).)
2. A ray approaching a convex diverging mirror parallel to its axis is reflected so that it seems to come from the focal point F behind the mirror. (See rays 1 and 3 in .)
3. Any ray striking the center of a mirror is followed by applying the law of reflection; it makes the same angle with the axis when leaving as when approaching. (See ray 2 in .)
4. A ray approaching a concave converging mirror through its focal point is reflected parallel to its axis. (The reverse of rays 1 and 3 in .)
5. A ray approaching a convex diverging mirror by heading toward its focal point on the opposite side is reflected parallel to the axis. (The reverse of rays 1 and 3 in .)

We will use ray tracing to illustrate how images are formed by mirrors, and we can use ray tracing quantitatively to obtain numerical information. But since we assume each mirror is small compared with its radius of curvature, we can use the thin lens equations for mirrors just as we did for lenses.

Consider the situation shown in , concave spherical mirror reflection, in which an object is placed farther from a concave (converging) mirror than its focal length. That is, $f$ is positive and $d_o > f$, so that we may expect an image similar to the case 1 real image formed by a converging lens. Ray tracing in shows that the rays from a common point on the object all cross at a point on the same side of the mirror as the object. Thus a real image can be projected onto a screen placed at this location. The image distance is positive, and the image is inverted, so its magnification is negative. This is a case 1 image for mirrors. It differs from the case 1 image for lenses only in that the image is on the same side of the mirror as the object. It is otherwise identical.

### Problem-Solving Strategy for Mirrors
Step 1. Examine the situation to determine that image formation by a mirror is involved.
Step 2. Refer to the Problem-Solving Strategies for Lenses. The same strategies are valid for mirrors as for lenses with one qualification—use the ray tracing rules for mirrors listed earlier in this section.

### Test Prep for AP Courses

### Section Summary
1. The characteristics of an image formed by a flat mirror are: (a) The image and object are the same distance from the mirror, (b) The image is a virtual image, and (c) The image is situated behind the mirror.
2. Focal length is half the radius of curvature: $f = \frac{R}{2}$.
3. A convex mirror is a diverging mirror and forms only one type of image, namely a virtual image.

### Conceptual Questions

### Problems & Exercises
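Because a small spherical mirror obeys the same equations as a thin lens once $f = R/2$ is applied, a short numerical sketch covers both concave and convex cases (the radii and object distance are arbitrary illustrative values):

```python
def mirror_image(R_cm, do_cm):
    """Spherical mirror: f = R/2 (positive for concave, negative for convex),
    then 1/do + 1/di = 1/f and m = -di/do, exactly as for thin lenses."""
    f = R_cm / 2.0
    di = 1.0 / (1.0 / f - 1.0 / do_cm)
    return f, di, -di / do_cm

print(mirror_image(40.0, 60.0))   # concave: f = +20 cm, di = +30 cm, m = -0.50 (real, inverted)
print(mirror_image(-40.0, 60.0))  # convex:  f = -20 cm, di = -15 cm, m = +0.25 (virtual, upright)
```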
# Vision and Optical Instruments
## Introduction to Vision and Optical Instruments
Explore how the image on the computer screen is formed. How is the image formation on the computer screen different from the image formation in your eye as you look down the microscope? How can videos of living cell processes be taken for viewing later on, and by many different people?

Seeing faces and objects we love and cherish is a delight—one’s favorite teddy bear, a picture on the wall, or the sun rising over the mountains. Intricate images help us understand nature and are invaluable for developing techniques and technologies in order to improve the quality of life. The image of a red blood cell that almost fills the cross-sectional area of a tiny capillary makes us wonder how blood makes it through without getting stuck. We are able to see bacteria and viruses and understand their structure. It is the knowledge of physics that provides the fundamental understanding and models required to develop new techniques and instruments. Therefore, physics is called an enabling science—a science that enables development and advancement in other areas. It is through optics and imaging that physics enables advancement in major areas of biosciences. This chapter illustrates the enabling nature of physics through an understanding of how a human eye is able to see, how we are able to use optical instruments to see beyond what is possible with the naked eye, and how we develop methods of vision correction. It is convenient to categorize these instruments on the basis of geometric optics (see Geometric Optics) and wave optics (see Wave Optics).
# Vision and Optical Instruments
## Physics of the Eye
### Learning Objectives
By the end of this section, you will be able to:
1. Explain the image formation by the eye.
2. Explain why peripheral images lack detail and color.
3. Define refractive indices.
4. Analyze the accommodation of the eye for distant and near vision.

Early thinkers had a wide array of theories regarding vision. Euclid and Ptolemy believed that the eyes emitted rays of light; others promoted the idea that objects gave off some particle or substance that was discerned by the eye. Ibn al-Haytham (sometimes called Alhazen), who was mentioned earlier as an originator of the scientific method, conducted a number of experiments to illustrate how the anatomical construction of the eye led to its ability to form images. He recognized that light reflected from objects entered the eye through the lens and was passed to the optic nerve. Al-Haytham did not fully understand the mechanisms involved, but many subsequent discoveries in vision, reflection, and magnification built on his discoveries and methods.

The eye is perhaps the most interesting of all optical instruments. The eye is remarkable in how it forms images and in the richness of detail and color it can detect. However, our eyes commonly need some correction, to reach what is called “normal” vision, but should be called ideal rather than normal. Image formation by our eyes and common vision correction are easy to analyze with the optics discussed in Geometric Optics.

 shows the basic anatomy of the eye. The cornea and lens form a system that, to a good approximation, acts as a single thin lens. For clear vision, a real image must be projected onto the light-sensitive retina, which lies at a fixed distance from the lens. The lens of the eye adjusts its power to produce an image on the retina for objects at different distances. The center of the image falls on the fovea, which has the greatest density of light receptors and the greatest acuity (sharpness) in the visual field. The variable opening (or pupil) of the eye along with chemical adaptation allows the eye to detect light intensities from the lowest observable to $10^{10}$ times greater (without damage). This is an incredible range of detection. Our eyes perform a vast number of functions, such as sensing direction, movement, sophisticated colors, and distance. Processing of visual nerve impulses begins with interconnections in the retina and continues in the brain. The optic nerve conveys signals received by the eye to the brain.

Refractive indices are crucial to image formation using lenses. shows refractive indices relevant to the eye. The biggest change in the refractive index, and bending of rays, occurs at the cornea rather than the lens. The ray diagram in shows image formation by the cornea and lens of the eye. The rays bend according to the refractive indices provided in . The cornea provides about two-thirds of the power of the eye, owing to the fact that the speed of light changes considerably while traveling from air into the cornea. The lens provides the remaining power needed to produce an image on the retina. The cornea and lens can be treated as a single thin lens, even though the light rays pass through several layers of material (such as cornea, aqueous humor, several layers in the lens, and vitreous humor), changing direction at each interface. The image formed is much like the one produced by a single convex lens. This is a case 1 image.
Images formed in the eye are inverted but the brain inverts them once more to make them seem upright. As noted, the image must fall precisely on the retina to produce clear vision—that is, the image distance $d_i$ must equal the lens-to-retina distance. Because the lens-to-retina distance does not change, the image distance $d_i$ must be the same for objects at all distances. The eye manages this by varying the power (and focal length) of the lens to accommodate for objects at various distances. The process of adjusting the eye’s focal length is called accommodation. A person with normal (ideal) vision can see objects clearly at distances ranging from 25 cm to essentially infinity. However, although the near point (the shortest distance at which a sharp focus can be obtained) increases with age (becoming meters for some older people), we will consider it to be 25 cm in our treatment here.

 shows the accommodation of the eye for distant and near vision. Since light rays from a nearby object can diverge and still enter the eye, the lens must be more converging (more powerful) for close vision than for distant vision. To be more converging, the lens is made thicker by the action of the ciliary muscle surrounding it. The eye is most relaxed when viewing distant objects, one reason that microscopes and telescopes are designed to produce distant images. Vision of very distant objects is called totally relaxed, while close vision is termed accommodated, with the closest vision being fully accommodated.

We will use the thin lens equations to examine image formation by the eye quantitatively. First, note the power of a lens is given as $P = 1/f$, so we rewrite the thin lens equations as
$$P = \frac{1}{d_o} + \frac{1}{d_i}$$
and
$$\frac{h_i}{h_o} = -\frac{d_i}{d_o} = m .$$
We understand that $d_i$ must equal the lens-to-retina distance to obtain clear vision, and that normal vision is possible for objects at distances of 25 cm to infinity. The eye can detect an impressive amount of detail, considering how small the image is on the retina. To get some idea of how small the image can be, consider the following example.

### Test Prep for AP Courses

### Section Summary
1. Image formation by the eye is adequately described by the thin lens equations: $P = \frac{1}{d_o} + \frac{1}{d_i}$ and $\frac{h_i}{h_o} = -\frac{d_i}{d_o} = m$.
2. The eye produces a real image on the retina by adjusting its focal length and power in a process called accommodation.
3. For close vision, the eye is fully accommodated and has its greatest power, whereas for distant vision, it is totally relaxed and has its smallest power.
4. The loss of the ability to accommodate with age is called presbyopia, which is corrected by the use of a converging lens to add power for close vision.

### Conceptual Questions

### Problem Exercises
Unless otherwise stated, the lens-to-retina distance is 2.00 cm.
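Using the 2.00 cm lens-to-retina distance quoted above, the rewritten thin lens equation gives the eye’s power directly. A minimal sketch:

```python
import math

def eye_power_diopters(do_m, di_m=0.0200):
    """P = 1/do + 1/di, distances in meters; di is the 2.00 cm lens-to-retina
    distance. Note that 1/math.inf evaluates to 0.0 for a distant object."""
    return 1.0 / do_m + 1.0 / di_m

print(eye_power_diopters(math.inf))  # distant vision (totally relaxed): 50.0 D
print(eye_power_diopters(0.250))     # object at the 25 cm near point:   54.0 D
```

The eye therefore needs to change its power by only about 8% to accommodate from infinity down to the near point.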
# Vision and Optical Instruments
## Vision Correction
### Learning Objectives
By the end of this section, you will be able to:
1. Identify and discuss common vision defects.
2. Explain nearsightedness and farsightedness corrections.
3. Explain laser vision correction.

The need for some type of vision correction is very common. Common vision defects are easy to understand, and some are simple to correct. illustrates two common vision defects. Nearsightedness, or myopia, is the inability to see distant objects clearly while close objects are clear. The eye overconverges the nearly parallel rays from a distant object, and the rays cross in front of the retina. More divergent rays from a close object are converged on the retina for a clear image. The distance to the farthest object that can be seen clearly is called the far point of the eye (normally infinity). Farsightedness, or hyperopia, is the inability to see close objects clearly while distant objects may be clear. A farsighted eye does not converge sufficient rays from a close object to make the rays meet on the retina. Less diverging rays from a distant object can be converged for a clear image. The distance to the closest object that can be seen clearly is called the near point of the eye (normally 25 cm).

Since the nearsighted eye overconverges light rays, the correction for nearsightedness is to place a diverging spectacle lens in front of the eye. This reduces the power of an eye that is too powerful. Another way of thinking about this is that a diverging spectacle lens produces a case 3 image, which is closer to the eye than the object (see ). To determine the spectacle power needed for correction, you must know the person’s far point—that is, you must know the greatest distance at which the person can see clearly. Then the image produced by a spectacle lens must be at this distance or closer for the nearsighted person to be able to see it clearly. It is worth noting that wearing glasses does not change the eye in any way. The eyeglass lens is simply used to create an image of the object at a distance where the nearsighted person can see it clearly. Whereas someone not wearing glasses can see clearly objects that fall between their near point and their far point, someone wearing glasses can see images that fall between their near point and their far point.

Since the farsighted eye underconverges light rays, the correction for farsightedness is to place a converging spectacle lens in front of the eye. This increases the power of an eye that is too weak. Another way of thinking about this is that a converging spectacle lens produces a case 2 image, which is farther from the eye than the object (see ). To determine the spectacle power needed for correction, you must know the person’s near point—that is, you must know the smallest distance at which the person can see clearly. Then the image produced by a spectacle lens must be at this distance or farther for the farsighted person to be able to see it clearly.

Another common vision defect is astigmatism, an unevenness or asymmetry in the focus of the eye. For example, rays passing through a vertical region of the eye may focus closer than rays passing through a horizontal region, resulting in the image appearing elongated. This is mostly due to irregularities in the shape of the cornea but can also be due to lens irregularities or unevenness in the retina. Because of these irregularities, different parts of the lens system produce images at different locations.
The eye-brain system can compensate for some of these irregularities, but they generally manifest themselves as less distinct vision or sharper images along certain axes. shows a chart used to detect astigmatism. Astigmatism can be at least partially corrected with a spectacle having the opposite irregularity of the eye. If an eyeglass prescription has a cylindrical correction, it is there to correct astigmatism. The normal corrections for near- or farsightedness are spherical corrections, uniform along all axes.

Contact lenses have advantages over glasses beyond their cosmetic aspects. One problem with glasses is that as the eye moves, it is not at a fixed distance from the spectacle lens. Contacts rest on and move with the eye, eliminating this problem. Because contacts cover a significant portion of the cornea, they provide superior peripheral vision compared with eyeglasses. Contacts also correct some corneal astigmatism caused by surface irregularities. The tear layer between the smooth contact and the cornea fills in the irregularities. Since the index of refraction of the tear layer and the cornea are very similar, you now have a regular optical surface in place of an irregular one. If the curvature of a contact lens is not the same as the cornea (as may be necessary with some individuals to obtain a comfortable fit), the tear layer between the contact and cornea acts as a lens. If the tear layer is thinner in the center than at the edges, it has a negative power, for example. Skilled optometrists will adjust the power of the contact to compensate.

Other advances in vision correction demonstrate the interconnectedness and value of scientific research. In the 1980s, Donna Strickland and Gérard Mourou worked on ways to make small but powerful lasers. Up until that time, powerful lasers had to be quite large in order to function properly. Essentially, the intensity of the beam itself would modify the instrument’s ability to function and create too much heat to be practical. Strickland and Mourou used ultrashort laser pulses passed over a grating that modified the beam but retained its power. Chirped pulse amplification, as it became known, has been used to develop most of the highest-powered lasers in the world, but also some of the smallest and most common. Decades after their initial discovery, Strickland and Mourou were awarded the Nobel Prize for Physics (with Strickland becoming the third woman to receive the award) partly due to CPA’s pivotal role in the increasingly common practice of laser vision correction—an application neither planned during their initial research.

Laser vision correction has progressed rapidly in the last few years. It is the latest and by far the most successful in a series of procedures that correct vision by reshaping the cornea. As noted at the beginning of this section, the cornea accounts for about two-thirds of the power of the eye. Thus, small adjustments of its curvature have the same effect as putting a lens in front of the eye. To a reasonable approximation, the power of multiple lenses placed close together equals the sum of their powers. For example, a concave spectacle lens (for nearsightedness) having $P = -3.00$ D has the same effect on vision as reducing the power of the eye itself by 3.00 D. So to correct the eye for nearsightedness, the cornea is flattened to reduce its power.
Similarly, to correct for farsightedness, the curvature of the cornea is enhanced to increase the power of the eye—the same effect as the positive power spectacle lens used for farsightedness. Laser vision correction uses high intensity electromagnetic radiation to ablate (to remove material from the surface) and reshape the corneal surfaces.

Today, the most commonly used laser vision correction procedure is Laser in situ Keratomileusis (LASIK). The top layer of the cornea is surgically peeled back and the underlying tissue ablated by multiple bursts of finely controlled ultraviolet radiation produced by an excimer laser. Lasers are used because they not only produce well-focused intense light, but they also emit very pure wavelength electromagnetic radiation that can be controlled more accurately than mixed wavelength light. The 193 nm wavelength UV commonly used is very strongly absorbed by corneal tissue, allowing precise evaporation of very thin layers. A computer controlled program applies more bursts, usually at a rate of 10 per second, to the areas that require deeper removal. Typically a spot less than 1 mm in diameter and a fraction of a micrometer in thickness is removed by each burst. Nearsightedness, farsightedness, and astigmatism can be corrected with an accuracy that produces normal distant vision in more than 90% of the patients, in many cases right away. The corneal flap is replaced; healing takes place rapidly and is nearly painless. More than 1 million Americans per year undergo LASIK (see ).

### Test Prep for AP Courses

### Section Summary
1. Nearsightedness, or myopia, is the inability to see distant objects and is corrected with a diverging lens to reduce power.
2. Farsightedness, or hyperopia, is the inability to see close objects and is corrected with a converging lens to increase power.
3. In myopia and hyperopia, the corrective lenses produce images at a distance that the person can see clearly—the far point and near point, respectively.

### Conceptual Questions

### Problem Exercises
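The spectacle powers discussed above follow from the thin lens equation with the image placed at the far point (for myopia) or the near point (for hyperopia). Below is a minimal sketch that ignores the small spectacle-to-eye distance; the 30 cm far point and 1.00 m near point are assumed illustrative values:

```python
def myopia_spectacle_power(far_point_m):
    """A distant object (do -> infinity) must image at the far point, on the
    same side as the object, so di = -far_point and P = 1/do + 1/di = 1/di."""
    return 1.0 / (-far_point_m)

def hyperopia_spectacle_power(near_point_m, reading_distance_m=0.250):
    """An object at the 25 cm reading distance must image at the near point:
    di = -near_point, so P = 1/do + 1/di."""
    return 1.0 / reading_distance_m + 1.0 / (-near_point_m)

print(myopia_spectacle_power(0.300))    # far point 30 cm: about -3.33 D
print(hyperopia_spectacle_power(1.00))  # near point 1.0 m: +4.00 - 1.00 = +3.00 D
```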
# Vision and Optical Instruments
## Color and Color Vision
### Learning Objectives
By the end of this section, you will be able to:
1. Explain the simple theory of color vision.
2. Outline the coloring properties of light sources.
3. Describe the retinex theory of color vision.

The gift of vision is made richer by the existence of color. Objects and lights abound with thousands of hues that stimulate our eyes, brains, and emotions. Two basic questions are addressed in this brief treatment—what does color mean in scientific terms, and how do we, as humans, perceive it?

### Simple Theory of Color Vision
We have already noted that color is associated with the wavelength of visible electromagnetic radiation. When our eyes receive pure-wavelength light, we tend to see only a few colors. Six of these (most often listed) are red, orange, yellow, green, blue, and violet. These are the rainbow of colors produced when white light is dispersed according to different wavelengths. There are thousands of other hues that we can perceive. These include brown, teal, gold, pink, and white. One simple theory of color vision implies that all these hues are our eye’s response to different combinations of wavelengths. This is true to an extent, but we find that color perception is even subtler than our eye’s response for various wavelengths of light.

The two major types of light-sensing cells (photoreceptors) in the retina are rods and cones. Rods are more sensitive than cones by a factor of about 1000 and are solely responsible for peripheral vision as well as vision in very dark environments. They are also important for motion detection. There are about 120 million rods in the human retina. Rods do not yield color information. You may notice that you lose color vision when it is very dark, but you retain the ability to discern grey scales. Cones are most concentrated in the fovea, the central region of the retina. There are no rods here. The fovea is at the center of the macula, a 5 mm diameter region responsible for our central vision. The cones work best in bright light and are responsible for high resolution vision. There are about 6 million cones in the human retina. There are three types of cones, and each type is sensitive to different ranges of wavelengths, as illustrated in . A simplified theory of color vision is that there are three primary colors corresponding to the three types of cones. The thousands of other hues that we can distinguish among are created by various combinations of stimulations of the three types of cones. Color television uses a three-color system in which the screen is covered with equal numbers of red, green, and blue phosphor dots. The broad range of hues a viewer sees is produced by various combinations of these three colors. For example, you will perceive yellow when red and green are illuminated with the correct ratio of intensities. White may be sensed when all three are illuminated. Then, it would seem that all hues can be produced by adding three primary colors in various proportions. But there is an indication that color vision is more sophisticated. There is no unique set of three primary colors. Another set that works is yellow, green, and blue. A further indication of the need for a more complex theory of color vision is that various different combinations can produce the same hue. Yellow can be sensed with yellow light, or with a combination of red and green, and also with white light from which violet has been removed.
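The additive three-color scheme used in displays is easy to caricature numerically. The toy sketch below is an illustration only (real cone responses and phosphor spectra are far more complicated); it sums per-channel intensities on the usual 0–255 scale:

```python
def add_light(*colors):
    """Additively mix RGB light intensities, channel by channel, clipping at 255."""
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)
print(add_light(red, green))        # (255, 255, 0): perceived as yellow
print(add_light(red, green, blue))  # (255, 255, 255): perceived as white
```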
The three-primary-colors aspect of color vision is well established; more sophisticated theories expand on it rather than deny it. Consider why various objects display color—that is, why are feathers blue and red in a crimson rosella? The true color of an object is defined by its absorptive or reflective characteristics. shows white light falling on three different objects, one pure blue, one pure red, and one black, as well as pure red light falling on a white object. Other hues are created by more complex absorption characteristics. Pink, for example on a galah cockatoo, can be due to weak absorption of all colors except red. An object can appear a different color under non-white illumination. For example, a pure blue object illuminated with pure red light will appear black, because it absorbs all the red light falling on it. But, the true color of the object is blue, which is independent of illumination.

Similarly, light sources have colors that are defined by the wavelengths they produce. A helium-neon laser emits pure red light. In fact, the phrase “pure red light” is defined by having a sharp constrained spectrum, a characteristic of laser light. The Sun produces a broad yellowish spectrum, fluorescent lights emit bluish-white light, and incandescent lights emit reddish-white hues as seen in . As you would expect, you sense these colors when viewing the light source directly or when illuminating a white object with them. All of this fits neatly into the simplified theory that a combination of wavelengths produces various hues.

### Color Constancy and a Modified Theory of Color Vision
The eye-brain color-sensing system can, by comparing various objects in its view, perceive the true color of an object under varying lighting conditions—an ability that is called color constancy. We can sense that a white tablecloth, for example, is white whether it is illuminated by sunlight, fluorescent light, or candlelight. The wavelengths entering the eye are quite different in each case, as the graphs in imply, but our color vision can detect the true color by comparing the tablecloth with its surroundings.

Theories that take color constancy into account are based on a large body of anatomical evidence as well as perceptual studies. There are nerve connections among the light receptors on the retina, and there are far fewer nerve connections to the brain than there are rods and cones. This means that there is signal processing in the eye before information is sent to the brain. For example, the eye makes comparisons between adjacent light receptors and is very sensitive to edges as seen in . Rather than responding simply to the light entering the eye, which is uniform in the various rectangles in this figure, the eye responds to the edges and senses false darkness variations.

One theory that takes various factors into account was advanced by Edwin Land (1909–1991), the creative founder of the Polaroid Corporation. Land proposed, based partly on his many elegant experiments, that the three types of cones are organized into systems called retinexes. Each retinex forms an image that is compared with the others, and the eye-brain system thus can compare a candle-illuminated white tablecloth with its generally reddish surroundings and determine that it is actually white. This retinex theory of color vision is an example of modified theories of color vision that attempt to account for its subtleties.
One striking experiment performed by Land demonstrates that some type of image comparison may produce color vision. Two pictures are taken of a scene on black-and-white film, one using a red filter, the other a blue filter. Resulting black-and-white slides are then projected and superimposed on a screen, producing a black-and-white image, as expected. Then a red filter is placed in front of the slide taken with a red filter, and the images are again superimposed on a screen. You would expect an image in various shades of pink, but instead, the image appears to humans in full color with all the hues of the original scene. This implies that color vision can be induced by comparison of the black-and-white and red images. Color vision is not completely understood or explained, and the retinex theory is not totally accepted. It is apparent that color vision is much subtler than what a first look might imply.

### Test Prep for AP Courses

### Section Summary
1. The eye has four types of light receptors—rods and three types of color-sensitive cones.
2. The rods are good for night vision, peripheral vision, and motion changes, while the cones are responsible for central vision and color.
3. We perceive many hues, from light having mixtures of wavelengths.
4. A simplified theory of color vision states that there are three primary colors, which correspond to the three types of cones, and that various combinations of the primary colors produce all the hues.
5. The true color of an object is related to its relative absorption of various wavelengths of light. The color of a light source is related to the wavelengths it produces.
6. Color constancy is the ability of the eye-brain system to discern the true color of an object illuminated by various light sources.
7. The retinex theory of color vision explains color constancy by postulating the existence of three retinexes or image systems, associated with the three types of cones that are compared to obtain sophisticated information.

### Conceptual Questions
# Vision and Optical Instruments
## Microscopes
### Learning Objectives
By the end of this section, you will be able to:
1. Investigate different types of microscopes.
2. Learn how an image is formed in a compound microscope.

Although the eye is marvelous in its ability to see objects large and small, it obviously has limitations to the smallest details it can detect. Human desire to see beyond what is possible with the naked eye led to the use of optical instruments. In this section we will examine microscopes, instruments for enlarging the detail that we cannot see with the unaided eye. The microscope is a multiple-element system having more than a single lens or mirror. (See .) A microscope can be made from two convex lenses. The image formed by the first element becomes the object for the second element. The second element forms its own image, which is the object for the third element, and so on. Ray tracing helps to visualize the image formed. If the device is composed of thin lenses and mirrors that obey the thin lens equations, then it is not difficult to describe their behavior numerically.

Microscopes were first developed in the early 1600s by eyeglass makers in The Netherlands and Denmark. The simplest compound microscope is constructed from two convex lenses as shown schematically in . The first lens is called the objective lens, and has typical magnification values from 5× to 100×. In standard microscopes, the objectives are mounted such that when you switch between objectives, the sample remains in focus. Objectives arranged in this way are described as parfocal. The second, the eyepiece, also referred to as the ocular, has several lenses which slide inside a cylindrical barrel. The focusing ability is provided by the movement of both the objective lens and the eyepiece. The purpose of a microscope is to magnify small objects, and both lenses contribute to the final magnification. Additionally, the final enlarged image is produced in a location far enough from the observer to be easily viewed, since the eye cannot focus on objects or images that are too close.

To see how the microscope in forms an image, we consider its two lenses in succession. The object is slightly farther away from the objective lens than its focal length $f_o$, producing a case 1 image that is larger than the object. This first image is the object for the second lens, or eyepiece. The eyepiece is intentionally located so it can further magnify the image. The eyepiece is placed so that the first image is closer to it than its focal length $f_e$. Thus the eyepiece acts as a magnifying glass, and the final image is made even larger. The final image remains inverted, but it is farther from the observer, making it easy to view (the eye is most relaxed when viewing distant objects and normally cannot focus closer than 25 cm). Since each lens produces a magnification that multiplies the height of the image, it is apparent that the overall magnification $m$ is the product of the individual magnifications:
$$m = m_o m_e ,$$
where $m_o$ is the magnification of the objective and $m_e$ is the magnification of the eyepiece. This equation can be generalized for any combination of thin lenses and mirrors that obey the thin lens equations.

Normal optical microscopes can magnify up to about 1500× with a theoretical resolution of about 0.2 µm. The lenses can be quite complicated and are composed of multiple elements to reduce aberrations. Microscope objective lenses are particularly important as they primarily gather light from the specimen.
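Because the overall magnification is just the product $m = m_o m_e$, switching objectives or eyepieces rescales the image multiplicatively. A trivial sketch with typical (assumed) values:

```python
def microscope_magnification(m_objective, m_eyepiece):
    """Overall magnification of a compound microscope: m = m_o * m_e."""
    return m_objective * m_eyepiece

print(microscope_magnification(40, 10))   # 40x objective, 10x eyepiece: 400x
print(microscope_magnification(100, 15))  # 100x objective, 15x eyepiece: 1500x
```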
Three parameters describe microscope objectives: the numerical aperture $NA$, the magnification $m$, and the working distance. The $NA$ is related to the light gathering ability of a lens and is obtained using the angle of acceptance $\theta$ formed by the maximum cone of rays focusing on the specimen (see (a)); it is given by

$$NA = n \sin \alpha,$$

where $n$ is the refractive index of the medium between the lens and the specimen and $\alpha = \theta/2$. As the angle of acceptance given by $\theta$ increases, $NA$ becomes larger and more light is gathered from a smaller focal region, giving higher resolution. An objective with a larger $NA$ thus gives more detail than one with a smaller $NA$.

While the numerical aperture can be used to compare resolutions of various objectives, it does not indicate how far the lens could be from the specimen. This is specified by the “working distance,” which is the distance (in mm usually) from the front lens element of the objective to the specimen, or cover glass. The higher the $NA$ the closer the lens will be to the specimen and the more chances there are of breaking the cover slip and damaging both the specimen and the lens. The focal length of an objective lens is different than the working distance. This is because objective lenses are made of a combination of lenses and the focal length is measured from inside the barrel. The working distance is a parameter that microscopists can use more readily as it is measured from the outermost lens. The working distance decreases as the $NA$ and magnification both increase.

The term $f/\#$ in general is called the $f$-number and is used to denote the light per unit area reaching the image plane. In photography, an image of an object at infinity is formed at the focal point and the $f$-number is given by the ratio of the focal length $f$ of the lens and the diameter $D$ of the aperture controlling the light into the lens (see (b)):

$$f/\# = \frac{f}{D}.$$

If the acceptance angle is small, the $NA$ of the lens can also be used as given below:

$$f/\# \approx \frac{1}{2NA}.$$

As the $f/\#$ decreases, the camera is able to gather light from a larger angle, giving wide-angle photography. As usual there is a trade-off. A greater $f/\#$ means less light reaches the image plane. A setting of $f/16$ usually allows one to take pictures in bright sunlight as the aperture diameter is small. In optical fibers, light needs to be focused into the fiber. shows the angle used in calculating the $NA$ of an optical fiber.

Can the $NA$ be larger than 1.00? The answer is ‘yes’ if we use immersion lenses in which a medium such as oil, glycerine or water is placed between the objective and the microscope cover slip. This minimizes the mismatch in refractive indices as light rays go through different media, generally providing a greater light-gathering ability and an increase in resolution. shows light rays when using air and immersion lenses.

When using a microscope we do not see the entire extent of the sample. Depending on the eyepiece and objective lens, we see a restricted region which we call the field of view. The objective is then manipulated in two dimensions above the sample to view other regions of the sample. Electronic scanning of either the objective or the sample is used in scanning microscopy. The image formed at each point during the scanning is combined using a computer to generate an image of a larger region of the sample at a selected magnification.

When using a microscope, we rely on gathering light to form an image. Hence most specimens need to be illuminated, particularly at higher magnifications, when observing details that are so small that they reflect only small amounts of light.
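As a brief aside before continuing with illumination, the objective parameters above are easy to evaluate numerically. The sketch below uses assumed, illustrative values (not values from the text).

```python
import math

def numerical_aperture(n, theta_deg):
    """NA = n * sin(alpha), where alpha = theta/2 is half the acceptance angle theta."""
    return n * math.sin(math.radians(theta_deg) / 2.0)

def f_number_from_na(NA):
    """Small-angle approximation: f/# ~ 1 / (2 * NA)."""
    return 1.0 / (2.0 * NA)

# Assumed illustrative parameters:
print(numerical_aperture(1.00, 60.0))   # dry objective, 60 deg acceptance -> NA = 0.50
print(numerical_aperture(1.51, 140.0))  # oil immersion (n = 1.51) -> NA ~ 1.42 (> 1.00)
print(f_number_from_na(0.50))           # -> f/# ~ 1.0
```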
To make such objects easily visible, the intensity of light falling on them needs to be increased. Special illuminating systems called condensers are used for this purpose. The type of condenser that is suitable for an application depends on how the specimen is examined, whether by transmission, scattering, or reflection. See for an example of each. White light sources are common, and lasers are often used. Laser light illumination tends to be quite intense, and it is important to ensure that the light does not result in the degradation of the specimen.

We normally associate microscopes with visible light, but x ray and electron microscopes provide greater resolution. The focusing and basic physics is the same as that just described, even though the lenses require different technology. The electron microscope requires vacuum chambers so that the electrons can proceed unimpeded. Magnifications of 50 million times provide the ability to determine positions of individual atoms within materials. An electron microscope is shown in . We do not use our eyes to form images; rather, images are recorded electronically and displayed on computers. In fact, observing and saving images formed by optical microscopes on computers is now done routinely. Video recordings of what occurs in a microscope can be made for viewing by many people at later dates. Advances in this powerful technology continue. In the 1990s, Pratibha L. Gai invented the environmental transmission electron microscope (ETEM), which was the first device capable of observing individual atoms in chemical reactions.

### Test Prep for AP Courses

### Section Summary

1. The microscope is a multiple-element system having more than a single lens or mirror.
2. Many optical devices contain more than a single lens or mirror. These are analysed by considering each element sequentially. The image formed by the first is the object for the second, and so on. The same ray tracing and thin lens techniques apply to each lens element.
3. The overall magnification of a multiple-element system is the product of the magnifications of its individual elements. For a two-element system with an objective and an eyepiece, this is $m = m_o m_e$, where $m_o$ is the magnification of the objective and $m_e$ is the magnification of the eyepiece, such as for a microscope.
4. Microscopes are instruments for allowing us to see detail we would not be able to see with the unaided eye and consist of a range of components.
5. The eyepiece and objective contribute to the magnification. The numerical aperture $NA$ of an objective is given by $NA = n \sin \alpha$, where $n$ is the refractive index and $\alpha$ is half the angle of acceptance.
6. Immersion techniques are often used to improve the light gathering ability of microscopes. The specimen is illuminated by transmitted, scattered or reflected light through a condenser.
7. The $f/\#$ describes the light gathering ability of a lens. It is given by $f/\# = \frac{f}{D} \approx \frac{1}{2NA}$.

### Conceptual Questions

### Problem Exercises
# Vision and Optical Instruments
## Telescopes

### Learning Objectives
By the end of this section, you will be able to:
1. Outline the invention of a telescope.
2. Describe the working of a telescope.

Telescopes are meant for viewing distant objects, producing an image that is larger than the image that can be seen with the unaided eye. Telescopes gather far more light than the eye, allowing dim objects to be observed with greater magnification and better resolution. Although Galileo is often credited with inventing the telescope, he actually did not. What he did was more important. He constructed several early telescopes, was the first to study the heavens with them, and made monumental discoveries using them. Among these are the moons of Jupiter, the craters and mountains on the Moon, the details of sunspots, and the fact that the Milky Way is composed of vast numbers of individual stars.

(a) shows a telescope made of two lenses, the convex objective and the concave eyepiece, the same construction used by Galileo. Such an arrangement produces an upright image and is used in spyglasses and opera glasses.

The most common two-lens telescope, like the simple microscope, uses two convex lenses and is shown in (b). The object is so far away from the telescope that it is essentially at infinity compared with the focal lengths of the lenses ($d_o \approx \infty$). The first image is thus produced at $d_i = f_o$, as shown in the figure. To prove this, note that

$$\frac{1}{d_i} = \frac{1}{f_o} - \frac{1}{d_o}.$$

Because $d_o \approx \infty$, this simplifies to

$$\frac{1}{d_i} = \frac{1}{f_o},$$

which implies that $d_i = f_o$, as claimed. It is true that for any distant object and any lens or mirror, the image is at the focal length.

The first image formed by a telescope objective as seen in (b) will not be large compared with what you might see by looking at the object directly. For example, the spot formed by sunlight focused on a piece of paper by a magnifying glass is the image of the Sun, and it is small. The telescope eyepiece (like the microscope eyepiece) magnifies this first image. The distance between the eyepiece and the objective lens is made slightly less than the sum of their focal lengths so that the first image is closer to the eyepiece than its focal length. That is, $d'_o$ is less than $f_e$, and so the eyepiece forms a case 2 image that is large and to the left for easy viewing. If the angle subtended by an object as viewed by the unaided eye is $\theta$, and the angle subtended by the telescope image is $\theta'$, then the angular magnification $M$ is defined to be their ratio. That is, $M = \theta'/\theta$. It can be shown that the angular magnification of a telescope is related to the focal lengths of the objective and eyepiece and is given by

$$M = \frac{\theta'}{\theta} = -\frac{f_o}{f_e}.$$

The minus sign indicates the image is inverted. To obtain the greatest angular magnification, it is best to have a long focal length objective and a short focal length eyepiece. The greater the angular magnification $M$, the larger an object will appear when viewed through a telescope, making more details visible. Limits to observable details are imposed by many factors, including lens quality and atmospheric disturbance.

The image in most telescopes is inverted, which is unimportant for observing the stars but a real problem for other applications, such as telescopes on ships or telescopic gun sights. If an upright image is needed, Galileo’s arrangement in (a) can be used. But a more common arrangement is to use a third convex lens as an eyepiece, increasing the distance between the first two and inverting the image once again as seen in .
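The magnification relation is simple enough to check numerically. A minimal sketch, with assumed illustrative focal lengths (not values from the text):

```python
def telescope_angular_magnification(f_o, f_e):
    """Angular magnification M = -f_o / f_e; the minus sign marks the inverted image."""
    return -f_o / f_e

# Assumed illustrative values: a 120 cm focal length objective with a 2.4 cm eyepiece.
print(telescope_angular_magnification(f_o=120.0, f_e=2.4))  # -> -50.0
# A long focal length objective paired with a short focal length eyepiece gives large |M|.
```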
A telescope can also be made with a concave mirror as its first element or objective, since a concave mirror acts like a convex lens as seen in . Flat mirrors are often employed in optical instruments to make them more compact or to send light to cameras and other sensing devices. There are many advantages to using mirrors rather than lenses for telescope objectives. Mirrors can be constructed much larger than lenses and can, thus, gather large amounts of light, as needed to view distant galaxies, for example. Large and relatively flat mirrors have very long focal lengths, so that great angular magnification is possible.

Telescopes, like microscopes, can utilize a range of frequencies from the electromagnetic spectrum. (a) shows the Australia Telescope Compact Array, which uses six 22-m antennas for mapping the southern skies using radio waves. (b) shows the focusing of x rays on the Chandra X-ray Observatory—a satellite orbiting Earth since 1999 and looking at high temperature events such as exploding stars, quasars, and black holes. X rays, with much more energy and shorter wavelengths than RF and light, are mainly absorbed and not reflected when incident perpendicular to the medium. But they can be reflected when incident at small glancing angles, much like a rock will skip on a lake if thrown at a small angle. The mirrors for the Chandra consist of a long barrelled pathway and four pairs of mirrors to focus the rays at a point 10 meters away from the entrance. The mirrors are extremely smooth and consist of a glass ceramic base with a thin coating of metal (iridium). The four pairs of precision-manufactured mirrors are exquisitely shaped and aligned so that x rays ricochet off the mirrors like bullets off a wall, focusing on a spot.

A current exciting development is a collaborative effort involving 17 countries to construct a Square Kilometre Array (SKA) of telescopes capable of covering from 80 MHz to 2 GHz. The initial stage of the project is the construction of the Australian Square Kilometre Array Pathfinder in Western Australia (see ). The project will use cutting-edge technologies such as adaptive optics in which the lens or mirror is constructed from lots of carefully aligned tiny lenses and mirrors that can be manipulated using computers. A range of rapidly changing distortions can be minimized by deforming or tilting the tiny lenses and mirrors. The use of adaptive optics in vision correction is a current area of research.

### Test Prep for AP Courses

### Section Summary

1. Simple telescopes can be made with two lenses. They are used for viewing objects at large distances and utilize the entire range of the electromagnetic spectrum.
2. The angular magnification $M$ for a telescope is given by $M = \frac{\theta'}{\theta} = -\frac{f_o}{f_e}$, where $\theta$ is the angle subtended by an object viewed by the unaided eye, $\theta'$ is the angle subtended by a magnified image, and $f_o$ and $f_e$ are the focal lengths of the objective and the eyepiece.

### Conceptual Questions

### Problem Exercises
Unless otherwise stated, the lens-to-retina distance is 2.00 cm.
# Vision and Optical Instruments
## Aberrations

### Learning Objectives
By the end of this section, you will be able to:
1. Describe optical aberration.

Real lenses behave somewhat differently from how they are modeled using the thin lens equations, producing aberrations. An aberration is a distortion in an image. There are a variety of aberrations due to lens size, material, thickness, and the position of the object. One common type of aberration is chromatic aberration, which is related to color. Since the index of refraction of lenses depends on color or wavelength, images are produced at different places and with different magnifications for different colors. (The law of reflection is independent of wavelength, and so mirrors do not have this problem. This is another advantage for mirrors in optical systems such as telescopes.) (a) shows chromatic aberration for a single convex lens and its partial correction with a two-lens system. Violet rays are bent more than red, since the index of refraction is higher for violet, and they are thus focused closer to the lens. The diverging lens partially corrects this, although it is usually not possible to do so completely. Lenses of different materials and having different dispersions may be used. For example, an achromatic doublet consisting of a converging lens made of crown glass and a diverging lens made of flint glass in contact can dramatically reduce chromatic aberration (see (b)).

Quite often in an imaging system the object is off-center. Consequently, different parts of a lens or mirror do not refract or reflect the image to the same point. This type of aberration is called a coma and is shown in . The image in this case often appears pear-shaped. Another common aberration is spherical aberration, where rays from the outer edges of a lens converge to a focus closer to the lens and rays closer to the axis focus further away (see ). Aberrations due to astigmatism in the lenses of the eyes are discussed in Vision Correction, and a chart used to detect astigmatism is shown in . Such aberrations can also be an issue with manufactured lenses.

The image produced by an optical system needs to be bright enough to be discerned. It is often a challenge to obtain a sufficiently bright image. The brightness is determined by the amount of light passing through the optical system. The optical components determining the brightness are the diameter of the lens and the diameter of pupils, diaphragms or aperture stops placed in front of lenses. Optical systems often have entrance and exit pupils to specifically reduce aberrations, but they inevitably reduce brightness as well. Consequently, optical systems need to strike a balance between the various components used. The iris in the eye dilates and constricts, acting as an entrance pupil. You can see objects more clearly by looking through a small hole made with your hand in the shape of a fist. Squinting, or using a small hole in a piece of paper, also will make the object sharper.

So how are aberrations corrected? Using several lens elements is one approach: expensive camera lenses are large in diameter, so that they can gather more light, and need several elements to correct for various aberrations. The lenses may also have specially shaped surfaces, as opposed to the simple spherical shape that is relatively easy to produce. Further, advances in materials science have resulted in lenses with a range of refractive indices—technically referred to as graded index (GRIN) lenses.
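To make the chromatic aberration discussed above concrete, here is a minimal sketch using the thin-lens lensmaker's equation. The dispersion values are assumed, plausible numbers for a crown-glass lens, not data from the text.

```python
def focal_length(n, R1, R2):
    """Lensmaker's equation for a thin lens in air: 1/f = (n - 1) * (1/R1 - 1/R2)."""
    return 1.0 / ((n - 1.0) * (1.0 / R1 - 1.0 / R2))

# Assumed illustrative values: a symmetric biconvex lens, |R| = 20 cm,
# with crown-glass-like indices at the two ends of the visible spectrum.
R1, R2 = 0.20, -0.20     # surface radii of curvature, in meters
f_red = focal_length(1.512, R1, R2)
f_violet = focal_length(1.530, R1, R2)

print(f"f(red)    = {f_red * 100:.2f} cm")     # ~19.53 cm
print(f"f(violet) = {f_violet * 100:.2f} cm")  # ~18.87 cm: violet focuses closer to the lens
```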
Spectacles often provide a range of focusing ability using similar graded-index techniques. GRIN lenses are particularly important at the end of optical fibers in endoscopes. Advanced computing techniques allow for a range of corrections on images after the image has been collected and certain characteristics of the optical system are known. Some of these techniques are sophisticated versions of what are available in commercial packages like Adobe Photoshop.

### Section Summary

1. Aberrations or image distortions can arise due to the finite thickness of optical instruments, imperfections in the optical components, and limitations on the ways in which the components are used.
2. The means for correcting aberrations range from better components to computational techniques.

### Conceptual Questions

### Problem Exercises
# Wave Optics
## Introduction to Wave Optics

Examine a compact disc under white light, noting the colors observed and locations of the colors. Determine if the spectra are formed by diffraction from circular lines centered at the middle of the disc and, if so, what their spacing is. If not, determine the type of spacing. Also with the CD, explore the spectra of a few light sources, such as a candle flame, incandescent bulb, halogen light, and fluorescent light. Knowing the spacing of the rows of pits in the compact disc, estimate the maximum spacing that will allow the given number of megabytes of information to be stored.

If you have ever looked at the reds, blues, and greens in a sunlit soap bubble and wondered how straw-colored soapy water could produce them, you have hit upon one of the many phenomena that can only be explained by the wave character of light (see ). The same is true for the colors seen in an oil slick or in the light reflected from a compact disc. These and other interesting phenomena, such as the dispersion of white light into a rainbow of colors when passed through a narrow slit, cannot be explained fully by geometric optics. In these cases, light interacts with small objects and exhibits its wave characteristics. The branch of optics that considers the behavior of light when it exhibits wave characteristics (particularly when it interacts with small objects) is called wave optics (sometimes called physical optics). It is the topic of this chapter.
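For the compact disc observation above, a rough numerical sketch is easy to write down using the grating relation $d \sin \theta = m\lambda$ developed later in this chapter. The 1.6 μm track spacing is the nominal published CD specification; treat the values as illustrative.

```python
import math

d = 1.6e-6  # nominal CD track spacing, in meters (standard published value)

# First-order (m = 1) diffraction angles from d * sin(theta) = m * lambda:
for wavelength_nm in (400, 550, 700):
    theta = math.degrees(math.asin(wavelength_nm * 1e-9 / d))
    print(f"{wavelength_nm} nm -> {theta:.1f} degrees")
# 400 nm -> 14.5, 550 nm -> 20.1, 700 nm -> 25.9: violet diffracts least and red most,
# which is why the disc spreads white light into ordered rainbow bands.
```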
# Wave Optics
## The Wave Aspect of Light: Interference

### Learning Objectives
By the end of this section, you will be able to:
1. Discuss the wave character of light.
2. Identify the changes when light enters a medium.

We know that visible light is the type of electromagnetic wave to which our eyes respond. Like all other electromagnetic waves, it obeys the equation

$$c = f\lambda,$$

where $c$ is the speed of light in vacuum, $f$ is the frequency of the electromagnetic wave, and $\lambda$ is its wavelength. The range of visible wavelengths is approximately 380 to 760 nm. As is true for all waves, light travels in straight lines and acts like a ray when it interacts with objects several times as large as its wavelength. However, when it interacts with smaller objects, it displays its wave characteristics prominently. Interference is the hallmark of a wave, and both the ray and wave characteristics of light can be seen in . The laser beam emitted by the observatory epitomizes a ray, traveling in a straight line. However, passing a pure-wavelength beam through vertical slits with a size close to the wavelength of the beam reveals the wave character of light, as the beam spreads out horizontally into a pattern of bright and dark regions caused by systematic constructive and destructive interference. Rather than spreading out, a ray would continue traveling straight ahead after passing through slits.

Light has wave characteristics in various media as well as in a vacuum. When light goes from a vacuum to some medium, like water, its speed and wavelength change, but its frequency $f$ remains the same. (We can think of light as a forced oscillation that must have the frequency of the original source.) The speed of light in a medium is $v = c/n$, where $n$ is its index of refraction. If we divide both sides of the equation $c = f\lambda$ by $n$, we get $c/n = v = f\lambda/n$. This implies that $v = f\lambda_n$, where $\lambda_n$ is the wavelength in a medium, and that

$$\lambda_n = \frac{\lambda}{n},$$

where $\lambda$ is the wavelength in vacuum and $n$ is the medium’s index of refraction. Therefore, the wavelength of light is smaller in any medium than it is in vacuum. In water, for example, which has $n = 1.333$, the range of visible wavelengths is $(380\ \text{nm})/1.333$ to $(760\ \text{nm})/1.333$, or $\lambda_n = 285$ to $570\ \text{nm}$. Although wavelengths change while traveling from one medium to another, colors do not, since colors are associated with frequency.

### Section Summary

1. Wave optics is the branch of optics that must be used when light interacts with small objects or whenever the wave characteristics of light are considered.
2. Wave characteristics are those associated with interference and diffraction.
3. Visible light is the type of electromagnetic wave to which our eyes respond and has a wavelength in the range of 380 to 760 nm.
4. Like all EM waves, the following relationship is valid in vacuum: $c = f\lambda$, where $c$ is the speed of light, $f$ is the frequency of the electromagnetic wave, and $\lambda$ is its wavelength in vacuum.
5. The wavelength $\lambda_n$ of light in a medium with index of refraction $n$ is $\lambda_n = \lambda/n$. Its frequency is the same as in vacuum.

### Conceptual Questions

### Problems & Exercises
# Wave Optics
## Huygens's Principle: Diffraction

### Learning Objectives
By the end of this section, you will be able to:
1. Discuss the propagation of transverse waves.
2. Discuss Huygens’s principle.
3. Explain the bending of light.

shows how a transverse wave looks as viewed from above and from the side. A light wave can be imagined to propagate like this, although we do not actually see it wiggling through space. From above, we view the wavefronts (or wave crests) as we would by looking down on the ocean waves. The side view would be a graph of the electric or magnetic field. The view from above is perhaps the most useful in developing concepts about wave optics.

The Dutch scientist Christiaan Huygens (1629–1695) developed a useful technique for determining in detail how and where waves propagate. Starting from some known position, Huygens’s principle states that: Every point on a wavefront is a source of wavelets that spread out in the forward direction at the same speed as the wave itself. The new wavefront is a line tangent to all of the wavelets.

shows how Huygens’s principle is applied. A wavefront is the long edge that moves, for example, the crest or the trough. Each point on the wavefront emits a semicircular wave that moves at the propagation speed $v$. These are drawn at a time $t$ later, so that they have moved a distance $s = vt$. The new wavefront is a line tangent to the wavelets and is where we would expect the wave to be a time $t$ later. Huygens’s principle works for all types of waves, including water waves, sound waves, and light waves. We will find it useful not only in describing how light waves propagate, but also in explaining the laws of reflection and refraction. In addition, we will see that Huygens’s principle tells us how and where light rays interfere.

shows how a mirror reflects an incoming wave at an angle equal to the incident angle, verifying the law of reflection. As the wavefront strikes the mirror, wavelets are first emitted from the left part of the mirror and then the right. The wavelets closer to the left have had time to travel farther, producing a wavefront traveling in the direction shown.

The law of refraction can be explained by applying Huygens’s principle to a wavefront passing from one medium to another (see ). Each wavelet in the figure was emitted when the wavefront crossed the interface between the media. Since the speed of light is smaller in the second medium, the waves do not travel as far in a given time, and the new wavefront changes direction as shown. This explains why a ray changes direction to become closer to the perpendicular when light slows down. Snell’s law can be derived from the geometry in , but this is left as an exercise for ambitious readers.

What happens when a wave passes through an opening, such as light shining through an open door into a dark room? For light, we expect to see a sharp shadow of the doorway on the floor of the room, and we expect no light to bend around corners into other parts of the room. When sound passes through a door, we expect to hear it everywhere in the room and, thus, expect that sound spreads out when passing through such an opening (see ). What is the difference between the behavior of sound waves and light waves in this case? The answer is that light has very short wavelengths and acts like a ray. Sound has wavelengths on the order of the size of the door and bends around corners (for a frequency of 1000 Hz, $\lambda = (340\ \text{m/s})/(1000\ \text{Hz}) = 0.34\ \text{m}$, about three times smaller than the width of the doorway).
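A quick numerical comparison makes this contrast explicit. The following is a minimal sketch with assumed illustrative values (doorway width and wavelengths chosen for the example):

```python
# The ratio of wavelength to aperture size controls how strongly a wave diffracts.
v_sound, f_sound = 340.0, 1000.0    # speed of sound in air (m/s), frequency (Hz)
lambda_sound = v_sound / f_sound    # 0.34 m
lambda_light = 600e-9               # mid-visible light, m
door_width = 1.0                    # assumed doorway width, m

print(lambda_sound / door_width)  # ~0.34: wavelength comparable to the opening -> strong bending
print(lambda_light / door_width)  # ~6e-7: wavelength tiny compared to the opening -> sharp shadow
```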
If we pass light through smaller openings, often called slits, we can use Huygens’s principle to see that light bends as sound does (see ). The bending of a wave around the edges of an opening or an obstacle is called diffraction. Diffraction is a wave characteristic and occurs for all types of waves. If diffraction is observed for some phenomenon, it is evidence that the phenomenon is a wave. Thus the horizontal diffraction of the laser beam after it passes through slits in is evidence that light is a wave.

### Test Prep for AP Courses

### Section Summary

1. An accurate technique for determining how and where waves propagate is given by Huygens’s principle: Every point on a wavefront is a source of wavelets that spread out in the forward direction at the same speed as the wave itself. The new wavefront is a line tangent to all of the wavelets.
2. Diffraction is the bending of a wave around the edges of an opening or other obstacle.

### Conceptual Questions
# Wave Optics
## Young’s Double Slit Experiment

### Learning Objectives
By the end of this section, you will be able to:
1. Explain the phenomena of interference.
2. Define constructive interference for a double slit and destructive interference for a double slit.

Although Christiaan Huygens thought that light was a wave, Isaac Newton did not. Newton felt that there were other explanations for color, and for the interference and diffraction effects that were observable at the time. Owing to Newton’s tremendous stature, his view generally prevailed. The fact that Huygens’s principle worked was not considered evidence that was direct enough to prove that light is a wave. The acceptance of the wave character of light came many years later when, in 1801, the English physicist and physician Thomas Young (1773–1829) did his now-classic double slit experiment (see ).

Why do we not ordinarily observe wave behavior for light, such as that observed in Young’s double slit experiment? First, light must interact with something small, such as the closely spaced slits used by Young, to show pronounced wave effects. Furthermore, Young first passed light from a single source (the Sun) through a single slit to make the light somewhat coherent. By coherent, we mean waves are in phase or have a definite phase relationship. Incoherent means the waves have random phase relationships. Why did Young then pass the light through a double slit? The answer to this question is that two slits provide two coherent light sources that then interfere constructively or destructively. Young used sunlight, where each wavelength forms its own pattern, making the effect more difficult to see. We illustrate the double slit experiment with monochromatic (single $\lambda$) light to clarify the effect. shows the pure constructive and destructive interference of two waves having the same wavelength and amplitude.

When light passes through narrow slits, it is diffracted into semicircular waves, as shown in (a). Pure constructive interference occurs where the waves are crest to crest or trough to trough. Pure destructive interference occurs where they are crest to trough. The light must fall on a screen and be scattered into our eyes for us to see the pattern. An analogous pattern for water waves is shown in (b). Note that regions of constructive and destructive interference move out from the slits at well-defined angles to the original beam. These angles depend on wavelength and the distance between the slits, as we shall see below.

To understand the double slit interference pattern, we consider how two waves travel from the slits to the screen, as illustrated in . Each slit is a different distance from a given point on the screen. Thus different numbers of wavelengths fit into each path. Waves start out from the slits in phase (crest to crest), but they may end up out of phase (crest to trough) at the screen if the paths differ in length by half a wavelength, interfering destructively as shown in (a). If the paths differ by a whole wavelength, then the waves arrive in phase (crest to crest) at the screen, interfering constructively as shown in (b). More generally, if the paths taken by the two waves differ by any half-integral number of wavelengths [$\tfrac{\lambda}{2}$, $\tfrac{3\lambda}{2}$, $\tfrac{5\lambda}{2}$, etc.], then destructive interference occurs. Similarly, if the paths taken by the two waves differ by any integral number of wavelengths ($\lambda$, $2\lambda$, $3\lambda$, etc.), then constructive interference occurs.
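This phase rule is simple to encode. A minimal sketch classifying the interference from the path difference (the wavelength is an assumed illustrative value):

```python
def interference_type(path_difference, wavelength, tol=1e-6):
    """Classify double-slit interference from the path difference between the two waves."""
    n = path_difference / wavelength   # path difference measured in wavelengths
    frac = n % 1.0
    if min(frac, 1.0 - frac) < tol:
        return "constructive"          # integral number of wavelengths: crest meets crest
    if abs(frac - 0.5) < tol:
        return "destructive"           # half-integral number: crest meets trough
    return "intermediate"

lam = 500e-9  # assumed 500 nm light, for illustration
print(interference_type(2.0 * lam, lam))  # constructive
print(interference_type(2.5 * lam, lam))  # destructive
```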
The figure shows how to determine the path length difference for waves traveling from two slits to a common point on a screen. If the screen is a large distance away compared with the distance between the slits, then the angle $\theta$ between the path and a line from the slits to the screen (see the figure) is nearly the same for each path. The difference between the paths is shown in the figure; simple trigonometry shows it to be $d \sin \theta$, where $d$ is the distance between the slits. To obtain constructive interference for a double slit, the path length difference must be an integral multiple of the wavelength, or

$$d \sin \theta = m\lambda, \quad \text{for } m = 0, 1, -1, 2, -2, \ldots \ (\text{constructive}).$$

Similarly, to obtain destructive interference for a double slit, the path length difference must be a half-integral multiple of the wavelength, or

$$d \sin \theta = \left(m + \tfrac{1}{2}\right)\lambda, \quad \text{for } m = 0, 1, -1, 2, -2, \ldots \ (\text{destructive}),$$

where $\lambda$ is the wavelength of the light, $d$ is the distance between slits, and $\theta$ is the angle from the original direction of the beam as discussed above. We call $m$ the order of the interference. For example, $m = 4$ is fourth-order interference.

The equations for double slit interference imply that a series of bright and dark lines are formed. For vertical slits, the light spreads out horizontally on either side of the incident beam into a pattern called interference fringes, illustrated in . The intensity of the bright fringes falls off on either side, being brightest at the center. The closer the slits are, the more the bright fringes spread out. We can see this by examining the equation $d \sin \theta = m\lambda$. For fixed $\lambda$ and $m$, the smaller $d$ is, the larger $\theta$ must be, since $\sin \theta = m\lambda/d$. This is consistent with our contention that wave effects are most noticeable when the object the wave encounters (here, slits a distance $d$ apart) is small. Small $d$ gives large $\theta$, hence a large effect.

### Test Prep for AP Courses

### Section Summary

1. Young’s double slit experiment gave definitive proof of the wave character of light.
2. An interference pattern is obtained by the superposition of light from two slits.
3. There is constructive interference when $d \sin \theta = m\lambda$ (for $m = 0, 1, -1, 2, -2, \ldots$), where $d$ is the distance between the slits, $\theta$ is the angle relative to the incident direction, and $m$ is the order of the interference.
4. There is destructive interference when $d \sin \theta = (m + \tfrac{1}{2})\lambda$ (for $m = 0, 1, -1, 2, -2, \ldots$).

### Conceptual Questions

### Problems & Exercises
# Wave Optics
## Multiple Slit Diffraction

### Learning Objectives
By the end of this section, you will be able to:
1. Discuss the pattern obtained from diffraction grating.
2. Explain diffraction grating effects.

An interesting thing happens if you pass light through a large number of evenly spaced parallel slits, called a diffraction grating. An interference pattern is created that is very similar to the one formed by a double slit (see ). A diffraction grating can be manufactured by scratching glass with a sharp tool in a number of precisely positioned parallel lines, with the untouched regions acting like slits. These can be photographically mass produced rather cheaply. Diffraction gratings work both for transmission of light, as in , and for reflection of light, as on butterfly wings and the Australian opal in or the CD pictured in the opening photograph of this chapter. In addition to their use as novelty items, diffraction gratings are commonly used for spectroscopic dispersion and analysis of light. What makes them particularly useful is the fact that they form a sharper pattern than double slits do. That is, their bright regions are narrower and brighter, while their dark regions are darker. shows idealized graphs demonstrating the sharper pattern. Natural diffraction gratings occur in the feathers of certain birds. Tiny, finger-like structures in regular patterns act as reflection gratings, producing constructive interference that gives the feathers colors not solely due to their pigmentation. This is called iridescence.

The analysis of a diffraction grating is very similar to that for a double slit (see ). As we know from our discussion of double slits in Young's Double Slit Experiment, light is diffracted by each slit and spreads out after passing through. Rays traveling in the same direction (at an angle $\theta$ relative to the incident direction) are shown in the figure. Each of these rays travels a different distance to a common point on a screen far away. The rays start in phase, and they can be in or out of phase when they reach a screen, depending on the difference in the path lengths traveled. As seen in the figure, each ray travels a distance $d \sin \theta$ different from that of its neighbor, where $d$ is the distance between slits. If this distance equals an integral number of wavelengths, the rays all arrive in phase, and constructive interference (a maximum) is obtained. Thus, the condition necessary to obtain constructive interference for a diffraction grating is

$$d \sin \theta = m\lambda, \quad \text{for } m = 0, 1, -1, 2, -2, \ldots,$$

where $d$ is the distance between slits in the grating, $\lambda$ is the wavelength of light, and $m$ is the order of the maximum. Note that this is exactly the same equation as for double slits separated by $d$. However, the slits are usually closer in diffraction gratings than in double slits, producing fewer maxima at larger angles.

Where are diffraction gratings used? Diffraction gratings are key components of monochromators used, for example, in optical imaging of particular wavelengths from biological or medical samples. A diffraction grating can be chosen to specifically analyze a wavelength emitted by molecules in diseased cells in a biopsy sample or to help excite strategic molecules in the sample with a selected frequency of light. Another vital use is in optical fiber technologies where fibers are designed to provide optimum performance at specific wavelengths. A range of diffraction gratings are available for selecting specific wavelengths for such use.

### Test Prep for AP Courses

### Section Summary
1. A diffraction grating is a large collection of evenly spaced parallel slits that produces an interference pattern similar to but sharper than that of a double slit.
2. There is constructive interference for a diffraction grating when $d \sin \theta = m\lambda$ (for $m = 0, 1, -1, 2, -2, \ldots$), where $d$ is the distance between slits in the grating, $\lambda$ is the wavelength of light, and $m$ is the order of the maximum.

### Conceptual Questions

### Problems & Exercises
# Wave Optics
## Single Slit Diffraction

### Learning Objectives
By the end of this section, you will be able to:
1. Discuss the single slit diffraction pattern.

Light passing through a single slit forms a diffraction pattern somewhat different from those formed by double slits or diffraction gratings. shows a single slit diffraction pattern. Note that the central maximum is larger than those on either side, and that the intensity decreases rapidly on either side. In contrast, a diffraction grating produces evenly spaced lines that dim slowly on either side of center.

The analysis of single slit diffraction is illustrated in . Here we consider light coming from different parts of the same slit. According to Huygens’s principle, every part of the wavefront in the slit emits wavelets. These are like rays that start out in phase and head in all directions. (Each ray is perpendicular to the wavefront of a wavelet.) Assuming the screen is very far away compared with the size of the slit, rays heading toward a common destination are nearly parallel. When they travel straight ahead, as in (a), they remain in phase, and a central maximum is obtained. However, when rays travel at an angle $\theta$ relative to the original direction of the beam, each travels a different distance to a common location, and they can arrive in or out of phase. In (b), the ray from the bottom travels a distance of one wavelength $\lambda$ farther than the ray from the top. Thus a ray from the center travels a distance $\lambda/2$ farther than the one on the left, arrives out of phase, and interferes destructively. A ray from slightly above the center and one from slightly above the bottom will also cancel one another. In fact, each ray from the slit will have another to interfere destructively, and a minimum in intensity will occur at this angle. There will be another minimum at the same angle to the right of the incident direction of the light.

At the larger angle shown in (c), the path lengths differ by $3\lambda/2$ for rays from the top and bottom of the slit. One ray travels a distance $\lambda$ different from the ray from the bottom and arrives in phase, interfering constructively. Two rays, each from slightly above those two, will also add constructively. Most rays from the slit will have another to interfere with constructively, and a maximum in intensity will occur at this angle. However, all rays do not interfere constructively for this situation, and so the maximum is not as intense as the central maximum. Finally, in (d), the angle shown is large enough to produce a second minimum. As seen in the figure, the difference in path length for rays from either side of the slit is $D \sin \theta$, and we see that a destructive minimum is obtained when this distance is an integral multiple of the wavelength. Thus, to obtain destructive interference for a single slit,

$$D \sin \theta = m\lambda, \quad \text{for } m = 1, -1, 2, -2, 3, \ldots,$$

where $D$ is the slit width, $\lambda$ is the light’s wavelength, $\theta$ is the angle relative to the original direction of the light, and $m$ is the order of the minimum. shows a graph of intensity for single slit interference, and it is apparent that the maxima on either side of the central maximum are much less intense and not as wide. This is consistent with the illustration in (b).

### Test Prep for AP Courses

### Section Summary

1. A single slit produces an interference pattern characterized by a broad central maximum with narrower and dimmer maxima to the sides.
2. There is destructive interference for a single slit when $D \sin \theta = m\lambda$ (for $m = 1, -1, 2, -2, 3, \ldots$), where $D$ is the slit width, $\lambda$ is the light’s wavelength, $\theta$ is the angle relative to the original direction of the light, and $m$ is the order of the minimum. Note that there is no $m = 0$ minimum.

### Conceptual Questions

### Problems & Exercises
# Wave Optics
## Limits of Resolution: The Rayleigh Criterion

### Learning Objectives
By the end of this section, you will be able to:
1. Discuss the Rayleigh criterion.

Light diffracts as it moves through space, bending around obstacles, interfering constructively and destructively. While this can be used as a spectroscopic tool—a diffraction grating disperses light according to wavelength, for example, and is used to produce spectra—diffraction also limits the detail we can obtain in images. (a) shows the effect of passing light through a small circular aperture. Instead of a bright spot with sharp edges, a spot with a fuzzy edge surrounded by circles of light is obtained. This pattern is caused by diffraction similar to that produced by a single slit. Light from different parts of the circular aperture interferes constructively and destructively. The effect is most noticeable when the aperture is small, but the effect is there for large apertures, too.

How does diffraction affect the detail that can be observed when light passes through an aperture? (b) shows the diffraction pattern produced by two point light sources that are close to one another. The pattern is similar to that for a single point source, and it is just barely possible to tell that there are two light sources rather than one. If they were closer together, as in (c), we could not distinguish them, thus limiting the detail or resolution we can obtain. This limit is an inescapable consequence of the wave nature of light.

There are many situations in which diffraction limits the resolution. The acuity of our vision is limited because light passes through the pupil, the circular aperture of our eye. Be aware that the diffraction-like spreading of light is due to the limited diameter of a light beam, not the interaction with an aperture. Thus light passing through a lens with a diameter $D$ shows this effect and spreads, blurring the image, just as light passing through an aperture of diameter $D$ does. So diffraction limits the resolution of any system having a lens or mirror. Telescopes are also limited by diffraction, because of the finite diameter $D$ of their primary mirror.

Just what is the limit? To answer that question, consider the diffraction pattern for a circular aperture, which has a central maximum that is wider and brighter than the maxima surrounding it (similar to a slit) [see (a)]. It can be shown that, for a circular aperture of diameter $D$, the first minimum in the diffraction pattern occurs at $\theta = 1.22\,\lambda/D$ (providing the aperture is large compared with the wavelength of light, which is the case for most optical instruments). The accepted criterion for determining the diffraction limit to resolution based on this angle was developed by Lord Rayleigh in the 19th century. The Rayleigh criterion for the diffraction limit to resolution states that two images are just resolvable when the center of the diffraction pattern of one is directly over the first minimum of the diffraction pattern of the other. See (b). The first minimum is at an angle of $\theta = 1.22\,\lambda/D$, so that two point objects are just resolvable if they are separated by the angle

$$\theta = 1.22\,\frac{\lambda}{D},$$

where $\lambda$ is the wavelength of light (or other electromagnetic radiation) and $D$ is the diameter of the aperture, lens, mirror, etc., with which the two objects are observed. In this expression, $\theta$ has units of radians.

Diffraction is not only a problem for optical instruments but also for the electromagnetic radiation itself. Any beam of light having a finite diameter $D$ and a wavelength $\lambda$ exhibits diffraction spreading.
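Before continuing with beam spreading, here is a minimal numerical sketch of the criterion just stated; the aperture diameters are assumed, illustrative values.

```python
def rayleigh_angle(wavelength, aperture_diameter):
    """Minimum resolvable angular separation: theta = 1.22 * lambda / D (radians)."""
    return 1.22 * wavelength / aperture_diameter

lam = 550e-9  # mid-visible light, in meters (assumed for illustration)
for name, D in (("eye pupil", 3e-3), ("small telescope", 0.20), ("large telescope", 2.4)):
    print(f"{name}: D = {D} m -> theta = {rayleigh_angle(lam, D):.2e} rad")
# eye pupil:       ~2.2e-4 rad
# small telescope: ~3.4e-6 rad
# large telescope: ~2.8e-7 rad -- resolution improves in proportion to 1/D
```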
The beam spreads out with an angle $\theta$ given by the equation $\theta = 1.22\,\lambda/D$. A laser beam made of rays as parallel as possible (angles between rays as close to $0^\circ$ as possible), for example, instead spreads out at an angle $\theta = 1.22\,\lambda/D$, where $D$ is the diameter of the beam and $\lambda$ is its wavelength. This spreading is impossible to observe for a flashlight, because its beam is not very parallel to start with. However, for long-distance transmission of laser beams or microwave signals, diffraction spreading can be significant (see ). To avoid this, we can increase $D$. This is done for laser light sent to the Moon to measure its distance from the Earth. The laser beam is expanded through a telescope to make $D$ much larger and $\theta$ smaller.

In most biology laboratories, resolution is presented when the use of the microscope is introduced. The ability of a lens to produce sharp images of two closely spaced point objects is called resolution. The smaller the distance $x$ by which two objects can be separated and still be seen as distinct, the greater the resolution. The resolving power of a lens is defined as that distance $x$. An expression for resolving power is obtained from the Rayleigh criterion. In (a) we have two point objects separated by a distance $x$. According to the Rayleigh criterion, resolution is possible when the minimum angular separation is

$$\theta = 1.22\,\frac{\lambda}{D} = \frac{x}{d},$$

where $d$ is the distance between the specimen and the objective lens, and we have used the small angle approximation (i.e., we have assumed that $x$ is much smaller than $d$), so that $\tan\theta \approx \sin\theta \approx \theta$. Therefore, the resolving power is

$$x = 1.22\,\frac{\lambda d}{D}.$$

Another way to look at this is by re-examining the concept of Numerical Aperture ($NA$) discussed in Microscopes. There, $NA$ is a measure of the maximum acceptance angle at which the fiber will take light and still contain it within the fiber. (b) shows a lens and an object at point P. The $NA$ here is a measure of the ability of the lens to gather light and resolve fine detail. The angle subtended by the lens at its focus is defined to be $\theta = 2\alpha$. From the figure and again using the small angle approximation, we can write

$$\sin\alpha = \frac{D/2}{d} = \frac{D}{2d}.$$

The $NA$ for a lens is $NA = n \sin\alpha$, where $n$ is the index of refraction of the medium between the objective lens and the object at point P. From this definition for $NA$, we can see that

$$x = 1.22\,\frac{\lambda d}{D} = 1.22\,\frac{\lambda}{2\sin\alpha} = 0.61\,\frac{\lambda n}{NA}.$$

In a microscope, $NA$ is important because it relates to the resolving power of a lens. A lens with a large $NA$ will be able to resolve finer details. Lenses with larger $NA$ will also be able to collect more light and so give a brighter image. Another way to describe this situation is that the larger the $NA$, the larger the cone of light that can be brought into the lens, and so more of the diffraction modes will be collected. Thus the microscope has more information to form a clear image, and so its resolving power will be higher.

One of the consequences of diffraction is that the focal point of a beam has a finite width and intensity distribution. Consider focusing when only considering geometric optics, as shown in (a). The focal point is infinitely small with a huge intensity and the capacity to incinerate most samples irrespective of the $NA$ of the objective lens. For wave optics, due to diffraction, the focal point spreads to become a focal spot (see (b)) with the size of the spot decreasing with increasing $NA$. Consequently, the intensity in the focal spot increases with increasing $NA$. The higher the $NA$, the greater the chances of photodegrading the specimen. However, the spot never becomes a true point.

### Test Prep for AP Courses

### Section Summary

1. Diffraction limits resolution.
2. For a circular aperture, lens, or mirror, the Rayleigh criterion states that two images are just resolvable when the center of the diffraction pattern of one is directly over the first minimum of the diffraction pattern of the other.
3. This occurs for two point objects separated by the angle $\theta = 1.22\,\lambda/D$, where $\lambda$ is the wavelength of light (or other electromagnetic radiation) and $D$ is the diameter of the aperture, lens, mirror, etc., with which the two objects are observed. This equation also gives the angular spreading of a source of light having a diameter $D$.

### Conceptual Questions

### Problems & Exercises
# Wave Optics
## Thin Film Interference

### Learning Objectives
By the end of this section, you will be able to:
1. Discuss the rainbow formation by thin films.

The bright colors seen in an oil slick floating on water or in a sunlit soap bubble are caused by interference. The brightest colors are those that interfere constructively. This interference is between light reflected from different surfaces of a thin film; thus, the effect is known as thin film interference. As noticed before, interference effects are most prominent when light interacts with something having a size similar to its wavelength. A thin film is one having a thickness $t$ smaller than a few times the wavelength of light, $\lambda$. Since color is associated indirectly with $\lambda$ and since all interference depends in some way on the ratio of $\lambda$ to the size of the object involved, we should expect to see different colors for different thicknesses of a film, as in .

Some of the earliest measurements of such films and their effects were conducted by Agnes Pockels, a self-taught German chemist who investigated the characteristics of soapy and greasy films in water. Using homemade materials, Pockels developed a trough for measuring surface films and began conducting experiments. While scientific and societal barriers for women prevented her from publishing on her own, renowned scientist Lord Rayleigh supported her efforts and pushed for her work to be shared in the journal Nature. The trough Pockels invented became the basis for the contemporary version, as described below.

What causes thin film interference? shows how light reflected from the top and bottom surfaces of a film can interfere. Incident light is only partially reflected from the top surface of the film (ray 1). The remainder enters the film and is itself partially reflected from the bottom surface. Part of the light reflected from the bottom surface can emerge from the top of the film (ray 2) and interfere with light reflected from the top (ray 1). Since the ray that enters the film travels a greater distance, it may be in or out of phase with the ray reflected from the top. However, consider for a moment, again, the bubbles in . The bubbles are darkest where they are thinnest. Furthermore, if you observe a soap bubble carefully, you will note it gets dark at the point where it breaks. For very thin films, the difference in path lengths of ray 1 and ray 2 in is negligible; so why should they interfere destructively and not constructively? The answer is that a phase change can occur upon reflection. The rule is as follows: When light reflects from a medium having an index of refraction greater than that of the medium in which it is traveling, a $180^\circ$ phase change (or a $\lambda/2$ shift) occurs.

If the film in is a soap bubble (essentially water with air on both sides), then there is a $\lambda/2$ shift for ray 1 and none for ray 2. Thus, when the film is very thin, the path length difference between the two rays is negligible, they are exactly out of phase, and destructive interference will occur at all wavelengths, and so the soap bubble will be dark here.

The thickness of the film relative to the wavelength of light is the other crucial factor in thin film interference. Ray 2 in travels a greater distance than ray 1. For light incident perpendicular to the surface, ray 2 travels a distance approximately $2t$ farther than ray 1.
When this distance is an integral or half-integral multiple of the wavelength in the medium ($\lambda_n = \lambda/n$, where $\lambda$ is the wavelength in vacuum and $n$ is the index of refraction), constructive or destructive interference occurs, depending also on whether there is a phase change in either ray.

Thin-film interference has created an entire field of research and industrial applications. Its foundations were laid by Irving Langmuir and Katharine Burr Blodgett, working at General Electric in the 1920s and 1930s. Langmuir had pioneered a method for producing ultra-thin layers on materials. Blodgett built on these practices by creating a method to precisely stack and compress these layers in order to produce a film of a desired thickness and quality. The device they developed became known as the Langmuir-Blodgett trough, built from principles developed by Agnes Pockels and still used in laboratories today. The earliest widely applied use of these principles was non-reflective glass, which Blodgett patented in 1938 and which was used almost immediately in the making of the film Gone With the Wind. The film is viewed as a tremendous leap in cinematography; cameras, microscopes, telescopes, and many other instruments rely on Blodgett's invention as well.

Thin film interference is most constructive or most destructive when the path length difference for the two rays is an integral or half-integral wavelength, respectively. That is, for rays incident perpendicularly, $2t = \lambda_n, 2\lambda_n, 3\lambda_n, \ldots$ or $2t = \lambda_n/2, 3\lambda_n/2, 5\lambda_n/2, \ldots$. To know whether interference is constructive or destructive, you must also determine if there is a phase change upon reflection. Thin film interference thus depends on film thickness, the wavelength of light, and the refractive indices. For white light incident on a film that varies in thickness, you will observe rainbow colors of constructive interference for various wavelengths as the thickness varies.

Another example of thin film interference can be seen when microscope slides are separated (see ). The slides are very flat, so that the wedge of air between them increases in thickness very uniformly. A phase change occurs at the second surface but not the first, and so there is a dark band where the slides touch. The rainbow colors of constructive interference repeat, going from violet to red again and again as the distance between the slides increases. As the layer of air increases, the bands become more difficult to see, because slight changes in incident angle have greater effects on path length differences. If pure-wavelength light instead of white light is used, then bright and dark bands are obtained rather than repeating rainbow colors.

An important application of thin film interference is found in the manufacturing of optical instruments. A lens or mirror can be compared with a master as it is being ground, allowing it to be shaped to an accuracy of less than a wavelength over its entire surface. illustrates the phenomenon called Newton’s rings, which occurs when the plane surfaces of two lenses are placed together. (The circular bands are called Newton’s rings because Isaac Newton described them and their use in detail. Newton did not discover them; Robert Hooke did, and Newton did not believe they were due to the wave character of light.) Each successive ring of a given color indicates an increase of only one wavelength in the distance between the lens and the blank, so that great precision can be obtained. Once the lens is perfect, there will be no rings.
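The soap-bubble case translates directly into numbers. A minimal sketch, assuming a 550 nm design wavelength and $n = 1.33$ for soapy water (illustrative values): with a $\lambda/2$ shift at the top surface only, reflection is brightest when $2t = (m + \tfrac{1}{2})\lambda_n$.

```python
# Film thicknesses of a soap film (air on both sides) that reflect 550 nm light
# most strongly. Ray 1 picks up a lambda/2 shift at the top (air -> water) surface;
# ray 2 does not (water -> air), so constructive reflection needs
# 2t = (m + 1/2) * lambda_n.
n = 1.33
lam = 550e-9            # vacuum wavelength, m (assumed illustrative value)
lam_n = lam / n         # wavelength inside the film

for m in range(3):
    t = (m + 0.5) * lam_n / 2.0
    print(f"m = {m}: t = {t * 1e9:.0f} nm")
# m = 0: t = 103 nm, m = 1: t = 310 nm, m = 2: t = 517 nm
```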
The wings of certain moths and butterflies have nearly iridescent colors due to thin film interference. In addition to pigmentation, the wing’s color is affected greatly by constructive interference of certain wavelengths reflected from its film-coated surface. Car manufacturers are offering special paint jobs that use thin film interference to produce colors that change with angle. This expensive option is based on the variation of thin film path length differences with angle. Security features on credit cards, banknotes, driving licenses and similar items prone to forgery use thin film interference, diffraction gratings, or holograms. Australia led the way with dollar bills printed on polymer with a diffraction grating security feature making the currency difficult to forge. Other countries such as New Zealand and Taiwan are using similar technologies, while the United States currency includes a thin film interference effect.

### Problem-Solving Strategies for Wave Optics

Step 1. Examine the situation to determine that interference is involved. Identify whether slits or thin film interference are considered in the problem.

Step 2. If slits are involved, note that diffraction gratings and double slits produce very similar interference patterns, but that gratings have narrower (sharper) maxima. Single slit patterns are characterized by a large central maximum and smaller maxima to the sides.

Step 3. If thin film interference is involved, take note of the path length difference between the two rays that interfere. Be certain to use the wavelength in the medium involved, since it differs from the wavelength in vacuum. Note also that there is an additional $\lambda/2$ phase shift when light reflects from a medium with a greater index of refraction.

Step 4. Identify exactly what needs to be determined in the problem (identify the unknowns). A written list is useful. Draw a diagram of the situation. Labeling the diagram is useful.

Step 5. Make a list of what is given or can be inferred from the problem as stated (identify the knowns).

Step 6. Solve the appropriate equation for the quantity to be determined (the unknown), and enter the knowns. Slits, gratings, and the Rayleigh limit involve equations.

Step 7. For thin film interference, you will have constructive interference for a total shift that is an integral number of wavelengths. You will have destructive interference for a total shift of a half-integral number of wavelengths. Always keep in mind that crest to crest is constructive whereas crest to trough is destructive.

Step 8. Check to see if the answer is reasonable: Does it make sense? Angles in interference patterns cannot be greater than $90^\circ$, for example.

### Test Prep for AP Courses

### Section Summary

1. Thin film interference occurs between the light reflected from the top and bottom surfaces of a film. In addition to the path length difference, there can be a phase change.
2. When light reflects from a medium having an index of refraction greater than that of the medium in which it is traveling, a $180^\circ$ phase change (or a $\lambda/2$ shift) occurs.

### Conceptual Questions

### Problems & Exercises
# Wave Optics

## Polarization

### Learning Objectives

By the end of this section, you will be able to:
1. Discuss the meaning of polarization.
2. Discuss the property of optical activity of certain materials.

Polaroid sunglasses are familiar to most of us. They have a special ability to cut the glare of light reflected from water or glass (see ). Polaroids have this ability because of a wave characteristic of light called polarization. What is polarization? How is it produced? What are some of its uses? The answers to these questions are related to the wave character of light. Light is one type of electromagnetic (EM) wave. As noted earlier, EM waves are transverse waves consisting of varying electric and magnetic fields that oscillate perpendicular to the direction of propagation (see ). There are specific directions for the oscillations of the electric and magnetic fields. Polarization is the attribute that a wave’s oscillations have a definite direction relative to the direction of propagation of the wave. (This is not the same type of polarization as that discussed for the separation of charges.) Waves having such a direction are said to be polarized. For an EM wave, we define the direction of polarization to be the direction parallel to the electric field. Thus we can think of the electric field arrows as showing the direction of polarization, as in .

To examine this further, consider the transverse waves in the ropes shown in . The oscillations in one rope are in a vertical plane and are said to be vertically polarized. Those in the other rope are in a horizontal plane and are horizontally polarized. If a vertical slit is placed on the first rope, the waves pass through. However, a vertical slit blocks the horizontally polarized waves. For EM waves, the direction of the electric field is analogous to the disturbances on the ropes.

The Sun and many other light sources produce waves that are randomly polarized (see ). Such light is said to be unpolarized because it is composed of many waves with all possible directions of polarization. Polaroid materials, invented by the founder of Polaroid Corporation, Edwin Land, act as a polarizing slit for light, allowing only polarization in one direction to pass through. Polarizing filters are composed of long molecules aligned in one direction. Thinking of the molecules as many slits, analogous to those for the oscillating ropes, we can understand why only light with a specific polarization can get through. The axis of a polarizing filter is the direction along which the filter passes the electric field of an EM wave (see ).

shows the effect of two polarizing filters on originally unpolarized light. The first filter polarizes the light along its axis. When the axes of the first and second filters are aligned (parallel), then all of the polarized light passed by the first filter is also passed by the second. If the second polarizing filter is rotated, only the component of the light parallel to the second filter’s axis is passed. When the axes are perpendicular, no light is passed by the second. Only the component of the EM wave parallel to the axis of a filter is passed. Let us call the angle between the direction of polarization and the axis of a filter $\theta$. If the electric field has an amplitude $E$, then the transmitted part of the wave has an amplitude $E \cos \theta$ (see ).
Since the intensity of a wave is proportional to its amplitude squared, the intensity $I$ of the transmitted wave is related to the incident wave by

$$I = I_0 \cos^2 \theta,$$

where $I_0$ is the intensity of the polarized wave before passing through the filter. (The above equation is known as Malus’s law.)

### Polarization by Reflection

By now you can probably guess that Polaroid sunglasses cut the glare in reflected light because that light is polarized. You can check this for yourself by holding Polaroid sunglasses in front of you and rotating them while looking at light reflected from water or glass. As you rotate the sunglasses, you will notice the light gets bright and dim, but not completely black. This implies the reflected light is partially polarized and cannot be completely blocked by a polarizing filter. illustrates what happens when unpolarized light is reflected from a surface. Vertically polarized light is preferentially refracted at the surface, so that the reflected light is left more horizontally polarized. The reasons for this phenomenon are beyond the scope of this text, but a convenient mnemonic for remembering this is to imagine the polarization direction to be like an arrow. Vertical polarization would be like an arrow perpendicular to the surface and would be more likely to stick and not be reflected. Horizontal polarization is like an arrow bouncing on its side and would be more likely to be reflected. Sunglasses with vertical axes would then block more reflected light than unpolarized light from other sources.

Since the part of the light that is not reflected is refracted, the amount of polarization depends on the indices of refraction of the media involved. It can be shown that reflected light is completely polarized at an angle of reflection $\theta_b$, given by

$$\tan \theta_b = \frac{n_2}{n_1},$$

where $n_1$ is the index of refraction of the medium in which the incident and reflected light travel and $n_2$ is the index of refraction of the medium that forms the interface that reflects the light. This equation is known as Brewster’s law, and $\theta_b$ is known as Brewster’s angle, named after the 19th-century Scottish physicist who discovered them.

### Polarization by Scattering

If you hold your Polaroid sunglasses in front of you and rotate them while looking at blue sky, you will see the sky get bright and dim. This is a clear indication that light scattered by air is partially polarized. helps illustrate how this happens. Since light is a transverse EM wave, it vibrates the electrons of air molecules perpendicular to the direction it is traveling. The electrons then radiate like small antennae. Since they are oscillating perpendicular to the direction of the light ray, they produce EM radiation that is polarized perpendicular to the direction of the ray. When viewing the light along a line perpendicular to the original ray, as in , there can be no polarization in the scattered light parallel to the original ray, because that would require the original ray to be a longitudinal wave. Along other directions, a component of the other polarization can be projected along the line of sight, and the scattered light will only be partially polarized. Furthermore, multiple scattering can bring light to your eyes from other directions and can contain different polarizations. Photographs of the sky can be darkened by polarizing filters, a trick used by many photographers to make clouds brighter by contrast. Scattering from other particles, such as smoke or dust, can also polarize light. Detecting polarization in scattered EM waves can be a useful analytical tool in determining the scattering source.
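The two relations above are easy to evaluate numerically. The sketch below is a minimal illustration of Malus’s law and Brewster’s angle; the filter angle and the air-to-water indices are illustrative values, not taken from this text.

```python
import math

def malus_intensity(i0: float, theta_deg: float) -> float:
    """Transmitted intensity of polarized light: I = I0 * cos^2(theta)."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2

def brewster_angle(n1: float, n2: float) -> float:
    """Reflection angle (degrees) at which reflected light is fully polarized."""
    return math.degrees(math.atan(n2 / n1))

print(malus_intensity(1.0, 30.0))    # 0.75: three-quarters of the light passes
print(malus_intensity(1.0, 90.0))    # ~0: a crossed filter blocks the light
print(brewster_angle(1.00, 1.333))   # ~53.1 degrees for light in air reflecting off water
```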
There is a range of optical effects used in sunglasses. Besides being Polaroid, other sunglasses have colored pigments embedded in them, while others use non-reflective or even reflective coatings. A recent development is photochromic lenses, which darken in the sunlight and become clear indoors. Photochromic lenses are embedded with organic microcrystalline molecules that change their properties when exposed to UV in sunlight, but become clear in artificial lighting with no UV.

### Liquid Crystals and Other Polarization Effects in Materials

While you are undoubtedly aware of liquid crystal displays (LCDs) found in watches, calculators, computer screens, cellphones, flat screen televisions, and myriad other places, you may not be aware that they are based on polarization. Liquid crystals are so named because their molecules can be aligned even though they are in a liquid. Liquid crystals have the property that they can rotate the polarization of light passing through them by $90^\circ$. Furthermore, this property can be turned off by the application of a voltage, as illustrated in . It is possible to manipulate this characteristic quickly and in small well-defined regions to create the contrast patterns we see in so many LCD devices.

In flat screen LCD televisions, there is a large light at the back of the TV. The light travels to the front screen through millions of tiny units called pixels (picture elements). One of these is shown in (a) and (b). Each unit has three cells, with red, blue, or green filters, each controlled independently. When the voltage across a liquid crystal is switched off, the liquid crystal passes the light through the particular filter. One can vary the picture contrast by varying the strength of the voltage applied to the liquid crystal.

Many crystals and solutions rotate the plane of polarization of light passing through them. Such substances are said to be optically active. Examples include sugar water, insulin, and collagen (see ). In addition to depending on the type of substance, the amount and direction of rotation depends on a number of factors. Among these is the concentration of the substance, the distance the light travels through it, and the wavelength of light. Optical activity is due to the asymmetric shape of molecules in the substance, such as being helical. Measurements of the rotation of polarized light passing through substances can thus be used to measure concentrations, a standard technique for sugars. It can also give information on the shapes of molecules, such as proteins, and factors that affect their shapes, such as temperature and pH.

Glass and plastic become optically active when stressed; the greater the stress, the greater the effect. Optical stress analysis on complicated shapes can be performed by making plastic models of them and observing them through crossed filters, as seen in . It is apparent that the effect depends on wavelength as well as stress. The wavelength dependence is sometimes also used for artistic purposes.

Another interesting phenomenon associated with polarized light is the ability of some crystals to split an unpolarized beam of light into two. Such crystals are said to be birefringent (see ). Each of the separated rays has a specific polarization. One behaves normally and is called the ordinary ray, whereas the other does not obey Snell’s law and is called the extraordinary ray. Birefringent crystals can be used to produce polarized beams from unpolarized light.
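The $90^\circ$ rotation that a liquid crystal provides can be modeled as an extra polarizer stage between crossed filters. The following sketch chains Malus’s law through a sequence of ideal polarizer axes; the axis angles are illustrative assumptions.

```python
import math

def through_filters(i0: float, axes_deg: list[float]) -> float:
    """Unpolarized light through ideal polarizers at the given axis angles.
    The first filter passes half the intensity and sets the polarization;
    each later filter applies Malus's law relative to the previous axis."""
    intensity = i0 / 2.0
    for prev, curr in zip(axes_deg, axes_deg[1:]):
        intensity *= math.cos(math.radians(curr - prev)) ** 2
    return intensity

print(through_filters(1.0, [0, 90]))      # ~0: crossed filters pass no light
print(through_filters(1.0, [0, 45, 90]))  # 0.125: an intermediate stage restores some light
```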
Some birefringent materials preferentially absorb one of the polarizations. These materials are called dichroic and can produce polarization by this preferential absorption. This is fundamentally how polarizing filters and other polarizers work. The interested reader is invited to further pursue the numerous properties of materials related to polarization.

### Test Prep for AP Courses

### Section Summary

1. Polarization is the attribute that wave oscillations have a definite direction relative to the direction of propagation of the wave.
2. EM waves are transverse waves that may be polarized.
3. The direction of polarization is defined to be the direction parallel to the electric field of the EM wave.
4. Unpolarized light is composed of many rays having random polarization directions.
5. Light can be polarized by passing it through a polarizing filter or other polarizing material. The intensity $I$ of polarized light after passing through a polarizing filter is $I = I_0 \cos^2 \theta$, where $I_0$ is the original intensity and $\theta$ is the angle between the direction of polarization and the axis of the filter.
6. Polarization is also produced by reflection.
7. Brewster’s law states that reflected light is completely polarized at the angle of reflection $\theta_b$, known as Brewster’s angle, given by $\tan \theta_b = \frac{n_2}{n_1}$, where $n_1$ is the index of refraction of the medium in which the incident and reflected light travel and $n_2$ is the index of refraction of the medium that forms the interface that reflects the light.
8. Polarization can also be produced by scattering.
9. There are a number of types of optically active substances that rotate the direction of polarization of light passing through them.

### Conceptual Questions

### Problems & Exercises
# Wave Optics

## *Extended Topic* Microscopy Enhanced by the Wave Characteristics of Light

### Learning Objectives

By the end of this section, you will be able to:
1. Discuss the different types of microscopes.

Physics research underpins the advancement of developments in microscopy. As we gain knowledge of the wave nature of electromagnetic waves and methods to analyze and interpret signals, new microscopes that enable us to “see” more are being developed. It is the evolution and newer generation of microscopes that are described in this section.

The use of microscopes (microscopy) to observe small details is limited by the wave nature of light. Because light diffracts significantly around small objects, it becomes impossible to observe details significantly smaller than the wavelength of light. One rule of thumb has it that all details smaller than about $\lambda$ are difficult to observe. Radar, for example, can detect the size of an aircraft, but not its individual rivets, since the wavelength of most radar is several centimeters or greater. Similarly, visible light cannot detect individual atoms, since atoms are about 0.1 nm in size and visible wavelengths range from 380 to 760 nm. Ironically, special techniques used to obtain the best possible resolution with microscopes take advantage of the same wave characteristics of light that ultimately limit the detail.

The most obvious method of obtaining better detail is to utilize shorter wavelengths. Ultraviolet (UV) microscopes have been constructed with special lenses that transmit UV rays and utilize photographic or electronic techniques to record images. The shorter UV wavelengths allow somewhat greater detail to be observed, but drawbacks, such as the hazard of UV to living tissue and the need for special detection devices and lenses (which tend to be dispersive in the UV), severely limit the use of UV microscopes. Elsewhere, we will explore practical uses of very short wavelength EM waves, such as x rays, and other short-wavelength probes, such as electrons in electron microscopes, to detect small details.

Another difficulty in microscopy is the fact that many microscopic objects do not absorb much of the light passing through them. The lack of contrast makes image interpretation very difficult. Contrast is the difference in intensity between objects and the background on which they are observed. Stains (such as dyes, fluorophores, etc.) are commonly employed to enhance contrast, but these tend to be application specific. More general wave interference techniques can be used to produce contrast. shows the passage of light through a sample. Since the indices of refraction differ, the number of wavelengths in the paths differs. Light emerging from the object is thus out of phase with light from the background and will interfere differently, producing enhanced contrast, especially if the light is coherent and monochromatic—as in laser light.

Interference microscopes enhance contrast between objects and background by superimposing a reference beam of light upon the light emerging from the sample. Since light from the background and objects differ in phase, there will be different amounts of constructive and destructive interference, producing the desired contrast in final intensity. shows schematically how this is done. Parallel rays of light from a source are split into two beams by a half-silvered mirror. These beams are called the object and reference beams.
Each beam passes through identical optical elements, except that the object beam passes through the object we wish to observe microscopically. The light beams are recombined by another half-silvered mirror and interfere. Since the light rays passing through different parts of the object have different phases, the interference will differ significantly from point to point, and hence there will be greater contrast between them.

Another type of microscope utilizing wave interference and differences in phases to enhance contrast is called the phase-contrast microscope. While its principle is the same as the interference microscope, the phase-contrast microscope is simpler to use and construct. Its impact (and the principle upon which it is based) was so important that its developer, the Dutch physicist Frits Zernike (1888–1966), was awarded the Nobel Prize in 1953. shows the basic construction of a phase-contrast microscope. Phase differences between light passing through the object and background are produced by passing the rays through different parts of a phase plate (so called because it shifts the phase of the light passing through it). These two light rays are superimposed in the image plane, producing contrast due to their interference.

A polarization microscope also enhances contrast by utilizing a wave characteristic of light. Polarization microscopes are useful for objects that are optically active or birefringent, particularly if those characteristics vary from place to place in the object. Polarized light is sent through the object and then observed through a polarizing filter that is perpendicular to the original polarization direction. Nearly transparent objects can then appear with strong color and in high contrast. Many polarization effects are wavelength dependent, producing color in the processed image. Contrast results from the action of the polarizing filter in passing only components parallel to its axis.

Apart from the UV microscope, the variations of microscopy discussed so far in this section are available as attachments to fairly standard microscopes or as slight variations. The next level of sophistication is provided by commercial confocal microscopes, which use the extended focal region shown in (b) to obtain three-dimensional images rather than two-dimensional images. Here, only a single plane or region of focus is identified; out-of-focus regions above and below this plane are subtracted out by a computer so the image quality is much better. This type of microscope makes use of fluorescence, where a laser provides the excitation light. Laser light passing through a tiny aperture called a pinhole forms an extended focal region within the specimen. The reflected light passes through the objective lens to a second pinhole and the photomultiplier detector, see . The second pinhole is the key here and serves to block much of the light from points that are not at the focal point of the objective lens. The pinhole is conjugate (coupled) to the focal point of the lens. The second pinhole and detector are scanned, allowing reflected light from a small region or section of the extended focal region to be imaged at any one time. The out-of-focus light is excluded. Each image is stored in a computer, and a full scanned image is generated in a short time. Live cell processes can also be imaged at adequate scanning speeds allowing the imaging of three-dimensional microscopic movement.
Confocal microscopy enhances images over conventional optical microscopy, especially for thicker specimens, and so has become quite popular. The next level of sophistication is provided by microscopes attached to instruments that isolate and detect only a small wavelength band of light—monochromators and spectral analyzers. Here, the monochromatic light from a laser is scattered from the specimen. This scattered light shifts up or down in frequency as it excites particular energy levels in the sample. The uniqueness of the observed scattered light can give detailed information about the chemical composition of a given spot on the sample with high contrast—like molecular fingerprints. Applications are in materials science, nanotechnology, and the biomedical field. Fine details in biochemical processes over time can even be detected. The ultimate in microscopy is the electron microscope—to be discussed later. Research continues into new prototype microscopes that may become commercially available, providing better diagnostic and research capacities.

### Section Summary

1. To improve microscope images, various techniques utilizing the wave characteristics of light have been developed. Many of these enhance contrast with interference effects.

### Conceptual Questions
# Special Relativity

## Introduction to Special Relativity

Have you ever looked up at the night sky and dreamed of traveling to other planets in faraway star systems? Would there be other life forms? What would other worlds look like? You might imagine that such an amazing trip would be possible if we could just travel fast enough, but you will read in this chapter why this is not true. In 1905 Albert Einstein developed the theory of special relativity. This theory explains the limit on an object’s speed and describes the consequences.

Relativity does not only apply to far-reaching and (as yet) unrealized activities like human interstellar travel. It affects everyday life in the form of communication, global trade, and even medicine. For example, Global Positioning Systems, which drive everything from airplane navigation to smart phone maps, rely on signals captured by multiple orbiting satellites and highly accurate measurements of time. Every signal passing between satellites, towers, and devices must be precisely measured and account for the relativistic effects of curved space and time dilation (discussed below). Variations in Earth’s landscape, its non-spherical shape, and the effects of gravity must also be considered in order to obtain accurate measurements. One of the most important contributors to these systems was Gladys West, a computer scientist and mathematician working at the Naval Proving Ground, where GPS and related technologies were advanced. West had previously developed altimeter models and managed the world’s first satellite-based ocean mapping project (Seasat). She then developed and programmed the algorithms capable of calculating positions and Earth’s shape to sufficient precision to enable the existence of GPS. In these calculations, she accounted for the impacts of relativity and other complex principles related to it.

Relativity. The word relativity might conjure an image of Einstein, but the idea did not begin with him. People have been exploring relativity for many centuries. Relativity is the study of how different observers measure the same event. Galileo and Newton developed the first correct version of classical relativity. Einstein developed the modern theory of relativity. Modern relativity is divided into two parts. Special relativity deals with observers who are moving at constant velocity. General relativity deals with observers who are undergoing acceleration. Einstein is famous because his theories of relativity made revolutionary predictions. Most importantly, his theories have been verified to great precision in a vast range of experiments, altering forever our concept of space and time.

It is important to note that although classical mechanics, in general, and classical relativity, in particular, are limited, they are extremely good approximations for large, slow-moving objects. Otherwise, we could not use classical physics to launch satellites or build bridges. In the classical limit (objects larger than submicroscopic and moving slower than about 1% of the speed of light), relativistic mechanics becomes the same as classical mechanics. This fact will be noted at appropriate places throughout this chapter.
# Special Relativity

## Einstein’s Postulates

### Learning Objectives

By the end of this section, you will be able to:
1. State and explain both of Einstein’s postulates.
2. Explain what an inertial frame of reference is.
3. Describe one way the speed of light can be changed.

Have you ever used the Pythagorean Theorem and gotten a wrong answer? Probably not, unless you made a mistake in either your algebra or your arithmetic. Each time you perform the same calculation, you know that the answer will be the same. Trigonometry is reliable because of the certainty that one part always flows from another in a logical way. Each part is based on a set of postulates, and you can always connect the parts by applying those postulates. Physics is the same way with the exception that all parts must describe nature. If we are careful to choose the correct postulates, then our theory will follow and will be verified by experiment. Einstein essentially did the theoretical aspect of this method for relativity. With two deceptively simple postulates and a careful consideration of how measurements are made, he produced the theory of special relativity.

### Einstein’s First Postulate

The first postulate upon which Einstein based the theory of special relativity relates to reference frames. All velocities are measured relative to some frame of reference. For example, a car’s motion is measured relative to its starting point or the road it is moving over, a projectile’s motion is measured relative to the surface it was launched from, and a planet’s orbit is measured relative to the star it is orbiting around. The simplest frames of reference are those that are not accelerated and are not rotating. Newton’s first law, the law of inertia, holds exactly in such a frame. The laws of physics seem to be simplest in inertial frames. For example, when you are in a plane flying at a constant altitude and speed, physics seems to work exactly the same as if you were standing on the surface of the Earth. However, in a plane that is taking off, matters are somewhat more complicated. In these cases, the net force on an object, $F$, is not equal to the product of mass and acceleration, $ma$. Instead, $F$ is equal to $ma$ plus a fictitious force. This situation is not as simple as in an inertial frame. Not only are laws of physics simplest in inertial frames, but they should be the same in all inertial frames, since there is no preferred frame and no absolute motion. Einstein incorporated these ideas into his first postulate of special relativity. As with many fundamental statements, there is more to this postulate than meets the eye. The laws of physics include only those that satisfy this postulate. We shall find that the definitions of relativistic momentum and energy must be altered to fit. Another outcome of this postulate is the famous equation $E = mc^2$.

### Einstein’s Second Postulate

The second postulate upon which Einstein based his theory of special relativity deals with the speed of light. Late in the 19th century, the major tenets of classical physics were well established. Two of the most important were the laws of electricity and magnetism and Newton’s laws. In particular, the laws of electricity and magnetism predict that light travels at $c = 3.00 \times 10^8\ \text{m/s}$ in a vacuum, but they do not specify the frame of reference in which light has this speed. There was a contradiction between this prediction and Newton’s laws, in which velocities add like simple vectors.
If the latter were true, then two observers moving at different speeds would see light traveling at different speeds. Imagine what a light wave would look like to a person traveling along with it at a speed $c$. If such a motion were possible then the wave would be stationary relative to the observer. It would have electric and magnetic fields that varied in strength at various distances from the observer but were constant in time. This is not allowed by Maxwell’s equations. So either Maxwell’s equations are wrong, or an object with mass cannot travel at speed $c$. Einstein concluded that the latter is true. An object with mass cannot travel at speed $c$. This conclusion implies that light in a vacuum must always travel at speed $c$ relative to any observer. Maxwell’s equations are correct, and Newton’s addition of velocities is not correct for light.

Investigations such as Young’s double slit experiment in the early 1800s had convincingly demonstrated that light is a wave. Many types of waves were known, and all traveled in some medium. Scientists therefore assumed that a medium carried light, even in a vacuum, and light traveled at a speed $c$ relative to that medium. Starting in the mid-1880s, the American physicist A. A. Michelson, later aided by E. W. Morley, made a series of direct measurements of the speed of light. The results of their measurements were startling. The eventual conclusion derived from this result is that light, unlike mechanical waves such as sound, does not need a medium to carry it. Furthermore, the Michelson-Morley results implied that the speed of light is independent of the motion of the source relative to the observer. That is, everyone observes light to move at speed $c$ regardless of how they move relative to the source or one another.

For a number of years, many scientists tried unsuccessfully to explain these results and still retain the general applicability of Newton’s laws. It was not until 1905, when Einstein published his first paper on special relativity, that the currently accepted conclusion was reached. Based mostly on his analysis that the laws of electricity and magnetism would not allow another speed for light, and only slightly aware of the Michelson-Morley experiment, Einstein detailed his second postulate of special relativity. Deceptively simple and counterintuitive, this and the first postulate leave all else open for change. Some fundamental concepts do change. Among the changes are the loss of agreement on the elapsed time for an event, the variation of distance with speed, and the realization that matter and energy can be converted into one another. You will read about these concepts in the following sections.

### Test Prep for AP Courses

### Section Summary

1. Relativity is the study of how different observers measure the same event.
2. Modern relativity is divided into two parts. Special relativity deals with observers who are in uniform (unaccelerated) motion, whereas general relativity includes accelerated relative motion and gravity. Modern relativity is correct in all circumstances and, in the limit of low velocity and weak gravitation, gives the same predictions as classical relativity.
3. An inertial frame of reference is a reference frame in which a body at rest remains at rest and a body in motion moves at a constant speed in a straight line unless acted on by an outside force.
4. Modern relativity is based on Einstein’s two postulates.
The first postulate of special relativity is the idea that the laws of physics are the same and can be stated in their simplest form in all inertial frames of reference. The second postulate of special relativity is the idea that the speed of light is a constant, independent of the relative motion of the source.
5. The Michelson-Morley experiment demonstrated that the speed of light in a vacuum is independent of the motion of the Earth about the Sun.

### Conceptual Questions
# Special Relativity

## Simultaneity And Time Dilation

### Learning Objectives

By the end of this section, you will be able to:
1. Describe simultaneity.
2. Describe time dilation.
3. Calculate γ.
4. Compare proper time and the observer’s measured time.
5. Explain why the twin paradox is a false paradox.

Do time intervals depend on who observes them? Intuitively, we expect the time for a process, such as the elapsed time for a foot race, to be the same for all observers. Our experience has been that disagreements over elapsed time have to do with the accuracy of measuring time. When we carefully consider just how time is measured, however, we will find that elapsed time depends on the relative motion of an observer with respect to the process being measured.

### Simultaneity

Consider how we measure elapsed time. If we use a stopwatch, for example, how do we know when to start and stop the watch? One method is to use the arrival of light from the event, such as observing a light turning green to start a drag race. The timing will be more accurate if some sort of electronic detection is used, avoiding human reaction times and other complications. Now suppose we use this method to measure the time interval between two flashes of light produced by flash lamps. (See .) Two flash lamps with observer A midway between them are on a rail car that moves to the right relative to observer B. Observer B arranges for the light flashes to be emitted just as A passes B, so that both A and B are equidistant from the lamps when the light is emitted. Observer B measures the time interval between the arrival of the light flashes. According to postulate 2, the speed of light is not affected by the motion of the lamps relative to B. Therefore, light travels equal distances to him at equal speeds. Thus observer B measures the flashes to be simultaneous.

Now consider what observer A sees happening. Since both lamps are the same distance from her in her reference frame and the train is moving to the right, she perceives the flash from the right-hand bulb occurring before the flash from the left-hand bulb. Here a relative velocity between observers affects whether two events are observed to be simultaneous. Simultaneity is not absolute. This illustrates the power of clear thinking. We might have guessed incorrectly that if light is emitted simultaneously, then two observers halfway between the sources would see the flashes simultaneously. But careful analysis shows this not to be the case. Einstein was brilliant at this type of thought experiment (in German, “Gedankenexperiment”). He very carefully considered how an observation is made and disregarded what might seem obvious. The validity of thought experiments, of course, is determined by actual observation. The genius of Einstein is evidenced by the fact that experiments have repeatedly confirmed his theory of relativity.

In summary: Two events are defined to be simultaneous if an observer measures them as occurring at the same time (such as by receiving light from the events). Two events are not necessarily simultaneous to all observers.

### Time Dilation

The consideration of the measurement of elapsed time and simultaneity leads to an important relativistic effect. Suppose, for example, an astronaut measures the time it takes for light to cross her ship, bounce off a mirror, and return. (See .) How does the elapsed time the astronaut measures compare with the elapsed time measured for the same event by a person on the Earth?
Asking this question (another thought experiment) produces a profound result. We find that the elapsed time for a process depends on who is measuring it. In this case, the time measured by the astronaut is smaller than the time measured by the Earth-bound observer. The passage of time is different for the observers because the distance the light travels in the astronaut’s frame is smaller than in the Earth-bound frame. Light travels at the same speed in each frame, and so it will take longer to travel the greater distance in the Earth-bound frame.

To quantitatively verify that time depends on the observer, consider the paths followed by light as seen by each observer. (See (c).) The astronaut sees the light travel straight across and back for a total distance of $2D$, twice the width of her ship. The Earth-bound observer sees the light travel a total distance $2s$. Since the ship is moving at speed $v$ to the right relative to the Earth, light moving to the right hits the mirror in this frame. Light travels at a speed $c$ in both frames, and because time is the distance divided by speed, the time measured by the astronaut is

$$\Delta t_0 = \frac{2D}{c}.$$

This time has a separate name to distinguish it from the time measured by the Earth-bound observer. Because the astronaut observes the reflecting light with a clock at rest in her frame, she measures proper time $\Delta t_0$. The time measured by the Earth-bound observer is

$$\Delta t = \frac{2s}{c}.$$

To find the relationship between $\Delta t_0$ and $\Delta t$, consider the triangles formed by $D$ and $s$. (See (c).) The third side of these similar triangles is $L$, the distance the astronaut moves as the light goes across her ship. In the frame of the Earth-bound observer,

$$L = \frac{v\,\Delta t}{2}.$$

Using the Pythagorean Theorem, the distance $s$ is found to be

$$s = \sqrt{D^2 + \left(\frac{v\,\Delta t}{2}\right)^2}.$$

Substituting $s$ into the expression for the time interval gives

$$\Delta t = \frac{2s}{c} = \frac{2\sqrt{D^2 + \left(\frac{v\,\Delta t}{2}\right)^2}}{c}.$$

We square this equation, which yields

$$\Delta t^2 = \frac{4\left(D^2 + \frac{v^2\,\Delta t^2}{4}\right)}{c^2} = \frac{4D^2}{c^2} + \frac{v^2}{c^2}\,\Delta t^2.$$

Note that if we square the first expression we had for $\Delta t_0$, we get $\Delta t_0^2 = \frac{4D^2}{c^2}$. This term appears in the preceding equation, giving us a means to relate the two time intervals. Thus,

$$\Delta t^2 = \Delta t_0^2 + \frac{v^2}{c^2}\,\Delta t^2.$$

Gathering terms, we solve for $\Delta t^2$:

$$\Delta t^2 \left(1 - \frac{v^2}{c^2}\right) = \Delta t_0^2.$$

Thus,

$$\Delta t^2 = \frac{\Delta t_0^2}{1 - \frac{v^2}{c^2}}.$$

Taking the square root yields an important relationship between elapsed times:

$$\Delta t = \frac{\Delta t_0}{\sqrt{1 - \frac{v^2}{c^2}}} = \gamma\,\Delta t_0,$$

where

$$\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}.$$

This equation for $\Delta t$ is truly remarkable. First, as contended, elapsed time is not the same for different observers moving relative to one another, even though both are in inertial frames. Proper time $\Delta t_0$ measured by an observer, like the astronaut moving with the apparatus, is smaller than time measured by other observers. Since those other observers measure a longer time $\Delta t$, the effect is called time dilation. The Earth-bound observer sees time dilate (get longer) for a system moving relative to the Earth. Alternatively, according to the Earth-bound observer, time slows in the moving frame, since less time passes there. All clocks moving relative to an observer, including biological clocks such as aging, are observed to run slow compared with a clock stationary relative to the observer.

Note that if the relative velocity is much less than the speed of light ($v \ll c$), then $\frac{v^2}{c^2}$ is extremely small, and the elapsed times $\Delta t$ and $\Delta t_0$ are nearly equal. At low velocities, modern relativity approaches classical physics—our everyday experiences have very small relativistic effects. The equation also implies that relative velocity cannot exceed the speed of light. As $v$ approaches $c$, $\Delta t$ approaches infinity. This would imply that time in the astronaut’s frame stops at the speed of light. If $v$ exceeded $c$, then we would be taking the square root of a negative number, producing an imaginary value for $\Delta t$. There is considerable experimental evidence that the equation $\Delta t = \gamma\,\Delta t_0$ is correct.
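As a numerical check of the relations just derived, here is a minimal sketch that evaluates $\gamma$ and the dilated time $\Delta t = \gamma\,\Delta t_0$ at a few speeds; the speeds chosen are illustrative.

```python
import math

def gamma(v_over_c: float) -> float:
    """Relativistic factor: gamma = 1 / sqrt(1 - v^2 / c^2)."""
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

def dilated_time(proper_time_s: float, v_over_c: float) -> float:
    """Elapsed time seen by an observer the clock moves past: dt = gamma * dt0."""
    return gamma(v_over_c) * proper_time_s

print(f"{gamma(0.0100):.6f}")                # 1.000050: negligible at 1% of c
print(f"{gamma(0.950):.2f}")                 # 3.20
print(f"{dilated_time(1.00, 0.950):.2f} s")  # 1 s of proper time dilates to 3.20 s
```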
One example is found in cosmic ray particles that continuously rain down on the Earth from deep space. Some collisions of these particles with nuclei in the upper atmosphere result in short-lived particles called muons. The half-life (amount of time for half of a material to decay) of a muon is $1.52\ \mu\text{s}$ when it is at rest relative to the observer who measures the half-life. This is the proper time $\Delta t_0$. Muons produced by cosmic ray particles have a range of velocities, with some moving near the speed of light. It has been found that the muon’s half-life as measured by an Earth-bound observer ($\Delta t$) varies with velocity exactly as predicted by the equation $\Delta t = \gamma\,\Delta t_0$. The faster the muon moves, the longer it lives. We on the Earth see the muon’s half-life time dilated—as viewed from our frame, the muon decays more slowly than it does when at rest relative to us.

Another implication of the preceding example is that everything an astronaut does when moving at 95.0% of the speed of light relative to the Earth takes 3.20 times longer when observed from the Earth. Does the astronaut sense this? Only if she looks outside her spaceship. All methods of measuring time in her frame will be affected by the same factor of 3.20. This includes her wristwatch, heart rate, cell metabolism rate, nerve impulse rate, and so on. She will have no way of telling, since all of her clocks will agree with one another because their relative velocities are zero. Motion is relative, not absolute. But what if she does look out the window?

### The Twin Paradox

An intriguing consequence of time dilation is that a space traveler moving at a high velocity relative to the Earth would age less than her Earth-bound twin. Imagine the astronaut moving at such a velocity that $\gamma = 30.0$, as in . A trip that takes 2.00 years in her frame would take 60.0 years in her Earth-bound twin’s frame. Suppose the astronaut traveled 1.00 year to another star system. She briefly explored the area, and then traveled 1.00 year back. If the astronaut was 40 years old when she left, she would be 42 upon her return. Everything on the Earth, however, would have aged 60.0 years. Her twin, if still alive, would be 100 years old.

The situation would seem different to the astronaut. Because motion is relative, the spaceship would seem to be stationary and the Earth would appear to move. (This is the sensation you have when flying in a jet.) If the astronaut looks out the window of the spaceship, she will see time slow down on the Earth by a factor of $\gamma = 30.0$. To her, the Earth-bound sister will have aged only 2/30 (1/15) of a year, while she aged 2.00 years.

The two sisters cannot both be correct. As with all paradoxes, the premise is faulty and leads to contradictory conclusions. In fact, the astronaut’s motion is significantly different from that of the Earth-bound twin. The astronaut accelerates to a high velocity and then decelerates to view the star system. To return to the Earth, she again accelerates and decelerates. The Earth-bound twin does not experience these accelerations. So the situation is not symmetric, and it is not correct to claim that the astronaut will observe the same effects as her Earth-bound twin. If you use special relativity to examine the twin paradox, you must keep in mind that the theory is expressly based on inertial frames, which by definition are not accelerated or rotating. Einstein developed general relativity to deal with accelerated frames and with gravity, a prime source of acceleration.
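The twin-paradox numbers above can be checked directly. This sketch assumes $\gamma = 30.0$ as in the text and solves for the corresponding speed and the two elapsed times.

```python
import math

gamma = 30.0
v_over_c = math.sqrt(1.0 - 1.0 / gamma ** 2)   # invert gamma = 1/sqrt(1 - (v/c)^2)
print(f"v = {v_over_c:.5f} c")                 # 0.99944 c

ship_years = 2.00                  # proper time for the astronaut's round trip
earth_years = gamma * ship_years   # dilated time for the Earth-bound twin
print(f"Earth-bound twin ages {earth_years:.1f} years")                      # 60.0
print(f"Astronaut sees Earth age {ship_years / gamma:.3f} years en route")   # ~0.067 (2/30)
```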
You can also use general relativity to address the twin paradox and, according to general relativity, the astronaut will age less. Some important conceptual aspects of general relativity are discussed in General Relativity and Quantum Gravity of this course. In 1971, American physicists Joseph Hafele and Richard Keating verified time dilation at low relative velocities by flying extremely accurate atomic clocks around the Earth on commercial aircraft. They measured elapsed time to an accuracy of a few nanoseconds and compared it with the time measured by clocks left behind. Hafele and Keating’s results were within experimental uncertainties of the predictions of relativity. Both special and general relativity had to be taken into account, since gravity and accelerations were involved as well as relative motion.

### Section Summary

1. Two events are defined to be simultaneous if an observer measures them as occurring at the same time. They are not necessarily simultaneous to all observers—simultaneity is not absolute.
2. Time dilation is the phenomenon of time passing slower for an observer who is moving relative to another observer.
3. Observers moving at a relative velocity $v$ do not measure the same elapsed time for an event. Proper time $\Delta t_0$ is the time measured by an observer at rest relative to the event being observed. Proper time is related to the time $\Delta t$ measured by an Earth-bound observer by the equation $\Delta t = \gamma\,\Delta t_0$, where $\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}$.
4. The equation relating proper time and time measured by an Earth-bound observer implies that relative velocity cannot exceed the speed of light.
5. The twin paradox asks why a twin traveling at a relativistic speed away and then back towards the Earth ages less than the Earth-bound twin. The premise to the paradox is faulty because the traveling twin is accelerating. Special relativity does not apply to accelerating frames of reference.
6. Time dilation is usually negligible at low relative velocities, but it does occur, and it has been verified by experiment.

### Conceptual Questions

### Problems & Exercises
# Special Relativity

## Length Contraction

### Learning Objectives

By the end of this section, you will be able to:
1. Describe proper length.
2. Calculate length contraction.
3. Explain why we don’t notice these effects at everyday scales.

Have you ever driven on a road that seems like it goes on forever? If you look ahead, you might say you have about 10 km left to go. Another traveler might say the road ahead looks like it’s about 15 km long. If you both measured the road, however, you would agree. Traveling at everyday speeds, the distance you both measure would be the same. You will read in this section, however, that this is not true at relativistic speeds. Close to the speed of light, distances measured are not the same when measured by different observers.

### Proper Length

One thing all observers agree upon is relative speed. Even though clocks measure different elapsed times for the same process, they still agree that relative speed, which is distance divided by elapsed time, is the same. This implies that distance, too, depends on the observer’s relative motion. If two observers see different times, then they must also see different distances for relative speed to be the same to each of them.

The muon discussed in illustrates this concept. To an observer on the Earth, the muon travels at $0.950c$ for $7.05\ \mu\text{s}$ from the time it is produced until it decays. Thus it travels a distance $L_0 = v\,\Delta t = (0.950)(3.00 \times 10^8\ \text{m/s})(7.05 \times 10^{-6}\ \text{s}) = 2.01\ \text{km}$ relative to the Earth. In the muon’s frame of reference, its lifetime is only $2.20\ \mu\text{s}$. It has enough time to travel only $L = v\,\Delta t_0 = (0.950)(3.00 \times 10^8\ \text{m/s})(2.20 \times 10^{-6}\ \text{s}) = 0.627\ \text{km}$. The distance between the same two events (production and decay of a muon) depends on who measures it and how they are moving relative to it. The Earth-bound observer measures the proper length $L_0$, because the points at which the muon is produced and decays are stationary relative to the Earth. To the muon, the Earth, air, and clouds are moving, and so the distance it sees is not the proper length.

### Length Contraction

To develop an equation relating distances measured by different observers, we note that the velocity relative to the Earth-bound observer in our muon example is given by

$$v = \frac{L_0}{\Delta t}.$$

The time relative to the Earth-bound observer is $\Delta t$, since the object being timed is moving relative to this observer. The velocity relative to the moving observer is given by

$$v = \frac{L}{\Delta t_0}.$$

The moving observer travels with the muon and therefore observes the proper time $\Delta t_0$. The two velocities are identical; thus,

$$\frac{L_0}{\Delta t} = \frac{L}{\Delta t_0}.$$

We know that $\Delta t = \gamma\,\Delta t_0$. Substituting this equation into the relationship above gives

$$L = \frac{L_0}{\gamma}.$$

Substituting for $\gamma$ gives an equation relating the distances measured by different observers:

$$L = L_0 \sqrt{1 - \frac{v^2}{c^2}}.$$

If we measure the length of anything moving relative to our frame, we find its length $L$ to be smaller than the proper length $L_0$ that would be measured if the object were stationary. For example, in the muon’s reference frame, the distance between the points where it was produced and where it decayed is shorter. Those points are fixed relative to the Earth but moving relative to the muon. Clouds and other objects are also contracted along the direction of motion in the muon’s reference frame.

People could be sent very large distances (thousands or even millions of light years) and age only a few years on the way if they traveled at extremely high velocities. But, like emigrants of centuries past, they would leave the Earth they know forever. Even if they returned, thousands to millions of years would have passed on the Earth, obliterating most of what now exists.
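The muon numbers above follow directly from the length-contraction relation. A minimal numerical sketch:

```python
import math

C = 3.00e8  # speed of light, m/s

def contracted_length(proper_length_m: float, v_over_c: float) -> float:
    """L = L0 * sqrt(1 - v^2 / c^2) = L0 / gamma."""
    return proper_length_m * math.sqrt(1.0 - v_over_c ** 2)

v = 0.950                      # muon speed as a fraction of c
L0 = v * C * 7.05e-6           # proper length of the trip in Earth's frame
print(f"L0 = {L0 / 1000:.2f} km")                        # 2.01 km
print(f"L  = {contracted_length(L0, v) / 1000:.3f} km")  # 0.627 km in the muon's frame
```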
There is also a more serious practical obstacle to traveling at such velocities; immensely greater energies than classical physics predicts would be needed to achieve such high velocities. This will be discussed in Relativistic Energy.

Why don’t we notice length contraction in everyday life? The distance to the grocery shop does not seem to depend on whether we are moving or not. Examining the equation $L = L_0 \sqrt{1 - \frac{v^2}{c^2}}$, we see that at low velocities ($v \ll c$) the lengths are nearly equal, the classical expectation. But length contraction is real, if not commonly experienced. For example, a charged particle, like an electron, traveling at relativistic velocity has electric field lines that are compressed along the direction of motion as seen by a stationary observer. (See .) As the electron passes a detector, such as a coil of wire, its field interacts much more briefly, an effect observed at particle accelerators such as the 3 km long Stanford Linear Accelerator (SLAC). In fact, to an electron traveling down the beam pipe at SLAC, the accelerator and the Earth are all moving by and are length contracted. The relativistic effect is so great that the accelerator is only 0.5 m long to the electron. It is actually easier to get the electron beam down the pipe, since the beam does not have to be as precisely aimed to get down a short pipe as it would down one 3 km long. This, again, is an experimental verification of the Special Theory of Relativity.

### Section Summary

1. All observers agree upon relative speed.
2. Distance depends on an observer’s motion. Proper length $L_0$ is the distance between two points measured by an observer who is at rest relative to both of the points. Earth-bound observers measure proper length when measuring the distance between two points that are stationary relative to the Earth.
3. Length contraction is the shortening of the measured length of an object moving relative to the observer’s frame: $L = L_0 \sqrt{1 - \frac{v^2}{c^2}} = \frac{L_0}{\gamma}$.

### Conceptual Questions

### Problems & Exercises
# Special Relativity

## Relativistic Addition of Velocities

### Learning Objectives

By the end of this section, you will be able to:
1. Calculate relativistic velocity addition.
2. Explain when relativistic velocity addition should be used instead of classical addition of velocities.
3. Calculate relativistic Doppler shift.

If you’ve ever seen a kayak move down a fast-moving river, you know that remaining in the same place would be hard. The river current pulls the kayak along. Pushing the oars back against the water can move the kayak forward in the water, but that only accounts for part of the velocity. The kayak’s motion is an example of classical addition of velocities. In classical physics, velocities add as vectors. The kayak’s velocity is the vector sum of its velocity relative to the water and the water’s velocity relative to the riverbank.

### Classical Velocity Addition

For simplicity, we restrict our consideration of velocity addition to one-dimensional motion. Classically, velocities add like regular numbers in one-dimensional motion. (See .) Suppose, for example, a girl is riding in a sled at a speed 1.0 m/s relative to an observer. She throws a snowball first forward, then backward at a speed of 1.5 m/s relative to the sled. We denote direction with plus and minus signs in one dimension; in this example, forward is positive. Let $v$ be the velocity of the sled relative to the Earth, $u$ the velocity of the snowball relative to the Earth-bound observer, and $u'$ the velocity of the snowball relative to the sled, so that classically $u = v + u'$. Thus, when the girl throws the snowball forward, $u = 1.0\ \text{m/s} + 1.5\ \text{m/s} = 2.5\ \text{m/s}$. It makes good intuitive sense that the snowball will head towards the Earth-bound observer faster, because it is thrown forward from a moving vehicle. When the girl throws the snowball backward, $u = 1.0\ \text{m/s} + (-1.5\ \text{m/s}) = -0.5\ \text{m/s}$. The minus sign means the snowball moves away from the Earth-bound observer.

### Relativistic Velocity Addition

The second postulate of relativity (verified by extensive experimental observation) says that classical velocity addition does not apply to light. Imagine a car traveling at night along a straight road, as in . If classical velocity addition applied to light, then the light from the car’s headlights would approach the observer on the sidewalk at a speed $u = v + c$. But we know that light will move away from the car at speed $c$ relative to the driver of the car, and light will move towards the observer on the sidewalk at speed $c$, too. The correct expression, relativistic velocity addition, is

$$u = \frac{v + u'}{1 + \frac{v u'}{c^2}}.$$

Velocities cannot add to greater than the speed of light, provided that $v$ is less than $c$ and $u'$ does not exceed $c$. The following example illustrates that relativistic velocity addition is not as symmetric as classical velocity addition.

### Doppler Shift

Although the speed of light does not change with relative velocity, the frequencies and wavelengths of light do. First discussed for sound waves, a Doppler shift occurs in any wave when there is relative motion between source and observer. In the Doppler equation,

$$\lambda_{\text{obs}} = \lambda_s \sqrt{\frac{1 + \frac{u}{c}}{1 - \frac{u}{c}}},$$

$\lambda_{\text{obs}}$ is the observed wavelength, $\lambda_s$ is the source wavelength, and $u$ is the relative velocity of the source to the observer. The velocity $u$ is positive for motion away from an observer and negative for motion toward an observer. In terms of source frequency and observed frequency, this equation can be written

$$f_{\text{obs}} = f_s \sqrt{\frac{1 - \frac{u}{c}}{1 + \frac{u}{c}}}.$$

Notice that the – and + signs are different than in the wavelength equation. The relativistic Doppler shift is easy to observe. This equation has everyday applications ranging from Doppler-shifted radar velocity measurements of transportation to Doppler-radar storm monitoring.
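The velocity-addition and Doppler relations above lend themselves to a quick numerical sketch; the speeds and the 550 nm source wavelength are illustrative values.

```python
C = 3.00e8  # speed of light, m/s

def add_velocities(v: float, u_prime: float) -> float:
    """Relativistic one-dimensional velocity addition: u = (v + u') / (1 + v u' / c^2)."""
    return (v + u_prime) / (1.0 + v * u_prime / C ** 2)

def doppler_wavelength(lam_source_m: float, u: float) -> float:
    """Observed wavelength; u > 0 for a source moving away from the observer."""
    return lam_source_m * ((1.0 + u / C) / (1.0 - u / C)) ** 0.5

print(add_velocities(0.500 * C, 0.750 * C) / C)  # 0.909..., not the classical 1.25
print(add_velocities(0.999 * C, 0.999 * C) / C)  # still less than 1
print(doppler_wavelength(550e-9, 0.100 * C))     # ~6.08e-07 m: green light red-shifted
```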
In astronomical observations, the relativistic Doppler shift provides velocity information such as the motion and distance of stars.

### Test Prep for AP Courses

### Section Summary

1. With classical velocity addition, velocities add like regular numbers in one-dimensional motion: $u = v + u'$, where $v$ is the velocity between two observers, $u$ is the velocity of an object relative to one observer, and $u'$ is the velocity relative to the other observer.
2. Velocities cannot add to be greater than the speed of light. Relativistic velocity addition describes the velocities of an object moving at a relativistic speed: $u = \frac{v + u'}{1 + \frac{v u'}{c^2}}$.
3. An observer of electromagnetic radiation sees relativistic Doppler effects if the source of the radiation is moving relative to the observer. The wavelength of the radiation is longer (called a red shift) than that emitted by the source when the source moves away from the observer and shorter (called a blue shift) when the source moves toward the observer. The shifted wavelength is described by the equation $\lambda_{\text{obs}} = \lambda_s \sqrt{\frac{1 + \frac{u}{c}}{1 - \frac{u}{c}}}$, where $\lambda_{\text{obs}}$ is the observed wavelength, $\lambda_s$ is the source wavelength, and $u$ is the relative velocity of the source to the observer.

### Conceptual Questions

### Problems & Exercises
# Special Relativity

## Relativistic Momentum

### Learning Objectives

By the end of this section, you will be able to:
1. Calculate relativistic momentum.
2. Explain why the only mass it makes sense to talk about is rest mass.

In classical physics, momentum is a simple product of mass and velocity. However, we saw in the last section that when special relativity is taken into account, massive objects have a speed limit. What effect do you think mass and velocity have on the momentum of objects moving at relativistic speeds?

Momentum is one of the most important concepts in physics. The broadest form of Newton’s second law is stated in terms of momentum. Momentum is conserved whenever the net external force on a system is zero. This makes momentum conservation a fundamental tool for analyzing collisions. All of Linear Momentum and Collisions is devoted to momentum, and momentum has been important for many other topics as well, particularly where collisions were involved. We will see that momentum has the same importance in modern physics. Relativistic momentum is conserved, and much of what we know about subatomic structure comes from the analysis of collisions of accelerator-produced relativistic particles.

The first postulate of relativity states that the laws of physics are the same in all inertial frames. Does the law of conservation of momentum survive this requirement at high velocities? The answer is yes, provided that the momentum is defined as follows:

$$p = \gamma m u,$$

where $m$ is the rest mass of the object, $u$ is its velocity relative to an observer, and $\gamma = \frac{1}{\sqrt{1 - \frac{u^2}{c^2}}}$ is the relativistic factor. Note that we use $u$ for velocity here to distinguish it from relative velocity $v$ between observers. Only one observer is being considered here. With $p$ defined in this way, total momentum is conserved whenever the net external force is zero, just as in classical physics. Again we see that the relativistic quantity becomes virtually the same as the classical one at low velocities. That is, relativistic momentum $\gamma m u$ becomes the classical $m u$ at low velocities, because $\gamma$ is very nearly equal to 1 at low velocities.

Relativistic momentum has the same intuitive feel as classical momentum. It is greatest for large masses moving at high velocities, but, because of the factor $\gamma$, relativistic momentum approaches infinity as $u$ approaches $c$. (See .) This is another indication that an object with mass cannot reach the speed of light. If it did, its momentum would become infinite, an unreasonable value.

Relativistic momentum is defined in such a way that the conservation of momentum will hold in all inertial frames. Whenever the net external force on a system is zero, relativistic momentum is conserved, just as is the case for classical momentum. This has been verified in numerous experiments. In Relativistic Energy, the relationship of relativistic momentum to energy is explored. That subject will produce our first inkling that objects without mass may also have momentum.

### Section Summary

1. The law of conservation of momentum is valid whenever the net external force is zero and for relativistic momentum. Relativistic momentum $p$ is classical momentum multiplied by the relativistic factor $\gamma$.
2. $p = \gamma m u$, where $m$ is the rest mass of the object, $u$ is its velocity relative to an observer, and the relativistic factor $\gamma = \frac{1}{\sqrt{1 - \frac{u^2}{c^2}}}$.
3. At low velocities, relativistic momentum is equivalent to classical momentum.
4. Relativistic momentum approaches infinity as $u$ approaches $c$. This implies that an object with mass cannot reach the speed of light.
5. Relativistic momentum is conserved, just as classical momentum is conserved.

### Conceptual Questions

### Problems & Exercises
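Since relativistic momentum differs from the classical value only by the factor $\gamma$, a short sketch makes the low-speed agreement and high-speed divergence visible; the unit mass is an illustrative choice.

```python
import math

C = 3.00e8  # speed of light, m/s

def gamma(u: float) -> float:
    return 1.0 / math.sqrt(1.0 - (u / C) ** 2)

def relativistic_momentum(m_kg: float, u: float) -> float:
    """p = gamma * m * u."""
    return gamma(u) * m_kg * u

m = 1.0  # kg (illustrative)
for frac in (0.01, 0.50, 0.90, 0.99):
    u = frac * C
    ratio = relativistic_momentum(m, u) / (m * u)   # this ratio is just gamma
    print(f"u = {frac:.2f}c: p_rel / p_classical = {ratio:.3f}")
# 1.000, 1.155, 2.294, 7.089: momentum grows without bound as u -> c.
```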
# Special Relativity ## Relativistic Energy ### Learning Objectives By the end of this section, you will be able to: 1. Compute total energy of a relativistic object. 2. Compute the kinetic energy of a relativistic object. 3. Describe rest energy, and explain how it can be converted to other forms. 4. Explain why massive particles cannot reach $c$. A tokamak is a form of experimental fusion reactor, which can change mass to energy. Accomplishing this requires an understanding of relativistic energy. Nuclear reactors are proof of the conservation of relativistic energy. Conservation of energy is one of the most important laws in physics. Not only does energy have many important forms, but each form can be converted to any other. We know that classically the total amount of energy in a system remains constant. Relativistically, energy is still conserved, provided its definition is altered to include the possibility of mass changing to energy, as in the reactions that occur within a nuclear reactor. Relativistic energy is intentionally defined so that it will be conserved in all inertial frames, just as is the case for relativistic momentum. As a consequence, we learn that several fundamental quantities are related in ways not known in classical physics. All of these relationships are verified by experiment and have fundamental consequences. The altered definition of energy contains some of the most fundamental and spectacular new insights into nature found in recent history. ### Total Energy and Rest Energy The first postulate of relativity states that the laws of physics are the same in all inertial frames. Einstein showed that the law of conservation of energy is valid relativistically, if we define energy to include a relativistic factor. Total energy $E$ is defined as $E = \gamma mc^2$, where $m$ is mass, $c$ is the speed of light, and $\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}$ for a mass moving at velocity $v$ relative to an observer. For an object at rest, $\gamma = 1$ and the total energy reduces to the rest energy $E_0 = mc^2$. This is the correct form of Einstein's most famous equation, which for the first time showed that energy is related to the mass of an object at rest. For example, if energy is stored in the object, its rest mass increases. This also implies that mass can be destroyed to release energy. The implications of these first two equations regarding relativistic energy are so broad that they were not completely recognized for some years after Einstein published them in 1907, nor was the experimental proof that they are correct widely recognized at first. Einstein, it should be noted, did understand and describe the meanings and implications of his theory. Today, the practical applications of the conversion of mass into another form of energy, such as in nuclear weapons and nuclear power plants, are well known. But examples also existed when Einstein first proposed the correct form of relativistic energy, and he did describe some of them. Nuclear radiation had been discovered in the previous decade, and it had been a mystery as to where its energy originated. The explanation was that, in certain nuclear processes, a small amount of mass is destroyed and energy is released and carried by nuclear radiation. But the amount of mass destroyed is so small that it is difficult to detect that any is missing. Although Einstein proposed this as the source of energy in the radioactive salts then being studied, it was many years before there was broad recognition that mass could be and, in fact, commonly is converted to energy. (See .) Because of the relationship of rest energy to mass, we now consider mass to be a form of energy rather than something separate. There had not even been a hint of this prior to Einstein's work.
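The scale of the rest energy $E_0 = mc^2$ is easy to underestimate. Here is a minimal Python sketch (the 1 gram mass and the TNT comparison are illustrative choices, not values from the text) showing why even a tiny mass stores an enormous energy:

```python
# Sketch: rest energy E0 = m*c^2 for a 1 gram mass.
C = 2.998e8              # speed of light (m/s)
m = 1.0e-3               # 1 gram, in kilograms
E0 = m * C**2            # rest energy, in joules
print(f"E0 = {E0:.2e} J")        # ~9.0e13 J
# For rough comparison, 1 kiloton of TNT releases about 4.2e12 J,
# so one gram of mass corresponds to roughly 20 kilotons. This is
# why tiny mass changes go unnoticed in ordinary energy transfers.
```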
Such conversion is now known to be the source of the Sun's energy, the energy of nuclear decay, and even the source of energy keeping Earth's interior hot. ### Stored Energy and Potential Energy What happens to energy stored in an object at rest, such as the energy put into a battery by charging it, or the energy stored in a toy gun's compressed spring? The energy input becomes part of the total energy of the object and, thus, increases its rest mass. All stored and potential energy becomes mass in a system. Why is it we don't ordinarily notice this? In fact, conservation of mass (meaning total mass is constant) was one of the great laws verified by 19th-century science. Why was it not noticed to be incorrect? The following example helps answer these questions. ### Kinetic Energy and the Ultimate Speed Limit Kinetic energy is energy of motion. Classically, kinetic energy has the familiar expression $\frac{1}{2}mv^2$. The relativistic expression for kinetic energy is obtained from the work-energy theorem. This theorem states that the net work on a system goes into kinetic energy. If our system starts from rest, then the work-energy theorem is $W_{\text{net}} = \text{KE}$. Relativistically, at rest we have rest energy $E_0 = mc^2$. The work increases this to the total energy $E = \gamma mc^2$. Thus, $W_{\text{net}} = E - E_0 = \gamma mc^2 - mc^2 = (\gamma - 1)mc^2$. Relativistically, we have $W_{\text{net}} = \text{KE}_{\text{rel}}$, so that the relativistic kinetic energy is $\text{KE}_{\text{rel}} = (\gamma - 1)mc^2$. When motionless, we have $v = 0$ and $\gamma = 1$, so that $\text{KE}_{\text{rel}} = 0$ at rest, as expected. But the expression for relativistic kinetic energy (such as total energy and rest energy) does not look much like the classical $\frac{1}{2}mv^2$. To show that the classical expression for kinetic energy is obtained at low velocities, we note that the binomial expansion for $\gamma$ at low velocities gives $\gamma \approx 1 + \frac{1}{2}\frac{v^2}{c^2}$. A binomial expansion is a way of expressing an algebraic quantity as a sum of an infinite series of terms. In some cases, as in the limit of small velocity here, most terms are very small. Thus the expression derived for $\gamma$ here is not exact, but it is a very accurate approximation. Thus, at low velocities, $\gamma - 1 \approx \frac{1}{2}\frac{v^2}{c^2}$. Entering this into the expression for relativistic kinetic energy gives $\text{KE}_{\text{rel}} = \left[\frac{1}{2}\frac{v^2}{c^2}\right]mc^2 = \frac{1}{2}mv^2 = \text{KE}_{\text{class}}$. So, in fact, relativistic kinetic energy does become the same as classical kinetic energy when $v \ll c$. It is even more interesting to investigate what happens to kinetic energy when the velocity of an object approaches the speed of light. We know that $\gamma$ becomes infinite as $v$ approaches $c$, so that $\text{KE}_{\text{rel}}$ also becomes infinite as the velocity approaches the speed of light. (See .) An infinite amount of work (and, hence, an infinite amount of energy input) is required to accelerate a mass to the speed of light. So the speed of light is the ultimate speed limit for any particle having mass. All of this is consistent with the fact that velocities less than $c$ always add to less than $c$. Both the relativistic form for kinetic energy and the ultimate speed limit being $c$ have been confirmed in detail in numerous experiments. No matter how much energy is put into accelerating a mass, its velocity can only approach—not reach—the speed of light. ### Relativistic Energy and Momentum We know classically that kinetic energy and momentum are related to each other, since $\text{KE}_{\text{class}} = \frac{p^2}{2m} = \frac{(mv)^2}{2m} = \frac{1}{2}mv^2$. Relativistically, we can obtain a relationship between energy and momentum by algebraically manipulating their definitions. This produces $E^2 = (pc)^2 + (mc^2)^2$, where $E$ is the relativistic total energy and $p$ is the relativistic momentum. This relationship between relativistic energy and relativistic momentum is more complicated than the classical, but we can gain some interesting new insights by examining it. First, total energy is related to momentum and rest mass.
At rest, momentum is zero, and the equation gives the total energy to be the rest energy $mc^2$ (so this equation is consistent with the discussion of rest energy above). However, as the mass is accelerated, its momentum $p$ increases, thus increasing the total energy. At sufficiently high velocities, the rest energy term $(mc^2)^2$ becomes negligible compared with the momentum term $(pc)^2$; thus, $E = pc$ at extremely relativistic velocities. If we consider momentum $p$ to be distinct from mass, we can determine the implications of the equation $E^2 = (pc)^2 + (mc^2)^2$ for a particle that has no mass. If we take $m$ to be zero in this equation, then $E = pc$, or $p = \frac{E}{c}$. Massless particles have this momentum. There are several massless particles found in nature, including photons (these are quanta of electromagnetic radiation). Another implication is that a massless particle must travel at speed $c$ and only at speed $c$. While it is beyond the scope of this text to examine the relationship in the equation $E^2 = (pc)^2 + (mc^2)^2$ in detail, we can see that the relationship has important implications in special relativity. ### Test Prep for AP Courses ### Section Summary 1. Relativistic energy is conserved as long as we define it to include the possibility of mass changing to energy. 2. Total energy is defined as $E = \gamma mc^2$, where $\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}$. 3. Rest energy is $E_0 = mc^2$, meaning that mass is a form of energy. If energy is stored in an object, its mass increases. Mass can be destroyed to release energy. 4. We do not ordinarily notice the increase or decrease in mass of an object because the change in mass is so small for a large increase in energy. 5. The relativistic work-energy theorem is $W_{\text{net}} = E - E_0 = \gamma mc^2 - mc^2 = (\gamma - 1)mc^2$. 6. Relativistically, $W_{\text{net}} = \text{KE}_{\text{rel}}$, where $\text{KE}_{\text{rel}}$ is the relativistic kinetic energy. 7. Relativistic kinetic energy is $\text{KE}_{\text{rel}} = (\gamma - 1)mc^2$, where $\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}$. At low velocities, relativistic kinetic energy reduces to classical kinetic energy. 8. No object with mass can attain the speed of light because an infinite amount of work and an infinite amount of energy input is required to accelerate a mass to the speed of light. 9. The equation $E^2 = (pc)^2 + (mc^2)^2$ relates the relativistic total energy $E$ and the relativistic momentum $p$. At extremely high velocities, the rest energy $mc^2$ becomes negligible, and $E = pc$. ### Conceptual Questions ### Problems & Exercises
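A short numerical check ties these results together. This minimal Python sketch (the electron mass is standard; the sample speeds are illustrative) verifies that $\text{KE}_{\text{rel}}$ reduces to $\frac{1}{2}mv^2$ at low speed and that $E^2 = (pc)^2 + (mc^2)^2$ holds with $E = \gamma mc^2$ and $p = \gamma mu$.

```python
# Sketch: check KE_rel -> (1/2) m v^2 at low speed, and the
# energy-momentum relation E^2 = (pc)^2 + (mc^2)^2.
C = 2.998e8       # speed of light (m/s)
M = 9.11e-31      # electron rest mass (kg)

def gamma(v):
    return 1.0 / (1.0 - (v / C) ** 2) ** 0.5

v = 0.001 * C                     # a "low" velocity
ke_rel = (gamma(v) - 1) * M * C**2
ke_class = 0.5 * M * v**2
print(ke_rel / ke_class)          # ~1.0000008 : the two expressions agree

v = 0.9 * C                       # a relativistic velocity
E = gamma(v) * M * C**2           # total energy
p = gamma(v) * M * v              # relativistic momentum
lhs = E**2
rhs = (p * C) ** 2 + (M * C**2) ** 2
print(lhs / rhs)                  # 1.0 to machine precision
```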
# Quantum Physics ## Introduction to Quantum Physics Quantum mechanics is the branch of physics needed to deal with submicroscopic objects. Because these objects are smaller than we can observe directly with our senses and generally must be observed with the aid of instruments, parts of quantum mechanics seem as foreign and bizarre as parts of relativity. But, like relativity, quantum mechanics has been shown to be valid—truth is often stranger than fiction. Certain aspects of quantum mechanics are familiar to us. We accept as fact that matter is composed of atoms, the smallest unit of an element, and that these atoms combine to form molecules, the smallest unit of a compound. (See .) While we cannot see the individual water molecules in a stream, for example, we are aware that this is because molecules are so small and so numerous in that stream. When introducing atoms, we commonly say that electrons orbit atoms in discrete shells around a tiny nucleus, itself composed of smaller particles called protons and neutrons. We are also aware that electric charge comes in tiny units carried almost entirely by electrons and protons. As with water molecules in a stream, we do not notice individual charges in the current through a lightbulb, because the charges are so small and so numerous in the macroscopic situations we sense directly. Atoms, molecules, and fundamental electron and proton charges are all examples of physical entities that are quantized—that is, they appear only in certain discrete values and do not have every conceivable value. Quantized is the opposite of continuous. We cannot have a fraction of an atom, or part of an electron’s charge, or 14-1/3 cents, for example. Rather, everything is built of integral multiples of these substructures. Quantum physics is the branch of physics that deals with small objects and the quantization of various entities, including energy and angular momentum. Just as with classical physics, quantum physics has several subfields, such as mechanics and the study of electromagnetic forces. The correspondence principle states that in the classical limit (large, slow-moving objects), quantum mechanics becomes the same as classical physics. In this chapter, we begin the development of quantum mechanics and its description of the strange submicroscopic world. In later chapters, we will examine many areas, such as atomic and nuclear physics, in which quantum mechanics is crucial.
# Quantum Physics ## Quantization of Energy ### Learning Objectives By the end of this section, you will be able to: 1. Explain Max Planck's contribution to the development of quantum mechanics. 2. Explain why atomic spectra indicate quantization. ### Planck's Contribution Energy is quantized in some systems, meaning that the system can have only certain energies and not a continuum of energies, unlike the classical case. This would be like having only certain speeds at which a car can travel because its kinetic energy can have only certain values. We also find that some forms of energy transfer take place with discrete lumps of energy. While most of us are familiar with the quantization of matter into lumps called atoms, molecules, and the like, we are less aware that energy, too, can be quantized. Some of the earliest clues about the necessity of quantum mechanics over classical physics came from the quantization of energy. Where is the quantization of energy observed? Let us begin by considering the emission and absorption of electromagnetic (EM) radiation. The EM spectrum radiated by a hot solid is linked directly to the solid's temperature. (See .) An ideal radiator is one that has an emissivity of 1 at all wavelengths and, thus, is jet black. Ideal radiators are therefore called blackbodies, and their EM radiation is called blackbody radiation. It was discussed that the total intensity of the radiation varies as the fourth power of the absolute temperature of the body, and that the peak of the spectrum shifts to shorter wavelengths at higher temperatures. All of this seems quite continuous, but it was the curve of the spectrum of intensity versus wavelength that gave a clue that the energies of the atoms in the solid are quantized. In fact, providing a theoretical explanation for the experimentally measured shape of the spectrum was a mystery at the turn of the century. When this "ultraviolet catastrophe" was eventually solved, the answers led to new technologies such as computers and the sophisticated imaging techniques described in earlier chapters. Once again, physics as an enabling science changed the way we live. The German physicist Max Planck (1858–1947) used the idea that atoms and molecules in a body act like oscillators to absorb and emit radiation. The energies of the oscillating atoms and molecules had to be quantized to correctly describe the shape of the blackbody spectrum. Planck deduced that the energy of an oscillator having a frequency $f$ is given by $E = \left(n + \frac{1}{2}\right)hf$. Here $n$ is any nonnegative integer (0, 1, 2, 3, …). The symbol $h$ stands for Planck's constant, given by $h = 6.626 \times 10^{-34}\ \text{J} \cdot \text{s}$. The equation means that an oscillator having a frequency $f$ (emitting and absorbing EM radiation of frequency $f$) can have its energy increase or decrease only in discrete steps of size $\Delta E = hf$. It might be helpful to mention some macroscopic analogies of this quantization of energy phenomenon. This is like a pendulum that has a characteristic oscillation frequency but can swing with only certain amplitudes. Quantization of energy also resembles a standing wave on a string that allows only particular harmonics described by integers. It is also similar to going up and down a hill using discrete stair steps rather than being able to move up and down a continuous slope. Your potential energy takes on discrete values as you move from step to step. Using the quantization of oscillators, Planck was able to correctly describe the experimentally known shape of the blackbody spectrum.
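Before moving on, the size of these energy steps is worth computing. Here is a minimal Python sketch (the $10^{14}$ Hz infrared frequency anticipates the worked example later in this section; treat the script as illustrative):

```python
# Sketch: Planck oscillator energies E = (n + 1/2) h f and the
# step size dE = h f for an infrared frequency.
H = 6.626e-34        # Planck's constant (J*s)
EV = 1.602e-19       # joules per electron volt
f = 1.0e14           # an infrared frequency (Hz)

def oscillator_energy(n, f):
    """Allowed energy of a Planck oscillator, n = 0, 1, 2, ..."""
    return (n + 0.5) * H * f

step = H * f                                       # step between levels
print(f"dE = {step:.2e} J = {step / EV:.2f} eV")   # ~6.6e-20 J ~ 0.41 eV
for n in range(3):
    print(f"n={n}: {oscillator_energy(n, f) / EV:.2f} eV")
# The step is a fraction of an eV: significant on atomic scales,
# utterly negligible for macroscopic (joule-scale) energies.
```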
This was the first indication that energy is sometimes quantized on a small scale and earned him the Nobel Prize in Physics in 1918. Although Planck's theory comes from observations of a macroscopic object, its analysis is based on atoms and molecules. It was such a revolutionary departure from classical physics that Planck himself was reluctant to accept his own idea that energy states are not continuous. The general acceptance of Planck's energy quantization was greatly enhanced by Einstein's explanation of the photoelectric effect (discussed in the next section), which took energy quantization a step further. Planck was fully involved in the development of both early quantum mechanics and relativity. He quickly embraced Einstein's special relativity, published in 1905, and in 1906 Planck was the first to suggest the correct formula for relativistic momentum, $p = \gamma mu$. Note that Planck's constant is a very small number. So for an infrared frequency of $10^{14}\ \text{Hz}$ being emitted by a blackbody, for example, the difference between energy levels is only $\Delta E = hf = (6.63 \times 10^{-34}\ \text{J} \cdot \text{s})(10^{14}\ \text{Hz}) = 6.63 \times 10^{-20}\ \text{J}$, or about 0.4 eV. This 0.4 eV of energy is significant compared with typical atomic energies, which are on the order of an electron volt, or thermal energies, which are typically fractions of an electron volt. But on a macroscopic or classical scale, energies are typically on the order of joules. Even if macroscopic energies are quantized, the quantum steps are too small to be noticed. This is an example of the correspondence principle. For a large object, quantum mechanics produces results indistinguishable from those of classical physics. ### Atomic Spectra Now let us turn our attention to the emission and absorption of EM radiation by gases. The Sun is the most common example of a body containing gases emitting an EM spectrum that includes visible light. We also see examples in neon signs and candle flames. Studies of emissions of hot gases began more than two centuries ago, and it was soon recognized that these emission spectra contained huge amounts of information. The type of gas and its temperature, for example, could be determined. We now know that these EM emissions come from electrons transitioning between energy levels in individual atoms and molecules; thus, they are called atomic spectra. Atomic spectra remain an important analytical tool today. shows an example of an emission spectrum obtained by passing an electric discharge through a material. One of the most important characteristics of these spectra is that they are discrete. By this we mean that only certain wavelengths, and hence frequencies, are emitted. This is called a line spectrum. If frequency and energy are associated as $E = hf$, the energies of the electrons in the emitting atoms and molecules are quantized. This is discussed in more detail later in this chapter. It was a major puzzle that atomic spectra are quantized. Some of the best minds of 19th-century science failed to explain why this might be. Not until the second decade of the 20th century did an answer based on quantum mechanics begin to emerge. Again a macroscopic or classical body of gas was involved in the studies, but the effect, as we shall see, is due to individual atoms and molecules. ### Test Prep for AP Courses ### Section Summary 1. The first indication that energy is sometimes quantized came from blackbody radiation, which is the emission of EM radiation by an object with an emissivity of 1. 2.
Planck recognized that the energy levels of the emitting atoms and molecules were quantized, with only the allowed values of $E = \left(n + \frac{1}{2}\right)hf$, where $n$ is any non-negative integer (0, 1, 2, 3, …). 3. $h$ is Planck's constant, whose value is $h = 6.626 \times 10^{-34}\ \text{J} \cdot \text{s}$. 4. Thus, the oscillatory absorption and emission energies of atoms and molecules in a blackbody could increase or decrease only in steps of size $\Delta E = hf$, where $f$ is the frequency of the oscillatory nature of the absorption and emission of EM radiation. 5. Another indication of energy levels being quantized in atoms and molecules comes from the lines in atomic spectra, which are the EM emissions of individual atoms and molecules. ### Conceptual Questions ### Problems & Exercises
# Quantum Physics ## The Photoelectric Effect ### Learning Objectives By the end of this section, you will be able to: 1. Describe a typical photoelectric-effect experiment. 2. Determine the maximum kinetic energy of photoelectrons ejected by photons of one energy or wavelength, when given the maximum kinetic energy of photoelectrons for a different photon energy or wavelength. When light strikes materials, it can eject electrons from them. This is called the photoelectric effect, meaning that light (photo) produces electricity. One common use of the photoelectric effect is in light meters, such as those that adjust the automatic iris on various types of cameras. In a similar way, another use is in solar cells, as you probably have in your calculator or have seen on a rooftop or a roadside sign. These make use of the photoelectric effect to convert light into electricity for running different devices. This effect has been known for more than a century and can be studied using a device such as that shown in . This figure shows an evacuated tube with a metal plate and a collector wire that are connected by a variable voltage source, with the collector more negative than the plate. When light (or other EM radiation) strikes the plate in the evacuated tube, it may eject electrons. If the electrons have energy in electron volts (eV) greater than the potential difference between the plate and the wire in volts, some electrons will be collected on the wire. Since the electron energy in eV is $qV$, where $q$ is the electron charge and $V$ is the potential difference, the electron energy can be measured by adjusting the retarding voltage between the wire and the plate. The voltage that stops the electrons from reaching the wire equals the energy in eV. For example, if $-3.00\ \text{V}$ barely stops the electrons, their energy is 3.00 eV. The number of electrons ejected can be determined by measuring the current between the wire and plate. The more light, the more electrons; a little circuitry allows this device to be used as a light meter. What is really important about the photoelectric effect is what Albert Einstein deduced from it. Einstein realized that there were several characteristics of the photoelectric effect that could be explained only if EM radiation is itself quantized: the apparently continuous stream of energy in an EM wave is actually composed of energy quanta called photons. In his explanation of the photoelectric effect, Einstein defined a quantized unit or quantum of EM energy, which we now call a photon, with an energy proportional to the frequency of EM radiation. In equation form, the photon energy is $E = hf$, where $E$ is the energy of a photon of frequency $f$ and $h$ is Planck's constant. This revolutionary idea looks similar to Planck's quantization of energy states in blackbody oscillators, but it is quite different. It is the quantization of EM radiation itself. EM waves are composed of photons and are not continuous smooth waves as described in previous chapters on optics. Their energy is absorbed and emitted in lumps, not continuously. This is exactly consistent with Planck's quantization of energy levels in blackbody oscillators, since these oscillators increase and decrease their energy in steps of $hf$ by absorbing and emitting photons having $E = hf$. We do not observe this with our eyes, because there are so many photons in common light sources that individual photons go unnoticed. (See .)
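Returning briefly to the measurement side of the experiment described above, the stopping-voltage idea amounts to a unit conversion. A minimal Python sketch (the 3.00 V figure repeats the example in this section):

```python
# Sketch: stopping-voltage measurement of photoelectron energy, E = qV.
Q = 1.602e-19          # magnitude of the electron charge (C)
V_stop = 3.00          # retarding voltage that just stops the electrons (V)

energy_eV = V_stop     # by definition of the electron volt
energy_J = Q * V_stop  # the same energy expressed in joules
print(f"{energy_eV:.2f} eV = {energy_J:.2e} J")   # 3.00 eV = 4.81e-19 J
```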
The next section of the text (Photon Energies and the Electromagnetic Spectrum) is devoted to a discussion of photons and some of their characteristics and implications. For now, we will use the photon concept to explain the photoelectric effect, much as Einstein did. The photoelectric effect has the properties discussed below. All these properties are consistent with the idea that individual photons of EM radiation are absorbed by individual electrons in a material, with the electron gaining the photon's energy. Some of these properties are inconsistent with the idea that EM radiation is a simple wave. For simplicity, let us consider what happens with monochromatic EM radiation in which all photons have the same energy $hf$. 1. If we vary the frequency of the EM radiation falling on a material, we find the following: For a given material, there is a threshold frequency $f_0$ for the EM radiation below which no electrons are ejected, regardless of intensity. Individual photons interact with individual electrons. Thus if the photon energy is too small to break an electron away, no electrons will be ejected. If EM radiation were a simple wave, sufficient energy could be obtained by increasing the intensity. 2. Once EM radiation falls on a material, electrons are ejected without delay. As soon as an individual photon of a sufficiently high frequency is absorbed by an individual electron, the electron is ejected. If the EM radiation were a simple wave, several minutes would be required for sufficient energy to be deposited to the metal surface to eject an electron. 3. The number of electrons ejected per unit time is proportional to the intensity of the EM radiation and to no other characteristic. High-intensity EM radiation consists of large numbers of photons per unit area, with all photons having the same characteristic energy $hf$. 4. If we vary the intensity of the EM radiation and measure the energy of ejected electrons, we find the following: The maximum kinetic energy of ejected electrons is independent of the intensity of the EM radiation. Since there are so many electrons in a material, it is extremely unlikely that two photons will interact with the same electron at the same time, thereby increasing the energy given it. Instead (as noted in 3 above), increased intensity results in more electrons of the same energy being ejected. If EM radiation were a simple wave, a higher intensity could give more energy, and higher-energy electrons would be ejected. 5. The kinetic energy of an ejected electron equals the photon energy minus the binding energy of the electron in the specific material. An individual photon can give all of its energy to an electron. The photon's energy is partly used to break the electron away from the material. The remainder goes into the ejected electron's kinetic energy. In equation form, this is given by $\text{KE}_e = hf - \text{BE}$, where $\text{KE}_e$ is the maximum kinetic energy of the ejected electron, $hf$ is the photon's energy, and BE is the binding energy of the electron to the particular material. (BE is sometimes called the work function of the material.) This equation, due to Einstein in 1905, explains the properties of the photoelectric effect quantitatively. An individual photon of EM radiation (it does not come any other way) interacts with an individual electron, supplying enough energy, BE, to break it away, with the remainder going to kinetic energy. The binding energy is $\text{BE} = hf_0$, where $f_0$ is the threshold frequency for the particular material.
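Einstein's equation is easy to apply numerically. This minimal Python sketch computes the maximum photoelectron energy and the threshold frequency; the 2.3 eV work function (roughly that of sodium) and the sample wavelengths are illustrative assumptions, not values from the text.

```python
# Sketch: photoelectric effect, KE_e = h*f - BE.
H_EV = 4.14e-15        # Planck's constant (eV*s)
C = 2.998e8            # speed of light (m/s)

def max_ke(wavelength_m, binding_energy_eV):
    """Maximum photoelectron kinetic energy in eV (negative => no ejection)."""
    photon_energy = H_EV * C / wavelength_m   # E = h*c/lambda, in eV
    return photon_energy - binding_energy_eV

BE = 2.3               # assumed work function (eV), roughly that of sodium
f0 = BE / H_EV         # threshold frequency, from BE = h*f0
print(f"threshold frequency ~ {f0:.2e} Hz")

for lam in (650e-9, 400e-9):   # red and violet light
    print(f"{lam * 1e9:.0f} nm -> KE_e = {max_ke(lam, BE):+.2f} eV")
# Red (650 nm, ~1.91 eV per photon) is below threshold: no electrons.
# Violet (400 nm, ~3.10 eV) ejects electrons with ~0.8 eV maximum KE.
```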
shows a graph of maximum $\text{KE}_e$ versus the frequency of incident EM radiation falling on a particular material. Einstein's idea that EM radiation is quantized was crucial to the beginnings of quantum mechanics. It is a far more general concept than its explanation of the photoelectric effect might imply. All EM radiation can also be modeled in the form of photons, and the characteristics of EM radiation are entirely consistent with this fact. (As we will see in the next section, many aspects of EM radiation, such as the hazards of ultraviolet (UV) radiation, can be explained only by photon properties.) More famous for modern relativity, Einstein planted an important seed for quantum mechanics in 1905, the same year he published his first paper on special relativity. His explanation of the photoelectric effect was the basis for the Nobel Prize awarded to him in 1921. Although his other contributions to theoretical physics were also noted in that award, special and general relativity were not fully recognized in spite of having been partially verified by experiment by 1921. Although hero-worshipped, this great man never received Nobel recognition for his most famous work—relativity. ### Test Prep for AP Courses ### Section Summary 1. The photoelectric effect is the process in which EM radiation ejects electrons from a material. 2. Einstein proposed photons to be quanta of EM radiation having energy $E = hf$, where $f$ is the frequency of the radiation. 3. All EM radiation is composed of photons. As Einstein explained, all characteristics of the photoelectric effect are due to the interaction of individual photons with individual electrons. 4. The maximum kinetic energy of ejected electrons (photoelectrons) is given by $\text{KE}_e = hf - \text{BE}$, where $hf$ is the photon energy and BE is the binding energy (or work function) of the electron to the particular material. ### Conceptual Questions ### Problems & Exercises
# Quantum Physics ## Photon Energies and the Electromagnetic Spectrum ### Learning Objectives By the end of this section, you will be able to: 1. Explain the relationship between the energy of a photon in joules or electron volts and its wavelength or frequency. 2. Calculate the number of photons per second emitted by a monochromatic source of specific wavelength and power. ### Ionizing Radiation A photon is a quantum of EM radiation. Its energy is given by $E = hf$ and is related to the frequency $f$ and wavelength $\lambda$ of the radiation by $E = hf = \frac{hc}{\lambda}$, where $E$ is the energy of a single photon and $c$ is the speed of light. When working with small systems, energy in eV is often useful. Note that Planck's constant in these units is $h = 4.14 \times 10^{-15}\ \text{eV} \cdot \text{s}$. Since many wavelengths are stated in nanometers (nm), it is also useful to know that $hc = 1240\ \text{eV} \cdot \text{nm}$. These will make many calculations a little easier. All EM radiation is composed of photons. shows various divisions of the EM spectrum plotted against wavelength, frequency, and photon energy. Previously in this book, photon characteristics were alluded to in the discussion of some of the characteristics of UV, x rays, and $\gamma$ rays, the first of which start with frequencies just above violet in the visible spectrum. It was noted that these types of EM radiation have characteristics much different than visible light. We can now see that such properties arise because photon energy is larger at high frequencies. Photons act as individual quanta and interact with individual electrons, atoms, molecules, and so on. The energy a photon carries is, thus, crucial to the effects it has. lists representative submicroscopic energies in eV. When we compare photon energies from the EM spectrum in with energies in the table, we can see how effects vary with the type of EM radiation. Gamma rays, a form of nuclear and cosmic EM radiation, can have the highest frequencies and, hence, the highest photon energies in the EM spectrum. For example, a $\gamma$-ray photon with $f = 10^{21}\ \text{Hz}$ has an energy $E = hf = 6.63 \times 10^{-13}\ \text{J} = 4.14\ \text{MeV}$. This is sufficient energy to ionize thousands of atoms and molecules, since only 10 to 1000 eV are needed per ionization. In fact, $\gamma$ rays are one type of ionizing radiation, as are x rays and UV, because they produce ionization in materials that absorb them. Because so much ionization can be produced, a single $\gamma$-ray photon can cause significant damage to biological tissue, killing cells or damaging their ability to properly reproduce. When cell reproduction is disrupted, the result can be cancer, one of the known effects of exposure to ionizing radiation. Since cancer cells are rapidly reproducing, they are exceptionally sensitive to the disruption produced by ionizing radiation. This means that ionizing radiation has positive uses in cancer treatment as well as risks in producing cancer. High photon energy also enables $\gamma$ rays to penetrate materials, since a collision with a single atom or molecule is unlikely to absorb all the $\gamma$ ray's energy. This can make $\gamma$ rays useful as a probe, and they are sometimes used in medical imaging. X rays, as you can see in , overlap with the low-frequency end of the $\gamma$-ray range. Since x rays have energies of keV and up, individual x-ray photons also can produce large amounts of ionization. At lower photon energies, x rays are not as penetrating as $\gamma$ rays and are slightly less hazardous. X rays are ideal for medical imaging, their most common use, and a fact that was recognized immediately upon their discovery in 1895 by the German physicist W. C. Roentgen (1845–1923). (See .)
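Since photon energy drives all of these effects, it is worth tabulating $E = hf$ across the spectrum. A minimal Python sketch (the sample frequencies are illustrative; the $10^{21}$ Hz entry reproduces the $\gamma$-ray example above):

```python
# Sketch: photon energy E = h*f across the EM spectrum.
H = 6.626e-34          # Planck's constant (J*s)
EV = 1.602e-19         # joules per electron volt

def photon_energy_eV(f):
    """Photon energy in eV for frequency f in Hz."""
    return H * f / EV

samples = {
    "FM radio (1e8 Hz)":   1e8,
    "visible (5e14 Hz)":   5e14,
    "x ray (1e18 Hz)":     1e18,
    "gamma ray (1e21 Hz)": 1e21,
}
for name, f in samples.items():
    print(f"{name:22s} {photon_energy_eV(f):.3e} eV")
# The 1e21 Hz gamma ray comes out at ~4.14e6 eV = 4.14 MeV,
# matching the worked value in the text.
```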
Within one year of their discovery, x rays (for a time called Roentgen rays) were used for medical diagnostics. Roentgen received the 1901 Nobel Prize for the discovery of x rays. While $\gamma$ rays originate in nuclear decay, x rays are produced by the process shown in . Electrons ejected by thermal agitation from a hot filament in a vacuum tube are accelerated through a high voltage, gaining kinetic energy from the electrical potential energy. When they strike the anode, the electrons convert their kinetic energy to a variety of forms, including thermal energy. But since an accelerated charge radiates EM waves, and since the electrons act individually, photons are also produced. Some of these x-ray photons obtain the kinetic energy of the electron. The accelerated electrons originate at the cathode, so such a tube is called a cathode ray tube (CRT), and various versions of them are found in older TV and computer screens as well as in x-ray machines. shows the spectrum of x rays obtained from an x-ray tube. There are two distinct features to the spectrum. First, the smooth distribution results from electrons being decelerated in the anode material. A curve like this is obtained by detecting many photons, and it is apparent that the maximum energy is unlikely. This decelerating process produces radiation that is called bremsstrahlung (German for braking radiation). The second feature is the existence of sharp peaks in the spectrum; these are called characteristic x rays, since they are characteristic of the anode material. Characteristic x rays come from atomic excitations unique to a given type of anode material. They are akin to lines in atomic spectra, implying the energy levels of atoms are quantized. Phenomena such as discrete atomic spectra and characteristic x rays are explored further in Atomic Physics. Ultraviolet radiation (approximately 4 eV to 300 eV) overlaps with the low end of the energy range of x rays, but UV is typically lower in energy. UV comes from the de-excitation of atoms that may be part of a hot solid or gas. These atoms can be given energy that they later release as UV by numerous processes, including electric discharge, nuclear explosion, thermal agitation, and exposure to x rays. A UV photon has sufficient energy to ionize atoms and molecules, which makes its effects different from those of visible light. UV thus has some of the same biological effects as $\gamma$ rays and x rays. For example, it can cause skin cancer and is used as a sterilizer. The major difference is that several UV photons are required to disrupt cell reproduction or kill a bacterium, whereas single $\gamma$-ray and x-ray photons can do the same damage. But since UV does have the energy to alter molecules, it can do what visible light cannot. One of the beneficial aspects of UV is that it triggers the production of vitamin D in the skin, whereas visible light has insufficient energy per photon to alter the molecules that trigger this production. Infantile jaundice is treated by exposing the baby to UV (with eye protection), called phototherapy, the beneficial effects of which are thought to be related to its ability to help prevent the buildup of potentially toxic bilirubin in the blood. ### Visible Light The range of photon energies for visible light from red to violet is 1.63 to 3.26 eV, respectively (left for this chapter's Problems and Exercises to verify). These energies are on the order of those between outer electron shells in atoms and molecules.
This means that these photons can be absorbed by atoms and molecules. A single photon can actually stimulate the retina, for example, by altering a receptor molecule that then triggers a nerve impulse. Photons can be absorbed or emitted only by atoms and molecules that have precisely the correct quantized energy step to do so. For example, if a red photon of frequency $f$ encounters a molecule that has an energy step, $\Delta E$, equal to $hf$, then the photon can be absorbed. Violet flowers absorb red and reflect violet; this implies there is no energy step between levels in the receptor molecule equal to the violet photon's energy, but there is an energy step for the red. There are some noticeable differences in the characteristics of light between the two ends of the visible spectrum that are due to photon energies. Red light has insufficient photon energy to expose most black-and-white film, and it is thus used to illuminate darkrooms where such film is developed. Since violet light has a higher photon energy, dyes that absorb violet tend to fade more quickly than those that do not. (See .) Take a look at some faded color posters in a storefront some time, and you will notice that the blues and violets are the last to fade. This is because other dyes, such as red and green dyes, absorb blue and violet photons, the higher energies of which break up their weakly bound molecules. (Complex molecules such as those in dyes and DNA tend to be weakly bound.) Blue and violet dyes reflect those colors and, therefore, do not absorb these more energetic photons, thus suffering less molecular damage. Transparent materials, such as some glasses, do not absorb any visible light, because there is no energy step in the atoms or molecules that could absorb the light. Since individual photons interact with individual atoms, it is nearly impossible to have two photons absorbed simultaneously to reach a large energy step. Because of its lower photon energy, visible light can sometimes pass through many kilometers of a substance, while higher frequencies like UV, x rays, and $\gamma$ rays are absorbed, because they have sufficient photon energy to ionize the material. ### Lower-Energy Photons Infrared radiation (IR) has even lower photon energies than visible light and cannot significantly alter atoms and molecules. IR can be absorbed and emitted by atoms and molecules, particularly between closely spaced states. IR is extremely strongly absorbed by water, for example, because water molecules have many states separated by energies on the order of $10^{-5}\ \text{eV}$ to $10^{-2}\ \text{eV}$, well within the IR and microwave energy ranges. This is why in the IR range, skin is almost jet black, with an emissivity near 1—there are many states in water molecules in the skin that can absorb a large range of IR photon energies. Not all molecules have this property. Air, for example, is nearly transparent to many IR frequencies. Microwaves are the highest frequencies that can be produced by electronic circuits, although they are also produced naturally. Thus microwaves are similar to IR but do not extend to as high frequencies. There are states in water and other molecules that have the same frequency and energy as microwaves, typically about $10^{-5}\ \text{eV}$. This is one reason why food absorbs microwaves more strongly than many other materials, making microwave ovens an efficient way of putting energy directly into food.
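Both ends of this energy ladder are quick to compute. A minimal Python sketch (the 380 nm and 760 nm visible-spectrum edges and the 2.45 GHz microwave frequency are conventional values assumed for illustration) verifies the 1.63 eV to 3.26 eV visible range quoted earlier and the roughly $10^{-5}$ eV microwave scale:

```python
# Sketch: photon energies at the visible-spectrum edges and for a microwave.
H_EV = 4.14e-15        # Planck's constant (eV*s)
C = 2.998e8            # speed of light (m/s)

for lam_nm in (760.0, 380.0):          # assumed red/violet edges
    E = H_EV * C / (lam_nm * 1e-9)     # E = h*c/lambda
    print(f"{lam_nm:.0f} nm -> {E:.2f} eV")
# 760 nm -> 1.63 eV and 380 nm -> 3.26 eV, the text's visible range.

f_mw = 2.45e9                          # assumed microwave-oven frequency (Hz)
print(f"2.45 GHz microwave -> {H_EV * f_mw:.1e} eV")   # ~1.0e-5 eV
```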
Photon energies for both IR and microwaves are so low that huge numbers of photons are involved in any significant energy transfer by IR or microwaves (such as warming yourself with a heat lamp or cooking pizza in the microwave). Visible light, IR, microwaves, and all lower frequencies cannot produce ionization with single photons and do not ordinarily have the hazards of higher frequencies. When visible, IR, or microwave radiation is hazardous, such as the inducement of cataracts by microwaves, the hazard is due to huge numbers of photons acting together (not to an accumulation of photons, such as sterilization by weak UV). The negative effects of visible, IR, or microwave radiation can be thermal effects, which could be produced by any heat source. But one difference is that at very high intensity, strong electric and magnetic fields can be produced by photons acting together. Such electromagnetic fields (EMF) can actually ionize materials. It is virtually impossible to detect individual photons having frequencies below microwave frequencies, because of their low photon energy. But the photons are there. A continuous EM wave can be modeled as photons. At low frequencies, EM waves are generally treated as time- and position-varying electric and magnetic fields with no discernible quantization. This is another example of the correspondence principle in situations involving huge numbers of photons. ### Test Prep for AP Courses ### Section Summary 1. Photon energy is responsible for many characteristics of EM radiation, being particularly noticeable at high frequencies. 2. Photons have both wave and particle characteristics. ### Conceptual Questions ### Problems & Exercises
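To see just how "huge" these photon numbers are, here is a minimal Python sketch (the 1 kW power and 2.45 GHz frequency are typical microwave-oven values assumed for illustration) counting photons per second:

```python
# Sketch: photons per second from a low-frequency (microwave) source.
H = 6.626e-34          # Planck's constant (J*s)
power = 1.0e3          # assumed source power: 1 kW (typical microwave oven)
f = 2.45e9             # assumed frequency: 2.45 GHz

energy_per_photon = H * f          # ~1.6e-24 J per photon
rate = power / energy_per_photon   # photons emitted per second
print(f"{rate:.2e} photons/s")     # ~6e26 per second -- individual
                                   # photons are hopelessly unnoticeable
```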
# Quantum Physics ## Photon Momentum ### Learning Objectives By the end of this section, you will be able to: 1. Relate the linear momentum of a photon to its energy or wavelength, and apply linear momentum conservation to simple processes involving the emission, absorption, or reflection of photons. 2. Account qualitatively for the increase of photon wavelength that is observed, and explain the significance of the Compton wavelength. ### Measuring Photon Momentum The quantum of EM radiation we call a photon has properties analogous to those of particles we can see, such as grains of sand. A photon interacts as a unit in collisions or when absorbed, rather than as an extensive wave. Massive quanta, like electrons, also act like macroscopic particles—something we expect, because they are the smallest units of matter. Particles carry momentum as well as energy. Despite photons having no mass, there has long been evidence that EM radiation carries momentum. (Maxwell and others who studied EM waves predicted that they would carry momentum.) It is now a well-established fact that photons do have momentum. In fact, photon momentum is suggested by the photoelectric effect, where photons knock electrons out of a substance. shows macroscopic evidence of photon momentum. shows a comet with two prominent tails. What most people do not know about the tails is that they always point away from the Sun rather than trailing behind the comet (like the tail of Bo Peep's sheep). Comet tails are composed of gases and dust evaporated from the body of the comet and ionized gas. The dust particles recoil away from the Sun when photons scatter from them. Evidently, photons carry momentum in the direction of their motion (away from the Sun), and some of this momentum is transferred to dust particles in collisions. Gas atoms and molecules in the blue tail are most affected by other particles of radiation, such as protons and electrons emanating from the Sun, rather than by the momentum of photons. Momentum is conserved in quantum mechanics just as it is in relativity and classical physics. Some of the earliest direct experimental evidence of this came from scattering of x-ray photons by electrons in substances, named Compton scattering after the American physicist Arthur H. Compton (1892–1962). Around 1923, Compton observed that x rays scattered from materials had a decreased energy and correctly analyzed this as being due to the scattering of photons from electrons. This phenomenon could be handled as a collision between two particles—a photon and an electron at rest in the material. Energy and momentum are conserved in the collision. (See .) He won a Nobel Prize in 1927 for the discovery of this scattering, now called the Compton effect, because it helped prove that photon momentum is given by $p = \frac{h}{\lambda}$, where $h$ is Planck's constant and $\lambda$ is the photon wavelength. (Note that relativistic momentum given as $p = \gamma mu$ is valid only for particles having mass.) We can see that photon momentum is small, since $p = \frac{h}{\lambda}$ and $h$ is very small. It is for this reason that we do not ordinarily observe photon momentum. Our mirrors do not recoil when light reflects from them (except perhaps in cartoons). Compton saw the effects of photon momentum because he was observing x rays, which have a small wavelength and a relatively large momentum, interacting with the lightest of particles, the electron.
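A quick calculation shows why mirrors do not visibly recoil while Compton's electrons did. This minimal Python sketch (the sample wavelengths are illustrative choices) compares an x-ray photon's momentum with a visible photon's and estimates the resulting electron recoil:

```python
# Sketch: photon momentum p = h / lambda.
H = 6.626e-34          # Planck's constant (J*s)
M_E = 9.11e-31         # electron mass (kg)

def photon_momentum(lam):
    """Photon momentum in kg*m/s for wavelength lam in meters."""
    return H / lam

p_visible = photon_momentum(500e-9)   # green light
p_xray = photon_momentum(0.05e-9)     # a hard x ray, roughly Compton's regime
print(f"visible: {p_visible:.2e} kg*m/s")   # ~1.3e-27
print(f"x ray:   {p_xray:.2e} kg*m/s")      # ~1.3e-23, 10^4 times larger

# An electron absorbing all of the x-ray photon's momentum would recoil at
# roughly (ignoring relativistic corrections):
v_recoil = p_xray / M_E
print(f"electron recoil ~ {v_recoil:.1e} m/s")   # ~1.5e7 m/s -- detectable
```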
### Relativistic Photon Momentum There is a relationship between photon momentum and photon energy that is consistent with the relation given previously for the relativistic total energy of a particle as $E^2 = (pc)^2 + (mc^2)^2$. We know $m$ is zero for a photon, but $p$ is not, so that $E^2 = (pc)^2 + (mc^2)^2$ becomes $E = pc$, or $p = \frac{E}{c}$. To check the validity of this relation, note that $E = hf$ and $\lambda = \frac{c}{f}$ for a photon. Substituting these into $p = \frac{E}{c}$ yields $p = \frac{hf}{c} = \frac{h}{\lambda}$, as determined experimentally and discussed above. Thus, $p = \frac{E}{c}$ is equivalent to Compton's result $p = \frac{h}{\lambda}$. For a further verification of the relationship between photon energy and momentum, see . ### Test Prep for AP Courses ### Section Summary 1. Photons have momentum, given by $p = \frac{h}{\lambda}$, where $\lambda$ is the photon wavelength. 2. Photon energy and momentum are related by $E^2 = (pc)^2 + (mc^2)^2$, where $m = 0$ for a photon, so that $E = pc$. ### Conceptual Questions ### Problems & Exercises
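The equivalence $p = \frac{E}{c} = \frac{h}{\lambda}$ can also be checked numerically. A minimal Python sketch (the 500 nm wavelength is an arbitrary choice):

```python
# Sketch: verify p = E/c equals p = h/lambda for a photon.
H = 6.626e-34          # Planck's constant (J*s)
C = 2.998e8            # speed of light (m/s)
lam = 500e-9           # any wavelength will do

f = C / lam            # photon frequency
E = H * f              # photon energy, E = h*f
print(E / C)           # p = E/c
print(H / lam)         # p = h/lambda -- the two agree to machine precision
```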