# Linear Momentum and Collisions
## Inelastic Collisions in One Dimension
### Learning Objectives
By the end of this section, you will be able to:
1. Define inelastic collision.
2. Explain perfectly inelastic collision.
3. Apply an understanding of collisions to sports.
4. Determine recoil velocity and loss in kinetic energy given mass and initial velocity.
We have seen that in an elastic collision, internal kinetic energy is conserved. An inelastic collision is one in which the internal kinetic energy changes (it is not conserved). This lack of conservation means that the forces between colliding objects may remove or add internal kinetic energy. Work done by internal forces may change the forms of energy within a system. For inelastic collisions, such as when colliding objects stick together, this internal work may transform some internal kinetic energy into heat transfer. Or it may convert stored energy into internal kinetic energy, such as when exploding bolts separate a satellite from its launch vehicle.
The figure shows an example of an inelastic collision. Two objects that have equal masses $m$ head toward one another at equal speeds $v$ and then stick together. Their total internal kinetic energy is initially $\frac{1}{2}mv^2 + \frac{1}{2}mv^2 = mv^2$. The two objects come to rest after sticking together, conserving momentum. But the internal kinetic energy is zero after the collision. A collision in which the objects stick together is sometimes called a perfectly inelastic collision because it reduces internal kinetic energy more than does any other type of inelastic collision. In fact, such a collision reduces internal kinetic energy to the minimum it can have while still conserving momentum.
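As a concrete illustration, here is a minimal Python sketch (not from the text; the masses and speeds are assumed values) that computes the final velocity and the internal kinetic energy lost in a perfectly inelastic one-dimensional collision.

```python
def perfectly_inelastic(m1, v1, m2, v2):
    """Return final velocity and kinetic energy lost when two objects stick together."""
    v_final = (m1 * v1 + m2 * v2) / (m1 + m2)   # conservation of momentum
    ke_initial = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
    ke_final = 0.5 * (m1 + m2) * v_final**2
    return v_final, ke_initial - ke_final

# Example: equal masses approaching at equal speeds (assumed 1.0 kg at +/- 2.0 m/s)
v_f, ke_lost = perfectly_inelastic(1.0, 2.0, 1.0, -2.0)
print(v_f, ke_lost)   # 0.0 m/s; all 4.0 J of internal kinetic energy is lost
```

The same function applies to any one-dimensional sticking collision, including recoil problems in which one object is initially at rest.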
### Test Prep for AP Courses
### Section Summary
1. An inelastic collision is one in which the internal kinetic energy changes (it is not conserved).
2. A collision in which the objects stick together is sometimes called perfectly inelastic because it reduces internal kinetic energy more than does any other type of inelastic collision.
3. Sports science and technology also use physics concepts such as momentum, rotational motion, and vibration.
### Conceptual Questions
### Problems & Exercises
# Linear Momentum and Collisions
## Collisions of Point Masses in Two Dimensions
### Learning Objectives
By the end of this section, you will be able to:
1. Discuss two-dimensional collisions as an extension of one-dimensional analysis.
2. Define point masses.
3. Derive an expression for conservation of momentum along the x-axis and y-axis.
4. Describe elastic collisions of two objects with equal mass.
5. Determine the magnitude and direction of the final velocity given the initial velocity and scattering angle.
In the previous two sections, we considered only one-dimensional collisions; during such collisions, the incoming and outgoing velocities are all along the same line. But what about collisions, such as those between billiard balls, in which objects scatter to the side? These are two-dimensional collisions, and we shall see that their study is an extension of the one-dimensional analysis already presented. The approach taken (similar to the approach in discussing two-dimensional kinematics and dynamics) is to choose a convenient coordinate system and resolve the motion into components along perpendicular axes. Resolving the motion yields a pair of one-dimensional problems to be solved simultaneously.
One complication arising in two-dimensional collisions is that the objects might rotate before or after their collision. For example, if two ice skaters hook arms as they pass by one another, they will spin in circles. We will not consider such rotation until later, and so for now we arrange things so that no rotation is possible. To avoid rotation, we consider only the scattering of point masses—that is, structureless particles that cannot rotate or spin.
We start by assuming that $\mathbf{F}_{\text{net}} = 0$, so that momentum $\mathbf{p}$ is conserved. The simplest collision is one in which one of the particles is initially at rest. (See the figure.) The best choice for a coordinate system is one with an axis parallel to the velocity of the incoming particle, as shown in the figure. Because momentum is conserved, the components of momentum along the $x$- and $y$-axes ($p_x$ and $p_y$) will also be conserved, but with the chosen coordinate system, $p_y$ is initially zero and $p_x$ is the momentum of the incoming particle. Both facts simplify the analysis. (Even with the simplifying assumptions of point masses, one particle initially at rest, and a convenient coordinate system, we still gain new insights into nature from the analysis of two-dimensional collisions.)
Along the $x$-axis, the equation for conservation of momentum is
$$p_{1x} + p_{2x} = p'_{1x} + p'_{2x},$$
where the subscripts denote the particles and axes and the primes denote the situation after the collision. In terms of masses and velocities, this equation is
$$m_1 v_{1x} + m_2 v_{2x} = m_1 v'_{1x} + m_2 v'_{2x}.$$
But because particle 2 is initially at rest, this equation becomes
$$m_1 v_{1x} = m_1 v'_{1x} + m_2 v'_{2x}.$$
The components of the velocities along the $x$-axis have the form $v \cos \theta$. Because particle 1 initially moves along the $x$-axis, we find $v_{1x} = v_1$.
Conservation of momentum along the $x$-axis gives the following equation:
$$m_1 v_1 = m_1 v'_1 \cos \theta_1 + m_2 v'_2 \cos \theta_2,$$
where $\theta_1$ and $\theta_2$ are the angles of the outgoing particles, as shown in the figure.
Along the $y$-axis, the equation for conservation of momentum is
$$p_{1y} + p_{2y} = p'_{1y} + p'_{2y}$$
or
$$m_1 v_{1y} + m_2 v_{2y} = m_1 v'_{1y} + m_2 v'_{2y}.$$
But $v_{1y}$ is zero, because particle 1 initially moves along the $x$-axis. Because particle 2 is initially at rest, $v_{2y}$ is also zero. The equation for conservation of momentum along the $y$-axis becomes
$$0 = m_1 v'_{1y} + m_2 v'_{2y}.$$
The components of the velocities along the $y$-axis have the form $v \sin \theta$.
Thus, conservation of momentum along the $y$-axis gives the following equation:
$$0 = m_1 v'_1 \sin \theta_1 + m_2 v'_2 \sin \theta_2.$$
The equations of conservation of momentum along the $x$-axis and $y$-axis are very useful in analyzing two-dimensional collisions of particles, where one is originally stationary (a common laboratory situation). But two equations can only be used to find two unknowns, and so other data may be necessary when collision experiments are used to explore nature at the subatomic level.
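The following Python sketch (an illustration, not part of the text; the numerical values are assumed) applies the two momentum-conservation equations above to that common laboratory situation: particle 2 is initially at rest, and the speed and scattering angle of particle 1 are measured after the collision, so the $x$- and $y$-equations can be solved for the final velocity of particle 2.

```python
import math

def struck_particle(m1, v1, m2, v1_prime, theta1_deg):
    """Solve momentum conservation along x and y for particle 2 (initially at rest)."""
    theta1 = math.radians(theta1_deg)
    # x: m1*v1 = m1*v1'*cos(theta1) + m2*v2x'
    v2x = (m1 * v1 - m1 * v1_prime * math.cos(theta1)) / m2
    # y: 0 = m1*v1'*sin(theta1) + m2*v2y'
    v2y = -m1 * v1_prime * math.sin(theta1) / m2
    speed = math.hypot(v2x, v2y)
    angle = math.degrees(math.atan2(v2y, v2x))
    return speed, angle

# Assumed numbers: a 0.250 kg object at 2.00 m/s strikes a 0.400 kg object at rest
# and scatters at 45.0 degrees with speed 1.50 m/s.
print(struck_particle(0.250, 2.00, 0.400, 1.50, 45.0))
```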
### Elastic Collisions of Two Objects with Equal Mass
Some interesting situations arise when the two colliding objects have equal mass and the collision is elastic. This situation is nearly the case with colliding billiard balls, and precisely the case with some subatomic particle collisions. We can thus get a mental image of a collision of subatomic particles by thinking about billiards (or pool). (Refer to the figure for masses and angles.) First, an elastic collision conserves internal kinetic energy. Again, let us assume object 2 is initially at rest. Then, the internal kinetic energy before and after the collision of two objects that have equal masses is
$$\frac{1}{2}mv_1^2 = \frac{1}{2}mv_1'^2 + \frac{1}{2}mv_2'^2.$$
Because the masses are equal, $m_1 = m_2 = m$. Algebraic manipulation (left to the reader) of conservation of momentum in the $x$- and $y$-directions can show that
$$\frac{1}{2}mv_1^2 = \frac{1}{2}mv_1'^2 + \frac{1}{2}mv_2'^2 + mv_1'v_2'\cos(\theta_1 - \theta_2).$$
(Remember that $\theta_2$ is negative here.) The two preceding equations can both be true only if
$$m v_1' v_2' \cos(\theta_1 - \theta_2) = 0.$$
There are three ways that this term can be zero. They are
1. $v_1' = 0$: head-on collision; incoming ball stops
2. $v_2' = 0$: no collision; incoming ball continues unaffected
3. $\cos(\theta_1 - \theta_2) = 0$: angle of separation $(\theta_1 - \theta_2)$ is $90^\circ$ after the collision
All three of these ways are familiar occurrences in billiards and pool, although most of us try to avoid the second. If you play enough pool, you will notice that the angle between the balls is very close to $90^\circ$ after the collision, although it will vary from this value if a great deal of spin is placed on the ball. (Large spin carries in extra energy and a quantity called angular momentum, which must also be conserved.) The assumption that the scattering of billiard balls is elastic is reasonable based on the correctness of the three results it produces. This assumption also implies that, to a good approximation, momentum is conserved for the two-ball system in billiards and pool. The problems below explore these and other characteristics of two-dimensional collisions.
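A short Python check (a sketch with assumed numbers, not from the text) of the third case: for an elastic collision between equal masses with the target at rest, choosing any scattering angle $\theta_1$ and requiring both momentum and kinetic-energy conservation forces the two outgoing velocities to be perpendicular.

```python
import math

m, v1 = 0.17, 3.0            # assumed: ball mass (kg) and incoming speed (m/s)
theta1 = math.radians(30.0)  # assumed scattering angle of the incoming ball

# For equal masses with the target at rest, an elastic collision gives
v1p = v1 * math.cos(theta1)          # speed of incoming ball after collision
v2p = v1 * math.sin(theta1)          # speed of struck ball after collision
theta2 = theta1 - math.radians(90.0) # struck ball leaves 90 degrees from the other ball

# Verify momentum (x and y) and internal kinetic energy are conserved
px = m * v1p * math.cos(theta1) + m * v2p * math.cos(theta2)
py = m * v1p * math.sin(theta1) + m * v2p * math.sin(theta2)
ke = 0.5 * m * v1p**2 + 0.5 * m * v2p**2
print(px, m * v1, py, ke, 0.5 * m * v1**2)   # px equals m*v1, py is 0, ke is unchanged
```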
### Test Prep for AP Courses
### Section Summary
1. The approach to two-dimensional collisions is to choose a convenient coordinate system and break the motion into components along perpendicular axes. Choose a coordinate system with the $x$-axis parallel to the velocity of the incoming particle.
2. Two-dimensional collisions of point masses where mass 2 is initially at rest conserve momentum along the initial direction of mass 1 (the $x$-axis), stated by $m_1 v_1 = m_1 v'_1 \cos \theta_1 + m_2 v'_2 \cos \theta_2$, and along the direction perpendicular to the initial direction (the $y$-axis), stated by $0 = m_1 v'_{1y} + m_2 v'_{2y}$.
3. The internal kinetic energy before and after the collision of two objects that have equal masses is
$$\frac{1}{2}mv_1^2 = \frac{1}{2}mv_1'^2 + \frac{1}{2}mv_2'^2.$$
4. Point masses are structureless particles that cannot spin.
### Conceptual Questions
### Problems & Exercises
# Linear Momentum and Collisions
## Introduction to Rocket Propulsion
### Learning Objectives
By the end of this section, you will be able to:
1. State Newton’s third law of motion.
2. Explain the principle involved in propulsion of rockets and jet engines.
3. Derive an expression for the acceleration of the rocket and discuss the factors that affect the acceleration.
4. Describe the function of a space shuttle.
Rockets range in size from fireworks so small that ordinary people use them to immense Saturn Vs that once propelled massive payloads toward the Moon. The propulsion of all rockets, jet engines, deflating balloons, and even squids and octopuses is explained by the same physical principle—Newton’s third law of motion. Matter is forcefully ejected from a system, producing an equal and opposite reaction on what remains. Another common example is the recoil of a gun. The gun exerts a force on a bullet to accelerate it and consequently experiences an equal and opposite force, causing the gun’s recoil or kick.
The figure shows a rocket accelerating straight up. In part (a), the rocket has a mass $m$ and a velocity $v$ relative to Earth, and hence a momentum $mv$. In part (b), a time $\Delta t$ has elapsed in which the rocket has ejected a mass $\Delta m$ of hot gas at a velocity $v_e$ relative to the rocket. The remainder of the mass $(m - \Delta m)$ now has a greater velocity $(v + \Delta v)$. The momentum of the entire system (rocket plus expelled gas) has actually decreased because the force of gravity has acted for a time $\Delta t$, producing a negative impulse $\Delta p = -mg\Delta t$. (Remember that impulse is the net external force on a system multiplied by the time it acts, and it equals the change in momentum of the system.) So, the center of mass of the system is in free fall but, by rapidly expelling mass, part of the system can accelerate upward. It is a commonly held misconception that the rocket exhaust pushes on the ground. If we consider thrust, that is, the force exerted on the rocket by the exhaust gases, then a rocket’s thrust is greater in outer space than in the atmosphere or on the launch pad. In fact, gases are easier to expel into a vacuum.
By calculating the change in momentum for the entire system over $\Delta t$, and equating this change to the impulse, the following expression can be shown to be a good approximation for the acceleration of the rocket:
$$a = \frac{v_e}{m}\frac{\Delta m}{\Delta t} - g,$$
where $a$ is the acceleration of the rocket, $v_e$ is the speed of the exhaust relative to the rocket, $m$ is the mass of the rocket, and $\Delta m$ is the mass of gas ejected in time $\Delta t$. “The rocket” is that part of the system remaining after the gas is ejected, and $g$ is the acceleration due to gravity.
A rocket’s acceleration depends on three major factors, consistent with the equation for acceleration of a rocket. First, the greater the exhaust velocity of the gases relative to the rocket, $v_e$, the greater the acceleration is. The practical limit for $v_e$ is about $2.5 \times 10^3$ m/s for conventional (non-nuclear) hot-gas propulsion systems. The second factor is the rate at which mass is ejected from the rocket. This is the factor $\Delta m / \Delta t$ in the equation. The quantity $(\Delta m / \Delta t)v_e$, with units of newtons, is called “thrust.” The faster the rocket burns its fuel, the greater its thrust, and the greater its acceleration. The third factor is the mass $m$ of the rocket. The smaller the mass is (all other factors being the same), the greater the acceleration. The rocket mass decreases dramatically during flight because most of the rocket is fuel to begin with, so that acceleration increases continuously, reaching a maximum just before the fuel is exhausted.
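A small Python sketch of the acceleration equation above follows; the numbers are assumed values of roughly Saturn V scale, chosen only to illustrate the formula.

```python
def rocket_acceleration(v_e, m, dm_dt, g=9.80):
    """a = (v_e / m) * (dm/dt) - g : acceleration of a rocket near Earth's surface."""
    thrust = v_e * dm_dt          # thrust in newtons
    return thrust / m - g, thrust

# Assumed values, roughly the scale of a large launch vehicle at liftoff:
a, thrust = rocket_acceleration(v_e=2.40e3, m=2.80e6, dm_dt=1.40e4)
print(a, thrust)   # about 2.2 m/s^2 and 3.4e7 N
```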
To achieve the high speeds needed to hop continents, obtain orbit, or escape Earth’s gravity altogether, the mass of the rocket other than fuel must be as small as possible. It can be shown that, in the absence of air resistance and neglecting gravity, the final velocity of a one-stage rocket initially at rest is
$$v = v_e \ln\frac{m_0}{m_r},$$
where $\ln(m_0/m_r)$ is the natural logarithm of the ratio of the initial mass of the rocket ($m_0$) to what is left ($m_r$) after all of the fuel is exhausted. (Note that $v$ is actually the change in velocity, so the equation can be used for any segment of the flight. If we start from rest, the change in velocity equals the final velocity.) For example, let us calculate the mass ratio needed to escape Earth’s gravity starting from rest, given that the escape velocity from Earth is about $11.2 \times 10^3$ m/s, and assuming an exhaust velocity $v_e = 2.5 \times 10^3$ m/s. Then
$$\ln\frac{m_0}{m_r} = \frac{v}{v_e} = \frac{11.2 \times 10^3 \text{ m/s}}{2.5 \times 10^3 \text{ m/s}} = 4.48.$$
Solving for $m_0/m_r$ gives
$$\frac{m_0}{m_r} = e^{4.48} = 88.$$
Thus, the mass of the rocket is
$$m_r = \frac{m_0}{88}.$$
This result means that only $1/88$ of the initial mass is left when the fuel is burnt, and $87/88$ of the initial mass was fuel. Expressed as percentages, 98.9% of the rocket is fuel, while payload, engines, fuel tanks, and other components make up only 1.10%. Taking air resistance and gravitational force into account, the mass remaining can only be about $m_0/180$. It is difficult to build a rocket in which the fuel has a mass 180 times everything else. The solution is multistage rockets. Each stage only needs to achieve part of the final velocity and is discarded after it burns its fuel. The result is that each successive stage can have smaller engines and more payload relative to its fuel. Once out of the atmosphere, the ratio of payload to fuel becomes more favorable, too.
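The mass-ratio estimate above can be reproduced with a few lines of Python (a sketch; the exhaust velocity is the assumed value used in the estimate, and gravity and air resistance are neglected):

```python
import math

v_escape = 11.2e3   # m/s, approximate escape velocity from Earth
v_e = 2.5e3         # m/s, assumed exhaust velocity for a conventional hot-gas engine

mass_ratio = math.exp(v_escape / v_e)   # m0 / m_r from v = v_e * ln(m0 / m_r)
fuel_fraction = 1.0 - 1.0 / mass_ratio
print(mass_ratio, fuel_fraction)        # about 88 and 0.989 (98.9% fuel)
```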
The space shuttle was an attempt at an economical vehicle with some reusable parts, such as the solid fuel boosters and the craft itself. (See ) The shuttle’s need to be operated by humans, however, made it at least as costly for launching satellites as expendable, unpiloted rockets. Ideally, the shuttle would only have been used when human activities were required for the success of a mission, such as the repair of the Hubble space telescope. Rockets with satellites can also be launched from airplanes. Using airplanes has the double advantage that the initial velocity is significantly above zero and a rocket can avoid most of the atmosphere’s resistance.
### Section Summary
1. Newton’s third law of motion states that to every action, there is an equal and opposite reaction.
2. Acceleration of a rocket is $a = \frac{v_e}{m}\frac{\Delta m}{\Delta t} - g$.
3. A rocket’s acceleration depends on three main factors: the exhaust velocity of the gases relative to the rocket, the rate at which mass is ejected from the rocket, and the mass of the rocket.
### Conceptual Questions
### Problems & Exercises
# Statics and Torque
## Connection for AP® Courses
What might desks, bridges, buildings, trees, and mountains have in common? What do these objects have in common with a car moving at a constant velocity? While it may be apparent that the objects in the first group are all motionless relative to Earth, they also share something with the moving car and all objects moving at a constant velocity. All of these objects, stationary and moving, share an acceleration of zero. How can this be? Consider Newton's second law, F = ma. When acceleration is zero, as is the case for both stationary objects and objects moving at a constant velocity, the net external force must also be zero (Big Idea 3). Forces are acting on both stationary objects and on objects moving at a constant velocity, but the forces are balanced. That is, they are in equilibrium. In equilibrium, the net force is zero.
The first two sections of this chapter will focus on the two conditions necessary for equilibrium. They will not only help you to distinguish between stationary bridges and cars moving at constant velocity, but will introduce a second equilibrium condition, this time involving rotation. As you explore the second equilibrium condition, you will learn about torque, in support of both Enduring Understanding 3.F and Essential Knowledge 3.F.1. Much like a force, torque provides the capability for acceleration; however, with careful attention, torques may also be balanced and equilibrium can be reached.
The remainder of this chapter will discuss a variety of interesting equilibrium applications. From the art of balancing, to simple machines, to the muscles in your body, the world around you relies upon the principles of equilibrium to remain stable. This chapter will help you to see just how closely related these events truly are.
The content in this chapter supports:
Big Idea 3 The interactions of an object with other objects can be described by forces.
Enduring Understanding 3.F A force exerted on an object can cause a torque on that object.
Essential Knowledge 3.F.1 Only the force component perpendicular to the line connecting the axis of rotation and the point of application of the force results in a torque about that axis.
# Statics and Torque
## The First Condition for Equilibrium
### Learning Objectives
By the end of this section, you will be able to:
1. State the first condition of equilibrium.
2. Explain static equilibrium.
3. Explain dynamic equilibrium.
The first condition necessary to achieve equilibrium is the one already mentioned: the net external force on the system must be zero. Expressed as an equation, this is simply
$$\mathbf{F}_{\text{net}} = 0.$$
Note that if $\mathbf{F}_{\text{net}}$ is zero, then the net external force in any direction is zero. For example, the net external forces along the typical x- and y-axes are zero. This is written as
$$F_{\text{net},x} = 0 \text{ and } F_{\text{net},y} = 0.$$
The figures illustrate situations where $\mathbf{F}_{\text{net}} = 0$ for both static equilibrium (motionless) and dynamic equilibrium (constant velocity).
However, it is not sufficient for the net external force of a system to be zero for a system to be in equilibrium. Consider the two situations illustrated in the figures, where forces are applied to an ice hockey stick lying flat on ice. The net external force is zero in both situations shown in the figure; but in one case, equilibrium is achieved, whereas in the other, it is not. In the first case, the ice hockey stick remains motionless. But in the second, with the same forces applied in different places, the stick experiences accelerated rotation. Therefore, we know that the point at which a force is applied is another factor in determining whether or not equilibrium is achieved. This will be explored further in the next section.
### Section Summary
1. Statics is the study of forces in equilibrium.
2. Two conditions must be met to achieve equilibrium, which is defined to be motion without linear or rotational acceleration.
3. The first condition necessary to achieve equilibrium is that the net external force on the system must be zero, so that $\mathbf{F}_{\text{net}} = 0$.
### Conceptual Questions
# Statics and Torque
## The Second Condition for Equilibrium
### Learning Objectives
By the end of this section, you will be able to:
1. State the second condition that is necessary to achieve equilibrium.
2. Explain torque and the factors on which it depends.
3. Describe the role of torque in rotational mechanics.
Several familiar factors determine how effective you are in opening the door. See . First of all, the larger the force, the more effective it is in opening the door—obviously, the harder you push, the more rapidly the door opens. Also, the point at which you push is crucial. If you apply your force too close to the hinges, the door will open slowly, if at all. Most people have been embarrassed by making this mistake and bumping up against a door when it did not open as quickly as expected. Finally, the direction in which you push is also important. The most effective direction is perpendicular to the door—we push in this direction almost instinctively.
The magnitude, direction, and point of application of the force are incorporated into the definition of the physical quantity called torque. Torque is the rotational equivalent of a force. It is a measure of the effectiveness of a force in changing or accelerating a rotation (changing the angular velocity over a period of time). In equation form, the magnitude of torque is defined to be
$$\tau = rF\sin\theta,$$
where $\tau$ (the Greek letter tau) is the symbol for torque, $r$ is the distance from the pivot point to the point where the force is applied, $F$ is the magnitude of the force, and $\theta$ is the angle between the force and the vector directed from the point of application to the pivot point, as seen in the figures. An alternative expression for torque is given in terms of the perpendicular lever arm $r_\perp$, shown in the figures, which is defined as
$$r_\perp = r\sin\theta,$$
so that
$$\tau = r_\perp F.$$
The perpendicular lever arm $r_\perp$ is the shortest distance from the pivot point to the line along which $F$ acts; it is shown as a dashed line in the figures. Note that the line segment that defines the distance $r_\perp$ is perpendicular to $F$, as its name implies. It is sometimes easier to find or visualize $r_\perp$ than to find both $r$ and $\theta$. In such cases, it may be more convenient to use $\tau = r_\perp F$ rather than $\tau = rF\sin\theta$ for torque, but both are equally valid.
The SI unit of torque is newtons times meters, usually written as N·m. For example, if you push perpendicular to the door with a force of 40 N at a distance of 0.800 m from the hinges, you exert a torque of 32 N·m (0.800 m × 40 N × sin 90º) relative to the hinges. If you reduce the force to 20 N, the torque is reduced to 16 N·m, and so on.
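The door example can be written as a short Python sketch (the numbers match the example in the paragraph above):

```python
import math

def torque(r, force, angle_deg):
    """Magnitude of torque: tau = r * F * sin(theta)."""
    return r * force * math.sin(math.radians(angle_deg))

print(torque(0.800, 40.0, 90.0))   # 32.0 N·m, pushing perpendicular to the door
print(torque(0.800, 20.0, 90.0))   # 16.0 N·m, half the force gives half the torque
```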
The torque is always calculated with reference to some chosen pivot point. For the same applied force, a different choice for the location of the pivot will give you a different value for the torque, since both $r$ and $\theta$ depend on the location of the pivot. Any point in any object can be chosen to calculate the torque about that point. The object may not actually pivot about the chosen “pivot point.”
Note that for rotation in a plane, torque has two possible directions. Torque is either clockwise or counterclockwise relative to the chosen pivot point, as illustrated for points B and A, respectively, in . If the object can rotate about point A, it will rotate counterclockwise, which means that the torque for the force is shown as counterclockwise relative to A. But if the object can rotate about point B, it will rotate clockwise, which means the torque for the force shown is clockwise relative to B. Also, the magnitude of the torque is greater when the lever arm is longer.
Now, the second condition necessary to achieve equilibrium is that the net external torque on a system must be zero. An external torque is one that is created by an external force. You can choose the point around which the torque is calculated. The point can be the physical pivot point of a system or any other point in space, but it must be the same point for all torques. If the second condition (net external torque on a system is zero) is satisfied for one choice of pivot point, it will also hold true for any other choice of pivot point in or out of the system of interest. (This is true only in an inertial frame of reference.) The second condition necessary to achieve equilibrium is stated in equation form as
$$\tau_{\text{net}} = 0,$$
where net means total. Torques that are in opposite directions are assigned opposite signs. A common convention is to call counterclockwise (ccw) torques positive and clockwise (cw) torques negative.
When two children balance a seesaw as shown in , they satisfy the two conditions for equilibrium. Most people have perfect intuition about seesaws, knowing that the lighter child must sit farther from the pivot and that a heavier child can keep a lighter one off the ground indefinitely.
Several aspects of the preceding example have broad implications. First, the choice of the pivot as the point around which torques are calculated simplified the problem. Since the supporting force is exerted at the pivot point, its lever arm is zero. Hence, the torque exerted by the supporting force is zero relative to that pivot point. The second condition for equilibrium holds for any choice of pivot point, and so we choose the pivot point to simplify the solution of the problem.
Second, the acceleration due to gravity canceled in this problem, and we were left with a ratio of masses. This will not always be the case. Always enter the correct forces—do not jump ahead to enter some ratio of masses.
Third, the weight of each child is distributed over an area of the seesaw, yet we treated the weights as if each force were exerted at a single point. This is not an approximation; the distances $r_1$ and $r_2$ are the distances to points directly below the center of gravity of each child. As we shall see in the next section, the mass and weight of a system can act as if they are located at a single point.
Finally, note that the concept of torque has an importance beyond static equilibrium. Torque plays the same role in rotational motion that force plays in linear motion. We will examine this in the next chapter.
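To make the seesaw discussion concrete, here is a minimal Python sketch (the children's masses and the first distance are assumed values, not taken from the text) that applies both equilibrium conditions: the net-torque condition about the pivot fixes where the second child must sit, and the net-force condition gives the supporting force at the pivot.

```python
g = 9.80               # m/s^2
m1, r1 = 26.0, 1.60    # assumed: first child's mass (kg) and distance from pivot (m)
m2 = 32.0              # assumed: second child's mass (kg)

# Second condition (net torque = 0 about the pivot): m1*g*r1 = m2*g*r2
r2 = m1 * r1 / m2
# First condition (net force = 0): the pivot supports both weights
F_pivot = (m1 + m2) * g
print(r2, F_pivot)     # 1.30 m and about 568 N
```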
### Test Prep for AP Courses
### Section Summary
1. The second condition assures that torques are also balanced. Torque is the rotational equivalent of a force in producing a rotation and is defined to be
$$\tau = rF\sin\theta,$$
where $\tau$ is torque, $r$ is the distance from the pivot point to the point where the force is applied, $F$ is the magnitude of the force, and $\theta$ is the angle between $F$ and the vector directed from the point where the force acts to the pivot point. The perpendicular lever arm is defined as $r_\perp = r\sin\theta$, so that $\tau = r_\perp F$.
2. The perpendicular lever arm $r_\perp$ is the shortest distance from the pivot point to the line along which $F$ acts. The SI unit for torque is the newton-meter (N·m). The second condition necessary to achieve equilibrium is that the net external torque on a system must be zero:
$$\tau_{\text{net}} = 0.$$
By convention, counterclockwise torques are positive, and clockwise torques are negative.
### Conceptual Questions
### Problems & Exercises
# Statics and Torque
## Stability
### Learning Objectives
By the end of this section, you will be able to:
1. State the types of equilibrium.
2. Describe stable and unstable equilibriums.
3. Describe neutral equilibrium.
It is one thing to have a system in equilibrium; it is quite another for it to be stable. The toy doll perched on the man’s hand in , for example, is not in stable equilibrium. There are three types of equilibrium: stable, unstable, and neutral. Figures throughout this module illustrate various examples.
The figure presents a balanced system, such as the toy doll on the man’s hand, which has its center of gravity (cg) directly over the pivot, so that the torque of the total weight is zero. This is equivalent to having the torques of the individual parts balanced about the pivot point, in this case the hand. The cgs of the arms, legs, head, and torso are labeled with smaller type.
A system is said to be in stable equilibrium if, when displaced from equilibrium, it experiences a net force or torque in a direction opposite to the direction of the displacement. For example, a marble at the bottom of a bowl will experience a restoring force when displaced from its equilibrium position. This force moves it back toward the equilibrium position. Most systems are in stable equilibrium, especially for small displacements. For another example of stable equilibrium, see the pencil in .
A system is in unstable equilibrium if, when displaced, it experiences a net force or torque in the same direction as the displacement from equilibrium. A system in unstable equilibrium accelerates away from its equilibrium position if displaced even slightly. An obvious example is a ball resting on top of a hill. Once displaced, it accelerates away from the crest. See the next several figures for examples of unstable equilibrium.
A system is in neutral equilibrium if its equilibrium is independent of displacements from its original position. A marble on a flat horizontal surface is an example. Combinations of these situations are possible. For example, a marble on a saddle is stable for displacements toward the front or back of the saddle and unstable for displacements to the side. The figure shows another example of neutral equilibrium.
When we consider how far a system in stable equilibrium can be displaced before it becomes unstable, we find that some systems in stable equilibrium are more stable than others. The pencil and the person shown in part (a) of the figure are in stable equilibrium, but become unstable for relatively small displacements to the side. The critical point is reached when the cg is no longer above the base of support. Additionally, since the cg of a person’s body is above the pivots in the hips, displacements must be quickly controlled. This control is a central nervous system function that is developed when we learn to hold our bodies erect as infants. For increased stability while standing, the feet should be spread apart, giving a larger base of support. Stability is also increased by lowering one’s center of gravity by bending the knees, as when a football player prepares to receive a ball or braces for a tackle. A cane, a crutch, or a walker increases the stability of the user, even more as the base of support widens. Usually, the cg of a female is lower (closer to the ground) than that of a male. Young children have their center of gravity between their shoulders, which increases the challenge of learning to walk.
Animals such as chickens have easier systems to control. The figure shows that the cg of a chicken lies below its hip joints and between its widely separated and broad feet. Even relatively large displacements of the chicken’s cg are stable and result in restoring forces and torques that return the cg to its equilibrium position with little effort on the chicken’s part. Not all birds are like chickens, of course. Some birds, such as the flamingo, have balance systems that are almost as sophisticated as that of humans.
shows that the cg of a chicken is below the hip joints and lies above a broad base of support formed by widely-separated and large feet. Hence, the chicken is in very stable equilibrium, since a relatively large displacement is needed to render it unstable. The body of the chicken is supported from above by the hips and acts as a pendulum between the hips. Therefore, the chicken is stable for front-to-back displacements as well as for side-to-side displacements.
Engineers and architects strive to achieve extremely stable equilibriums for buildings and other systems that must withstand wind, earthquakes, and other forces that displace them from equilibrium. Although the examples in this section emphasize gravitational forces, the basic conditions for equilibrium are the same for all types of forces. The net external force must be zero, and the net torque must also be zero.
### Test Prep for AP Courses
### Section Summary
1. A system is said to be in stable equilibrium if, when displaced from equilibrium, it experiences a net force or torque in a direction opposite the direction of the displacement.
2. A system is in unstable equilibrium if, when displaced from equilibrium, it experiences a net force or torque in the same direction as the displacement from equilibrium.
3. A system is in neutral equilibrium if its equilibrium is independent of displacements from its original position.
### Conceptual Questions
### Problems & Exercises
# Statics and Torque
## Applications of Statics, Including Problem-Solving Strategies
### Learning Objectives
By the end of this section, you will be able to:
1. Discuss the applications of Statics in real life.
2. State and discuss various problem-solving strategies in Statics.
Statics can be applied to a variety of situations, ranging from raising a drawbridge to bad posture and back strain. We begin with a discussion of problem-solving strategies specifically used for statics. Since statics is a special case of Newton’s laws, both the general problem-solving strategies and the special strategies for Newton’s laws, discussed in Problem-Solving Strategies, still apply.
Now let us apply this problem-solving strategy for the pole vaulter shown in the three figures below. The pole is uniform and has a mass of 5.00 kg. In the first figure, the pole’s cg lies halfway between the vaulter’s hands. It seems reasonable that the force exerted by each hand is equal to half the weight of the pole, or 24.5 N. This obviously satisfies the first condition for equilibrium $(\mathbf{F}_{\text{net}} = 0)$. The second condition is also satisfied, as we can see by choosing the cg to be the pivot point. The weight exerts no torque about a pivot point located at the cg, since it is applied at that point and its lever arm is zero. The equal forces exerted by the hands are equidistant from the chosen pivot, and so they exert equal and opposite torques. Similar arguments hold for other systems where supporting forces are exerted symmetrically about the cg. For example, the four legs of a uniform table each support one-fourth of its weight.
In the figure, a pole vaulter holding a pole with its cg halfway between his hands is shown. Each hand exerts a force equal to half the weight of the pole, $F_L = F_R = w/2$. In part (b), the pole vaulter moves the pole to his left, and the forces that the hands exert are no longer equal. If the pole is held with its cg to the left of the person, then he must push down with his right hand and up with his left. The forces he exerts are larger here because they are in opposite directions and the cg is at a long distance from either hand.
Similar observations can be made using a meter stick held at different locations along its length.
If the pole vaulter holds the pole as shown in the figure, the situation is not as simple. The total force he exerts is still equal to the weight of the pole, but it is not evenly divided between his hands. (If $F_L = F_R$, then the torques about the cg would not be equal, since the lever arms are different.) Logically, the right hand should support more weight, since it is closer to the cg. In fact, if the right hand is moved directly under the cg, it will support all the weight. This situation is exactly analogous to two people carrying a load; the one closer to the cg carries more of its weight. Finding the forces $F_L$ and $F_R$ is straightforward, as the next example shows.
If the pole vaulter holds the pole near its end, as in the figure, the force applied by the right hand reverses direction.
If the pole vaulter holds the pole as he might at the start of a run, shown in , the forces change again. Both are considerably greater, and one force reverses direction.
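A minimal Python sketch of the off-center case follows; the pole's mass is the 5.00 kg given above, while the hand spacing and cg offset are assumed values for illustration. Taking torques about the left hand gives the right-hand force, and the net-force condition then gives the left-hand force.

```python
g = 9.80
m_pole = 5.00           # kg, mass of the uniform pole from the text
w = m_pole * g          # weight of the pole, about 49 N

d_hands = 0.700         # m, assumed distance between the hands
d_cg = 0.500            # m, assumed distance from the left hand to the pole's cg

# Second condition, torques about the left hand:  F_R * d_hands - w * d_cg = 0
F_R = w * d_cg / d_hands
# First condition, net vertical force is zero:    F_L + F_R - w = 0
F_L = w - F_R
print(F_L, F_R)         # the hand closer to the cg (the right) supports more of the weight
```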
### Test Prep for AP Courses
### Summary
1. Statics can be applied to a variety of situations, ranging from raising a drawbridge to bad posture and back strain. We have discussed the problem-solving strategies specifically useful for statics. Because statics is a special case of Newton’s laws, both the general problem-solving strategies and the special strategies for Newton’s laws, discussed in Problem-Solving Strategies, still apply.
### Conceptual Questions
### Problems & Exercises
# Statics and Torque
## Simple Machines
### Learning Objectives
By the end of this section, you will be able to:
1. Describe different simple machines.
2. Calculate the mechanical advantage.
Simple machines are devices that can be used to multiply or augment a force that we apply – often at the expense of a distance through which we apply the force. The word for “machine” comes from the Greek word meaning “to help make things easier.” Levers, gears, pulleys, wedges, and screws are some examples of machines. Energy is still conserved for these devices because a machine cannot do more work than the energy put into it. However, machines can reduce the input force that is needed to perform the job. The ratio of output to input force magnitudes for any simple machine is called its mechanical advantage (MA).
One of the simplest machines is the lever, which is a rigid bar pivoted at a fixed place called the fulcrum. Torques are involved in levers, since there is rotation about a pivot point. Distances from the physical pivot of the lever are crucial, and we can obtain a useful expression for the MA in terms of these distances.
The figure shows a lever type that is used as a nail puller. Crowbars, seesaws, and other such levers are all analogous to this one. $F_i$ is the input force and $F_o$ is the output force. There are three vertical forces acting on the nail puller (the system of interest) – these are $F_i$, $F_n$, and $N$. $F_n$ is the reaction force back on the system, equal and opposite to $F_o$. (Note that $F_o$ is not a force on the system.) $N$ is the normal force upon the lever, and its torque is zero since it is exerted at the pivot. The torques due to $F_i$ and $F_n$ must be equal to each other if the nail is not moving, to satisfy the second condition for equilibrium ($\tau_{\text{net}} = 0$). (In order for the nail to actually move, the torque due to $F_i$ must be ever-so-slightly greater than the torque due to $F_n$.) Hence, since $F_n$ and $F_o$ are equal in magnitude,
$$l_i F_i = l_o F_o.$$
Notice that $l_i$ is the distance from the pivot point to the point where the input force is applied, and $l_o$ (not labeled on the diagram) is the distance from the pivot point to the point where the output force is applied. The distances $l_i$ and $l_o$ are the perpendicular components of the distances from where the input and output forces are applied to the pivot, as shown in the figure. Rearranging the last equation gives
$$\frac{F_o}{F_i} = \frac{l_i}{l_o}.$$
What interests us most here is that the magnitude of the force exerted by the nail puller, $F_o$, is much greater than the magnitude of the input force applied to the puller at the other end, $F_i$. For the nail puller,
$$\mathrm{MA} = \frac{F_o}{F_i} = \frac{l_i}{l_o}.$$
This equation is true for levers in general. For the nail puller, the MA is certainly greater than one. The longer the handle on the nail puller, the greater the force you can exert with it.
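A small Python sketch of these lever relations (the nail-puller dimensions are assumed values, not from the text):

```python
def lever(F_i, l_i, l_o):
    """Output force and mechanical advantage of an ideal lever: l_i*F_i = l_o*F_o."""
    F_o = F_i * l_i / l_o
    return F_o, l_i / l_o

# Assumed: a 50 N pull applied 45 cm from the pivot, nail gripped 1.8 cm from the pivot
F_out, MA = lever(F_i=50.0, l_i=0.45, l_o=0.018)
print(F_out, MA)   # 1250 N on the nail, MA = 25
```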
Two other types of levers that differ slightly from the nail puller are a wheelbarrow and a shovel, shown in the figure. All these lever types are similar in that only three forces are involved – the input force, the output force, and the force on the pivot – and thus their MAs are given by $\mathrm{MA} = F_o / F_i$ and $\mathrm{MA} = l_i / l_o$, with distances being measured relative to the physical pivot. The wheelbarrow and shovel differ from the nail puller because both the input and output forces are on the same side of the pivot.
In the case of the wheelbarrow, the output force or load is between the pivot (the wheel’s axle) and the input or applied force. In the case of the shovel, the input force is between the pivot (at the end of the handle) and the load, but the input lever arm is shorter than the output lever arm. In this case, the MA is less than one.
Another very simple machine is the inclined plane. Pushing a cart up a plane is easier than lifting the same cart straight up to the top using a ladder, because the applied force is less. However, the work done in both cases (assuming the work done by friction is negligible) is the same. Inclined lanes or ramps were probably used during the construction of the Egyptian pyramids to move large blocks of stone to the top.
A crank is a lever that can be rotated $360^\circ$ about its pivot, as shown in the figure. Such a machine may not look like a lever, but the physics of its actions remains the same. The MA for a crank is simply the ratio of the radii, $r_i / r_o$. Wheels and gears have this simple expression for their MAs too. The MA can be greater than 1, as it is for the crank, or less than 1, as it is for the simplified car axle driving the wheels, as shown. If the axle’s radius is much smaller than the wheel’s radius, the MA is much less than 1, and the axle must exert a force on the wheel that is correspondingly larger than the force the wheel exerts on the ground.
An ordinary pulley has an MA of 1; it only changes the direction of the force and not its magnitude. Combinations of pulleys, such as those illustrated in the figure, are used to multiply force. If the pulleys are friction-free, then the force output is approximately an integral multiple of the tension in the cable. The number of cables pulling directly upward on the system of interest, as illustrated in the figures given below, is approximately the MA of the pulley system. Since each attachment applies an external force in approximately the same direction as the others, they add, producing a total force that is nearly an integral multiple of the input force $T$.
### Test Prep for AP Courses
### Section Summary
1. Simple machines are devices that can be used to multiply or augment a force that we apply – often at the expense of a distance through which we have to apply the force.
2. The ratio of output to input force magnitudes for any simple machine is called its mechanical advantage, $\mathrm{MA} = F_o / F_i$.
3. A few simple machines are the lever, nail puller, wheelbarrow, crank, etc.
### Conceptual Questions
### Problems & Exercises
# Statics and Torque
## Forces and Torques in Muscles and Joints
### Learning Objectives
By the end of this section, you will be able to:
1. Explain the forces exerted by muscles.
2. State how a bad posture causes back strain.
3. Discuss the benefits of skeletal muscles attached close to joints.
4. Discuss various complexities in the real system of muscles, bones, and joints.
Muscles, bones, and joints are some of the most interesting applications of statics. There are some surprises. Muscles, for example, exert far greater forces than we might think. The figure shows a forearm holding a book and a schematic diagram of an analogous lever system. The schematic is a good approximation for the forearm, which looks more complicated than it is, and we can get some insight into the way typical muscle systems function by analyzing it.
Muscles can only contract, so they occur in pairs. In the arm, the biceps muscle is a flexor—that is, it closes the limb. The triceps muscle is an extensor that opens the limb. This configuration is typical of skeletal muscles, bones, and joints in humans and other vertebrates. Most skeletal muscles exert much larger forces within the body than the limbs apply to the outside world. The reason is clear once we realize that most muscles are attached to bones via tendons close to joints, causing these systems to have mechanical advantages much less than one. Viewing them as simple machines, the input force is much greater than the output force, as seen in .
In the above example of the biceps muscle, the angle between the forearm and upper arm is 90°. If this angle changes, the force exerted by the biceps muscle also changes. In addition, the length of the biceps muscle changes. The force the biceps muscle can exert depends upon its length; it is smaller when it is shorter than when it is stretched.
Very large forces are also created in the joints. In the previous example, the downward force exerted by the humerus at the elbow joint equals 407 N, or 6.38 times the total weight supported. (The calculation of this force is straightforward and is left as an end-of-chapter problem.) Because muscles can contract, but not expand beyond their resting length, joints and muscles often exert forces that act in opposite directions and thus subtract. (In the above example, the upward force of the muscle minus the downward force of the joint equals the weight supported, about 64 N in that case.) Forces in muscles and joints are largest when their load is a long distance from the joint, as the book is in the previous example.
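A brief Python sketch of the forearm as a lever follows; the attachment distances and masses are assumed values chosen for illustration, not the text's exact numbers. Taking torques about the elbow gives the biceps force, and the net-force condition gives the downward force of the humerus at the joint.

```python
g = 9.80
# Assumed geometry and masses (for illustration only):
r_biceps = 0.040     # m, biceps attachment distance from the elbow joint
r_arm    = 0.160     # m, distance from the elbow to the forearm's cg
r_book   = 0.380     # m, distance from the elbow to the book in the hand
m_arm, m_book = 2.50, 4.00   # kg

# Torques about the elbow: F_B * r_biceps = w_arm * r_arm + w_book * r_book
F_B = (m_arm * g * r_arm + m_book * g * r_book) / r_biceps
# Net vertical force is zero: F_B (up) - F_E (joint, down) - weights = 0
F_E = F_B - (m_arm + m_book) * g
print(F_B, F_E)      # roughly 470 N in the biceps and 407 N at the elbow joint
```

Note how a small attachment distance (a mechanical advantage much less than one) forces the muscle to exert a force far larger than the load it supports.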
In racquet sports such as tennis, the constant extension of the arm during game play creates large forces in this way. The mass times the lever arm of a tennis racquet is an important factor, and many players use the heaviest racquet they can handle. It is no wonder that joint deterioration and damage to the tendons in the elbow, such as “tennis elbow,” can result from repetitive motion, undue torques, and possibly poor racquet selection in such sports. Various proven techniques for holding and using a racquet, bat, or stick not only increase sporting prowess but can also minimize fatigue and long-term damage to the body. For example, tennis balls correctly hit at the “sweet spot” on the racquet will result in little vibration or impact force being felt in the racquet and the body, producing less torque, as explained in Collisions of Extended Bodies in Two Dimensions. Twisting the hand to provide top spin on the ball or using an extended rigid elbow in a backhand stroke can also aggravate the tendons in the elbow.
Training coaches and physical therapists use the knowledge of relationships between forces and torques in the treatment of muscles and joints. In physical therapy, an exercise routine can apply a particular force and torque which can, over a period of time, revive muscles and joints. Some exercises are designed to be carried out under water, because this requires greater forces to be exerted, further strengthening muscles. However, connecting tissues in the limbs, such as tendons and cartilage as well as joints are sometimes damaged by the large forces they carry. Often, this is due to accidents, but heavily muscled athletes, such as weightlifters, can tear muscles and connecting tissue through effort alone.
The back is considerably more complicated than the arm or leg, with various muscles and joints between vertebrae, all having mechanical advantages less than 1. Back muscles must, therefore, exert very large forces, which are borne by the spinal column. Discs crushed by mere exertion are very common. The jaw is somewhat exceptional—the masseter muscles that close the jaw have a mechanical advantage greater than 1 for the back teeth, allowing us to exert very large forces with them. A cause of stress headaches is persistent clenching of teeth where the sustained large force translates into fatigue in muscles around the skull.
shows how bad posture causes back strain. In part (a), we see a person with good posture. Note that her upper body’s cg is directly above the pivot point in the hips, which in turn is directly above the base of support at her feet. Because of this, her upper body’s weight exerts no torque about the hips. The only force needed is a vertical force at the hips equal to the weight supported. No muscle action is required, since the bones are rigid and transmit this force from the floor. This is a position of unstable equilibrium, but only small forces are needed to bring the upper body back to vertical if it is slightly displaced. Bad posture is shown in part (b); we see that the upper body’s cg is in front of the pivot in the hips. This creates a clockwise torque around the hips that is counteracted by muscles in the lower back. These muscles must exert large forces, since they have typically small mechanical advantages. (In other words, the perpendicular lever arm for the muscles is much smaller than for the cg.) Poor posture can also cause muscle strain for people sitting at their desks using computers. Special chairs are available that allow the body’s CG to be more easily situated above the seat, to reduce back pain. Prolonged muscle action produces muscle strain. Note that the cg of the entire body is still directly above the base of support in part (b) of . This is compulsory; otherwise the person would not be in equilibrium. We lean forward for the same reason when carrying a load on our backs, to the side when carrying a load in one arm, and backward when carrying a load in front of us, as seen in .
You have probably been warned against lifting objects with your back. This action, even more than bad posture, can cause muscle strain and damage discs and vertebrae, since abnormally large forces are created in the back muscles and spine.
What are the benefits of having most skeletal muscles attached so close to joints? One advantage is speed because small muscle contractions can produce large movements of limbs in a short period of time. Other advantages are flexibility and agility, made possible by the large numbers of joints and the ranges over which they function. For example, it is difficult to imagine a system with biceps muscles attached at the wrist that would be capable of the broad range of movement we vertebrates possess.
There are some interesting complexities in real systems of muscles, bones, and joints. For instance, the pivot point in many joints changes location as the joint is flexed, so that the perpendicular lever arms and the mechanical advantage of the system change, too. Thus the force the biceps muscle must exert to hold up a book varies as the forearm is flexed. Similar mechanisms operate in the legs, which explain, for example, why there is less leg strain when a bicycle seat is set at the proper height. The methods employed in this section give a reasonable description of real systems provided enough is known about the dimensions of the system. There are many other interesting examples of force and torque in the body—a few of these are the subject of end-of-chapter problems.
### Test Prep for AP Courses
### Section Summary
1. Statics plays an important part in understanding everyday strains in our muscles and bones.
2. Many lever systems in the body have a mechanical advantage of significantly less than one, as many of our muscles are attached close to joints.
3. Someone with good posture stands or sits in such a way that the person's center of gravity lies directly above the pivot point in the hips, thereby avoiding back strain and damage to disks.
### Conceptual Questions
### Problems & Exercises
# Rotational Motion and Angular Momentum
## Connection for AP® Courses
Why do tornadoes spin? And why do tornadoes spin so rapidly? The answer is that the air masses that produce tornadoes are themselves rotating, and when the radii of the air masses decrease, their rate of rotation increases. An ice skater increases her spin in an exactly analogous manner, as seen in Figure 10.2. The skater starts her rotation with outstretched limbs and increases her rate of spin by pulling them in toward her body. The same physics describes the exhilarating spin of a skater and the wrenching force of a tornado. We will find that this is another example of the importance of conservation laws and their role in determining how changes happen in a system, supporting Big Idea 5. The idea that a change of a conserved quantity is always equal to the transfer of that quantity between interacting systems (Enduring Understanding 5.A) is presented for both energy and angular momentum (Enduring Understanding 5.E). The conservation of angular momentum in relation to the external net torque (Essential Knowledge 5.E.1) parallels that of linear momentum conservation in relation to the external net force. The concept of rotational inertia is introduced, a concept that takes into account not only the mass of an object or a system, but also the distribution of mass within the object or system. Therefore, changes in the rotational inertia of a system could lead to changes in the motion (Essential Knowledge 5.E.2) of the system. We shall see that all important aspects of rotational motion either have already been defined for linear motion or have exact analogues in linear motion.
Clearly, therefore, force, energy, and power are associated with rotational motion. This supports Big Idea 3, that interactions are described by forces. The ability of forces to cause torques (Enduring Understanding 3.F) is extended to the interactions between objects that result in nonzero net torque. This nonzero net torque in turn causes changes in the rotational motion of an object (Essential Knowledge 3.F.2) and results in changes of the angular momentum of an object (Essential Knowledge 3.F.3).
Similarly, Big Idea 4, that interactions between systems cause changes in those systems, is supported by the empirical observation that when torques are exerted on rigid bodies these torques cause changes in the angular momentum of the system (Enduring Understanding 4.D).
Again, there is a clear analogy between linear and rotational motion in this interaction. Both the angular kinematics variables (angular displacement, angular velocity, and angular acceleration) and the dynamics variables (torque and angular momentum) are vectors with direction depending on whether the rotation is clockwise or counterclockwise with respect to an axis of rotation (Essential Knowledge 4.D.1). The angular momentum of the system can change due to interactions (Essential Knowledge 4.D.2). This change is defined as the product of the average torque and the time interval during which torque is exerted (Essential Knowledge 4.D.3), analogous to the impulse-momentum theorem for linear motion.
The concepts in this chapter support:
Big Idea 3. The interactions of an object with other objects can be described by forces.
Enduring Understanding 3.F. A force exerted on an object can cause a torque on that object.
Essential Knowledge 3.F.2. The presence of a net torque along any axis will cause a rigid system to change its rotational motion or an object to change its rotational motion about that axis.
Essential Knowledge 3.F.3. A torque exerted on an object can change the angular momentum of an object.
Big Idea 4. Interactions between systems can result in changes in those systems.
Enduring Understanding 4.D. A net torque exerted on a system by other objects or systems will change the angular momentum of the system.
Essential Knowledge 4.D.1. Torque, angular velocity, angular acceleration, and angular momentum are vectors and can be characterized as positive or negative depending upon whether they give rise to or correspond to counterclockwise or clockwise rotation with respect to an axis.
Essential Knowledge 4.D.2. The angular momentum of a system may change due to interactions with other objects or systems.
Essential Knowledge 4.D.3. The change in angular momentum is given by the product of the average torque and the time interval during which the torque is exerted.
Big Idea 5. Changes that occur as a result of interactions are constrained by conservation laws.
Enduring Understanding 5.A. Certain quantities are conserved, in the sense that the changes of those quantities in a given system are always equal to the transfer of that quantity to or from the system by all possible interactions with other systems.
Essential Knowledge 5.A.2. For all systems under all circumstances, energy, charge, linear momentum, and angular momentum are conserved.
Enduring Understanding 5.E. The angular momentum of a system is conserved.
Essential Knowledge 5.E.1. If the net external torque exerted on the system is zero, the angular momentum of the system does not change.
Essential Knowledge 5.E.2. The angular momentum of a system is determined by the locations and velocities of the objects that make up the system. The rotational inertia of an object or system depends upon the distribution of mass within the object or system. Changes in the radius of a system or in the distribution of mass within the system result in changes in the system's rotational inertia, and hence in its angular velocity and linear speed for a given angular momentum. Examples should include elliptical orbits in an Earth-satellite system. Mathematical expressions for the moments of inertia will be provided where needed. Students will not be expected to know the parallel axis theorem.
# Rotational Motion and Angular Momentum
## Angular Acceleration
### Learning Objectives
By the end of this section, you will be able to:
1. Describe uniform circular motion.
2. Explain non-uniform circular motion.
3. Calculate angular acceleration of an object.
4. Observe the link between linear and angular acceleration.
Uniform Circular Motion and Gravitation discussed only uniform circular motion, which is motion in a circle at constant speed and, hence, constant angular velocity. Recall that angular velocity $\omega$ was defined as the time rate of change of angle $\theta$:
$$\omega = \frac{\Delta\theta}{\Delta t},$$
where $\theta$ is the angle of rotation, as seen in the figure. The relationship between angular velocity $\omega$ and linear velocity $v$ was also defined in Rotation Angle and Angular Velocity as
$$v = r\omega$$
or
$$\omega = \frac{v}{r},$$
where $r$ is the radius of curvature, also seen in the figure. According to the sign convention, the counterclockwise direction is considered the positive direction and the clockwise direction the negative direction.
Angular velocity is not constant when a skater pulls in her arms, when a child starts up a merry-go-round from rest, or when a computer’s hard disk slows to a halt when switched off. In all these cases, there is an angular acceleration, in which $\omega$ changes. The faster the change occurs, the greater the angular acceleration. Angular acceleration $\alpha$ is defined as the rate of change of angular velocity. In equation form, angular acceleration is expressed as follows:
$$\alpha = \frac{\Delta\omega}{\Delta t},$$
where $\Delta\omega$ is the change in angular velocity and $\Delta t$ is the change in time. The units of angular acceleration are $(\text{rad/s})/\text{s}$, or $\text{rad/s}^2$. If $\omega$ increases, then $\alpha$ is positive. If $\omega$ decreases, then $\alpha$ is negative.
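A short Python sketch of this definition (the spin-up numbers are assumed values): a bicycle wheel spun up from rest.

```python
import math

def angular_acceleration(delta_omega, delta_t):
    """alpha = (change in angular velocity) / (change in time), in rad/s^2."""
    return delta_omega / delta_t

# Assumed: a bicycle wheel is spun up from rest to 250 rpm in 5.00 s
omega_final = 250.0 * 2.0 * math.pi / 60.0   # convert rev/min to rad/s
alpha = angular_acceleration(omega_final - 0.0, 5.00)
print(omega_final, alpha)    # about 26.2 rad/s and 5.24 rad/s^2
```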
If the bicycle in the preceding example had been on its wheels instead of upside-down, it would first have accelerated along the ground and then come to a stop. This connection between circular motion and linear motion needs to be explored. For example, it would be useful to know how linear and angular acceleration are related. In circular motion, linear acceleration is tangent to the circle at the point of interest, as seen in the figure. Thus, linear acceleration is called tangential acceleration $a_t$.
Linear or tangential acceleration refers to changes in the magnitude of velocity but not its direction. We know from Uniform Circular Motion and Gravitation that in circular motion centripetal acceleration, $a_c$, refers to changes in the direction of the velocity but not its magnitude. An object undergoing circular motion experiences centripetal acceleration, as seen in the figure. Thus, $a_t$ and $a_c$ are perpendicular and independent of one another. Tangential acceleration $a_t$ is directly related to the angular acceleration $\alpha$ and is linked to an increase or decrease in the velocity, but not its direction.
Now we can find the exact relationship between linear acceleration $a_t$ and angular acceleration $\alpha$. Because linear acceleration is proportional to a change in the magnitude of the velocity, it is defined (as it was in One-Dimensional Kinematics) to be
$$a_t = \frac{\Delta v}{\Delta t}.$$
For circular motion, note that $v = r\omega$, so that
$$a_t = \frac{\Delta(r\omega)}{\Delta t}.$$
The radius $r$ is constant for circular motion, and so $\Delta(r\omega) = r\,\Delta\omega$. Thus,
$$a_t = r\frac{\Delta\omega}{\Delta t}.$$
By definition, $\alpha = \Delta\omega/\Delta t$. Thus,
$$a_t = r\alpha,$$
or
$$\alpha = \frac{a_t}{r}.$$
These equations mean that linear acceleration and angular acceleration are directly proportional. The greater the angular acceleration is, the larger the linear (tangential) acceleration is, and vice versa. For example, the greater the angular acceleration of a car’s drive wheels, the greater the acceleration of the car. The radius also matters. For example, the smaller a wheel, the smaller its linear acceleration for a given angular acceleration $\alpha$.
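As a quick numerical illustration of $a_t = r\alpha$, the short Python sketch below computes the tangential acceleration for two assumed wheel radii. The radii and angular acceleration are illustrative values, not numbers taken from the text.

```python
# Illustrative sketch: tangential acceleration a_t = r * alpha.
# The radii and angular acceleration below are assumed example values.

def tangential_acceleration(radius_m, alpha_rad_s2):
    """Return the tangential (linear) acceleration in m/s^2."""
    return radius_m * alpha_rad_s2

alpha = 10.0  # rad/s^2, assumed angular acceleration of the drive wheel
for radius in (0.20, 0.35):  # m, two assumed wheel radii
    a_t = tangential_acceleration(radius, alpha)
    print(f"r = {radius} m -> a_t = {a_t:.2f} m/s^2")
```

The output shows that for the same angular acceleration, the smaller wheel produces the smaller tangential acceleration, as the paragraph above states.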
So far, we have defined three rotational quantities—$\theta$, $\omega$, and $\alpha$. These quantities are analogous to the translational quantities $x$, $v$, and $a$. displays rotational quantities, the analogous translational quantities, and the relationships between them.
### Section Summary
1. Uniform circular motion is motion with a constant angular velocity $\omega = \frac{\Delta\theta}{\Delta t}$.
2. In non-uniform circular motion, the velocity changes with time, and the rate of change of angular velocity (i.e., angular acceleration) is $\alpha = \frac{\Delta\omega}{\Delta t}$.
3. Linear or tangential acceleration refers to changes in the magnitude of velocity but not its direction, given as $a_t = \frac{\Delta v}{\Delta t}$.
4. For circular motion, note that $v = r\omega$, so that
$$a_t = \frac{\Delta(r\omega)}{\Delta t}.$$
5. The radius $r$ is constant for circular motion, and so $\Delta(r\omega) = r\,\Delta\omega$. Thus,
$$a_t = r\frac{\Delta\omega}{\Delta t}.$$
6. By definition, $\frac{\Delta\omega}{\Delta t} = \alpha$. Thus,
$$a_t = r\alpha$$
or
$$\alpha = \frac{a_t}{r}.$$
### Conceptual Questions
### Problems & Exercises
|
# Rotational Motion and Angular Momentum
## Kinematics of Rotational Motion
### Learning Objectives
By the end of this section, you will be able to:
1. Observe the kinematics of rotational motion.
2. Derive rotational kinematic equations.
3. Evaluate problem solving strategies for rotational kinematics.
Just by using our intuition, we can begin to see how rotational quantities like $\theta$, $\omega$, and $\alpha$ are related to one another. For example, if a motorcycle wheel has a large angular acceleration for a fairly long time, it ends up spinning rapidly and rotates through many revolutions. In more technical terms, if the wheel’s angular acceleration $\alpha$ is large for a long period of time $t$, then the final angular velocity $\omega$ and angle of rotation $\theta$ are large. The wheel’s rotational motion is exactly analogous to the fact that the motorcycle’s large translational acceleration produces a large final velocity, and the distance traveled will also be large.
Kinematics is the description of motion. The kinematics of rotational motion describes the relationships among rotation angle, angular velocity, angular acceleration, and time. Let us start by finding an equation relating $\omega$, $\alpha$, and $t$. To determine this equation, we recall a familiar kinematic equation for translational, or straight-line, motion:
$$v = v_0 + at \quad (\text{constant } a).$$
Note that in rotational motion $a = a_t$, and we shall use the symbol $a$ for tangential or linear acceleration from now on. As in linear kinematics, we assume $a$ is constant, which means that the angular acceleration $\alpha$ is also a constant, because $a = r\alpha$. Now, let us substitute $v = r\omega$ and $a = r\alpha$ into the linear equation above:
$$r\omega = r\omega_0 + r\alpha t.$$
The radius $r$ cancels in the equation, yielding
$$\omega = \omega_0 + \alpha t \quad (\text{constant } \alpha),$$
where $\omega_0$ is the initial angular velocity. This last equation is a kinematic relationship among $\omega$, $\alpha$, and $t$—that is, it describes their relationship without reference to forces or masses that may affect rotation. It is also precisely analogous in form to its translational counterpart.
Starting with the four kinematic equations we developed in One-Dimensional Kinematics, we can derive the following four rotational kinematic equations (presented together with their translational counterparts):

| Rotational (constant $\alpha$) | Translational (constant $a$) |
|---|---|
| $\theta = \bar{\omega} t$ | $x = \bar{v} t$ |
| $\omega = \omega_0 + \alpha t$ | $v = v_0 + at$ |
| $\theta = \omega_0 t + \frac{1}{2}\alpha t^2$ | $x = v_0 t + \frac{1}{2}a t^2$ |
| $\omega^2 = \omega_0^2 + 2\alpha\theta$ | $v^2 = v_0^2 + 2ax$ |

In these equations, the subscript 0 denotes initial values ($\theta_0$, $x_0$, and $t_0$ are initial values), and the average angular velocity $\bar{\omega}$ and average velocity $\bar{v}$ are defined as follows:
$$\bar{\omega} = \frac{\omega_0 + \omega}{2} \quad \text{and} \quad \bar{v} = \frac{v_0 + v}{2}.$$
The equations given above can be used to solve any rotational or translational kinematics problem in which $a$ and $\alpha$ are constant.
There is translational motion even for something spinning in place, as the following example illustrates. shows a fly on the edge of a rotating microwave oven plate. The example below calculates the total distance it travels.
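Before that worked example, here is a minimal Python sketch of the same kind of calculation, using the rotational kinematic equations above. The angular acceleration, spin-up time, and radius are assumed illustrative values, not the numbers used in the worked example.

```python
import math

# Illustrative sketch of the rotational kinematic equations, with assumed values.
alpha = 0.70   # rad/s^2, assumed constant angular acceleration of the plate
t = 6.0        # s, assumed spin-up time, starting from rest (omega_0 = 0)
r = 0.15       # m, assumed distance of the fly from the axis

omega = alpha * t            # omega = omega_0 + alpha * t
theta = 0.5 * alpha * t**2   # theta = omega_0 * t + (1/2) * alpha * t^2
distance = r * theta         # arc length traveled, s = r * theta

print(f"final angular velocity: {omega:.2f} rad/s")
print(f"angle of rotation:      {theta:.2f} rad ({theta / (2 * math.pi):.2f} rev)")
print(f"distance traveled:      {distance:.2f} m")
```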
### Section Summary
1. Kinematics is the description of motion.
2. The kinematics of rotational motion describes the relationships among rotation angle, angular velocity, angular acceleration, and time.
3. Starting with the four kinematic equations we developed in One-Dimensional Kinematics, we can derive the four rotational kinematic equations (presented together with their translational counterparts) seen in .
4. In these equations, the subscript 0 denotes initial values ($\theta_0$, $x_0$, and $t_0$ are initial values), and the average angular velocity $\bar{\omega}$ and average velocity $\bar{v}$ are defined as follows:
$$\bar{\omega} = \frac{\omega_0 + \omega}{2} \quad \text{and} \quad \bar{v} = \frac{v_0 + v}{2}.$$
### Problems & Exercises
|
# Rotational Motion and Angular Momentum
## Dynamics of Rotational Motion: Rotational Inertia
### Learning Objectives
By the end of this section, you will be able to:
1. Understand the relationship between force, mass and acceleration.
2. Study the turning effect of force.
3. Study the analogy between force and torque, mass and moment of inertia, and linear acceleration and angular acceleration.
If you have ever spun a bike wheel or pushed a merry-go-round, you know that force is needed to change angular velocity as seen in . In fact, your intuition is reliable in predicting many of the factors that are involved. For example, we know that a door opens slowly if we push too close to its hinges. Furthermore, we know that the more massive the door, the more slowly it opens. The first example implies that the farther the force is applied from the pivot, the greater the angular acceleration; another implication is that angular acceleration is inversely proportional to mass. These relationships should seem very similar to the familiar relationships among force, mass, and acceleration embodied in Newton’s second law of motion. There are, in fact, precise rotational analogs to both force and mass.
To develop the precise relationship among force, mass, radius, and angular acceleration, consider what happens if we exert a force $F$ on a point mass $m$ that is at a distance $r$ from a pivot point, as shown in . Because the force is perpendicular to $r$, an acceleration $a = F/m$ is obtained in the direction of $F$. We can rearrange this equation such that $F = ma$ and then look for ways to relate this expression to expressions for rotational quantities. We note that $a = r\alpha$, and we substitute this expression into $F = ma$, yielding
$$F = mr\alpha.$$
Recall that torque is the turning effectiveness of a force. In this case, because $F$ is perpendicular to $r$, torque is simply $\tau = rF$. So, if we multiply both sides of the equation above by $r$, we get torque on the left-hand side. That is,
$$rF = mr^2\alpha$$
or
$$\tau = mr^2\alpha.$$
This last equation is the rotational analog of Newton’s second law ($F = ma$), where torque is analogous to force, angular acceleration is analogous to translational acceleration, and $mr^2$ is analogous to mass (or inertia). The quantity $mr^2$ is called the rotational inertia or moment of inertia of a point mass $m$ a distance $r$ from the center of rotation.
### Rotational Inertia and Moment of Inertia
Before we can consider the rotation of anything other than a point mass like the one in , we must extend the idea of rotational inertia to all types of objects. To expand our concept of rotational inertia, we define the moment of inertia $I$ of an object to be the sum of $mr^2$ for all the point masses of which it is composed. That is, $I = \sum mr^2$. Here $I$ is analogous to $m$ in translational motion. Because of the distance $r$, the moment of inertia for any object depends on the chosen axis. Actually, calculating $I$ is beyond the scope of this text except for one simple case—that of a hoop, which has all its mass at the same distance from its axis. A hoop’s moment of inertia around its axis is therefore $MR^2$, where $M$ is its total mass and $R$ its radius. (We use $M$ and $R$ for an entire object to distinguish them from $m$ and $r$ for point masses.) In all other cases, we must consult (note that the table is a piece of artwork that has shapes as well as formulae) for formulas for $I$ that have been derived from integration over the continuous body. Note that $I$ has units of mass multiplied by distance squared ($\text{kg}\cdot\text{m}^2$), as we might expect from its definition.
The general relationship among torque, moment of inertia, and angular acceleration is
$$\text{net } \tau = I\alpha$$
or
$$\alpha = \frac{\text{net } \tau}{I},$$
where net $\tau$ is the total torque from all forces relative to a chosen axis. For simplicity, we will only consider torques exerted by forces in the plane of the rotation. Such torques are either positive or negative and add like ordinary numbers. This relationship is the rotational analog to Newton’s second law and is very generally applicable. The equation is actually valid for any torque, applied to any object, relative to any axis.
As we might expect, the larger the torque is, the larger the angular acceleration is. For example, the harder a child pushes on a merry-go-round, the faster it accelerates. Furthermore, the more massive a merry-go-round, the slower it accelerates for the same torque. The basic relationship between moment of inertia and angular acceleration is that the larger the moment of inertia, the smaller is the angular acceleration. But there is an additional twist. The moment of inertia depends not only on the mass of an object, but also on its distribution of mass relative to the axis around which it rotates. For example, it will be much easier to accelerate a merry-go-round full of children if they stand close to its axis than if they all stand at the outer edge. The mass is the same in both cases, but the moment of inertia is much larger when the children are at the edge.
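The merry-go-round comparison above can be made concrete with a minimal Python sketch using $I = \sum mr^2$ and $\alpha = \text{net }\tau / I$. The masses, radii, and applied torque are assumed values, and the platform's own moment of inertia is ignored for simplicity.

```python
# Illustrative sketch: how mass distribution changes angular acceleration for the
# same torque. All masses, radii, and the applied torque are assumed values, and
# the moment of inertia of the merry-go-round itself is neglected.

def moment_of_inertia(point_masses):
    """I = sum of m * r^2 over point masses given as (mass_kg, radius_m) pairs."""
    return sum(m * r**2 for m, r in point_masses)

torque = 300.0                            # N*m, assumed net torque on the merry-go-round
children_near_axis = [(30.0, 0.5)] * 4    # four 30-kg children 0.5 m from the axis
children_at_edge = [(30.0, 2.0)] * 4      # the same children 2.0 m from the axis

for label, masses in [("near axis", children_near_axis), ("at edge", children_at_edge)]:
    I = moment_of_inertia(masses)
    alpha = torque / I                    # rotational analog of Newton's second law
    print(f"children {label}: I = {I:.0f} kg*m^2, alpha = {alpha:.2f} rad/s^2")
```

With the same mass and the same torque, moving the children to the edge increases $I$ by a factor of 16 and reduces the angular acceleration by the same factor.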
### Test Prep for AP Courses
### Section Summary
1. The farther the force is applied from the pivot, the greater is the angular acceleration; angular acceleration is inversely proportional to mass.
2. If we exert a force $F$ on a point mass $m$ that is at a distance $r$ from a pivot point, and because the force is perpendicular to $r$, an acceleration $a = F/m$ is obtained in the direction of $F$. We can rearrange this equation such that $F = ma$ and then look for ways to relate this expression to expressions for rotational quantities. We note that $a = r\alpha$, so that $F = mr\alpha$.
3. Torque is the turning effectiveness of a force. In this case, because $F$ is perpendicular to $r$, torque is simply $\tau = rF$. If we multiply both sides of the equation above by $r$, we get torque on the left-hand side. That is,
$$rF = mr^2\alpha$$
or
$$\tau = mr^2\alpha.$$
4. The moment of inertia $I$ of an object is the sum of $mr^2$ for all the point masses of which it is composed. That is, $I = \sum mr^2$.
5. The general relationship among torque, moment of inertia, and angular acceleration is
$$\text{net } \tau = I\alpha$$
or
$$\alpha = \frac{\text{net } \tau}{I}.$$
### Conceptual Questions
### Problems & Exercises
|
# Rotational Motion and Angular Momentum
## Rotational Kinetic Energy: Work and Energy Revisited
### Learning Objectives
By the end of this section, you will be able to:
1. Derive the equation for rotational work.
2. Calculate rotational kinetic energy.
3. Demonstrate the Law of Conservation of Energy.
In this module, we will learn about work and energy associated with rotational motion. shows a worker using an electric grindstone propelled by a motor. Sparks are flying, and noise and vibration are created as layers of steel are pared from the pole. The stone continues to turn even after the motor is turned off, but it is eventually brought to a stop by friction. Clearly, the motor had to work to get the stone spinning. This work went into heat, light, sound, vibration, and considerable rotational kinetic energy.
Work must be done to rotate objects such as grindstones or merry-go-rounds. Work was defined in Uniform Circular Motion and Gravitation for translational motion, and we can build on that knowledge when considering work done in rotational motion. The simplest rotational situation is one in which the net force is exerted perpendicular to the radius of a disk (as shown in ) and remains perpendicular as the disk starts to rotate. The force is parallel to the displacement, and so the net work done is the product of the force times the arc length traveled:
$$\text{net } W = (\text{net } F)\,\Delta s.$$
To get torque and other rotational quantities into the equation, we multiply and divide the right-hand side of the equation by $r$, and gather terms:
$$\text{net } W = (r\,\text{net } F)\frac{\Delta s}{r}.$$
We recognize that $r\,\text{net } F = \text{net } \tau$ and $\Delta s / r = \theta$, so that
$$\text{net } W = (\text{net } \tau)\,\theta.$$
This equation is the expression for rotational work. It is very similar to the familiar definition of translational work as force multiplied by distance. Here, torque is analogous to force, and angle is analogous to distance. The equation is valid in general, even though it was derived for a special case.
To get an expression for rotational kinetic energy, we must again perform some algebraic manipulations. The first step is to note that $\text{net } \tau = I\alpha$, so that
$$\text{net } W = I\alpha\,\theta.$$
Now, we solve one of the rotational kinematics equations for $\alpha\theta$. We start with the equation
$$\omega^2 = \omega_0^2 + 2\alpha\theta.$$
Next, we solve for $\alpha\theta$:
$$\alpha\theta = \frac{\omega^2 - \omega_0^2}{2}.$$
Substituting this into the equation for net $W$ and gathering terms yields
$$\text{net } W = \tfrac{1}{2}I\omega^2 - \tfrac{1}{2}I\omega_0^2.$$
This equation is the work-energy theorem for rotational motion only. As you may recall, net work changes the kinetic energy of a system. Through an analogy with translational motion, we define the term $\tfrac{1}{2}I\omega^2$ to be rotational kinetic energy $KE_{\text{rot}}$ for an object with a moment of inertia $I$ and an angular velocity $\omega$:
$$KE_{\text{rot}} = \tfrac{1}{2}I\omega^2.$$
The expression for rotational kinetic energy is exactly analogous to translational kinetic energy, with $I$ being analogous to $m$ and $\omega$ to $v$. Rotational kinetic energy has important effects. Flywheels, for example, can be used to store large amounts of rotational kinetic energy in a vehicle, as seen in .
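The Python sketch below evaluates $KE_{\text{rot}} = \tfrac{1}{2}I\omega^2$ for a hypothetical flywheel. The mass, radius, and spin rate are assumed illustrative values, and the flywheel is modeled as a uniform disk with $I = \tfrac{1}{2}MR^2$, one of the standard tabulated moments of inertia.

```python
import math

# Illustrative sketch: rotational kinetic energy KE_rot = (1/2) * I * omega^2.
# The flywheel is modeled as a uniform disk (I = 1/2 * M * R^2); values are assumed.
M = 20.0       # kg, assumed flywheel mass
R = 0.30       # m, assumed flywheel radius
rpm = 6000.0   # rev/min, assumed spin rate

I = 0.5 * M * R**2                   # moment of inertia of a uniform disk about its axis
omega = rpm * 2 * math.pi / 60.0     # convert rev/min to rad/s
ke_rot = 0.5 * I * omega**2

print(f"I = {I:.2f} kg*m^2, omega = {omega:.0f} rad/s, KE_rot = {ke_rot / 1000:.0f} kJ")
```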
Helicopter pilots are quite familiar with rotational kinetic energy. They know, for example, that a point of no return will be reached if they allow their blades to slow below a critical angular velocity during flight. The blades lose lift, and it is impossible to immediately get the blades spinning fast enough to regain it. Rotational kinetic energy must be supplied to the blades to get them to rotate faster, and enough energy cannot be supplied in time to avoid a crash. Because of weight limitations, helicopter engines are too small to supply both the energy needed for lift and to replenish the rotational kinetic energy of the blades once they have slowed down. The rotational kinetic energy is put into them before takeoff and must not be allowed to drop below this crucial level. One possible way to avoid a crash is to use the gravitational potential energy of the helicopter to replenish the rotational kinetic energy of the blades by losing altitude and aligning the blades so that the helicopter is spun up in the descent. Of course, if the helicopter’s altitude is too low, then there is insufficient time for the blade to regain lift before reaching the ground.
### How Thick Is the Soup? Or Why Don’t All Objects Roll Downhill at the Same Rate?
One of the quality controls in a tomato soup factory consists of rolling filled cans down a ramp. If they roll too fast, the soup is too thin. Why should cans of identical size and mass roll down an incline at different rates? And why should the thickest soup roll the slowest?
The easiest way to answer these questions is to consider energy. Suppose each can starts down the ramp from rest. Each can starting from rest means each starts with the same gravitational potential energy $PE_{\text{grav}}$, which is converted entirely to kinetic energy $KE$, provided each rolls without slipping. $KE$, however, can take the form of $KE_{\text{trans}}$ or $KE_{\text{rot}}$, and the total $KE$ is the sum of the two. If a can rolls down a ramp, it puts part of its energy into rotation, leaving less for translation. Thus, the can goes slower than it would if it slid down. Furthermore, the thin soup does not rotate, whereas the thick soup does, because it sticks to the can. The thick soup thus puts more of the can’s original gravitational potential energy into rotation than the thin soup, and the can rolls more slowly, as seen in .
Assuming no losses due to friction, there is only one force doing work—gravity. Therefore the total work done is the change in kinetic energy. As the cans start moving, the potential energy is changing into kinetic energy. Conservation of energy gives
$$PE_i = KE_f.$$
More specifically,
$$PE_{\text{grav}} = KE_{\text{trans}} + KE_{\text{rot}}$$
or
$$mgh = \tfrac{1}{2}mv^2 + \tfrac{1}{2}I\omega^2.$$
So, the initial $mgh$ is divided between translational kinetic energy and rotational kinetic energy; and the greater $I$ is, the less energy goes into translation. If the can slides down without friction, then $\omega = 0$ and all the energy goes into translation; thus, the can goes faster.
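A minimal Python sketch of this energy split is given below. The ramp height is an assumed value, and the rolling can is treated as a uniform solid cylinder with $I = \tfrac{1}{2}mR^2$, an idealization rather than a claim about real soup.

```python
import math

# Illustrative sketch of mgh = (1/2)mv^2 + (1/2)I*omega^2 for a can that rolls
# without slipping, modeled as a uniform solid cylinder (I = 1/2 * m * R^2).
h = 0.50   # m, assumed vertical drop along the ramp
g = 9.80   # m/s^2

# Sliding without friction: all potential energy goes into translation.
v_slide = math.sqrt(2 * g * h)

# Rolling without slipping: omega = v / R, so
# mgh = (1/2)mv^2 + (1/2)(1/2 m R^2)(v/R)^2 = (3/4) m v^2  ->  v = sqrt(4*g*h/3)
v_roll = math.sqrt(4 * g * h / 3)

print(f"sliding can: v = {v_slide:.2f} m/s")
print(f"rolling can: v = {v_roll:.2f} m/s (slower, since some energy is rotational)")
```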
### Test Prep for AP Courses
### Section Summary
1. The rotational kinetic energy $KE_{\text{rot}}$ for an object with a moment of inertia $I$ and an angular velocity $\omega$ is given by
$$KE_{\text{rot}} = \tfrac{1}{2}I\omega^2.$$
2. Helicopters store large amounts of rotational kinetic energy in their blades. This energy must be put into the blades before takeoff and maintained until the end of the flight. The engines do not have enough power to simultaneously provide lift and put significant rotational energy into the blades.
3. Work and energy in rotational motion are completely analogous to work and energy in translational motion.
4. The equation for the work-energy theorem for rotational motion is
$$\text{net } W = \tfrac{1}{2}I\omega^2 - \tfrac{1}{2}I\omega_0^2.$$
### Conceptual Questions
### Problems & Exercises
|
# Rotational Motion and Angular Momentum
## Angular Momentum and Its Conservation
### Learning Objectives
By the end of this section, you will be able to:
1. Understand the analogy between angular momentum and linear momentum.
2. Observe the relationship between torque and angular momentum.
3. Apply the law of conservation of angular momentum.
Why does Earth keep on spinning? What started it spinning to begin with? And how does an ice skater manage to spin faster and faster simply by pulling her arms in? Why does she not have to exert a torque to spin faster? Questions like these have answers based in angular momentum, the rotational analog to linear momentum.
By now the pattern is clear—every rotational phenomenon has a direct translational analog. It seems quite reasonable, then, to define angular momentum $L$ as
$$L = I\omega.$$
This equation is an analog to the definition of linear momentum as $p = mv$. Units for linear momentum are $\text{kg}\cdot\text{m/s}$, while units for angular momentum are $\text{kg}\cdot\text{m}^2/\text{s}$. As we would expect, an object that has a large moment of inertia $I$, such as Earth, has a very large angular momentum. An object that has a large angular velocity $\omega$, such as a centrifuge, also has a rather large angular momentum.
When you push a merry-go-round, spin a bike wheel, or open a door, you exert a torque. If the torque you exert is greater than opposing torques, then the rotation accelerates, and angular momentum increases. The greater the net torque, the more rapid the increase in $L$. The relationship between torque and angular momentum is
$$\text{net } \tau = \frac{\Delta L}{\Delta t}.$$
This expression is exactly analogous to the relationship between force and linear momentum, $F = \Delta p / \Delta t$. The equation $\text{net } \tau = \Delta L / \Delta t$ is very fundamental and broadly applicable. It is, in fact, the rotational form of Newton’s second law.
### Conservation of Angular Momentum
We can now understand why Earth keeps on spinning. As we saw in the previous example, $\text{net } \tau = \Delta L / \Delta t$. This equation means that, to change angular momentum, a torque must act over some period of time. Because Earth has a large angular momentum, a large torque acting over a long time is needed to change its rate of spin. So what external torques are there? Tidal friction exerts torque that is slowing Earth’s rotation, but tens of millions of years must pass before the change is very significant. Recent research indicates the length of the day was 18 h some 900 million years ago. Only the tides exert significant retarding torques on Earth, and so it will continue to spin, although ever more slowly, for many billions of years.
What we have here is, in fact, another conservation law. If the net torque is zero, then angular momentum is constant or conserved. We can see this rigorously by considering $\text{net } \tau = \Delta L / \Delta t$ for the situation in which the net torque is zero. In that case,
$$\text{net } \tau = 0,$$
implying that
$$\frac{\Delta L}{\Delta t} = 0.$$
If the change in angular momentum $\Delta L$ is zero, then the angular momentum is constant; thus,
$$L = \text{constant} \quad (\text{net } \tau = 0)$$
or
$$L = L' \quad (\text{net } \tau = 0).$$
These expressions are the law of conservation of angular momentum. Conservation laws are as scarce as they are important.
An example of conservation of angular momentum is seen in , in which an ice skater is executing a spin. The net torque on her is very close to zero, because there is relatively little friction between her skates and the ice and because the friction is exerted very close to the pivot point. (Both $F$ and $r$ are small, and so $\tau$ is negligibly small.) Consequently, she can spin for quite some time. She can do something else, too. She can increase her rate of spin by pulling her arms and legs in. Why does pulling her arms and legs in increase her rate of spin? The answer is that her angular momentum is constant, so that
$$L = L'.$$
Expressing this equation in terms of the moment of inertia,
$$I\omega = I'\omega',$$
where the primed quantities refer to conditions after she has pulled in her arms and reduced her moment of inertia. Because $I'$ is smaller, the angular velocity $\omega'$ must increase to keep the angular momentum constant. The change can be dramatic, as the following example shows.
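Before that worked example, here is a minimal Python sketch of $I\omega = I'\omega'$. The moments of inertia and initial spin rate are assumed illustrative values, not the figures used in the worked example.

```python
# Illustrative sketch of conservation of angular momentum, I * omega = I' * omega'.
# All numerical values below are assumed for illustration.
I_arms_out = 2.3      # kg*m^2, assumed moment of inertia with arms extended
I_arms_in = 0.9       # kg*m^2, assumed moment of inertia with arms pulled in
omega_initial = 3.0   # rad/s, assumed initial angular velocity

# L is conserved because the net external torque on the skater is negligible.
omega_final = I_arms_out * omega_initial / I_arms_in
print(f"omega increases from {omega_initial:.1f} rad/s to {omega_final:.1f} rad/s")
```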
There are several other examples of objects that increase their rate of spin because something reduced their moment of inertia. Tornadoes are one example. Storm systems that create tornadoes are slowly rotating. When the radius of rotation narrows, even in a local region, angular velocity increases, sometimes to the furious level of a tornado. Earth is another example. Our planet was born from a huge cloud of gas and dust, the rotation of which came from turbulence in an even larger cloud. Gravitational forces caused the cloud to contract, and the rotation rate increased as a result. (See .)
In the case of human motion, one would not expect angular momentum to be conserved when a body interacts with the environment, as when its foot pushes off the ground. Astronauts floating in space aboard the International Space Station have no angular momentum relative to the inside of the ship if they are motionless. Their bodies will continue to have this zero value no matter how they twist about, as long as they do not give themselves a push off the side of the vessel.
### Test Prep for AP Courses
### Section Summary
1. Every rotational phenomenon has a direct translational analog; likewise, angular momentum can be defined as
$$L = I\omega.$$
2. This equation is an analog to the definition of linear momentum as $p = mv$. The relationship between torque and angular momentum is
$$\text{net } \tau = \frac{\Delta L}{\Delta t}.$$
3. Angular momentum, like energy and linear momentum, is conserved. This universally applicable law is another sign of underlying unity in physical laws. Angular momentum is conserved when net external torque is zero, just as linear momentum is conserved when the net external force is zero.
### Conceptual Questions
### Problems & Exercises
|
# Rotational Motion and Angular Momentum
## Collisions of Extended Bodies in Two Dimensions
### Learning Objectives
By the end of this section, you will be able to:
1. Observe collisions of extended bodies in two dimensions.
2. Examine collision at the point of percussion.
Bowling pins are sent flying and spinning when hit by a bowling ball—angular momentum as well as linear momentum and energy have been imparted to the pins. (See ). Many collisions involve angular momentum. Cars, for example, may spin and collide on ice or a wet surface. Baseball pitchers throw curves by putting spin on the baseball. A tennis player can put a lot of top spin on the tennis ball which causes it to dive down onto the court once it crosses the net. We now take a brief look at what happens when objects that can rotate collide.
Consider the relatively simple collision shown in , in which a disk strikes and adheres to an initially motionless stick nailed at one end to a frictionless surface. After the collision, the two rotate about the nail. There is an unbalanced external force on the system at the nail. This force exerts no torque because its lever arm is zero. Angular momentum is therefore conserved in the collision. Kinetic energy is not conserved, because the collision is inelastic. It is possible that momentum is not conserved either because the force at the nail may have a component in the direction of the disk’s initial velocity. Let us examine a case of rotation in a collision in .
The above example has other implications. For example, what would happen if the disk hit very close to the nail? Obviously, a force would be exerted on the nail in the forward direction. So, when the stick is struck at the end farthest from the nail, a backward force is exerted on the nail, and when it is hit at the end nearest the nail, a forward force is exerted on the nail. Thus, striking it at a certain point in between produces no force on the nail. This intermediate point is known as the percussion point.
An analogous situation occurs in tennis as seen in . If you hit a ball with the end of your racquet, the handle is pulled away from your hand. If you hit a ball much farther down, for example, on the shaft of the racquet, the handle is pushed into your palm. And if you hit the ball at the racquet’s percussion point (what some people call the “sweet spot”), then little or no force is exerted on your hand, and there is less vibration, reducing chances of a tennis elbow. The same effect occurs for a baseball bat.
### Test Prep for AP Courses
### Section Summary
1. Angular momentum $L$ is analogous to linear momentum and is given by $L = I\omega$.
2. Angular momentum is changed by torque, following the relationship
$$\text{net } \tau = \frac{\Delta L}{\Delta t}.$$
3. Angular momentum is conserved if the net torque is zero, that is,
$$L = \text{constant} \quad (\text{net } \tau = 0)$$
or
$$L = L' \quad (\text{net } \tau = 0).$$
This equation is known as the law of conservation of angular momentum; angular momentum may be conserved in collisions.
### Conceptual Questions
### Problems & Exercises
|
# Rotational Motion and Angular Momentum
## Gyroscopic Effects: Vector Aspects of Angular Momentum
### Learning Objectives
By the end of this section, you will be able to:
1. Describe the right-hand rule to find the direction of angular velocity, momentum, and torque.
2. Explain the gyroscopic effect.
3. Study how Earth acts like a gigantic gyroscope.
Angular momentum is a vector and, therefore, has direction as well as magnitude. Torque affects both the direction and the magnitude of angular momentum. What is the direction of the angular momentum of a rotating object like the disk in ? The figure shows the right-hand rule used to find the direction of both angular momentum and angular velocity. Both $\mathbf{L}$ and $\boldsymbol{\omega}$ are vectors—each has direction and magnitude. Both can be represented by arrows. The right-hand rule defines both to be perpendicular to the plane of rotation in the direction shown. Because angular momentum is related to angular velocity by $\mathbf{L} = I\boldsymbol{\omega}$, the direction of $\mathbf{L}$ is the same as the direction of $\boldsymbol{\omega}$. Notice in the figure that both point along the axis of rotation.
Now, recall that torque changes angular momentum as expressed by
$$\text{net } \boldsymbol{\tau} = \frac{\Delta \mathbf{L}}{\Delta t}.$$
This equation means that the direction of $\Delta \mathbf{L}$ is the same as the direction of the torque that creates it. This result is illustrated in , which shows the direction of torque and the angular momentum it creates.
Let us now consider a bicycle wheel with a couple of handles attached to it, as shown in . (This device is popular in demonstrations among physicists, because it does unexpected things.) With the wheel rotating as shown, its angular momentum $\mathbf{L}$ is to the woman's left. Suppose the person holding the wheel tries to rotate it as in the figure. Her natural expectation is that the wheel will rotate in the direction she pushes it—but what happens is quite different. The forces exerted create a torque that is horizontal toward the person, as shown in (a). This torque creates a change in angular momentum $\Delta \mathbf{L}$ in the same direction, perpendicular to the original angular momentum $\mathbf{L}$, thus changing the direction of $\mathbf{L}$ but not the magnitude of $\mathbf{L}$. shows how $\Delta \mathbf{L}$ and $\mathbf{L}$ add, giving a new angular momentum with direction that is inclined more toward the person than before. The axis of the wheel has thus moved perpendicular to the forces exerted on it, instead of in the expected direction.
This same logic explains the behavior of gyroscopes. shows the two forces acting on a spinning gyroscope. The torque produced is perpendicular to the angular momentum; thus the direction of the angular momentum is changed, but not its magnitude. The gyroscope precesses around a vertical axis, since the torque is always horizontal and perpendicular to $\mathbf{L}$. If the gyroscope is not spinning, it acquires angular momentum in the direction of the torque ($\mathbf{L} = \Delta \mathbf{L}$), and it rotates around a horizontal axis, falling over just as we would expect.
Earth itself acts like a gigantic gyroscope. Its angular momentum is along its axis and points at Polaris, the North Star. But Earth is slowly precessing (once in about 26,000 years) due to the torque of the Sun and the Moon on its nonspherical shape.
### Test Prep for AP Courses
### Section Summary
1. Torque is perpendicular to the plane formed by $r$ and $F$ and is in the direction your right thumb would point if you curled the fingers of your right hand in the direction of $F$. The direction of the torque is thus the same as that of the angular momentum it produces.
2. The gyroscope precesses around a vertical axis, since the torque is always horizontal and perpendicular to $\mathbf{L}$. If the gyroscope is not spinning, it acquires angular momentum in the direction of the torque ($\mathbf{L} = \Delta \mathbf{L}$), and it rotates about a horizontal axis, falling over just as we would expect.
3. Earth itself acts like a gigantic gyroscope. Its angular momentum is along its axis and points at Polaris, the North Star.
### Conceptual Questions
### Problem Exercises
|
# Fluid Statics
## Connection for AP® Courses
Much of what we value in life is fluid: a breath of fresh winter air; the water we drink, swim in, and bathe in; the blood in our veins. But what exactly is a fluid? Can we understand fluids with the laws already presented, or will new laws emerge from their study?
As you read this chapter, you will learn how the arrangement and interaction of the particles—atoms and molecules—that make up a fluid define many of its macroscopic characteristics, like density and pressure (Big Idea 1, Enduring Understanding 1.E, Essential Knowledge 1.E.1). While the number of particles in a fluid is often immense, you will be able to use a probabilistic approach in order to explain how a fluid affects its environment in a variety of ways (Big Idea 7, Enduring Understanding 7.A, Essential Knowledge 7.A.1). From the pressure a fluid places on the walls of a hydraulic system to a variety of biological and medical applications, understanding a fluid's properties begins with understanding its internal structure.
Big Idea 1 Objects and systems have properties such as mass and charge. Systems may have internal structure.
Enduring Understanding 1.E Materials have many macroscopic properties that result from the arrangement and interactions of the atoms and molecules that make up the material.
Essential Knowledge 1.E.1 Matter has a property called density.
Big Idea 7 The mathematics of probability can be used to describe the behavior of complex systems and to interpret the behavior of quantum mechanical systems.
Enduring Understanding 7.A The properties of an ideal gas can be explained in terms of a small number of macroscopic variables including temperature and pressure.
Essential Knowledge 7.A.1 The pressure of a system determines the force that the system exerts on the walls of its container and is a measure of the average change in the momentum or impulse of the molecules colliding with the walls of the container. The pressure also exists inside the system itself, not just at the walls of the container. |
# Fluid Statics
## What Is a Fluid?
### Learning Objectives
By the end of this section, you will be able to:
1. State the common phases of matter.
2. Explain the physical characteristics of solids, liquids, and gases.
3. Describe the arrangement of atoms in solids, liquids, and gases.
Matter most commonly exists as a solid, liquid, gas, or plasma; these states are known as the common phases of matter. Solids have a definite shape and a specific volume, liquids have a definite volume but their shape changes depending on the container in which they are held, gases have neither a definite shape nor a specific volume as their molecules move to fill the container in which they are held, and plasmas also have neither definite shape nor volume. (See .) Liquids, gases, and plasmas are considered to be fluids because they yield to shearing forces, whereas solids resist them. Note that the extent to which fluids yield to shearing forces (and hence flow easily and quickly) depends on a quantity called the viscosity which is discussed in detail in Viscosity and Laminar Flow; Poiseuille’s Law. We can understand the phases of matter and what constitutes a fluid by considering the forces between atoms that make up matter in the three phases.
Atoms in solids are in close contact, with forces between them that allow the atoms to vibrate but not to change positions with neighboring atoms. (These forces can be thought of as springs that can be stretched or compressed, but not easily broken.) Thus a solid resists all types of stress. A solid cannot be easily deformed because the atoms that make up the solid are not able to move about freely. Solids also resist compression, because their atoms form part of a lattice structure in which the atoms are a relatively fixed distance apart. Under compression, the atoms would be forced into one another. Most of the examples we have studied so far have involved solid objects which deform very little when stressed.
In contrast, liquids deform easily when stressed and do not spring back to their original shape once the force is removed because the atoms are free to slide about and change neighbors—that is, they flow (so they are a type of fluid), with the molecules held together by their mutual attraction. When a liquid is placed in a container with no lid on, it remains in the container (providing the container has no holes below the surface of the liquid!). Because the atoms are closely packed, liquids, like solids, resist compression.
Atoms in gases and charged particles in plasmas are separated by distances that are large compared with the size of the particles. The forces between the particles are therefore very weak, except when they collide with one another. Gases and plasmas thus not only flow (and are therefore considered to be fluids) but they are relatively easy to compress because there is much space and little force between the particles. When placed in an open container gases, unlike liquids, will escape. The major distinction is that gases are easily compressed, whereas liquids are not. Plasmas are difficult to contain because they have so much energy. When discussing how substances flow, we shall generally refer to both gases and liquids simply as fluids, and make a distinction between them only when they behave differently.
### Section Summary
1. A fluid is a state of matter that yields to sideways or shearing forces. Liquids and gases are both fluids. Fluid statics is the physics of stationary fluids.
### Conceptual Questions
|
# Fluid Statics
## Density
### Learning Objectives
By the end of this section, you will be able to:
1. Define density.
2. Calculate the mass of a reservoir from its density.
3. Compare and contrast the densities of various substances.
Which weighs more, a ton of feathers or a ton of bricks? This old riddle plays with the distinction between mass and density. A ton is a ton, of course; but bricks have much greater density than feathers, and so we are tempted to think of them as heavier. (See .)
Density, as you will see, is an important characteristic of substances. It is crucial, for example, in determining whether an object sinks or floats in a fluid. Density is the mass per unit volume of a substance or object. In equation form, density is defined as
$$\rho = \frac{m}{V},$$
where the Greek letter $\rho$ (rho) is the symbol for density, $m$ is the mass, and $V$ is the volume occupied by the substance.
In the riddle regarding the feathers and bricks, the masses are the same, but the volume occupied by the feathers is much greater, since their density is much lower. The SI unit of density is $\text{kg/m}^3$; representative values are given in . The metric system was originally devised so that water would have a density of $1\ \text{g/cm}^3$, equivalent to $10^3\ \text{kg/m}^3$. Thus the basic mass unit, the kilogram, was first devised to be the mass of 1000 mL of water, which has a volume of 1000 cm³.
As you can see by examining , the density of an object may help identify its composition. The density of gold, for example, is about 2.5 times the density of iron, which is about 2.5 times the density of aluminum. Density also reveals something about the phase of the matter and its substructure. Notice that the densities of liquids and solids are roughly comparable, consistent with the fact that their atoms are in close contact. The densities of gases are much less than those of liquids and solids, because the atoms in gases are separated by large amounts of empty space.
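One of the learning objectives above is to calculate the mass of a reservoir from its density. The Python sketch below does this with $m = \rho V$; the reservoir's surface area and average depth are assumed illustrative values.

```python
# Illustrative sketch: m = rho * V, used to estimate the mass of water in a reservoir.
# The reservoir dimensions below are assumed values for illustration.
rho_water = 1.0e3     # kg/m^3, density of fresh water
area_km2 = 50.0       # km^2, assumed surface area of the reservoir
avg_depth_m = 40.0    # m, assumed average depth

volume_m3 = area_km2 * 1.0e6 * avg_depth_m   # convert km^2 to m^2, then multiply by depth
mass_kg = rho_water * volume_m3

print(f"V = {volume_m3:.2e} m^3, m = {mass_kg:.2e} kg ({mass_kg / 1000:.2e} metric tons)")
```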
### Test Prep for AP Courses
### Section Summary
1. Density is the mass per unit volume of a substance or object. In equation form, density is defined as
$$\rho = \frac{m}{V}.$$
2. The SI unit of density is $\text{kg/m}^3$.
### Conceptual Questions
### Problems & Exercises
|
# Fluid Statics
## Pressure
### Learning Objectives
By the end of this section, you will be able to:
1. Define pressure.
2. Explain the relationship between pressure and force.
3. Calculate force given pressure and area.
You have no doubt heard the word pressure being used in relation to blood (high or low blood pressure) and in relation to the weather (high- and low-pressure weather systems). These are only two of many examples of pressures in fluids. Pressure $P$ is defined as
$$P = \frac{F}{A},$$
where $F$ is a force applied to an area $A$ that is perpendicular to the force.
A given force can have a significantly different effect depending on the area over which the force is exerted, as shown in . The SI unit for pressure is the pascal, where
$$1\ \text{Pa} = 1\ \text{N/m}^2.$$
In addition to the pascal, there are many other units for pressure that are in common use. In meteorology, atmospheric pressure is often described in units of millibar (mb), where
$$100\ \text{mb} = 1 \times 10^{4}\ \text{Pa}.$$
Pounds per square inch is still sometimes used as a measure of tire pressure, and millimeters of mercury (mm Hg) is still often used in the measurement of blood pressure. Pressure is defined for all states of matter but is particularly important when discussing fluids.
The force exerted on the end of the tank is perpendicular to its inside surface. This direction is because the force is exerted by a static or stationary fluid. We have already seen that fluids cannot withstand shearing (sideways) forces; they cannot exert shearing forces, either. Fluid pressure has no direction, being a scalar quantity. The forces due to pressure have well-defined directions: they are always exerted perpendicular to any surface. (See the tire in , for example.) Finally, note that pressure is exerted on all surfaces. Swimmers, as well as the tire, feel pressure on all sides. (See .)
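To connect the definition $P = F/A$ with the third learning objective (calculate force given pressure and area), the Python sketch below estimates the force atmospheric pressure exerts on a window pane. The pane dimensions are assumed illustrative values.

```python
# Illustrative sketch of F = P * A: the force exerted by atmospheric pressure on
# an assumed window pane. The pane is not crushed because air on the other side
# pushes back with a nearly equal and opposite force.
P_atm = 1.01e5            # Pa (N/m^2), standard atmospheric pressure
width, height = 1.0, 1.5  # m, assumed window dimensions

area = width * height
force = P_atm * area
print(f"A = {area:.2f} m^2, F = {force:.2e} N on each side of the pane")
```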
### Test Prep for AP Courses
### Section Summary
1. Pressure is the force per unit perpendicular area over which the force is applied. In equation form, pressure is defined as
$$P = \frac{F}{A}.$$
2. The SI unit of pressure is the pascal, and $1\ \text{Pa} = 1\ \text{N/m}^2$.
### Conceptual Questions
### Problems & Exercises
|
# Fluid Statics
## Variation of Pressure with Depth in a Fluid
### Learning Objectives
By the end of this section, you will be able to:
1. Define pressure in terms of weight.
2. Explain the variation of pressure with depth in a fluid.
3. Calculate density given pressure and altitude.
If your ears have ever popped on a plane flight or ached during a deep dive in a swimming pool, you have experienced the effect of depth on pressure in a fluid. At the Earth’s surface, the air pressure exerted on you is a result of the weight of air above you. This pressure is reduced as you climb up in altitude and the weight of air above you decreases. Under water, the pressure exerted on you increases with increasing depth. In this case, the pressure being exerted upon you is a result of both the weight of water above you and that of the atmosphere above you. You may notice an air pressure change on an elevator ride that transports you many stories, but you need only dive a meter or so below the surface of a pool to feel a pressure increase. The difference is that water is much denser than air, about 775 times as dense.
Consider the container in . Its bottom supports the weight of the fluid in it. Let us calculate the pressure exerted on the bottom by the weight of the fluid. That pressure is the weight of the fluid $mg$ divided by the area $A$ supporting it (the area of the bottom of the container):
$$P = \frac{mg}{A}.$$
We can find the mass of the fluid from its volume and density:
$$m = \rho V.$$
The volume of the fluid $V$ is related to the dimensions of the container. It is
$$V = Ah,$$
where $A$ is the cross-sectional area and $h$ is the depth. Combining the last two equations gives
$$m = \rho A h.$$
If we enter this into the expression for pressure, we obtain
$$P = \frac{(\rho A h)g}{A}.$$
The area cancels, and rearranging the variables yields
$$P = h\rho g.$$
This value is the pressure due to the weight of a fluid. The equation has general validity beyond the special conditions under which it is derived here. Even if the container were not there, the surrounding fluid would still exert this pressure, keeping the fluid static. Thus the equation $P = h\rho g$ represents the pressure due to the weight of any fluid of average density $\rho$ at any depth $h$ below its surface. For liquids, which are nearly incompressible, this equation holds to great depths. For gases, which are quite compressible, one can apply this equation as long as the density changes are small over the depth considered. illustrates this situation.
Atmospheric pressure is another example of pressure due to the weight of a fluid, in this case due to the weight of air above a given height. The atmospheric pressure at the Earth’s surface varies a little due to the large-scale flow of the atmosphere induced by the Earth’s rotation (this creates weather “highs” and “lows”). However, the average pressure at sea level is given by the standard atmospheric pressure $P_{\text{atm}}$, measured to be
$$1\ \text{atm} = P_{\text{atm}} = 1.01 \times 10^{5}\ \text{N/m}^2 = 101\ \text{kPa}.$$
This relationship means that, on average, at sea level, a column of air above $1.00\ \text{m}^2$ of the Earth’s surface has a weight of $1.01 \times 10^{5}\ \text{N}$, equivalent to 1 atm. (See .)
What do you suppose is the total pressure at a depth of 10.3 m in a swimming pool? Does the atmospheric pressure on the water’s surface affect the pressure below? The answer is yes. This seems only logical, since both the water’s weight and the atmosphere’s weight must be supported. So the total pressure at a depth of 10.3 m is 2 atm—half from the water above and half from the air above. We shall see in Pascal’s Principle that fluid pressures always add in this way.
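The Python sketch below reproduces this estimate with $P = h\rho g$ plus atmospheric pressure, using the 10.3-m depth discussed above and the standard densities and constants.

```python
# Illustrative sketch: total (absolute) pressure at depth = h*rho*g + P_atm,
# evaluated for the 10.3-m pool depth discussed above.
h = 10.3        # m, depth below the surface
rho = 1.0e3     # kg/m^3, density of fresh water
g = 9.80        # m/s^2
P_atm = 1.01e5  # Pa

P_water = h * rho * g        # pressure due to the weight of the water alone
P_total = P_water + P_atm    # total pressure at that depth

print(f"water contributes {P_water / 1e5:.2f} x 10^5 Pa (about 1 atm)")
print(f"total pressure    {P_total / 1e5:.2f} x 10^5 Pa (about 2 atm)")
```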
### Section Summary
1. Pressure is the weight of the fluid $mg$ divided by the area $A$ supporting it (the area of the bottom of the container):
$$P = \frac{mg}{A}.$$
2. Pressure due to the weight of a liquid is given by
$$P = h\rho g,$$
where $P$ is the pressure, $h$ is the height of the liquid, $\rho$ is the density of the liquid, and $g$ is the acceleration due to gravity.
### Conceptual Questions
### Problems & Exercises
|
# Fluid Statics
## Pascal’s Principle
### Learning Objectives
By the end of this section, you will be able to:
1. Define pressure.
2. State Pascal’s principle.
3. Understand applications of Pascal’s principle.
4. Derive relationships between forces in a hydraulic system.
Pressure is defined as force per unit area. Can pressure be increased in a fluid by pushing directly on the fluid? Yes, but it is much easier if the fluid is enclosed. The heart, for example, increases blood pressure by pushing directly on the blood in an enclosed system (valves closed in a chamber). If you try to push on a fluid in an open system, such as a river, the fluid flows away. An enclosed fluid cannot flow away, and so pressure is more easily increased by an applied force.
What happens to a pressure in an enclosed fluid? Since atoms in a fluid are free to move about, they transmit the pressure to all parts of the fluid and to the walls of the container. Remarkably, the pressure is transmitted undiminished. This phenomenon is called Pascal’s principle, because it was first clearly stated by the French philosopher and scientist Blaise Pascal (1623–1662): A change in pressure applied to an enclosed fluid is transmitted undiminished to all portions of the fluid and to the walls of its container.
Pascal’s principle, an experimentally verified fact, is what makes pressure so important in fluids. Since a change in pressure is transmitted undiminished in an enclosed fluid, we often know more about pressure than other physical quantities in fluids. Moreover, Pascal’s principle implies that the total pressure in a fluid is the sum of the pressures from different sources. We shall find this fact—that pressures add—very useful.
Blaise Pascal had an interesting life in that he was home-schooled by his father who removed all of the mathematics textbooks from his house and forbade him to study mathematics until the age of 15. This, of course, raised the boy’s curiosity, and by the age of 12, he started to teach himself geometry. Despite this early deprivation, Pascal went on to make major contributions in the mathematical fields of probability theory, number theory, and geometry. He is also well known for being the inventor of the first mechanical digital calculator, in addition to his contributions in the field of fluid statics.
### Application of Pascal’s Principle
One of the most important technological applications of Pascal’s principle is found in a hydraulic system, which is an enclosed fluid system used to exert forces. The most common hydraulic systems are those that operate car brakes. Let us first consider the simple hydraulic system shown in .
### Relationship Between Forces in a Hydraulic System
We can derive a relationship between the forces in the simple hydraulic system shown in by applying Pascal’s principle. Note first that the two pistons in the system are at the same height, and so there will be no difference in pressure due to a difference in depth. Now the pressure due to $F_1$ acting on area $A_1$ is simply $P_1 = F_1/A_1$, as defined by $P = F/A$. According to Pascal’s principle, this pressure is transmitted undiminished throughout the fluid and to all walls of the container. Thus, a pressure $P_2$ is felt at the other piston that is equal to $P_1$. That is, $P_1 = P_2$.
But since $P_2 = F_2/A_2$, we see that
$$\frac{F_1}{A_1} = \frac{F_2}{A_2}.$$
This equation relates the ratios of force to area in any hydraulic system, providing the pistons are at the same vertical height and that friction in the system is negligible. Hydraulic systems can increase or decrease the force applied to them. To make the force larger, the pressure is applied to a larger area. For example, if a 100-N force is applied to the left cylinder in and the right one has an area five times greater, then the force out is 500 N. Hydraulic systems are analogous to simple levers, but they have the advantage that pressure can be sent through tortuously curved lines to several places at once.
A simple hydraulic system, such as a simple machine, can increase force but cannot do more work than done on it. Work is force times distance moved, and the wheel cylinder moves through a smaller distance than the pedal cylinder. Furthermore, the more wheels added, the smaller the distance each moves. Many hydraulic systems—such as power brakes and those in bulldozers—have a motorized pump that actually does most of the work in the system. The movement of the legs of a spider is achieved partly by hydraulics. Using hydraulics, a jumping spider can create a force that makes it capable of jumping 25 times its length!
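The Python sketch below illustrates both points: the force multiplication $F_2/A_2 = F_1/A_1$ and the fact that the output piston travels a proportionally smaller distance, so no extra work is done. The cylinder diameters, input force, and pedal travel are assumed illustrative values.

```python
import math

# Illustrative sketch of a simple hydraulic system. Pascal's principle gives
# F1/A1 = F2/A2, and the (incompressible) fluid gives A1*x1 = A2*x2, so the
# work in equals the work out. All numerical values are assumed.
d_pedal, d_wheel = 0.020, 0.050   # m, assumed pedal- and wheel-cylinder diameters
A1 = math.pi * (d_pedal / 2) ** 2
A2 = math.pi * (d_wheel / 2) ** 2

F1 = 100.0   # N, assumed force applied to the pedal cylinder
x1 = 0.040   # m, assumed pedal-cylinder travel

F2 = F1 * A2 / A1   # larger area -> larger force
x2 = x1 * A1 / A2   # larger area -> smaller travel

print(f"output force   F2 = {F2:.0f} N")
print(f"output travel  x2 = {x2 * 1000:.1f} mm")
print(f"work in = {F1 * x1:.2f} J, work out = {F2 * x2:.2f} J")
```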
### Section Summary
1. Pressure is force per unit area.
2. A change in pressure applied to an enclosed fluid is transmitted undiminished to all portions of the fluid and to the walls of its container.
3. A hydraulic system is an enclosed fluid system used to exert forces.
### Conceptual Questions
### Problems & Exercises
|
# Fluid Statics
## Gauge Pressure, Absolute Pressure, and Pressure Measurement
### Learning Objectives
By the end of this section, you will be able to:
1. Define gauge pressure and absolute pressure.
2. Understand the working of aneroid and open-tube barometers.
If you limp into a gas station with a nearly flat tire, you will notice the tire gauge on the airline reads nearly zero when you begin to fill it. In fact, if there were a gaping hole in your tire, the gauge would read zero, even though atmospheric pressure exists in the tire. Why does the gauge read zero? There is no mystery here. Tire gauges are simply designed to read zero at atmospheric pressure and positive when pressure is greater than atmospheric.
Similarly, atmospheric pressure adds to blood pressure in every part of the circulatory system. (As noted in Pascal’s Principle, the total pressure in a fluid is the sum of the pressures from different sources—here, the heart and the atmosphere.) But atmospheric pressure has no net effect on blood flow since it adds to the pressure coming out of the heart and going back into it, too. What is important is how much greater blood pressure is than atmospheric pressure. Blood pressure measurements, like tire pressures, are thus made relative to atmospheric pressure.
In brief, it is very common for pressure gauges to ignore atmospheric pressure—that is, to read zero at atmospheric pressure. We therefore define gauge pressure to be the pressure relative to atmospheric pressure. Gauge pressure is positive for pressures above atmospheric pressure, and negative for pressures below it.
In fact, atmospheric pressure does add to the pressure in any fluid not enclosed in a rigid container. This happens because of Pascal’s principle. The total pressure, or absolute pressure, is thus the sum of gauge pressure and atmospheric pressure: $P_{\text{abs}} = P_{\text{g}} + P_{\text{atm}}$, where $P_{\text{abs}}$ is absolute pressure, $P_{\text{g}}$ is gauge pressure, and $P_{\text{atm}}$ is atmospheric pressure. For example, if your tire gauge reads 34 psi (pounds per square inch), then the absolute pressure is 34 psi plus 14.7 psi ($P_{\text{atm}}$ in psi), or 48.7 psi (equivalent to 336 kPa).
For reasons we will explore later, in most cases the absolute pressure in fluids cannot be negative. Fluids push rather than pull, so the smallest absolute pressure is zero. (A negative absolute pressure is a pull.) Thus the smallest possible gauge pressure is $P_{\text{g}} = -P_{\text{atm}}$ (this makes $P_{\text{abs}}$ zero). There is no theoretical limit to how large a gauge pressure can be.
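The tire-gauge arithmetic above can be checked with a few lines of Python; the conversion factor from psi to kPa is approximate.

```python
# Illustrative sketch of absolute pressure P_abs = P_g + P_atm, reproducing the
# tire-gauge example above (34 psi gauge reading, 14.7 psi atmosphere).
PSI_TO_KPA = 6.895   # approximate conversion factor

P_gauge_psi = 34.0
P_atm_psi = 14.7

P_abs_psi = P_gauge_psi + P_atm_psi
print(f"absolute pressure = {P_abs_psi:.1f} psi = {P_abs_psi * PSI_TO_KPA:.0f} kPa")
```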
There are a host of devices for measuring pressure, ranging from tire gauges to blood pressure cuffs. Pascal’s principle is of major importance in these devices. The undiminished transmission of pressure through a fluid allows precise remote sensing of pressures. Remote sensing is often more convenient than putting a measuring device into a system, such as a person’s artery.
shows one of the many types of mechanical pressure gauges in use today. In all mechanical pressure gauges, pressure results in a force that is converted (or transduced) into some type of readout.
An entire class of gauges uses the property that pressure due to the weight of a fluid is given by $P = h\rho g$. Consider the U-shaped tube shown in , for example. This simple tube is called a manometer. In (a), both sides of the tube are open to the atmosphere. Atmospheric pressure therefore pushes down on each side equally, so its effect cancels. If the fluid is deeper on one side, there is a greater pressure on the deeper side, and the fluid flows away from that side until the depths are equal.
Let us examine how a manometer is used to measure pressure. Suppose one side of the U-tube is connected to some source of pressure $P_{\text{abs}}$, such as the toy balloon in (b) or the vacuum-packed peanut jar shown in (c). Pressure is transmitted undiminished to the manometer, and the fluid levels are no longer equal. In (b), $P_{\text{abs}}$ is greater than atmospheric pressure, whereas in (c), $P_{\text{abs}}$ is less than atmospheric pressure. In both cases, $P_{\text{abs}}$ differs from atmospheric pressure by an amount $h\rho g$, where $\rho$ is the density of the fluid in the manometer. In (b), $P_{\text{abs}}$ can support a column of fluid of height $h$, and so it must exert a pressure $h\rho g$ greater than atmospheric pressure (the gauge pressure is positive). In (c), atmospheric pressure can support a column of fluid of height $h$, and so $P_{\text{abs}}$ is less than atmospheric pressure by an amount $h\rho g$ (the gauge pressure is negative). A manometer with one side open to the atmosphere is an ideal device for measuring gauge pressures. The gauge pressure is $P_{\text{g}} = h\rho g$ and is found by measuring $h$.
Mercury manometers are often used to measure arterial blood pressure. An inflatable cuff is placed on the upper arm as shown in . By squeezing the bulb, the person making the measurement exerts pressure, which is transmitted undiminished to both the main artery in the arm and the manometer. When this applied pressure exceeds blood pressure, blood flow below the cuff is cut off. The person making the measurement then slowly lowers the applied pressure and listens for blood flow to resume. Blood pressure pulsates because of the pumping action of the heart, reaching a maximum, called systolic pressure, and a minimum, called diastolic pressure, with each heartbeat. Systolic pressure is measured by noting the value of $h$ when blood flow first begins as cuff pressure is lowered. Diastolic pressure is measured by noting $h$ when blood flows without interruption. The typical blood pressure of a young adult raises the mercury to a height of 120 mm at systolic and 80 mm at diastolic. This is commonly quoted as 120 over 80, or 120/80. The first pressure is representative of the maximum output of the heart; the second is due to the elasticity of the arteries in maintaining the pressure between beats. The density of the mercury fluid in the manometer is 13.6 times greater than that of water, so the height of the fluid will be 1/13.6 of that in a water manometer. This reduced height can make measurements difficult, so mercury manometers are used to measure larger pressures, such as blood pressure. The density of mercury is such that $1.0\ \text{mm Hg} = 133\ \text{Pa}$.
A barometer is a device that measures atmospheric pressure. A mercury barometer is shown in . This device measures atmospheric pressure, rather than gauge pressure, because there is a nearly pure vacuum above the mercury in the tube. The height of the mercury is such that $h\rho g = P_{\text{atm}}$. When atmospheric pressure varies, the mercury rises or falls, giving important clues to weather forecasters. The barometer can also be used as an altimeter, since average atmospheric pressure varies with altitude. Mercury barometers and manometers are so common that units of mm Hg are often quoted for atmospheric pressure and blood pressures. gives conversion factors for some of the more commonly used units of pressure.
### Section Summary
1. Gauge pressure is the pressure relative to atmospheric pressure.
2. Absolute pressure is the sum of gauge pressure and atmospheric pressure.
3. Aneroid gauge measures pressure using a bellows-and-spring arrangement connected to the pointer of a calibrated scale.
4. Open-tube manometers have U-shaped tubes and one end is always open. It is used to measure pressure.
5. A mercury barometer is a device that measures atmospheric pressure.
### Conceptual Questions
### Problems & Exercises
|
# Fluid Statics
## Archimedes’ Principle
### Learning Objectives
By the end of this section, you will be able to:
1. Define buoyant force.
2. State Archimedes’ principle.
3. Understand why objects float or sink.
4. Understand the relationship between density and Archimedes’ principle.
When you rise from lounging in a warm bath, your arms feel strangely heavy. This is because you no longer have the buoyant support of the water. Where does this buoyant force come from? Why is it that some things float and others do not? Do objects that sink get any support at all from the fluid? Is your body buoyed by the atmosphere, or are only helium balloons affected? (See .)
Answers to all these questions, and many others, are based on the fact that pressure increases with depth in a fluid. This means that the upward force on the bottom of an object in a fluid is greater than the downward force on the top of the object. There is a net upward, or buoyant force on any object in any fluid. (See .) If the buoyant force is greater than the object’s weight, the object will rise to the surface and float. If the buoyant force is less than the object’s weight, the object will sink. If the buoyant force equals the object’s weight, the object will remain suspended at that depth. The buoyant force is always present whether the object floats, sinks, or is suspended in a fluid.
Just how great is this buoyant force? To answer this question, think about what happens when a submerged object is removed from a fluid, as in .
The space it occupied is filled by fluid having a weight $w_{\text{fl}}$. This weight is supported by the surrounding fluid, and so the buoyant force must equal $w_{\text{fl}}$, the weight of the fluid displaced by the object. It is a tribute to the genius of the Greek mathematician and inventor Archimedes (ca. 287–212 B.C.) that he stated this principle long before concepts of force were well established. Stated in words, Archimedes’ principle is as follows: The buoyant force on an object equals the weight of the fluid it displaces. In equation form, Archimedes’ principle is
$$F_{\text{B}} = w_{\text{fl}},$$
where $F_{\text{B}}$ is the buoyant force and $w_{\text{fl}}$ is the weight of the fluid displaced by the object. Archimedes’ principle is valid in general, for any object in any fluid, whether partially or totally submerged.
Hmm … High-tech body swimsuits were introduced in 2008 in preparation for the Beijing Olympics. One concern (and international rule) was that these suits should not provide any buoyancy advantage. How do you think that this rule could be verified?
### Floating and Sinking
Drop a lump of clay in water. It will sink. Then mold the lump of clay into the shape of a boat, and it will float. Because of its shape, the boat displaces more water than the lump and experiences a greater buoyant force. The same is true of steel ships.
### Density and Archimedes’ Principle
Density plays a crucial role in Archimedes’ principle. The average density of an object is what ultimately determines whether it floats. If its average density is less than that of the surrounding fluid, it will float. This is because the fluid, having a higher density, contains more mass and hence more weight in the same volume. The buoyant force, which equals the weight of the fluid displaced, is thus greater than the weight of the object. Likewise, an object denser than the fluid will sink.
The extent to which a floating object is submerged depends on how the object’s density is related to that of the fluid. In , for example, the unloaded ship has a lower density and less of it is submerged compared with the same ship loaded. We can derive a quantitative expression for the fraction submerged by considering density. The fraction submerged is the ratio of the volume submerged to the volume of the object, or
The volume submerged equals the volume of fluid displaced, which we call . Now we can obtain the relationship between the densities by substituting into the expression. This gives
where is the average density of the object and is the density of the fluid. Since the object floats, its mass and that of the displaced fluid are equal, and so they cancel from the equation, leaving
We use this last relationship to measure densities. This is done by measuring the fraction of a floating object that is submerged—for example, with a hydrometer. It is useful to define the ratio of the density of an object to a fluid (usually water) as specific gravity:
where is the average density of the object or substance and is the density of water at 4.00°C. Specific gravity is dimensionless, independent of whatever units are used for . If an object floats, its specific gravity is less than one. If it sinks, its specific gravity is greater than one. Moreover, the fraction of a floating object that is submerged equals its specific gravity. If an object’s specific gravity is exactly 1, then it will remain suspended in the fluid, neither sinking nor floating. Scuba divers try to obtain this state so that they can hover in the water. We measure the specific gravity of fluids, such as battery acid, radiator fluid, and urine, as an indicator of their condition. One device for measuring specific gravity is shown in .
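Written out with conventional symbols (a reconstruction using $\bar{\rho}_\text{obj}$ for the average density of the object, $\rho_\text{fl}$ for the density of the fluid, and $\rho_\text{w}$ for the density of water), the two relationships developed above are
$$\text{fraction submerged} = \frac{V_\text{sub}}{V_\text{obj}} = \frac{V_\text{fl}}{V_\text{obj}} = \frac{\bar{\rho}_\text{obj}}{\rho_\text{fl}}, \qquad \text{specific gravity} = \frac{\bar{\rho}}{\rho_\text{w}}.$$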
There are many obvious examples of lower-density objects or substances floating in higher-density fluids—oil on water, a hot-air balloon, a bit of cork in wine, an iceberg, and hot wax in a “lava lamp,” to name a few. Less obvious examples include lava rising in a volcano and mountain ranges floating on the higher-density crust and mantle beneath them. Even seemingly solid Earth has fluid characteristics.
### More Density Measurements
One of the most common techniques for determining density is shown in .
An object, here a coin, is weighed in air and then weighed again while submerged in a liquid. The density of the coin, an indication of its authenticity, can be calculated if the fluid density is known. This same technique can also be used to determine the density of the fluid if the density of the coin is known. All of these calculations are based on Archimedes’ principle.
Archimedes’ principle states that the buoyant force on the object equals the weight of the fluid displaced. This, in turn, means that the object appears to weigh less when submerged; we call this measurement the object’s apparent weight. The object suffers an apparent weight loss equal to the weight of the fluid displaced. Alternatively, on balances that measure mass, the object suffers an apparent mass loss equal to the mass of fluid displaced. That is
or
The next example illustrates the use of this technique.
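For reference, the apparent weight loss and apparent mass loss described above can be written, using the subscript “fl” for the displaced fluid, as
$$w_\text{apparent} = w_\text{obj} - w_\text{fl}, \qquad m_\text{apparent} = m_\text{obj} - m_\text{fl}.$$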
This brings us back to Archimedes’ principle and how it came into being. As the story goes, the king of Syracuse gave Archimedes the task of determining whether the royal crown maker was supplying a crown of pure gold. The purity of gold is difficult to determine by color (it can be diluted with other metals and still look as yellow as pure gold), and other analytical techniques had not yet been conceived. Even ancient peoples, however, realized that the density of gold was greater than that of any other then-known substance. Archimedes purportedly agonized over his task and had his inspiration one day while at the public baths, pondering the support the water gave his body. He came up with his now-famous principle, saw how to apply it to determine density, and ran naked down the streets of Syracuse crying “Eureka!” (Greek for “I have found it”). Similar behavior can be observed in contemporary physicists from time to time!
### Section Summary
1. Buoyant force is the net upward force on any object in any fluid. If the buoyant force is greater than the object’s weight, the object will rise to the surface and float. If the buoyant force is less than the object’s weight, the object will sink. If the buoyant force equals the object’s weight, the object will remain suspended at that depth. The buoyant force is always present whether the object floats, sinks, or is suspended in a fluid.
2. Archimedes’ principle states that the buoyant force on an object equals the weight of the fluid it displaces.
3. Specific gravity is the ratio of the density of an object to the density of a fluid (usually water).
### Conceptual Questions
### Problem Exercises
# Fluid Statics
## Cohesion and Adhesion in Liquids: Surface Tension and Capillary Action
### Learning Objectives
By the end of this section, you will be able to:
1. Understand cohesive and adhesive forces.
2. Define surface tension.
3. Understand capillary action.
### Cohesion and Adhesion in Liquids
Children blow soap bubbles and play in the spray of a sprinkler on a hot summer day. (See .) An underwater spider keeps his air supply in a shiny bubble he carries wrapped around him. A technician draws blood into a small-diameter tube just by touching it to a drop on a pricked finger. A premature infant struggles to inflate her lungs. What is the common thread? All these activities are dominated by the attractive forces between atoms and molecules in liquids—both within a liquid and between the liquid and its surroundings.
Attractive forces between molecules of the same type are called cohesive forces. Liquids can, for example, be held in open containers because cohesive forces hold the molecules together. Attractive forces between molecules of different types are called adhesive forces. Such forces cause liquid drops to cling to window panes, for example. In this section we examine effects directly attributable to cohesive and adhesive forces in liquids.
### Surface Tension
Cohesive forces between molecules cause the surface of a liquid to contract to the smallest possible surface area. This general effect is called surface tension. Molecules on the surface are pulled inward by cohesive forces, reducing the surface area. Molecules inside the liquid experience zero net force, since they have neighbors on all sides.
The model of a liquid surface acting like a stretched elastic sheet can effectively explain surface tension effects. For example, some insects can walk on water (as opposed to floating in it) as we would walk on a trampoline—they dent the surface as shown in (a). (b) shows another example, where a needle rests on a water surface. The iron needle cannot, and does not, float, because its density is greater than that of water. Rather, its weight is supported by forces in the stretched surface that try to make the surface smaller or flatter. If the needle were placed point down on the surface, its weight acting on a smaller area would break the surface, and it would sink.
Surface tension is proportional to the strength of the cohesive force, which varies with the type of liquid. Surface tension is defined to be the force F per unit length exerted by a stretched liquid membrane:
lists values of for some liquids. For the insect of (a), its weight is supported by the upward components of the surface tension force: , where is the circumference of the insect’s foot in contact with the water. shows one way to measure surface tension. The liquid film exerts a force on the movable wire in an attempt to reduce its surface area. The magnitude of this force depends on the surface tension of the liquid and can be measured accurately.
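In symbols, writing $\gamma$ for the surface tension, $F$ for the force, and $L$ for the length along which it acts, the definition above is
$$\gamma = \frac{F}{L}.$$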
Surface tension is the reason why liquids form bubbles and droplets. The inward surface tension force causes bubbles to be approximately spherical and raises the pressure of the gas trapped inside relative to atmospheric pressure outside. It can be shown that the gauge pressure inside a spherical bubble is given by
where is the radius of the bubble. Thus the pressure inside a bubble is greatest when the bubble is the smallest. Another bit of evidence for this is illustrated in . When air is allowed to flow between two balloons of unequal size, the smaller balloon tends to collapse, filling the larger balloon.
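With $\gamma$ the surface tension and $r$ the bubble radius, the relation referred to above, for a thin-walled bubble with two surfaces (such as a soap bubble), is
$$P = \frac{4\gamma}{r},$$
while for a single liquid surface, such as a droplet or an alveolus, the corresponding gauge pressure is $2\gamma/r$. Either way, the pressure grows as the radius shrinks, which is the behavior described above.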
Our lungs contain hundreds of millions of mucus-lined sacs called alveoli, which are very similar in size, and about 0.1 mm in diameter. (See .) You can exhale without muscle action by allowing surface tension to contract these sacs. Medical patients whose breathing is aided by a positive pressure respirator have air blown into the lungs, but are generally allowed to exhale on their own. Even if there is paralysis, surface tension in the alveoli will expel air from the lungs. Since pressure increases as the radii of the alveoli decrease, an occasional deep cleansing breath is needed to fully reinflate the alveoli. Respirators are programmed to do this and we find it natural, as do our companion dogs and cats, to take a cleansing breath before settling into a nap.
The tension in the walls of the alveoli results from the membrane tissue and a liquid on the walls of the alveoli containing a long lipoprotein that acts as a surfactant (a surface-tension reducing substance). The need for the surfactant results from the tendency of small alveoli to collapse and for their air to flow into the larger alveoli, making them even larger (as demonstrated in ). During inhalation, the lipoprotein molecules are pulled apart and the wall tension increases as the radius increases (increased surface tension). During exhalation, the molecules slide back together and the surface tension decreases, helping to prevent a collapse of the alveoli. The surfactant therefore serves to change the wall tension so that small alveoli don’t collapse and large alveoli are prevented from expanding too much. This tension change is a unique property of these surfactants, and is not shared by detergents (which simply lower surface tension). (See .)
If water gets into the lungs, the surface tension is too great and you cannot inhale. This is a severe problem in resuscitating drowning victims. A similar problem occurs in newborn infants who are born without this surfactant—their lungs are very difficult to inflate. This condition is known as hyaline membrane disease and is a leading cause of death for infants, particularly in premature births. Some success has been achieved in treating hyaline membrane disease by spraying a surfactant into the infant’s breathing passages. Emphysema produces the opposite problem with alveoli. Alveolar walls of emphysema victims deteriorate, and the sacs combine to form larger sacs. Because pressure produced by surface tension decreases with increasing radius, these larger sacs produce smaller pressure, reducing the ability of emphysema victims to exhale. A common test for emphysema is to measure the pressure and volume of air that can be exhaled.
### Adhesion and Capillary Action
Why is it that water beads up on a waxed car but does not on bare paint? The answer is that the adhesive forces between water and wax are much smaller than those between water and paint. Competition between the forces of adhesion and cohesion is important in the macroscopic behavior of liquids. An important factor in studying the roles of these two forces is the contact angle between the tangent to the liquid surface and the solid surface. (See .) The contact angle is directly related to the relative strength of the cohesive and adhesive forces. The larger the strength of the cohesive force relative to the adhesive force, the larger the contact angle is, and the more the liquid tends to form a droplet. The smaller the contact angle is, the smaller the relative strength, so that the adhesive force is able to flatten the drop. lists contact angles for several combinations of liquids and solids.
One important phenomenon related to the relative strength of cohesive and adhesive forces is capillary action—the tendency of a fluid to be raised or suppressed in a narrow tube, or capillary tube. This action causes blood to be drawn into a small-diameter tube when the tube touches a drop.
If a capillary tube is placed vertically into a liquid, as shown in , capillary action will raise or suppress the liquid inside the tube depending on the combination of substances. The actual effect depends on the relative strength of the cohesive and adhesive forces and, thus, the contact angle given in the table. If the contact angle is less than 90º, then the fluid will be raised; if it is greater than 90º, it will be suppressed. Mercury, for example, has a very large surface tension and a large contact angle with glass. When placed in a tube, the surface of a column of mercury curves downward, somewhat like a drop. The curved surface of a fluid in a tube is called a meniscus. The tendency of surface tension is always to reduce the surface area. Surface tension thus flattens the curved liquid surface in a capillary tube. This results in a downward force in mercury and an upward force in water, as seen in .
Capillary action can move liquids horizontally over very large distances, but the height to which it can raise or suppress a liquid in a tube is limited by its weight. It can be shown that this height is given by
If we look at the different factors in this expression, we might see how it makes good sense. The height is directly proportional to the surface tension , which is its direct cause. Furthermore, the height is inversely proportional to tube radius—the smaller the radius , the higher the fluid can be raised, since a smaller tube holds less mass. The height is also inversely proportional to fluid density , since a larger density means a greater mass in the same volume. (See .)
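Written out in conventional symbols, with $\gamma$ the surface tension, $\theta$ the contact angle, $\rho$ the fluid density, $g$ the acceleration due to gravity, and $r$ the tube radius, the height referred to above is
$$h = \frac{2\gamma \cos\theta}{\rho g r}.$$
This form shows each of the dependencies just described: $h$ grows with $\gamma$ and shrinks as $r$ or $\rho$ increases.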
How does sap get to the tops of tall trees? (Recall that a column of water can only rise to a height of 10 m when there is a vacuum at the top—see .) The question has not been completely resolved, but it appears that it is pulled up like a chain held together by cohesive forces. As each molecule of sap enters a leaf and evaporates (a process called transpiration), the entire chain is pulled up a notch. So a negative pressure created by water evaporation must be present to pull the sap up through the xylem vessels. In most situations, fluids can push but can exert only negligible pull, because the cohesive forces seem to be too small to hold the molecules tightly together. But in this case, the cohesive force of water molecules provides a very strong pull. shows one device for studying negative pressure. Some experiments have demonstrated that negative pressures sufficient to pull sap to the tops of the tallest trees can be achieved.
### Section Summary
1. Attractive forces between molecules of the same type are called cohesive forces.
2. Attractive forces between molecules of different types are called adhesive forces.
3. Cohesive forces between molecules cause the surface of a liquid to contract to the smallest possible surface area. This general effect is called surface tension.
4. Capillary action is the tendency of a fluid to be raised or suppressed in a narrow tube, or capillary tube; it is due to the relative strength of cohesive and adhesive forces.
### Conceptual Questions
### Problems & Exercises
# Fluid Statics
## Pressures in the Body
### Learning Objectives
By the end of this section, you will be able to:
1. Explain the concept of pressure in the human body.
2. Explain systolic and diastolic blood pressures.
3. Describe pressures in the eye, lungs, spinal column, bladder, and skeletal system.
### Pressure in the Body
Next to taking a person’s temperature and weight, measuring blood pressure is the most common of all medical examinations. Control of high blood pressure is largely responsible for the significant decreases in heart attack and stroke fatalities achieved in the last three decades. The pressures in various parts of the body can be measured and often provide valuable medical indicators. In this section, we consider a few examples together with some of the physics that accompanies them.
lists some of the measured pressures in mm Hg, the units most commonly quoted.
### Blood Pressure
Common arterial blood pressure measurements typically produce values of 120 mm Hg and 80 mm Hg, respectively, for systolic and diastolic pressures. Both pressures have health implications. When systolic pressure is chronically high, the risk of stroke and heart attack is increased. If, however, it is too low, fainting is a problem. Systolic pressure increases dramatically during exercise to increase blood flow and returns to normal afterward. This change produces no ill effects and, in fact, may be beneficial to the tone of the circulatory system. Diastolic pressure can be an indicator of fluid balance. When low, it may indicate that a person is hemorrhaging internally and needs a transfusion. Conversely, high diastolic pressure indicates a ballooning of the blood vessels, which may be due to the transfusion of too much fluid into the circulatory system. High diastolic pressure is also an indication that blood vessels are not dilating properly to pass blood through. This can seriously strain the heart in its attempt to pump blood.
Blood leaves the heart at about 120 mm Hg but its pressure continues to decrease (to almost 0) as it goes from the aorta to smaller arteries to small veins (see ). The pressure differences in the circulation system are caused by blood flow through the system as well as the position of the person. For a person standing up, the pressure in the feet will be larger than at the heart due to the weight of the blood . If we assume that the distance between the heart and the feet of a person in an upright position is 1.4 m, then the increase in pressure in the feet relative to that in the heart (for a static column of blood) is given by
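A minimal numerical sketch, using $\Delta P = \rho g h$ and treating the blood density as an assumed typical value of about $1.05 \times 10^{3}\ \text{kg/m}^3$:
$$\Delta P = \rho g h \approx (1.05 \times 10^{3}\ \text{kg/m}^3)(9.80\ \text{m/s}^2)(1.4\ \text{m}) \approx 1.4 \times 10^{4}\ \text{Pa} \approx 108\ \text{mm Hg},$$
using 1 mm Hg ≈ 133 Pa for the conversion.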
Standing a long time can lead to an accumulation of blood in the legs and swelling. This is the reason why soldiers who are required to stand still for long periods of time have been known to faint. Elastic bandages around the calf can help prevent this accumulation and can also help provide increased pressure to enable the veins to send blood back up to the heart. For similar reasons, doctors recommend tight stockings for long-haul flights.
Blood pressure may also be measured in the major veins, the heart chambers, arteries to the brain, and the lungs. But these pressures are usually only monitored during surgery or for patients in intensive care since the measurements are invasive. To obtain these pressure measurements, qualified health care workers thread thin tubes, called catheters, into appropriate locations to transmit pressures to external measuring devices.
The heart consists of two pumps—the right side forcing blood through the lungs and the left causing blood to flow through the rest of the body (). Right-heart failure, for example, results in a rise in the pressure in the vena cavae and a drop in pressure in the arteries to the lungs. Left-heart failure results in a rise in the pressure entering the left side of the heart and a drop in aortal pressure. Implications of these and other pressures on flow in the circulatory system will be discussed in more detail in Fluid Dynamics and Its Biological and Medical Applications.
### Pressure in the Eye
The shape of the eye is maintained by fluid pressure, called intraocular pressure, which is normally in the range of 12.0 to 24.0 mm Hg. When the circulation of fluid in the eye is blocked, it can lead to a buildup in pressure, a condition called glaucoma. The net pressure can become as great as 85.0 mm Hg, an abnormally large pressure that can permanently damage the optic nerve. To get an idea of the force involved, suppose the back of the eye has an area of , and the net pressure is 85.0 mm Hg. Force is given by . To get the force in newtons, we convert the area to square meters ( ). Then we calculate as follows:
This force is the weight of about a 680-g mass. A mass of 680 g resting on the eye (imagine 1.5 lb resting on your eye) would be sufficient to damage it. (A normal force here would be the weight of about 120 g, less than one-quarter of our initial value.)
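As a concrete sketch of this calculation, assume a back-of-eye area of about $6.0\ \text{cm}^2$ (an illustrative assumed value, since the area is not specified above):
$$F = PA \approx (85.0\ \text{mm Hg})(133\ \text{Pa/mm Hg})(6.0 \times 10^{-4}\ \text{m}^2) \approx 6.8\ \text{N},$$
which is the weight of a mass of about $0.69\ \text{kg}$, consistent with the 680-g figure quoted above.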
People over 40 years of age are at greatest risk of developing glaucoma and should have their intraocular pressure tested routinely. Most measurements involve exerting a force on the (anesthetized) eye over some area (a pressure) and observing the eye’s response. A noncontact approach uses a puff of air and a measurement is made of the force needed to indent the eye (). If the intraocular pressure is high, the eye will deform less and rebound more vigorously than normal. Excessive intraocular pressures can be detected reliably and sometimes controlled effectively.
### Pressure Associated with the Lungs
The pressure inside the lungs increases and decreases with each breath. The pressure drops to below atmospheric pressure (negative gauge pressure) when you inhale, causing air to flow into the lungs. It increases above atmospheric pressure (positive gauge pressure) when you exhale, forcing air out.
Lung pressure is controlled by several mechanisms. Muscle action in the diaphragm and rib cage is necessary for inhalation; this muscle action increases the volume of the lungs, thereby reducing the pressure within them. Surface tension in the alveoli creates a positive pressure opposing inhalation. (See Cohesion and Adhesion in Liquids: Surface Tension and Capillary Action.) You can exhale without muscle action by letting surface tension in the alveoli create its own positive pressure. Muscle action can add to this positive pressure to produce forced exhalation, such as when you blow up a balloon, blow out a candle, or cough.
The lungs, in fact, would collapse due to the surface tension in the alveoli, if they were not attached to the inside of the chest wall by liquid adhesion. The gauge pressure in the liquid attaching the lungs to the inside of the chest wall is thus negative, ranging from to during exhalation and inhalation, respectively. If air is allowed to enter the chest cavity, it breaks the attachment, and one or both lungs may collapse. Suction is applied to the chest cavity of surgery patients and trauma victims to reestablish negative pressure and inflate the lungs.
### Other Pressures in the Body
### Spinal Column and Skull
Normally, there is a 5- to 12-mm Hg pressure in the fluid surrounding the brain and filling the spinal column. This cerebrospinal fluid serves many purposes, one of which is to supply flotation to the brain. The buoyant force supplied by the fluid nearly equals the weight of the brain, since their densities are nearly equal. If there is a loss of fluid, the brain rests on the inside of the skull, causing severe headaches, constricted blood flow, and serious damage. Spinal fluid pressure is measured by means of a needle inserted between vertebrae that transmits the pressure to a suitable measuring device.
### Bladder Pressure
This bodily pressure is one of which we are often aware. In fact, there is a relationship between our awareness of this pressure and a subsequent increase in it. Bladder pressure climbs steadily from zero to about 25 mm Hg as the bladder fills to its normal capacity of . This pressure triggers the micturition reflex, which stimulates the feeling of needing to urinate. What is more, it also causes muscles around the bladder to contract, raising the pressure to over 100 mm Hg, accentuating the sensation. Coughing, straining, tensing in cold weather, wearing tight clothes, and experiencing simple nervous tension all can increase bladder pressure and trigger this reflex. So can the weight of a pregnant woman’s fetus, especially if it is kicking vigorously or pushing down with its head! Bladder pressure can be measured by a catheter or by inserting a needle through the bladder wall and transmitting the pressure to an appropriate measuring device. One hazard of high bladder pressure (sometimes created by an obstruction) is that such pressure can force urine back into the kidneys, causing potentially severe damage.
### Pressures in the Skeletal System
These pressures are the largest in the body, due both to the high values of initial force and to the small areas to which this force is applied, such as in the joints. For example, when a person lifts an object improperly, a force of 5000 N may be created between vertebrae in the spine, and this may be applied to an area as small as . The pressure created is , or about 50 atm! This pressure can damage both the spinal discs (the cartilage between vertebrae) and the bony vertebrae themselves. Even under normal circumstances, forces between vertebrae in the spine are large enough to create pressures of several atmospheres. Most causes of excessive pressure in the skeletal system can be avoided by lifting properly and avoiding extreme physical activity. (See Forces and Torques in Muscles and Joints.)
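As a rough numerical sketch, take the contact area to be $10\ \text{cm}^2$ (an assumed illustrative value, since the area is not specified above):
$$P = \frac{F}{A} = \frac{5000\ \text{N}}{10 \times 10^{-4}\ \text{m}^2} = 5.0 \times 10^{6}\ \text{Pa} \approx 49\ \text{atm},$$
consistent with the figure of about 50 atm quoted above.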
There are many other interesting and medically significant pressures in the body. For example, pressure caused by various muscle actions drives food and waste through the digestive system. Stomach pressure behaves much like bladder pressure and is tied to the sensation of hunger. Pressure in the relaxed esophagus is normally negative because pressure in the chest cavity is normally negative. Positive pressure in the stomach may thus force acid into the esophagus, causing “heartburn.” Pressure in the middle ear can result in significant force on the eardrum if it differs greatly from atmospheric pressure, such as while scuba diving. The decrease in external pressure is also noticeable during plane flights (due to a decrease in the weight of air above relative to that at the Earth’s surface). The Eustachian tubes connect the middle ear to the throat and allow us to equalize pressure in the middle ear to avoid an imbalance of force on the eardrum.
Many pressures in the human body are associated with the flow of fluids. Fluid flow will be discussed in detail in Fluid Dynamics and Its Biological and Medical Applications.
### Section Summary
1. Measuring blood pressure is among the most common of all medical examinations.
2. The pressures in various parts of the body can be measured and often provide valuable medical indicators.
3. The shape of the eye is maintained by fluid pressure, called intraocular pressure.
4. When the circulation of fluid in the eye is blocked, it can lead to a buildup in pressure, a condition called glaucoma.
5. Other pressures in the body include spinal and skull pressures, bladder pressure, and pressures in the skeletal system.
### Problems & Exercises
# Fluid Dynamics and Its Biological and Medical Applications
## Connection for AP® Courses
How do planes fly? How do we model blood flow? How do sprayers work for paints or aerosols? What is the purpose of a water tower? To answer these questions, we will examine fluid dynamics. The equations governing fluid dynamics are derived from the same equations that represent energy conservation. One of the most powerful equations in fluid dynamics is Bernoulli's equation, which governs the relationship between fluid pressure, kinetic energy, and potential energy (Essential Knowledge 5.B.10). We will see how Bernoulli's equation explains the pressure difference that provides lift for airplanes and provides the means for fluids (like water or paint or perfume) to move in useful ways.
The content in this chapter supports:
Big Idea 5 Changes that occur as a result of interactions are constrained by conservation laws.
Enduring Understanding 5.B The energy of a system is conserved.
Essential Knowledge 5.B.10 Bernoulli's equation describes the conservation of energy in a fluid flow.
Enduring Understanding 5.F Classically, the mass of a system is conserved.
Essential Knowledge 5.F.1 The continuity equation describes conservation of mass flow rate in fluids.
# Fluid Dynamics and Its Biological and Medical Applications
## Flow Rate and Its Relation to Velocity
### Learning Objectives
By the end of this section, you will be able to:
1. Calculate flow rate.
2. Define units of volume.
3. Describe incompressible fluids.
4. Explain the consequences of the equation of continuity.
Flow rate is defined to be the volume of fluid passing by some location through an area during a period of time, as seen in . In symbols, this can be written as
where is the volume and is the elapsed time.
The SI unit for flow rate is , but a number of other units for are in common use. For example, the heart of a resting adult pumps blood at a rate of 5.00 liters per minute (L/min). Note that a liter (L) is 1/1000 of a cubic meter or 1000 cubic centimeters ( or ). In this text we shall use whatever metric units are most convenient for a given situation.
Flow rate and velocity are related, but quite different, physical quantities. To make the distinction clear, think about the flow rate of a river. The greater the velocity of the water, the greater the flow rate of the river. But flow rate also depends on the size of the river. A rapid mountain stream carries far less water than the Amazon River in Brazil, for example. The precise relationship between flow rate and velocity is
where is the cross-sectional area and is the average velocity. This equation seems logical enough. The relationship tells us that flow rate is directly proportional to both the magnitude of the average velocity (hereafter referred to as the speed) and the size of a river, pipe, or other conduit. The larger the conduit, the greater its cross-sectional area. illustrates how this relationship is obtained. The shaded cylinder has a volume
which flows past the point in a time . Dividing both sides of this relationship by gives
We note that and the average speed is
. Thus the equation becomes .
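Written out with conventional symbols, the argument above is
$$V = A d, \qquad \frac{V}{t} = \frac{A d}{t}, \qquad \text{and since } \bar{v} = \frac{d}{t}, \qquad Q = A\bar{v},$$
where $d$ is the distance the fluid travels in the time $t$.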
shows an incompressible fluid flowing along a pipe of decreasing radius. Because the fluid is incompressible, the same amount of fluid must flow past any point in the tube in a given time to ensure continuity of flow. In this case, because the cross-sectional area of the pipe decreases, the velocity must necessarily increase. This logic can be extended to say that the flow rate must be the same at all points along the pipe. In particular, for points 1 and 2,
This is called the equation of continuity and is valid for any incompressible fluid. The consequences of the equation of continuity can be observed when water flows from a hose into a narrow spray nozzle: it emerges with a large speed—that is the purpose of the nozzle. Conversely, when a river empties into one end of a reservoir, the water slows considerably, perhaps picking up speed again when it leaves the other end of the reservoir. In other words, speed increases when cross-sectional area decreases, and speed decreases when cross-sectional area increases.
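In symbols, the statement for points 1 and 2 along the pipe is
$$Q_1 = Q_2 \quad \Longrightarrow \quad A_1 \bar{v}_1 = A_2 \bar{v}_2.$$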
Since liquids are essentially incompressible, the equation of continuity is valid for all liquids. However, gases are compressible, and so the equation must be applied with caution to gases if they are subjected to compression or expansion.
The solution to the last part of the example shows that speed is inversely proportional to the square of the radius of the tube, making for large effects when radius varies. We can blow out a candle at quite a distance, for example, by pursing our lips, whereas blowing on a candle with our mouth wide open is quite ineffective.
In many situations, including in the cardiovascular system, branching of the flow occurs. The blood is pumped from the heart into arteries that subdivide into smaller arteries (arterioles) which branch into very fine vessels called capillaries. In this situation, continuity of flow is maintained but it is the sum of the flow rates in each of the branches in any portion along the tube that is maintained. The equation of continuity in a more general form becomes
where and are the number of branches in each of the sections along the tube.
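Using $n_1$ and $n_2$ for the number of branches in the two sections, the generalized form described above reads
$$n_1 A_1 \bar{v}_1 = n_2 A_2 \bar{v}_2.$$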
### Test Prep for AP Courses
### Section Summary
1. Flow rate is defined to be the volume flowing past a point in time , or , where is volume and is time.
2. The SI unit of volume is .
3. Another common unit is the liter (L), which is .
4. Flow rate and velocity are related by , where is the cross-sectional area of the flow and is its average velocity.
5. For incompressible fluids, flow rate at various points is constant. That is,
### Conceptual Questions
### Problems & Exercises
# Fluid Dynamics and Its Biological and Medical Applications
## Bernoulli’s Equation
### Learning Objectives
By the end of this section, you will be able to:
1. Explain the terms in Bernoulli’s equation.
2. Explain how Bernoulli’s equation is related to conservation of energy.
3. Explain how to derive Bernoulli’s principle from Bernoulli’s equation.
4. Calculate with Bernoulli’s principle.
5. List some applications of Bernoulli’s principle.
When a fluid flows into a narrower channel, its speed increases. That means its kinetic energy also increases. Where does that change in kinetic energy come from? The increased kinetic energy comes from the net work done on the fluid to push it into the channel and the work done on the fluid by the gravitational force, if the fluid changes vertical position. Recall the work-energy theorem,
There is a pressure difference when the channel narrows. This pressure difference results in a net force on the fluid: recall that pressure times area equals force. The net work done increases the fluid’s kinetic energy. As a result, the pressure will drop in a rapidly-moving fluid, whether or not the fluid is confined to a tube.
There are a number of common examples of pressure dropping in rapidly-moving fluids. Shower curtains have a disagreeable habit of bulging into the shower stall when the shower is on. The high-velocity stream of water and air creates a region of lower pressure inside the shower, and standard atmospheric pressure on the other side. The pressure difference results in a net force inward pushing the curtain in. You may also have noticed that when passing a truck on the highway, your car tends to veer toward it. The reason is the same—the high velocity of the air between the car and the truck creates a region of lower pressure, and the vehicles are pushed together by greater pressure on the outside. (See .) This effect was observed as far back as the mid-1800s, when it was found that trains passing in opposite directions tipped precariously toward one another.
### Bernoulli’s Equation
The relationship between pressure and velocity in fluids is described quantitatively by Bernoulli’s equation, named after its discoverer, the Swiss scientist Daniel Bernoulli (1700–1782). Bernoulli’s equation states that for an incompressible, frictionless fluid, the following sum is constant:
where is the absolute pressure, is the fluid density, is the velocity of the fluid, is the height above some reference point, and is the acceleration due to gravity. If we follow a small volume of fluid along its path, various quantities in the sum may change, but the total remains constant. Let the subscripts 1 and 2 refer to any two points along the path that the bit of fluid follows; Bernoulli’s equation becomes
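In conventional symbols, the constant sum and its two-point form are
$$P + \tfrac{1}{2}\rho v^{2} + \rho g h = \text{constant},$$
$$P_1 + \tfrac{1}{2}\rho v_1^{2} + \rho g h_1 = P_2 + \tfrac{1}{2}\rho v_2^{2} + \rho g h_2.$$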
Bernoulli’s equation is a form of the conservation of energy principle. Note that the second and third terms are the kinetic and potential energy with replaced by . In fact, each term in the equation has units of energy per unit volume. We can prove this for the second term by substituting into it and gathering terms:
So is the kinetic energy per unit volume. Making the same substitution into the third term in the equation, we find
so is the gravitational potential energy per unit volume. Note that pressure has units of energy per unit volume, too. Since , its units are . If we multiply these by m/m, we obtain , or energy per unit volume. Bernoulli’s equation is, in fact, just a convenient statement of conservation of energy for an incompressible fluid in the absence of friction.
The general form of Bernoulli’s equation has three terms in it, and it is broadly applicable. To understand it better, we will look at a number of specific situations that simplify and illustrate its use and meaning.
### Bernoulli’s Equation for Static Fluids
Let us first consider the very simple situation where the fluid is static—that is, . Bernoulli’s equation in that case is
We can further simplify the equation by taking (we can always choose some height to be zero, just as we often have done for other situations involving the gravitational force, and take all other heights to be relative to this). In that case, we get
This equation tells us that, in static fluids, pressure increases with depth. As we go from point 1 to point 2 in the fluid, the depth increases by , and consequently, is greater than by an amount . In the very simplest case, is zero at the top of the fluid, and we get the familiar relationship . (Recall that and .) Bernoulli’s equation includes the fact that the pressure due to the weight of a fluid is . Although we introduce Bernoulli’s equation for fluid flow, it includes much of what we studied for static fluids in the preceding chapter.
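With the usual symbols, taking point 1 at the top of the fluid and point 2 a depth $h$ below it, the static-fluid form described above is
$$P_2 = P_1 + \rho g h,$$
and if the pressure at the top is zero (gauge pressure), this reduces to the familiar $P_2 = \rho g h$.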
### Bernoulli’s Principle—Bernoulli’s Equation at Constant Depth
Another important situation is one in which the fluid moves but its depth is constant—that is, . Under that condition, Bernoulli’s equation becomes
Situations in which fluid flows at a constant depth are so important that this equation is often called Bernoulli’s principle. It is Bernoulli’s equation for fluids at constant depth. (Note again that this applies to a small volume of fluid as we follow it along its path.) As we have just discussed, pressure drops as speed increases in a moving fluid. We can see this from Bernoulli’s principle. For example, if is greater than in the equation, then must be less than for the equality to hold.
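Written out, the constant-depth form (Bernoulli’s principle) is
$$P_1 + \tfrac{1}{2}\rho v_1^{2} = P_2 + \tfrac{1}{2}\rho v_2^{2}.$$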
### Applications of Bernoulli’s Principle
There are a number of devices and situations in which fluid flows at a constant height and, thus, can be analyzed with Bernoulli’s principle.
### Entrainment
People have long put the Bernoulli principle to work by using reduced pressure in high-velocity fluids to move things about. With a higher pressure on the outside, the high-velocity fluid forces other fluids into the stream. This process is called entrainment. Entrainment devices have been in use since ancient times, particularly as pumps to raise water small heights, as in draining swamps, fields, or other low-lying areas. Some other devices that use the concept of entrainment are shown in .
### Wings and Sails
The airplane wing is a beautiful example of Bernoulli’s principle in action. (a) shows the characteristic shape of a wing. The wing is tilted upward at a small angle and the upper surface is longer, causing air to flow faster over it. The pressure on top of the wing is therefore reduced, creating a net upward force or lift. (Wings can also gain lift by pushing air downward, utilizing the conservation of momentum principle. The deflected air molecules result in an upward force on the wing — Newton’s third law.) Sails also have the characteristic shape of a wing. (See (b).) The pressure on the front side of the sail, , is lower than the pressure on the back of the sail, . This results in a forward force and even allows you to sail into the wind.
### Velocity measurement
shows two devices that measure fluid velocity based on Bernoulli’s principle. The manometer in (a) is connected to two tubes that are small enough not to appreciably disturb the flow. The tube facing the oncoming fluid creates a dead spot having zero velocity () in front of it, while fluid passing the other tube has velocity . This means that Bernoulli’s principle as stated in
becomes
Thus pressure over the second opening is reduced by , and so the fluid in the manometer rises by
on the side connected to the second opening, where
(Recall that the symbol means “proportional to.”) Solving for , we see that
(b) shows a version of this device that is in common use for measuring various fluid velocities; such devices are frequently used as air speed indicators in aircraft.
### Test Prep for AP Courses
### Summary
1. Bernoulli’s equation states that the sum on each side of the following equation is constant, or the same at any two points in an incompressible frictionless fluid:
2. Bernoulli’s principle is Bernoulli’s equation applied to situations in which depth is constant. The terms involving depth (or height h) subtract out, yielding
3. Bernoulli’s principle has many applications, including entrainment, wings and sails, and velocity measurement.
### Conceptual Questions
### Problems & Exercises
# Fluid Dynamics and Its Biological and Medical Applications
## The Most General Applications of Bernoulli’s Equation
### Learning Objectives
By the end of this section, you will be able to:
1. Calculate using Torricelli’s theorem.
2. Calculate power in fluid flow.
### Torricelli’s Theorem
shows water gushing from a large tube through a dam. What is its speed as it emerges? Interestingly, if resistance is negligible, the speed is just what it would be if the water fell a distance from the surface of the reservoir; the water’s speed is independent of the size of the opening. Let us check this out. Bernoulli’s equation must be used since the depth is not constant. We consider water flowing from the surface (point 1) to the tube’s outlet (point 2). Bernoulli’s equation as stated in previously is
Both and equal atmospheric pressure ( is atmospheric pressure because it is the pressure at the top of the reservoir. must be atmospheric pressure, since the emerging water is surrounded by the atmosphere and cannot have a pressure different from atmospheric pressure.) and subtract out of the equation, leaving
Solving this equation for , noting that the density cancels (because the fluid is incompressible), yields
We let ; the equation then becomes
where is the height dropped by the water. This is simply a kinematic equation for any object falling a distance with negligible resistance. In fluids, this last equation is called Torricelli’s theorem. Note that the result is independent of the velocity’s direction, just as we found when applying conservation of energy to falling objects.
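In symbols, with $h = h_1 - h_2$ the height dropped, the result described above is
$$v_2^{2} = v_1^{2} + 2gh,$$
and in the common case where the reservoir surface is large enough that $v_1 \approx 0$, this reduces to $v_2 = \sqrt{2gh}$.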
All preceding applications of Bernoulli’s equation involved simplifying conditions, such as constant height or constant pressure. The next example is a more general application of Bernoulli’s equation in which pressure, velocity, and height all change. (See .)
### Power in Fluid Flow
Power is the rate at which work is done or energy in any form is used or supplied. To see the relationship of power to fluid flow, consider Bernoulli’s equation:
All three terms have units of energy per unit volume, as discussed in the previous section. Now, considering units, if we multiply energy per unit volume by flow rate (volume per unit time), we get units of power. That is, . This means that if we multiply Bernoulli’s equation by flow rate , we get power. In equation form, this is
Each term has a clear physical meaning. For example, is the power supplied to a fluid, perhaps by a pump, to give it its pressure . Similarly, is the power supplied to a fluid to give it its kinetic energy. And is the power going to gravitational potential energy.
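In symbols, multiplying Bernoulli’s equation by the flow rate $Q$ gives
$$\left(P + \tfrac{1}{2}\rho v^{2} + \rho g h\right) Q = \text{power}.$$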
### Test Prep for AP Courses
### Summary
1. Power in fluid flow is given by the equation where the first term is power associated with pressure, the second is power associated with velocity, and the third is power associated with height.
### Conceptual Questions
### Problems & Exercises
# Fluid Dynamics and Its Biological and Medical Applications
## Viscosity and Laminar Flow; Poiseuille’s Law
### Learning Objectives
By the end of this section, you will be able to:
1. Define laminar flow and turbulent flow.
2. Explain what viscosity is.
3. Calculate flow and resistance with Poiseuille’s law.
4. Explain how pressure drops due to resistance.
### Laminar Flow and Viscosity
When you pour yourself a glass of juice, the liquid flows freely and quickly. But when you pour syrup on your pancakes, that liquid flows slowly and sticks to the pitcher. The difference is fluid friction, both within the fluid itself and between the fluid and its surroundings. We call this property of fluids viscosity. Juice has low viscosity, whereas syrup has high viscosity. In the previous sections we have considered ideal fluids with little or no viscosity. In this section, we will investigate what factors, including viscosity, affect the rate of fluid flow.
The precise definition of viscosity is based on laminar, or nonturbulent, flow. Before we can define viscosity, then, we need to define laminar flow and turbulent flow. shows both types of flow. Laminar flow is characterized by the smooth flow of the fluid in layers that do not mix. Turbulent flow, or turbulence, is characterized by eddies and swirls that mix layers of fluid together.
shows schematically how laminar and turbulent flow differ. Layers flow without mixing when flow is laminar. When there is turbulence, the layers mix, and there are significant velocities in directions other than the overall direction of flow. The lines that are shown in many illustrations are the paths followed by small volumes of fluids. These are called streamlines. Streamlines are smooth and continuous when flow is laminar, but break up and mix when flow is turbulent. Turbulence has two main causes. First, any obstruction or sharp corner, such as in a faucet, creates turbulence by imparting velocities perpendicular to the flow. Second, high speeds cause turbulence. The drag both between adjacent layers of fluid and between the fluid and its surroundings forms swirls and eddies, if the speed is great enough. We shall concentrate on laminar flow for the remainder of this section, leaving certain aspects of turbulence for later sections.
shows how viscosity is measured for a fluid. Two parallel plates have the specific fluid between them. The bottom plate is held fixed, while the top plate is moved to the right, dragging fluid with it. The layer (or lamina) of fluid in contact with either plate does not move relative to the plate, and so the top layer moves at while the bottom layer remains at rest. Each successive layer from the top down exerts a force on the one below it, trying to drag it along, producing a continuous variation in speed from to 0 as shown. Care is taken to ensure that the flow is laminar; that is, the layers do not mix. The motion in is like a continuous shearing motion. Fluids have zero shear strength, but the rate at which they are sheared is related to the same geometrical factors and as is shear deformation for solids.
A force is required to keep the top plate in moving at a constant velocity , and experiments have shown that this force depends on four factors. First, is directly proportional to (until the speed is so high that turbulence occurs—then a much larger force is needed, and it has a more complicated dependence on ). Second, is proportional to the area of the plate. This relationship seems reasonable, since is directly proportional to the amount of fluid being moved. Third, is inversely proportional to the distance between the plates . This relationship is also reasonable; is like a lever arm, and the greater the lever arm, the less force that is needed. Fourth, is directly proportional to the coefficient of viscosity, . The greater the viscosity, the greater the force required. These dependencies are combined into the equation
which gives us a working definition of fluid viscosity . Solving for gives
which defines viscosity in terms of how it is measured. The SI unit of viscosity is . lists the coefficients of viscosity for various fluids.
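Using $\eta$ for the coefficient of viscosity, $v$ for the plate speed, $A$ for the plate area, and $L$ for the plate separation, the two relations described above are
$$F = \eta \frac{v A}{L}, \qquad \eta = \frac{F L}{v A},$$
and the SI unit of $\eta$ is the pascal-second ($\text{Pa}\cdot\text{s} = \text{N}\cdot\text{s/m}^2$).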
Viscosity varies from one fluid to another by several orders of magnitude. As you might expect, the viscosities of gases are much less than those of liquids, and these viscosities are often temperature dependent. The viscosity of blood can be reduced by aspirin consumption, allowing it to flow more easily around the body. (When used over the long term in low doses, aspirin can help prevent heart attacks, and reduce the risk of blood clotting.)
### Laminar Flow Confined to Tubes—Poiseuille’s Law
What causes flow? The answer, not surprisingly, is pressure difference. In fact, there is a very simple relationship between horizontal flow and pressure. Flow rate is in the direction from high to low pressure. The greater the pressure differential between two points, the greater the flow rate. This relationship can be stated as
where and are the pressures at two points, such as at either end of a tube, and is the resistance to flow. The resistance includes everything, except pressure, that affects flow rate. For example, is greater for a long tube than for a short one. The greater the viscosity of a fluid, the greater the value of . Turbulence greatly increases , whereas increasing the diameter of a tube decreases .
If viscosity is zero, the fluid is frictionless and the resistance to flow is also zero. Comparing frictionless flow in a tube to viscous flow, as in , we see that for a viscous fluid, speed is greatest at midstream because of drag at the boundaries. We can see the effect of viscosity in a Bunsen burner flame, even though the viscosity of natural gas is small.
The resistance to laminar flow of an incompressible fluid having viscosity through a horizontal tube of uniform radius and length , such as the one in , is given by
This equation is called Poiseuille’s law for resistance after the French scientist J. L. Poiseuille (1799–1869), who derived it in an attempt to understand the flow of blood, an often turbulent fluid.
Let us examine Poiseuille’s expression for to see if it makes good intuitive sense. We see that resistance is directly proportional to both fluid viscosity and the length of a tube. After all, both of these directly affect the amount of friction encountered—the greater either is, the greater the resistance and the smaller the flow. The radius of a tube affects the resistance, which again makes sense, because the greater the radius, the greater the flow (all other factors remaining the same). But it is surprising that is raised to the fourth power in Poiseuille’s law. This exponent means that any change in the radius of a tube has a very large effect on resistance. For example, doubling the radius of a tube decreases resistance by a factor of .
Taken together, and give the following expression for flow rate:
This equation describes laminar flow through a tube. It is sometimes called Poiseuille’s law for laminar flow, or simply Poiseuille’s law.
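In symbols, with $\eta$ the viscosity, $l$ the tube length, $r$ its radius, and $P_2 - P_1$ the pressure difference driving the flow, Poiseuille’s law for resistance and the resulting flow rate are
$$R = \frac{8 \eta l}{\pi r^{4}}, \qquad Q = \frac{(P_2 - P_1)\,\pi r^{4}}{8 \eta l}.$$
The fourth-power dependence on radius is what makes radius changes so consequential; doubling the radius, for instance, reduces the resistance by a factor of $2^{4} = 16$.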
The circulatory system provides many examples of Poiseuille’s law in action—with blood flow regulated by changes in vessel size and blood pressure. Blood vessels are not rigid but elastic. Adjustments to blood flow are primarily made by varying the size of the vessels, since the resistance is so sensitive to the radius. During vigorous exercise, blood vessels are selectively dilated to important muscles and organs and blood pressure increases. This creates both greater overall blood flow and increased flow to specific areas. Conversely, decreases in vessel radii, perhaps from plaques in the arteries, can greatly reduce blood flow. If a vessel’s radius is reduced by only 5% (to 0.95 of its original value), the flow rate is reduced to about of its original value. A 19% decrease in flow is caused by a 5% decrease in radius. The body may compensate by increasing blood pressure by 19%, but this presents hazards to the heart and any vessel that has weakened walls. Another example comes from automobile engine oil. If you have a car with an oil pressure gauge, you may notice that oil pressure is high when the engine is cold. Motor oil has greater viscosity when cold than when warm, and so pressure must be greater to pump the same amount of cold oil.
### Flow and Resistance as Causes of Pressure Drops
You may have noticed that water pressure in your home might be lower than normal on hot summer days when there is more use. This pressure drop occurs in the water main before it reaches your home. Let us consider flow through the water main as illustrated in . We can understand why the pressure to the home drops during times of heavy use by rearranging
to
where, in this case, is the pressure at the water works and is the resistance of the water main. During times of heavy use, the flow rate is large. This means that must also be large. Thus must decrease. It is correct to think of flow and resistance as causing the pressure to drop from to . is valid for both laminar and turbulent flows.
We can use to analyze pressure drops occurring in more complex systems in which the tube radius is not the same everywhere. Resistance will be much greater in narrow places, such as an obstructed coronary artery. For a given flow rate , the pressure drop will be greatest where the tube is most narrow. This is how water faucets control flow. Additionally, is greatly increased by turbulence, and a constriction that creates turbulence greatly reduces the pressure downstream. Plaque in an artery reduces pressure and hence flow, both by its resistance and by the turbulence it creates.
is a schematic of the human circulatory system, showing average blood pressures in its major parts for an adult at rest. Pressure created by the heart’s two pumps, the right and left ventricles, is reduced by the resistance of the blood vessels as the blood flows through them. The left ventricle increases arterial blood pressure that drives the flow of blood through all parts of the body except the lungs. The right ventricle receives the lower pressure blood from two major veins and pumps it through the lungs for gas exchange with atmospheric gases – the disposal of carbon dioxide from the blood and the replenishment of oxygen. Only one major organ is shown schematically, with typical branching of arteries to ever smaller vessels, the smallest of which are the capillaries, and rejoining of small veins into larger ones. Similar branching takes place in a variety of organs in the body, and the circulatory system has considerable flexibility in flow regulation to these organs by the dilation and constriction of the arteries leading to them and the capillaries within them. The sensitivity of flow to tube radius makes this flexibility possible over a large range of flow rates.
Each branching of larger vessels into smaller vessels increases the total cross-sectional area of the tubes through which the blood flows. For example, an artery with a cross section of may branch into 20 smaller arteries, each with cross sections of , with a total of . In that manner, the resistance of the branchings is reduced so that pressure is not entirely lost. Moreover, because and increases through branching, the average velocity of the blood in the smaller vessels is reduced. The blood velocity in the aorta () is about 25 cm/s, while in the capillaries ( in diameter) the velocity is about 1 mm/s. This reduced velocity allows the blood to exchange substances with the cells in the capillaries and alveoli in particular.
### Section Summary
1. Laminar flow is characterized by smooth flow of the fluid in layers that do not mix.
2. Turbulence is characterized by eddies and swirls that mix layers of fluid together.
3. Fluid viscosity is due to friction within a fluid. Representative values are given in . Viscosity has units of or .
4. Flow is proportional to pressure difference and inversely proportional to resistance:
5. For laminar flow in a tube, Poiseuille’s law for resistance states that
6. Poiseuille’s law for flow in a tube is
7. The pressure drop caused by flow and resistance is given by
### Conceptual Questions
### Problems & Exercises
# Fluid Dynamics and Its Biological and Medical Applications
## The Onset of Turbulence
### Learning Objectives
By the end of this section, you will be able to:
1. Calculate Reynolds number.
2. Use the Reynolds number for a system to determine whether it is laminar or turbulent.
Sometimes we can predict if flow will be laminar or turbulent. We know that flow in a very smooth tube or around a smooth, streamlined object will be laminar at low velocity. We also know that at high velocity, even flow in a smooth tube or around a smooth object will experience turbulence. In between, it is more difficult to predict. In fact, at intermediate velocities, flow may oscillate back and forth indefinitely between laminar and turbulent.
An occlusion, or narrowing, of an artery, such as shown in , is likely to cause turbulence because of the irregularity of the blockage, as well as the complexity of blood as a fluid. Turbulence in the circulatory system is noisy and can sometimes be detected with a stethoscope, such as when measuring diastolic pressure in the upper arm’s partially collapsed brachial artery. These turbulent sounds, at the onset of blood flow when the cuff pressure becomes sufficiently small, are called Korotkoff sounds. Aneurysms, or ballooning of arteries, create significant turbulence and can sometimes be detected with a stethoscope. Heart murmurs, consistent with their name, are sounds produced by turbulent flow around damaged and insufficiently closed heart valves. Ultrasound can also be used to detect turbulence as a medical indicator in a process analogous to Doppler-shift radar used to detect storms.
An indicator called the Reynolds number $N_\text{R}$ can reveal whether flow is laminar or turbulent. For flow in a tube of uniform diameter, the Reynolds number is defined as

$$N_\text{R} = \frac{2\rho v r}{\eta}\ \text{(flow in tube)},$$

where $\rho$ is the fluid density, $v$ its speed, $\eta$ its viscosity, and $r$ the tube radius. The Reynolds number is a unitless quantity. Experiments have revealed that $N_\text{R}$ is related to the onset of turbulence. For $N_\text{R}$ below about 2000, flow is laminar. For $N_\text{R}$ above about 3000, flow is turbulent. For values of $N_\text{R}$ between about 2000 and 3000, flow is unstable—that is, it can be laminar, but small obstructions and surface roughness can make it turbulent, and it may oscillate randomly between being laminar and turbulent. The blood flow through most of the body is a quiet, laminar flow. The exception is in the aorta, where the speed of the blood flow rises above a critical value of about 35 cm/s and becomes turbulent.
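As a rough sketch (with assumed blood density and viscosity, not values stated in this section), the Reynolds-number criterion can be applied directly:

```python
def reynolds_tube(rho, v, r, eta):
    """Reynolds number N_R = 2*rho*v*r/eta for flow of speed v in a tube of radius r."""
    return 2.0 * rho * v * r / eta

def classify(n_r):
    if n_r < 2000:
        return "laminar"
    if n_r > 3000:
        return "turbulent"
    return "unstable (may be either)"

rho_blood = 1050.0   # kg/m^3, a typical value (assumed)
eta_blood = 4.0e-3   # Pa*s, typical whole-blood viscosity (assumed)
r_aorta = 0.012      # m, assumed aorta radius

for v in (0.25, 0.40):   # m/s: resting average speed vs. a faster peak speed
    n_r = reynolds_tube(rho_blood, v, r_aorta, eta_blood)
    print(f"v = {v} m/s -> N_R = {n_r:.0f} ({classify(n_r)})")
```

With these assumed values, the resting speed gives laminar flow while a modestly higher speed pushes $N_\text{R}$ into the unstable range, illustrating why turbulence can appear in the aorta.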
The topic of chaos has become quite popular over the last few decades. A system is defined to be chaotic when its behavior is so sensitive to some factor that it is extremely difficult to predict. The field of chaos is the study of chaotic behavior. A good example of chaotic behavior is the flow of a fluid with a Reynolds number between 2000 and 3000. Whether or not the flow is turbulent is difficult, but not impossible, to predict—the difficulty lies in the extremely sensitive dependence of the nature of the flow on factors like roughness and obstructions. A tiny variation in one factor has an exaggerated (or nonlinear) effect on the flow. Phenomena as disparate as turbulence, the orbit of Pluto, and the onset of irregular heartbeats are chaotic and can be analyzed with similar techniques.
### Section Summary
1. The Reynolds number $N_\text{R}$ can reveal whether flow is laminar or turbulent. It is $N_\text{R} = \dfrac{2\rho v r}{\eta}$.
2. For $N_\text{R}$ below about 2000, flow is laminar. For $N_\text{R}$ above about 3000, flow is turbulent. For values of $N_\text{R}$ between 2000 and 3000, it may be either or both.
### Conceptual Questions
### Problems & Exercises
# Fluid Dynamics and Its Biological and Medical Applications
## Motion of an Object in a Viscous Fluid
### Learning Objectives
By the end of this section, you will be able to:
1. Calculate the Reynolds number for an object moving through a fluid.
2. Explain whether the Reynolds number indicates laminar or turbulent flow.
3. Describe the conditions under which an object has a terminal speed.
A moving object in a viscous fluid is equivalent to a stationary object in a flowing fluid stream. (For example, when you ride a bicycle at 10 m/s in still air, you feel the air in your face exactly as if you were stationary in a 10-m/s wind.) Flow of the stationary fluid around a moving object may be laminar, turbulent, or a combination of the two. Just as with flow in tubes, it is possible to predict when a moving object creates turbulence. We use another form of the Reynolds number $N'_\text{R}$, defined for an object moving in a fluid to be

$$N'_\text{R} = \frac{\rho v L}{\eta}\ \text{(object in fluid)},$$

where $L$ is a characteristic length of the object (a sphere’s diameter, for example), $\rho$ the fluid density, $\eta$ its viscosity, and $v$ the object’s speed in the fluid. If $N'_\text{R}$ is less than about 1, flow around the object can be laminar, particularly if the object has a smooth shape. The transition to turbulent flow occurs for $N'_\text{R}$ between 1 and about 10, depending on surface roughness and so on. Depending on the surface, there can be a turbulent wake behind the object with some laminar flow over its surface. For an $N'_\text{R}$ between 10 and $10^6$, the flow may be either laminar or turbulent and may oscillate between the two. For $N'_\text{R}$ greater than about $10^6$, the flow is entirely turbulent, even at the surface of the object. (See .) Laminar flow occurs mostly when the objects in the fluid are small, such as raindrops, pollen, and blood cells in plasma.
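A small sketch of the object-in-fluid form $N'_\text{R} = \rho v L/\eta$, using assumed, illustrative sizes and speeds (these are our own examples, not cases worked in the text):

```python
def reynolds_object(rho, v, L, eta):
    """Reynolds number for an object of characteristic length L moving at speed v."""
    return rho * v * L / eta

rho_air, eta_air = 1.29, 1.81e-5   # kg/m^3 and Pa*s for air near room temperature

examples = {
    "pollen grain (L ~ 5e-5 m, v ~ 0.05 m/s)": (0.05, 5.0e-5),
    "cyclist (L ~ 1.5 m, v ~ 10 m/s)": (10.0, 1.5),
}
for name, (v, L) in examples.items():
    print(f"{name}: N'_R ~ {reynolds_object(rho_air, v, L, eta_air):.2g}")
```

The pollen grain comes out well below 1 (laminar), while the cyclist lands near $10^6$ (fully turbulent), matching the regimes described above.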
One of the consequences of viscosity is a resistance force called viscous drag $F_\text{V}$ that is exerted on a moving object. This force typically depends on the object’s speed (in contrast with simple friction). Experiments have shown that for laminar flow ($N'_\text{R}$ less than about one) viscous drag is proportional to speed, whereas for $N'_\text{R}$ between about 10 and $10^6$, viscous drag is proportional to speed squared. (This relationship is a strong dependence and is pertinent to bicycle racing, where even a small headwind causes significantly increased drag on the racer. Cyclists take turns being the leader in the pack for this reason.) For $N'_\text{R}$ greater than $10^6$, drag increases dramatically and behaves with greater complexity. For laminar flow around a sphere, $F_\text{V}$ is proportional to fluid viscosity $\eta$, the object’s characteristic size $L$, and its speed $v$. All of which makes sense—the more viscous the fluid and the larger the object, the more drag we expect. Recall Stokes’ law. For the special case of a small sphere of radius $R$ moving slowly in a fluid of viscosity $\eta$, the drag force $F_\text{S}$ is given by

$$F_\text{S} = 6\pi R \eta v.$$
An interesting consequence of the increase in $F_\text{V}$ with speed is that an object falling through a fluid will not continue to accelerate indefinitely (as it would if we neglect air resistance, for example). Instead, viscous drag increases, slowing acceleration, until a critical speed, called the terminal speed, is reached and the acceleration of the object becomes zero. Once this happens, the object continues to fall at constant speed (the terminal speed). This is the case for particles of sand falling in the ocean, cells falling in a centrifuge, and skydivers falling through the air. shows some of the factors that affect terminal speed. There is a viscous drag on the object that depends on the viscosity of the fluid and the size of the object. But there is also a buoyant force that depends on the density of the object relative to the fluid. Terminal speed will be greatest for low-viscosity fluids and objects with high densities and small sizes. Thus a skydiver falls more slowly with outspread limbs than when in a pike position—head first with hands at the sides and legs together.
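The terminal-speed idea can be sketched numerically. Balancing weight against buoyancy plus Stokes drag $F_\text{S} = 6\pi R\eta v$ gives $v_\text{t} = 2R^2 g(\rho_\text{s} - \rho_\text{f})/(9\eta)$ for a small sphere; this is an inference from the forces named above, not a worked example from the text, and the sand-grain values are assumed.

```python
def stokes_terminal_speed(radius, rho_sphere, rho_fluid, eta, g=9.81):
    """Terminal speed of a small sphere when weight = buoyancy + Stokes drag."""
    return 2.0 * radius**2 * g * (rho_sphere - rho_fluid) / (9.0 * eta)

R_grain = 5.0e-5                                          # m, a fine sand grain (assumed)
rho_sand, rho_water, eta_water = 2600.0, 1000.0, 1.0e-3   # SI units, typical values

v_t = stokes_terminal_speed(R_grain, rho_sand, rho_water, eta_water)
print(f"terminal speed ~ {v_t*1e3:.1f} mm/s")

# Self-consistency check: the laminar (Stokes) assumption needs N'_R < ~1
n_r = rho_water * v_t * (2 * R_grain) / eta_water
print(f"object Reynolds number ~ {n_r:.2f}")
```

The result is a settling speed of a few millimeters per second with $N'_\text{R}$ just below 1, so the laminar assumption is marginal but consistent for a grain of this size.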
Knowledge of terminal speed is useful for estimating sedimentation rates of small particles. We know from watching mud settle out of dirty water that sedimentation is usually a slow process. Centrifuges are used to speed sedimentation by creating accelerated frames in which gravitational acceleration is replaced by centripetal acceleration, which can be much greater, increasing the terminal speed.
### Section Summary
1. When an object moves in a fluid, there is a different form of the Reynolds number, $N'_\text{R} = \dfrac{\rho v L}{\eta}$, which indicates whether flow is laminar or turbulent.
2. For $N'_\text{R}$ less than about one, flow is laminar.
3. For $N'_\text{R}$ greater than $10^6$, flow is entirely turbulent.
### Conceptual Questions
# Fluid Dynamics and Its Biological and Medical Applications
## Molecular Transport Phenomena: Diffusion, Osmosis, and Related Processes
### Learning Objectives
By the end of this section, you will be able to:
1. Define diffusion, osmosis, dialysis, and active transport.
2. Calculate diffusion rates.
### Diffusion
There is something fishy about the ice cube from your freezer—how did it pick up those food odors? How does soaking a sprained ankle in Epsom salt reduce swelling? The answers to these questions are related to atomic and molecular transport phenomena—another mode of fluid motion. Atoms and molecules are in constant motion at any temperature. In fluids they move about randomly even in the absence of macroscopic flow. This motion is called a random walk and is illustrated in . Diffusion is the movement of substances due to random thermal molecular motion. Fluids, like fish fumes or odors entering ice cubes, can even diffuse through solids.
Diffusion is a slow process over macroscopic distances. The densities of common materials are great enough that molecules cannot travel very far before having a collision that can scatter them in any direction, including straight backward. It can be shown that the average distance $x_\text{rms}$ that a molecule travels is proportional to the square root of time:

$$x_\text{rms} = \sqrt{2Dt},$$

where $x_\text{rms}$ stands for the root-mean-square distance and is the statistical average for the process. The quantity $D$ is the diffusion constant for the particular molecule in a specific medium. lists representative values of $D$ for various substances, in units of $\text{m}^2/\text{s}$.
Note that $D$ gets progressively smaller for more massive molecules. This decrease is because the average molecular speed at a given temperature is inversely proportional to the square root of molecular mass. Thus the more massive molecules diffuse more slowly. Another interesting point is that $D$ for oxygen in air is much greater than $D$ for oxygen in water. In water, an oxygen molecule makes many more collisions in its random walk and is slowed considerably; it travels only a tiny fraction of a millimeter in 1 s, colliding an enormous number of times per second. Finally, note that diffusion constants increase with temperature, because average molecular speed increases with temperature. This is because the average kinetic energy of molecules, $\frac{3}{2}kT$, is proportional to absolute temperature.
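A minimal sketch of the $x_\text{rms} = \sqrt{2Dt}$ estimate; the diffusion constant below is an assumed, representative order of magnitude for a small molecule in water rather than a value from the table.

```python
import math

def rms_distance(D, t):
    """Root-mean-square diffusion distance x_rms = sqrt(2*D*t)."""
    return math.sqrt(2.0 * D * t)

D_small_molecule_in_water = 1.0e-9   # m^2/s, assumed representative value

for t in (1.0, 60.0, 3600.0):        # seconds
    x = rms_distance(D_small_molecule_in_water, t)
    print(f"t = {t:6.0f} s -> x_rms ~ {x*1e6:8.1f} micrometers")
```

The square-root dependence shows why diffusion is effective over cell-sized distances but hopelessly slow over macroscopic ones: an hour of diffusion moves a molecule only a few millimeters in this sketch.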
Because diffusion is typically very slow, its most important effects occur over small distances. For example, the cornea of the eye gets most of its oxygen by diffusion through the thin tear layer covering it.
### The Rate and Direction of Diffusion
If you very carefully place a drop of food coloring in a still glass of water, it will slowly diffuse into the colorless surroundings until its concentration is the same everywhere. This type of diffusion is called free diffusion, because there are no barriers inhibiting it. Let us examine its direction and rate. Molecular motion is random in direction, and so simple chance dictates that more molecules will move out of a region of high concentration than into it. The net rate of diffusion is higher initially than after the process is partially completed. (See .)
The net rate of diffusion is proportional to the concentration difference. Many more molecules will leave a region of high concentration than will enter it from a region of low concentration. In fact, if the concentrations were the same, there would be no net movement. The net rate of diffusion is also proportional to the diffusion constant $D$, which is determined experimentally. The farther a molecule can diffuse in a given time, the more likely it is to leave the region of high concentration. Many of the factors that affect the rate are hidden in the diffusion constant $D$. For example, temperature and cohesive and adhesive forces all affect values of $D$.
Diffusion is the dominant mechanism by which the exchange of nutrients and waste products occur between the blood and tissue, and between air and blood in the lungs. In the evolutionary process, as organisms became larger, they needed quicker methods of transportation than net diffusion, because of the larger distances involved in the transport, leading to the development of circulatory systems. Less sophisticated, single-celled organisms still rely totally on diffusion for the removal of waste products and the uptake of nutrients.
### Osmosis and Dialysis—Diffusion across Membranes
Some of the most interesting examples of diffusion occur through barriers that affect the rates of diffusion. For example, when you soak a swollen ankle in Epsom salt, water diffuses through your skin. Many substances regularly move through cell membranes; oxygen moves in, carbon dioxide moves out, nutrients go in, and wastes go out, for example. Because membranes are thin structures (typically on the order of $10^{-8}$ m across), diffusion rates through them can be high. Diffusion through membranes is an important method of transport.
Membranes are generally selectively permeable, or semipermeable. (See .) One type of semipermeable membrane has small pores that allow only small molecules to pass through. In other types of membranes, the molecules may actually dissolve in the membrane or react with molecules in the membrane while moving across. Membrane function, in fact, is the subject of much current research, involving not only physiology but also chemistry and physics.
Osmosis is the transport of water through a semipermeable membrane from a region of high concentration to a region of low concentration. Osmosis is driven by the imbalance in water concentration. For example, water is more concentrated in your body than in Epsom salt. When you soak a swollen ankle in Epsom salt, the water moves out of your body into the lower-concentration region in the salt. Similarly, dialysis is the transport of any other molecule through a semipermeable membrane due to its concentration difference. Both osmosis and dialysis are used by the kidneys to cleanse the blood.
Osmosis can create a substantial pressure. Consider what happens if osmosis continues for some time, as illustrated in . Water moves by osmosis from the left into the region on the right, where it is less concentrated, causing the solution on the right to rise. This movement will continue until the pressure created by the extra height of fluid on the right is large enough to stop further osmosis. This pressure is called a back pressure. The back pressure that stops osmosis is also called the relative osmotic pressure if neither solution is pure water, and it is called the osmotic pressure if one solution is pure water. Osmotic pressure can be large, depending on the size of the concentration difference. For example, if pure water and sea water are separated by a semipermeable membrane that passes no salt, osmotic pressure will be 25.9 atm. This value means that water will diffuse through the membrane until the salt water surface rises 268 m above the pure-water surface! One example of pressure created by osmosis is turgor in plants (many wilt when too dry). Turgor describes the condition of a plant in which the fluid in a cell exerts a pressure against the cell wall. This pressure gives the plant support. Dialysis can similarly cause substantial pressures.
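As a quick check of the figures quoted above (a sketch, not the text’s worked example), the height of the column supported by an osmotic back pressure follows from $h = P/(\rho g)$:

```python
# Height of a water column supported by an osmotic pressure of 25.9 atm
P_osmotic = 25.9 * 1.013e5   # Pa (1 atm = 1.013e5 Pa)
rho = 1000.0                 # kg/m^3, approximating the solution as water
g = 9.80                     # m/s^2

h = P_osmotic / (rho * g)
print(f"height of column ~ {h:.0f} m")   # roughly 268 m
```

This reproduces the 268 m figure, confirming how large osmotic pressures can become for concentrated solutions.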
Reverse osmosis and reverse dialysis (also called filtration) are processes that occur when back pressure is sufficient to reverse the normal direction of substances through membranes. Back pressure can be created naturally as on the right side of . (A piston can also create this pressure.) Reverse osmosis can be used to desalinate water by simply forcing it through a membrane that will not pass salt. Similarly, reverse dialysis can be used to filter out any substance that a given membrane will not pass.
One further example of the movement of substances through membranes deserves mention. We sometimes find that substances pass in the direction opposite to what we expect. Cypress tree roots, for example, extract pure water from salt water, although osmosis would move it in the opposite direction. This is not reverse osmosis, because there is no back pressure to cause it. What is happening is called active transport, a process in which a living membrane expends energy to move substances across it. Many living membranes move water and other substances by active transport. The kidneys, for example, not only use osmosis and dialysis—they also employ significant active transport to move substances into and out of blood. In fact, it is estimated that at least 25% of the body’s energy is expended on active transport of substances at the cellular level. The study of active transport carries us into the realms of microbiology, biophysics, and biochemistry and it is a fascinating application of the laws of nature to living structures.
### Section Summary
1. Diffusion is the movement of substances due to random thermal molecular motion.
2. The average distance a molecule travels by diffusion in a given amount of time is given by $x_\text{rms} = \sqrt{2Dt}$, where $D$ is the diffusion constant, representative values of which are given in .
3. Osmosis is the transport of water through a semipermeable membrane from a region of high concentration to a region of low concentration.
4. Dialysis is the transport of any other molecule through a semipermeable membrane due to its concentration difference.
5. Both processes can be reversed by back pressure.
6. Active transport is a process in which a living membrane expends energy to move substances across it.
### Conceptual Questions
### Problem Exercises
# Temperature, Kinetic Theory, and the Gas Laws
## Connection for AP® Courses
Heat is something familiar to each of us. We feel the warmth of the summer sun, the chill of a clear summer night, the heat of coffee after a winter stroll, and the cooling effect of our sweat. Heat transfer is maintained by temperature differences. Manifestations of heat transfer—the movement of heat energy from one place or material to another—are apparent throughout the universe. Heat from beneath Earth’s surface is brought to the surface in flows of incandescent lava. The Sun warms Earth’s surface and is the source of much of the energy we find on it. Rising levels of atmospheric carbon dioxide threaten to trap more of the Sun’s energy, perhaps fundamentally altering the ecosphere. In space, supernovas explode, briefly radiating more heat than an entire galaxy does.
In this chapter several concepts related to heat are discussed – what is heat, how is it related to temperature, what are heat’s effects, and how is it related to other forms of energy and to work. We will find that, in spite of the richness of the phenomena, there is a small set of underlying physical principles that unite the subjects and tie them to other fields.
In a typical thermometer like this one, the alcohol, with a red dye, expands more rapidly than the glass containing it. When the thermometer’s temperature increases, the liquid from the bulb is forced into the narrow tube, producing a large change in the length of the column for a small change in temperature.
The learning objectives covered under Big Idea 7 of the AP Physics Curriculum Framework are supported in this chapter through descriptive, algebraic, and graphical representations. Complex systems with internal structure can be described by both microscopic and macroscopic quantities. Big Idea 7 corresponds to the use of probability to describe the behavior of such systems by reducing a large number of microscopic quantities to a small number of macroscopic quantities. The macroscopic quantities for an ideal gas, including temperature and pressure, are explained in this chapter (Enduring Understanding 7.A). The temperature represents average kinetic energy of gas molecules (Essential Knowledge 7.A.2). The pressure of a system determines the force that the system exerts on the walls of its container and is a measure of the average change in the momentum or impulse of the molecules colliding with the walls of the container (Essential Knowledge 7.A.1). The pressure also exists inside the system itself, not just at the walls of the container. This chapter discusses the “ideal gas law” that relates the temperature, pressure, and volume of an ideal gas using a simple equation (Essential Knowledge 7.A.3).
Big Idea 7 The mathematics of probability can be used to describe the behavior of complex systems and to interpret the behavior of quantum mechanical systems.
Enduring Understanding 7.A The properties of an ideal gas can be explained in terms of a small number of macroscopic variables including temperature and pressure.
Essential Knowledge 7.A.1 The pressure of a system determines the force that the system exerts on the walls of its container and is a measure of the average change in the momentum or impulse of the molecules colliding with the walls of the container. The pressure also exists inside the system itself, not just at the walls of the container.
Essential Knowledge 7.A.2 The temperature of a system characterizes the average kinetic energy of its molecules.
Essential Knowledge 7.A.3 In an ideal gas, the macroscopic (average) pressure (P), temperature (T), and volume (V), are related by the equation PV = nRT.
# Temperature, Kinetic Theory, and the Gas Laws
## Temperature
### Learning Objectives
By the end of this section, you will be able to:
1. Define temperature.
2. Convert temperatures between the Celsius, Fahrenheit, and Kelvin scales.
3. Define thermal equilibrium.
4. State the zeroth law of thermodynamics.
The concept of temperature has evolved from the common concepts of hot and cold. Human perception of what feels hot or cold is a relative one. For example, if you place one hand in hot water and the other in cold water, and then place both hands in tepid water, the tepid water will feel cool to the hand that was in hot water, and warm to the one that was in cold water. The scientific definition of temperature is less ambiguous than your senses of hot and cold. Temperature is operationally defined to be what we measure with a thermometer. (Many physical quantities are defined solely in terms of how they are measured. We shall see later how temperature is related to the kinetic energies of atoms and molecules, a more physical explanation.) Two accurate thermometers, one placed in hot water and the other in cold water, will show the hot water to have a higher temperature. If they are then placed in the tepid water, both will give identical readings (within measurement uncertainties). In this section, we discuss temperature, its measurement by thermometers, and its relationship to thermal equilibrium. Again, temperature is the quantity measured by a thermometer.
Any physical property that depends on temperature, and whose response to temperature is reproducible, can be used as the basis of a thermometer. Because many physical properties depend on temperature, the variety of thermometers is remarkable. For example, volume increases with temperature for most substances. This property is the basis for the common alcohol thermometer, the old mercury thermometer, and the bimetallic strip (). Other properties used to measure temperature include electrical resistance and color, as shown in , and the emission of infrared radiation, as shown in .
### Temperature Scales
Thermometers are used to measure temperature according to well-defined scales of measurement, which use pre-defined reference points to help compare quantities. The three most common temperature scales are the Fahrenheit, Celsius, and Kelvin scales. A temperature scale can be created by identifying two easily reproducible temperatures. The freezing and boiling temperatures of water at standard atmospheric pressure are commonly used.
The Celsius scale (which replaced the slightly different centigrade scale) has the freezing point of water at $0^\circ\text{C}$ and the boiling point at $100^\circ\text{C}$. Its unit is the degree Celsius $(^\circ\text{C})$. On the Fahrenheit scale (still the most frequently used in the United States), the freezing point of water is at $32^\circ\text{F}$ and the boiling point is at $212^\circ\text{F}$. The unit of temperature on this scale is the degree Fahrenheit $(^\circ\text{F})$. Note that a temperature difference of one degree Celsius is greater than a temperature difference of one degree Fahrenheit. Only 100 Celsius degrees span the same range as 180 Fahrenheit degrees; thus one degree on the Celsius scale is 1.8 times larger than one degree on the Fahrenheit scale.
The Kelvin scale is the temperature scale that is commonly used in science. It is an absolute temperature scale defined to have 0 K at the lowest possible temperature, called absolute zero. The official temperature unit on this scale is the kelvin, which is abbreviated K, and is not accompanied by a degree sign. The freezing and boiling points of water are 273.15 K and 373.15 K, respectively. Thus, the magnitude of temperature differences is the same in units of kelvins and degrees Celsius. Unlike other temperature scales, the Kelvin scale is an absolute scale. It is used extensively in scientific work because a number of physical quantities, such as the volume of an ideal gas, are directly related to absolute temperature. The kelvin is the SI unit used in scientific work.
The relationships among the three common temperature scales are shown in . Temperatures on these scales can be converted using the equations in .
Notice that the conversions between Fahrenheit and Kelvin look quite complicated. In fact, they are simple combinations of the conversions between Fahrenheit and Celsius, and the conversions between Celsius and Kelvin.
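The chaining of conversions can be made explicit in a short sketch (the formulas are the standard ones; the helper names are our own):

```python
def c_to_f(tc):
    """Celsius to Fahrenheit: T_F = (9/5) T_C + 32."""
    return 9.0 / 5.0 * tc + 32.0

def f_to_c(tf):
    """Fahrenheit to Celsius: T_C = (5/9)(T_F - 32)."""
    return 5.0 / 9.0 * (tf - 32.0)

def c_to_k(tc):
    """Celsius to Kelvin: T_K = T_C + 273.15."""
    return tc + 273.15

def f_to_k(tf):
    # Fahrenheit-to-Kelvin is just the two simpler conversions chained together.
    return c_to_k(f_to_c(tf))

print(c_to_f(100.0))   # 212.0, boiling point of water
print(f_to_k(32.0))    # 273.15, freezing point of water
```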
### Temperature Ranges in the Universe
shows the wide range of temperatures found in the universe. Human beings have been known to survive only within a small range of body temperatures. The average normal body temperature is usually given as $37.0^\circ\text{C}$ ($98.6^\circ\text{F}$), and variations in this temperature can indicate a medical condition: a fever, an infection, a tumor, or circulatory problems (see ).
The lowest temperatures ever recorded have been achieved in laboratory experiments at the Massachusetts Institute of Technology (USA) and at Helsinki University of Technology (Finland). In comparison, the coldest recorded place on Earth’s surface is Vostok, Antarctica, at 183 K, and the coldest place (outside the lab) known in the universe is the Boomerang Nebula, with a temperature of 1 K.
### Thermal Equilibrium and the Zeroth Law of Thermodynamics
Thermometers actually take their own temperature, not the temperature of the object they are measuring. This raises the question of how we can be certain that a thermometer measures the temperature of the object with which it is in contact. It is based on the fact that any two systems placed in thermal contact (meaning heat transfer can occur between them) will reach the same temperature. That is, heat will flow from the hotter object to the cooler one until they have exactly the same temperature. The objects are then in thermal equilibrium, and no further changes will occur. The systems interact and change because their temperatures differ, and the changes stop once their temperatures are the same. Thus, if enough time is allowed for this transfer of heat to run its course, the temperature a thermometer registers does represent the system with which it is in thermal equilibrium. Thermal equilibrium is established when two bodies are in contact with each other and can freely exchange energy.
Furthermore, experimentation has shown that if two systems, A and B, are in thermal equilibrium with each another, and B is in thermal equilibrium with a third system C, then A is also in thermal equilibrium with C. This conclusion may seem obvious, because all three have the same temperature, but it is basic to thermodynamics. It is called the zeroth law of thermodynamics.
This law was postulated in the 1930s, after the first and second laws of thermodynamics had been developed and named. It is called the zeroth law because it comes logically before the first and second laws (discussed in Thermodynamics). Suppose, for example, a cold metal block and a hot metal block are both placed on a metal plate at room temperature. Eventually the cold block and the plate will be in thermal equilibrium. In addition, the hot block and the plate will be in thermal equilibrium. By the zeroth law, we can conclude that the cold block and the hot block are also in thermal equilibrium.
### Section Summary
1. Temperature is the quantity measured by a thermometer.
2. Temperature is related to the average kinetic energy of atoms and molecules in a system.
3. Absolute zero is the temperature at which there is no molecular motion.
4. There are three main temperature scales: Celsius, Fahrenheit, and Kelvin.
5. Temperatures on one scale can be converted to temperatures on another scale using the following equations: $T_{^\circ\text{F}} = \frac{9}{5}T_{^\circ\text{C}} + 32$, $T_{^\circ\text{C}} = \frac{5}{9}\left(T_{^\circ\text{F}} - 32\right)$, $T_\text{K} = T_{^\circ\text{C}} + 273.15$, and $T_{^\circ\text{C}} = T_\text{K} - 273.15$.
6. Systems are in thermal equilibrium when they have the same temperature.
7. Thermal equilibrium occurs when two bodies are in contact with each other and can freely exchange energy.
8. The zeroth law of thermodynamics states that when two systems, A and B, are in thermal equilibrium with each other, and B is in thermal equilibrium with a third system, C, then A is also in thermal equilibrium with C.
### Conceptual Questions
### Problems & Exercises
# Temperature, Kinetic Theory, and the Gas Laws
## Thermal Expansion of Solids and Liquids
### Learning Objectives
By the end of this section, you will be able to:
1. Define and describe thermal expansion.
2. Calculate the linear expansion of an object given its initial length, change in temperature, and coefficient of linear expansion.
3. Calculate the volume expansion of an object given its initial volume, change in temperature, and coefficient of volume expansion.
4. Calculate thermal stress on an object given its original volume, temperature change, volume change, and bulk modulus.
The expansion of alcohol in a thermometer is one of many commonly encountered examples of thermal expansion, the change in size or volume of a given mass with temperature. Hot air rises because its volume increases, which causes the hot air’s density to be smaller than the density of surrounding air, causing a buoyant (upward) force on the hot air. The same happens in all liquids and gases, driving natural heat transfer upwards in homes, oceans, and weather systems. Solids also undergo thermal expansion. Railroad tracks and bridges, for example, have expansion joints to allow them to freely expand and contract with temperature changes.
What are the basic properties of thermal expansion? First, thermal expansion is clearly related to temperature change. The greater the temperature change, the more a bimetallic strip will bend. Second, it depends on the material. In a thermometer, for example, the expansion of alcohol is much greater than the expansion of the glass containing it.
What is the underlying cause of thermal expansion? As is discussed in Kinetic Theory: Atomic and Molecular Explanation of Pressure and Temperature, an increase in temperature implies an increase in the kinetic energy of the individual atoms. In a solid, unlike in a gas, the atoms or molecules are closely packed together, but their kinetic energy (in the form of small, rapid vibrations) pushes neighboring atoms or molecules apart from each other. This neighbor-to-neighbor pushing results in a slightly greater distance, on average, between neighbors, and adds up to a larger size for the whole body. For most substances under ordinary conditions, there is no preferred direction, and an increase in temperature will increase the solid’s size by a certain fraction in each dimension.
lists representative values of the coefficient of linear expansion $\alpha$, which may have units of $1/^\circ\text{C}$ or 1/K. Because the size of a kelvin and a degree Celsius are the same, both $\alpha$ and $\Delta T$ can be expressed in units of kelvins or degrees Celsius. The equation $\Delta L = \alpha L \Delta T$ is accurate for small changes in temperature and can be used for large changes in temperature if an average value of $\alpha$ is used.
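A brief numerical sketch of $\Delta L = \alpha L \Delta T$ with assumed, typical values for a steel bridge span (this is our own illustration, not an example from the text):

```python
alpha_steel = 12e-6   # 1/degree C, a typical coefficient of linear expansion for steel
L = 1000.0            # m, an assumed bridge span
delta_T = 40.0        # degrees C, an assumed seasonal temperature swing

delta_L = alpha_steel * L * delta_T
print(f"length change ~ {delta_L*100:.0f} cm")   # roughly half a meter over the span
```

A length change of tens of centimeters over a kilometer-long structure is exactly why expansion joints are needed.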
### Thermal Expansion in Two and Three Dimensions
Objects expand in all dimensions, as illustrated in . That is, their areas and volumes, as well as their lengths, increase with temperature. Holes also get larger with temperature. If you cut a hole in a metal plate, the remaining material will expand exactly as it would if the plug was still in place. The plug would get bigger, and so the hole must get bigger too. (Think of the ring of neighboring atoms or molecules on the wall of the hole as pushing each other farther apart as temperature increases. Obviously, the ring of neighbors must get slightly larger, so the hole gets slightly larger).
In general, objects will expand with increasing temperature. Water is the most important exception to this rule. Water expands with increasing temperature (its density decreases) when it is at temperatures greater than $4^\circ\text{C}$ ($40^\circ\text{F}$). However, it expands with decreasing temperature when it is between $+4^\circ\text{C}$ and $0^\circ\text{C}$ ($40^\circ\text{F}$ to $32^\circ\text{F}$). Water is densest at $+4^\circ\text{C}$. (See .) Perhaps the most striking effect of this phenomenon is the freezing of water in a pond. When water near the surface cools down to $4^\circ\text{C}$ it is denser than the remaining water and thus will sink to the bottom. This “turnover” results in a layer of warmer water near the surface, which is then cooled. Eventually the pond has a uniform temperature of $4^\circ\text{C}$. If the temperature in the surface layer drops below $4^\circ\text{C}$, the water is less dense than the water below, and thus stays near the top. As a result, the pond surface can completely freeze over. The ice on top of liquid water provides an insulating layer from winter’s harsh exterior air temperatures. Fish and other aquatic life can survive in water beneath ice, due to this unusual characteristic of water. It also produces circulation of water in the pond that is necessary for a healthy ecosystem of the body of water.
### Thermal Stress
Thermal stress is created by thermal expansion or contraction (see Elasticity: Stress and Strain for a discussion of stress and strain). Thermal stress can be destructive, such as when expanding gasoline ruptures a tank. It can also be useful: for example, when two parts are joined together by heating one in manufacturing, then slipping it over the other and allowing the combination to cool. Thermal stress can explain many phenomena, such as the weathering of rocks and pavement by the expansion of ice when it freezes.
Forces and pressures created by thermal stress are typically as great as that in the example above. Railroad tracks and roadways can buckle on hot days if they lack sufficient expansion joints. (See .) Power lines sag more in the summer than in the winter, and will snap in cold weather if there is insufficient slack. Cracks open and close in plaster walls as a house warms and cools. Glass cooking pans will crack if cooled rapidly or unevenly, because of differential contraction and the stresses it creates. (Pyrex® is less susceptible because of its small coefficient of thermal expansion.) Nuclear reactor pressure vessels are threatened by overly rapid cooling, and although none have failed, several have been cooled faster than considered desirable. Biological cells are ruptured when foods are frozen, detracting from their taste. Repeated thawing and freezing accentuate the damage. Even the oceans can be affected. A significant portion of the rise in sea level that is resulting from global warming is due to the thermal expansion of sea water.
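For a sense of scale, here is a rough sketch using the standard relation for a fully constrained bar, $\sigma = Y\alpha\Delta T$, where $Y$ is Young’s modulus; this is not this section’s worked example, and the steel values are assumed, typical ones.

```python
Y_steel = 210e9       # Pa, Young's modulus of steel (typical value)
alpha_steel = 12e-6   # 1/degree C, coefficient of linear expansion for steel
delta_T = 35.0        # degrees C, an assumed temperature rise

# Stress in a rail that is prevented from expanding: sigma = Y * alpha * delta_T
sigma = Y_steel * alpha_steel * delta_T
print(f"thermal stress ~ {sigma/1e6:.0f} MPa")
```

A stress near 90 MPa is a substantial fraction of the yield strength of ordinary steel, which is why unrelieved rails and roadways can buckle on hot days.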
Metal is regularly used in the human body for hip and knee implants. Most implants need to be replaced over time because, among other things, metal does not bond with bone. Researchers are trying to find better metal coatings that would allow metal-to-bone bonding. One challenge is to find a coating that has an expansion coefficient similar to that of metal. If the expansion coefficients are too different, the thermal stresses during the manufacturing process lead to cracks at the coating-metal interface.
Another example of thermal stress is found in the mouth. Dental fillings can expand differently from tooth enamel. This mismatch can cause pain when eating ice cream or having a hot drink, and cracks might occur in the filling. Metal fillings (gold, silver, etc.) are being replaced by composite fillings (porcelain), which have smaller coefficients of expansion, closer to those of teeth.
### Section Summary
1. Thermal expansion is the increase, or decrease, of the size (length, area, or volume) of a body due to a change in temperature.
2. Thermal expansion is large for gases, and relatively small, but not negligible, for liquids and solids.
3. Linear thermal expansion is $\Delta L = \alpha L \Delta T$, where $\Delta L$ is the change in length $L$, $\Delta T$ is the change in temperature, and $\alpha$ is the coefficient of linear expansion, which varies slightly with temperature.
4. The change in area due to thermal expansion is $\Delta A = 2\alpha A \Delta T$, where $\Delta A$ is the change in area.
5. The change in volume due to thermal expansion is $\Delta V = \beta V \Delta T$, where $\beta$ is the coefficient of volume expansion and $\beta \approx 3\alpha$. Thermal stress is created when thermal expansion is constrained.
### Conceptual Questions
### Problems & Exercises
# Temperature, Kinetic Theory, and the Gas Laws
## The Ideal Gas Law
### Learning Objectives
By the end of this section, you will be able to:
1. State the ideal gas law in terms of molecules and in terms of moles.
2. Use the ideal gas law to calculate pressure change, temperature change, volume change, or the number of molecules or moles in a given volume.
3. Use Avogadro’s number to convert between number of molecules and number of moles.
In this section, we continue to explore the thermal behavior of gases. In particular, we examine the characteristics of atoms and molecules that compose gases. (Most gases, for example nitrogen, $\text{N}_2$, and oxygen, $\text{O}_2$, are composed of two or more atoms. We will primarily use the term “molecule” in discussing a gas because the term can also be applied to monatomic gases, such as helium.)
Gases are easily compressed. We can see evidence of this in , where you will note that gases have the largest coefficients of volume expansion. The large coefficients mean that gases expand and contract very rapidly with temperature changes. In addition, you will note that most gases expand at the same rate, or have the same coefficient of volume expansion, $\beta$. This raises the question as to why gases should all act in nearly the same way, when liquids and solids have widely varying expansion rates.
The answer lies in the large separation of atoms and molecules in gases, compared to their sizes, as illustrated in . Because atoms and molecules have large separations, forces between them can be ignored, except when they collide with each other. The motion of atoms and molecules (at temperatures well above the boiling temperature) is fast, such that the gas occupies all of the accessible volume and the expansion of gases is rapid. In contrast, in liquids and solids, atoms and molecules are closer together and are quite sensitive to the forces between them.
To get some idea of how pressure, temperature, and volume of a gas are related to one another, consider what happens when you pump air into an initially deflated tire. The tire’s volume first increases in direct proportion to the amount of air injected, without much increase in the tire pressure. Once the tire has expanded to nearly its full size, the walls limit volume expansion. If we continue to pump air into it, the pressure increases. The pressure will further increase when the car is driven and the tires move. Most manufacturers specify optimal tire pressure for cold tires. (See .)
At room temperatures, collisions between atoms and molecules can be ignored. In this case, the gas is called an ideal gas, in which case the relationship between the pressure, volume, and temperature is given by the equation of state called the ideal gas law:

$$PV = NkT,$$

where $P$ is the absolute pressure of the gas, $V$ is the volume it occupies, $N$ is the number of atoms and molecules in the gas, $T$ is its absolute temperature, and $k$ is the Boltzmann constant, $k = 1.38 \times 10^{-23}\ \text{J/K}$.
The ideal gas law can be derived from basic principles, but was originally deduced from experimental measurements of Charles’ law (that volume occupied by a gas is proportional to temperature at a fixed pressure) and from Boyle’s law (that for a fixed temperature, the product $PV$ is a constant). In the ideal gas model, the volume occupied by its atoms and molecules is a negligible fraction of $V$. The ideal gas law describes the behavior of real gases under most conditions. (Note, for example, that $N$ is the total number of atoms and molecules, independent of the type of gas.)
Let us see how the ideal gas law is consistent with the behavior of filling the tire when it is pumped slowly and the temperature is constant. At first, the pressure $P$ is essentially equal to atmospheric pressure, and the volume $V$ increases in direct proportion to the number of atoms and molecules put into the tire. Once the volume of the tire is constant, the equation $PV = NkT$ predicts that the pressure should increase in proportion to the number $N$ of atoms and molecules.
### Moles and Avogadro’s Number
It is sometimes convenient to work with a unit other than molecules when measuring the amount of substance. A mole (abbreviated mol) is defined to be the amount of a substance that contains as many atoms or molecules as there are atoms in exactly 12 grams (0.012 kg) of carbon-12. The actual number of atoms or molecules in one mole is called Avogadro’s number, in recognition of Italian scientist Amedeo Avogadro (1776–1856). He developed the concept of the mole, based on the hypothesis that equal volumes of gas, at the same pressure and temperature, contain equal numbers of molecules. That is, the number is independent of the type of gas. This hypothesis has been confirmed, and the value of Avogadro’s number is $N_\text{A} = 6.02 \times 10^{23}\ \text{mol}^{-1}$.
### The Ideal Gas Law Restated Using Moles
A very common expression of the ideal gas law uses the number of moles, $n$, rather than the number of atoms and molecules, $N$. We start from the ideal gas law,

$$PV = NkT,$$

and multiply and divide the equation by Avogadro’s number $N_\text{A}$. This gives

$$PV = \frac{N}{N_\text{A}} N_\text{A} kT.$$

Note that $n = N/N_\text{A}$ is the number of moles. We define the universal gas constant $R = N_\text{A}k$, and obtain the ideal gas law in terms of moles:

$$PV = nRT.$$
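A short sketch of using $PV = nRT$ for an assumed 1-liter sample of air at room conditions (the scenario and numbers are our own, not the text’s example):

```python
R = 8.31        # J/(mol*K), universal gas constant
N_A = 6.02e23   # 1/mol, Avogadro's number

P = 1.013e5     # Pa, atmospheric pressure
V = 1.0e-3      # m^3, i.e. 1 liter (assumed container size)
T = 293.0       # K, about 20 degrees C

n = P * V / (R * T)          # number of moles from PV = nRT
print(f"n ~ {n:.3f} mol")
print(f"N ~ {n * N_A:.2e} molecules")
```

Even a single liter of air at room conditions contains on the order of $10^{22}$ molecules, which is why macroscopic averages such as pressure are so well defined.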
The ideal gas law can be considered to be another manifestation of the law of conservation of energy (see Conservation of Energy). Work done on a gas results in an increase in its energy, increasing pressure and/or temperature, or decreasing volume. This increased energy can also be viewed as increased internal kinetic energy, given the gas’s atoms and molecules.
### The Ideal Gas Law and Energy
Let us now examine the role of energy in the behavior of gases. When you inflate a bike tire by hand, you do work by repeatedly exerting a force through a distance. This energy goes into increasing the pressure of air inside the tire and increasing the temperature of the pump and the air.
The ideal gas law is closely related to energy: the units on both sides are joules. The right-hand side of the ideal gas law $PV = NkT$ is $NkT$. This term is roughly the amount of translational kinetic energy of $N$ atoms or molecules at an absolute temperature $T$, as we shall see formally in Kinetic Theory: Atomic and Molecular Explanation of Pressure and Temperature. The left-hand side of the ideal gas law is $PV$, which also has the units of joules. We know from our study of fluids that pressure is one type of potential energy per unit volume, so pressure multiplied by volume is energy. The important point is that there is energy in a gas related to both its pressure and its volume. The energy can be changed when the gas is doing work as it expands—something we explore in Heat and Heat Transfer Methods—similar to what occurs in gasoline or steam engines and turbines.
### Test Prep for AP Courses
### Section Summary
1. The ideal gas law relates the pressure and volume of a gas to the number of gas molecules and the temperature of the gas.
2. The ideal gas law can be written in terms of the number of molecules of gas: $PV = NkT$, where $P$ is pressure, $V$ is volume, $T$ is temperature, $N$ is number of molecules, and $k$ is the Boltzmann constant, $k = 1.38 \times 10^{-23}\ \text{J/K}$.
3. A mole is the number of atoms in a 12-g sample of carbon-12.
4. The number of molecules in a mole is called Avogadro’s number, $N_\text{A} = 6.02 \times 10^{23}\ \text{mol}^{-1}$.
5. A mole of any substance has a mass in grams equal to its molecular weight, which can be determined from the periodic table of elements.
6. The ideal gas law can also be written and solved in terms of the number of moles of gas: $PV = nRT$, where $n$ is number of moles and $R$ is the universal gas constant, $R = 8.31\ \text{J/(mol}\cdot\text{K)}$.
7. The ideal gas law is generally valid at temperatures well above the boiling temperature.
### Conceptual Questions
### Problems & Exercises
# Temperature, Kinetic Theory, and the Gas Laws
## Kinetic Theory: Atomic and Molecular Explanation of Pressure and Temperature
### Learning Objectives
By the end of this section, you will be able to:
1. Express the ideal gas law in terms of molecular mass and velocity.
2. Define thermal energy.
3. Calculate the kinetic energy of a gas molecule, given its temperature.
4. Describe the relationship between the temperature of a gas and the kinetic energy of atoms and molecules.
5. Describe the distribution of speeds of molecules in a gas.
We have developed macroscopic definitions of pressure and temperature. Pressure is the force divided by the area on which the force is exerted, and temperature is measured with a thermometer. We gain a better understanding of pressure and temperature from the kinetic theory of gases, which assumes that atoms and molecules are in continuous random motion.
shows an elastic collision of a gas molecule with the wall of a container, so that it exerts a force on the wall (by Newton’s third law). Because a huge number of molecules will collide with the wall in a short time, we observe an average force per unit area. These collisions are the source of pressure in a gas. As the number of molecules increases, the number of collisions and thus the pressure increase. Similarly, the gas pressure is higher if the average velocity of molecules is higher. The actual relationship is derived in the Things Great and Small feature below. The following relationship is found:

$$PV = \frac{1}{3}Nm\overline{v^2},$$

where $P$ is the pressure (average force per unit area), $V$ is the volume of gas in the container, $N$ is the number of molecules in the container, $m$ is the mass of a molecule, and $\overline{v^2}$ is the average of the molecular speed squared.
What can we learn from this atomic and molecular version of the ideal gas law? We can derive a relationship between temperature and the average translational kinetic energy of molecules in a gas. Recall the previous expression of the ideal gas law:

$$PV = NkT.$$

Equating the right-hand side of this equation with the right-hand side of $PV = \frac{1}{3}Nm\overline{v^2}$ gives

$$\frac{1}{3}Nm\overline{v^2} = NkT.$$

We can get the average kinetic energy of a molecule, $\frac{1}{2}m\overline{v^2}$, from this equation by canceling $N$ and multiplying by 3/2. This calculation produces the result that the average kinetic energy of a molecule is directly related to absolute temperature:

$$\overline{\text{KE}} = \frac{1}{2}m\overline{v^2} = \frac{3}{2}kT.$$

The average translational kinetic energy of a molecule, $\overline{\text{KE}}$, is called thermal energy. The equation $\overline{\text{KE}} = \frac{3}{2}kT$ is a molecular interpretation of temperature, and it has been found to be valid for gases and reasonably accurate in liquids and solids. It is another definition of temperature based on an expression of the molecular energy.
It is sometimes useful to rearrange $\overline{\text{KE}} = \frac{3}{2}kT$ and solve for the average speed of molecules in a gas in terms of temperature,

$$v_\text{rms} = \sqrt{\overline{v^2}} = \sqrt{\frac{3kT}{m}},$$

where $v_\text{rms}$ stands for root-mean-square (rms) speed.
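A brief numerical sketch of $v_\text{rms} = \sqrt{3kT/m}$ for nitrogen at room temperature (the molecular mass uses the standard 28 u; the scenario is our own illustration, not the text’s example):

```python
import math

k = 1.38e-23               # J/K, Boltzmann constant
m_N2 = 28.0 * 1.66e-27     # kg, mass of an N2 molecule (28 atomic mass units)
T = 293.0                  # K, about room temperature

v_rms = math.sqrt(3.0 * k * T / m_N2)
print(f"v_rms ~ {v_rms:.0f} m/s")   # roughly 500 m/s
```

Typical molecular speeds are hundreds of meters per second even at ordinary temperatures, which is consistent with the rapid expansion of gases discussed earlier.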
### Distribution of Molecular Speeds
The motion of molecules in a gas is random in magnitude and direction for individual molecules, but a gas of many molecules has a predictable distribution of molecular speeds. This distribution is called the Maxwell-Boltzmann distribution, after its originators, who calculated it based on kinetic theory, and has since been confirmed experimentally. (See .) The distribution has a long tail, because a few molecules may go several times the rms speed. The most probable speed $v_\text{p}$ is less than the rms speed $v_\text{rms}$. shows that the curve is shifted to higher speeds at higher temperatures, with a broader range of speeds.
The distribution of thermal speeds depends strongly on temperature. As temperature increases, the speeds are shifted to higher values and the distribution is broadened.
What is the implication of the change in distribution with temperature shown in for humans? All other things being equal, if a person has a fever, they are likely to lose more water molecules, particularly from linings along moist cavities such as the lungs and mouth, creating a dry sensation in the mouth.
### Test Prep for AP Courses
### Section Summary
1. Kinetic theory is the atomistic description of gases as well as liquids and solids.
2. Kinetic theory models the properties of matter in terms of continuous random motion of atoms and molecules.
3. The ideal gas law can also be expressed as $PV = \frac{1}{3}Nm\overline{v^2}$, where $P$ is the pressure (average force per unit area), $V$ is the volume of gas in the container, $N$ is the number of molecules in the container, $m$ is the mass of a molecule, and $\overline{v^2}$ is the average of the molecular speed squared.
4. Thermal energy is defined to be the average translational kinetic energy $\overline{\text{KE}}$ of an atom or molecule.
5. The temperature of gases is proportional to the average translational kinetic energy of atoms and molecules: $\overline{\text{KE}} = \frac{1}{2}m\overline{v^2} = \frac{3}{2}kT$, or $v_\text{rms} = \sqrt{\dfrac{3kT}{m}}$.
6. The motion of individual molecules in a gas is random in magnitude and direction. However, a gas of many molecules has a predictable distribution of molecular speeds, known as the Maxwell-Boltzmann distribution.
### Conceptual Questions
### Problems & Exercises
# Temperature, Kinetic Theory, and the Gas Laws
## Phase Changes
### Learning Objectives
By the end of this section, you will be able to:
1. Interpret a phase diagram.
2. State Dalton’s law.
3. Identify and describe the triple point of a gas from its phase diagram.
4. Describe the state of equilibrium between a liquid and a gas, a liquid and a solid, and a gas and a solid.
Up to now, we have considered the behavior of ideal gases. Real gases are like ideal gases at high temperatures. At lower temperatures, however, the interactions between the molecules and their volumes cannot be ignored. The molecules are very close (condensation occurs) and there is a dramatic decrease in volume, as seen in . The substance changes from a gas to a liquid. When a liquid is cooled to even lower temperatures, it becomes a solid. The volume never reaches zero because of the finite volume of the molecules.
High pressure may also cause a gas to change phase to a liquid. Carbon dioxide, for example, is a gas at room temperature and atmospheric pressure, but becomes a liquid under sufficiently high pressure. If the pressure is reduced, the temperature drops and the liquid carbon dioxide solidifies into a snow-like substance at the temperature $-78.5^\circ\text{C}$. Solid $\text{CO}_2$ is called “dry ice.” Another example of a gas that can be in a liquid phase is liquid nitrogen ($\text{LN}_2$). $\text{LN}_2$ is made by liquefaction of atmospheric air (through compression and cooling). It boils at 77 K ($-196^\circ\text{C}$) at atmospheric pressure. $\text{LN}_2$ is useful as a refrigerant and allows for the preservation of blood, sperm, and other biological materials. It is also used to reduce noise in electronic sensors and equipment, and to help cool down their current-carrying wires. In dermatology, $\text{LN}_2$ is used to freeze and painlessly remove warts and other growths from the skin.
### PV Diagrams
We can examine aspects of the behavior of a substance by plotting a graph of pressure versus volume, called a $PV$ diagram. When the substance behaves like an ideal gas, the ideal gas law describes the relationship between its pressure and volume. That is,

$$PV = NkT\ \text{(ideal gas).}$$

Now, assuming the number of molecules and the temperature are fixed,

$$PV = \text{constant (ideal gas, constant temperature).}$$

For example, the volume of the gas will decrease as the pressure increases. If you plot the relationship $PV = \text{constant}$ on a $PV$ diagram, you find a hyperbola. shows a graph of pressure versus volume. The hyperbolas represent ideal-gas behavior at various fixed temperatures, and are called isotherms. At lower temperatures, the curves begin to look less like hyperbolas—the gas is not behaving ideally and may even contain liquid. There is a critical point—that is, a critical temperature—above which liquid cannot exist. At sufficiently high pressure above the critical point, the gas will have the density of a liquid but will not condense. Carbon dioxide, for example, cannot be liquefied at a temperature above $31^\circ\text{C}$. Critical pressure is the minimum pressure needed for liquid to exist at the critical temperature. lists representative critical temperatures and pressures.
### Phase Diagrams
The plots of pressure versus temperatures provide considerable insight into thermal properties of substances. There are well-defined regions on these graphs that correspond to various phases of matter, so such graphs are called phase diagrams. shows the phase diagram for water. Using the graph, if you know the pressure and temperature you can determine the phase of water. The solid lines—boundaries between phases—indicate temperatures and pressures at which the phases coexist (that is, they exist together in ratios, depending on pressure and temperature). For example, the boiling point of water is $100^\circ\text{C}$ at 1.00 atm. As the pressure increases, the boiling temperature rises steadily to $374^\circ\text{C}$ at a pressure of 218 atm. A pressure cooker (or even a covered pot) will cook food faster because the water can exist as a liquid at temperatures greater than $100^\circ\text{C}$ without all boiling away. The curve ends at a point called the critical point, because at higher temperatures the liquid phase does not exist at any pressure. The critical point occurs at the critical temperature, as you can see for water from . The critical temperature for oxygen is $-118^\circ\text{C}$, so oxygen cannot be liquefied above this temperature.
Similarly, the curve between the solid and liquid regions in gives the melting temperature at various pressures. For example, the melting point is $0^\circ\text{C}$ at 1.00 atm, as expected. Note that, at a fixed temperature, you can change the phase from solid (ice) to liquid (water) by increasing the pressure. Ice melts from pressure in the hands of a snowball maker. From the phase diagram, we can also say that the melting temperature of ice falls with increased pressure. When a car is driven over snow, the increased pressure from the tires melts the snowflakes; afterwards the water refreezes and forms an ice layer.
At sufficiently low pressures there is no liquid phase, but the substance can exist as either gas or solid. For water, there is no liquid phase at pressures below 0.00600 atm. The phase change from solid to gas is called sublimation. It accounts for large losses of snow pack that never make it into a river, the routine automatic defrosting of a freezer, and the freeze-drying process applied to many foods. Carbon dioxide, on the other hand, sublimates at standard atmospheric pressure of 1 atm. (The solid form of $\text{CO}_2$ is known as dry ice because it does not melt. Instead, it moves directly from the solid to the gas state.)
All three curves on the phase diagram meet at a single point, the triple point, where all three phases exist in equilibrium. For water, the triple point occurs at 273.16 K ($0.01^\circ\text{C}$), and is a more accurate calibration temperature than the melting point of water at 1.00 atm, or 273.15 K ($0.00^\circ\text{C}$). See for the triple point values of other substances.
### Equilibrium
Liquid and gas phases are in equilibrium at the boiling temperature. (See .) If a substance is in a closed container at the boiling point, then the liquid is boiling and the gas is condensing at the same rate without net change in their relative amount. Molecules in the liquid escape as a gas at the same rate at which gas molecules stick to the liquid, or form droplets and become part of the liquid phase. The combination of temperature and pressure has to be “just right”; if the temperature and pressure are increased, equilibrium is maintained by the same increase of boiling and condensation rates.
One example of equilibrium between liquid and gas is that of water and steam at $100^\circ\text{C}$ and 1.00 atm. This temperature is the boiling point at that pressure, so they should exist in equilibrium. Why does an open pot of water at $100^\circ\text{C}$ boil completely away? The gas surrounding an open pot is not pure water: it is mixed with air. If pure water and steam are in a closed container at $100^\circ\text{C}$ and 1.00 atm, they would coexist—but with air over the pot, there are fewer water molecules to condense, and water boils. What about water at room temperature and 1.00 atm? This temperature and pressure correspond to the liquid region, yet an open glass of water at this temperature will completely evaporate. Again, the gas around it is air and not pure water vapor, so that the reduced evaporation rate is greater than the condensation rate of water from dry air. If the glass is sealed, then the liquid phase remains. We call the gas phase a vapor when it exists, as it does for water at room temperature, at a temperature below the boiling temperature.
### Vapor Pressure, Partial Pressure, and Dalton’s Law
Vapor pressure is defined as the pressure at which a gas coexists with its solid or liquid phase. Vapor pressure is created by faster molecules that break away from the liquid or solid and enter the gas phase. The vapor pressure of a substance depends on both the substance and its temperature—an increase in temperature increases the vapor pressure.
Partial pressure is defined as the pressure a gas would create if it alone occupied the total volume available. In a mixture of gases, the total pressure is the sum of the partial pressures of the component gases, assuming ideal gas behavior and no chemical reactions between the components. This law is known as Dalton’s law of partial pressures, after the English scientist John Dalton (1766–1844), who proposed it. Dalton’s law is based on kinetic theory, where each gas creates its pressure by molecular collisions, independent of other gases present. It is consistent with the fact that pressures add according to Pascal’s Principle. Thus water evaporates and ice sublimates when their vapor pressures exceed the partial pressure of water vapor in the surrounding mixture of gases. If their vapor pressures are less than the partial pressure of water vapor in the surrounding gas, liquid droplets or ice crystals (frost) form.
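As a rough numerical illustration of Dalton’s law and the evaporation criterion just described, the short Python sketch below adds assumed partial pressures for a humid-air mixture and compares the partial pressure of water vapor with an assumed saturation vapor pressure. All numerical values are illustrative placeholders rather than data from this text.

```python
# Minimal sketch of Dalton's law: the total pressure of an ideal-gas mixture
# is the sum of the partial pressures of its components.
# The numbers below are assumed for illustration.

partial_pressures_atm = {
    "N2": 0.78,
    "O2": 0.21,
    "H2O vapor": 0.023,   # assumed humid-air value
}

total_pressure = sum(partial_pressures_atm.values())
print(f"Total pressure: {total_pressure:.3f} atm")

# Evaporation vs. condensation test from the paragraph above:
vapor_pressure_water = 0.031   # assumed saturation vapor pressure near 25 °C, in atm
if vapor_pressure_water > partial_pressures_atm["H2O vapor"]:
    print("Vapor pressure exceeds the partial pressure: water evaporates.")
else:
    print("Partial pressure exceeds the vapor pressure: droplets or frost form.")
```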
### Section Summary
1. Most substances have three distinct phases: gas, liquid, and solid.
2. Phase changes among the various phases of matter depend on temperature and pressure.
3. The existence of the three phases with respect to pressure and temperature can be described in a phase diagram.
4. Two phases coexist (i.e., they are in thermal equilibrium) at a set of pressures and temperatures. These are described as a line on a phase diagram.
5. The three phases coexist at a single pressure and temperature. This is known as the triple point and is described by a single point on a phase diagram.
6. A gas at a temperature below its boiling point is called a vapor.
7. Vapor pressure is the pressure at which a gas coexists with its solid or liquid phase.
8. Partial pressure is the pressure a gas would create if it existed alone.
9. Dalton’s law states that the total pressure is the sum of the partial pressures of all of the gases present.
### Conceptual Questions
# Temperature, Kinetic Theory, and the Gas Laws
## Humidity, Evaporation, and Boiling
### Learning Objectives
By the end of this section, you will be able to:
1. Explain the relationship between vapor pressure of water and the capacity of air to hold water vapor.
2. Explain the relationship between relative humidity and partial pressure of water vapor in the air.
3. Calculate vapor density using vapor pressure.
4. Calculate humidity and dew point.
The expression “it’s not the heat, it’s the humidity” makes a valid point. We keep cool in hot weather by evaporating sweat from our skin and water from our breathing passages. Because evaporation is inhibited by high humidity, we feel hotter at a given temperature when the humidity is high. Low humidity, on the other hand, can cause discomfort from excessive drying of mucous membranes and can lead to an increased risk of respiratory infections.
When we say humidity, we really mean relative humidity. Relative humidity tells us how much water vapor is in the air compared with the maximum possible. At its maximum, denoted as saturation, the relative humidity is 100%, and evaporation is inhibited. The amount of water vapor in the air depends on temperature. For example, relative humidity rises in the evening, as air temperature declines, sometimes reaching the dew point. At the dew point temperature, relative humidity is 100%, and fog may result from the condensation of water droplets if they are small enough to stay in suspension. Conversely, if you wish to dry something (perhaps your hair), it is more effective to blow hot air over it rather than cold air, because, among other things, the increase in temperature increases the energy of the molecules, so the rate of evaporation increases.
The amount of water vapor in the air depends on the vapor pressure of water. The liquid and solid phases are continuously giving off vapor because some of the molecules have high enough speeds to enter the gas phase; see (a). If a lid is placed over the container, as in (b), evaporation continues, increasing the pressure, until sufficient vapor has built up for condensation to balance evaporation. Then equilibrium has been achieved, and the vapor pressure is equal to the partial pressure of water in the container. Vapor pressure increases with temperature because molecular speeds are higher as temperature increases. gives representative values of water vapor pressure over a range of temperatures.
Relative humidity is related to the partial pressure of water vapor in the air. At 100% humidity, the partial pressure is equal to the vapor pressure, and no more water can enter the vapor phase. If the partial pressure is less than the vapor pressure, then evaporation will take place, as humidity is less than 100%. If the partial pressure is greater than the vapor pressure, condensation takes place. In everyday language, people sometimes refer to the capacity of air to “hold” water vapor, but this is not actually what happens. The water vapor is not held by the air. The amount of water in air is determined by the vapor pressure of water and has nothing to do with the properties of air.
We can use this and the data in to do a variety of interesting calculations, keeping in mind that relative humidity is based on comparing the actual amount (or partial pressure) of water vapor in the air with the saturation value at that temperature.
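The Python sketch below shows the kind of calculation this enables, assuming a few typical saturation vapor densities (in g/m³) of the sort listed in the table; the specific numbers are assumed for illustration and should be replaced with values from the table.

```python
# A minimal sketch of the percent relative humidity calculation,
# percent RH = (vapor density) / (saturation vapor density) x 100.
# The saturation values below are approximate, assumed table entries.

saturation_vapor_density = {  # temperature in °C : saturation density in g/m^3
    10: 9.40,
    15: 12.8,
    20: 17.2,
    25: 23.0,
    30: 30.4,
}

def percent_relative_humidity(vapor_density, temperature_c):
    """Relative humidity (%) given the actual vapor density in g/m^3."""
    return 100.0 * vapor_density / saturation_vapor_density[temperature_c]

# Example: air at 25 °C containing 12.8 g/m^3 of water vapor
rh = percent_relative_humidity(12.8, 25)
print(f"Relative humidity: {rh:.0f}%")   # about 56%

# The dew point is the temperature whose saturation density equals the actual
# vapor density; here 12.8 g/m^3 saturates air at about 15 °C.
```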
Why does water boil at 100°C? You will note from that the vapor pressure of water at 100°C is 1.01 × 10⁵ Pa, or 1.00 atm. Thus, it can evaporate without limit at this temperature and pressure. But why does it form bubbles when it boils? This is because water ordinarily contains significant amounts of dissolved air and other impurities, which are observed as small bubbles of air in a glass of water. If a bubble starts out at the bottom of the container at 20°C, it contains water vapor (about 2.30%). The pressure inside the bubble is fixed at 1.00 atm (we ignore the slight pressure exerted by the water around it). As the temperature rises, the amount of air in the bubble stays the same, but the water vapor increases; the bubble expands to keep the pressure at 1.00 atm. At 100°C, water vapor enters the bubble continuously since the partial pressure of water is equal to 1.00 atm in equilibrium. It cannot reach this pressure, however, since the bubble also contains air and total pressure is 1.00 atm. The bubble grows in size and thereby increases the buoyant force. The bubble breaks away and rises rapidly to the surface—we call this boiling! (See .)
### Section Summary
1. Relative humidity is the fraction of water vapor in a gas compared to the saturation value.
2. The saturation vapor density can be determined from the vapor pressure for a given temperature.
3. Percent relative humidity is defined to be the ratio of the actual vapor density to the saturation vapor density, expressed as a percentage: percent relative humidity = (vapor density / saturation vapor density) × 100.
4. The dew point is the temperature at which air reaches 100% relative humidity.
### Conceptual Questions
### Problems & Exercises
# Heat and Heat Transfer Methods
## Connection for AP® Courses
Heat is one of the most intriguing of the many ways in which energy goes from one place to another. Heat is often hidden, as it only exists when energy is in transit, and the methods of transfer are distinctly different. Energy transfer by heat touches every aspect of our lives, and helps us to understand how the universe functions. It explains the chill you feel on a clear breezy night, and why Earth’s core has yet to cool.
In this chapter, the ideas of temperature and thermal energy are used to examine and define heat, how heat is affected by the thermal properties of materials, and how the various mechanisms of heat transfer function. These topics are fundamental and practical, and will be returned to in future chapters. Big Idea 4 of the AP® Physics Curriculum Framework is supported by a discussion of how systems interact through energy transfer by heat and how this leads to changes in the energy of each system. Big Idea 5 is supported by exploration of the law of energy conservation that governs any changes in the energy of a system. Heat involves the transfer of thermal energy, or internal energy, from one system to another or to its surroundings, and so leads to a change in the internal energy of the system. This is analogous to the way work transfers mechanical energy to a mass to change its kinetic or potential energy. However, heat occurs as a spontaneous process, in which thermal energy is transferred from a higher temperature system to a lower temperature system.
Big Idea 1 is supported by examination of the internal structure of systems, which determines the nature of those energy changes and the mechanism of heat transfer. Macroscopic properties, such as heat capacity, latent heat, and thermal conductivity, depend on the arrangements and interactions of the atoms or molecules in a substance. The arrangement of these particles also determines whether thermal energy will be transferred through direct physical contact between systems (conduction), through the motion of fluids with different temperatures (convection), or through emission or absorption of radiation.
Big Idea 1 Objects and systems have properties such as mass and charge. Systems may have internal structure.
Enduring Understanding 1.E Materials have many macroscopic properties that result from the arrangement and interactions of the atoms and molecules that make up the material.
Essential Knowledge 1.E.3 Matter has a property called thermal conductivity.
Big Idea 4 Interactions between systems can result in changes in those systems.
Enduring Understanding 4.C Interactions with other objects or systems can change the total energy of a system.
Essential Knowledge 4.C.3 Energy is transferred spontaneously from a higher temperature system to a lower temperature system. The process through which energy is transferred between systems at different temperatures is called heat.
Big Idea 5 Changes that occur as a result of interactions are constrained by conservation laws.
Enduring Understanding 5.B The energy of a system is conserved.
Essential Knowledge 5.B.6 Energy can be transferred by thermal processes involving differences in temperature; the amount of energy transferred in this process of transfer is called heat.
# Heat and Heat Transfer Methods
## Heat
### Learning Objectives
By the end of this section, you will be able to:
1. Define heat as transfer of energy.
In Work, Energy, and Energy Resources, we defined work as force times distance and learned that work done on an object changes its kinetic energy. We also saw in Temperature, Kinetic Theory, and the Gas Laws that temperature is proportional to the (average) kinetic energy of atoms and molecules. We say that a thermal system has a certain internal energy: its internal energy is higher if the temperature is higher. If two objects at different temperatures are brought in contact with each other, energy is transferred from the hotter to the colder object until the bodies reach thermal equilibrium (i.e., they are at the same temperature). No work is done by either object, because no force acts through a distance. The transfer of energy is caused by the temperature difference, and ceases once the temperatures are equal. These observations lead to the following definition of heat: Heat is the spontaneous transfer of energy due to a temperature difference.
As noted in Temperature, Kinetic Theory, and the Gas Laws, heat is often confused with temperature. For example, we may say the heat was unbearable, when we actually mean that the temperature was high. Heat is a form of energy, whereas temperature is not. The misconception arises because we are sensitive to the flow of heat, rather than the temperature.
Because heat is a form of energy, it has the SI unit of joule (J). The calorie (cal) is a common unit of energy, defined as the energy needed to change the temperature of 1.00 g of water by 1.00°C—specifically, between 14.5°C and 15.5°C, since there is a slight temperature dependence. Perhaps the most common unit of heat is the kilocalorie (kcal), which is the energy needed to change the temperature of 1.00 kg of water by 1.00°C. Since mass is most often specified in kilograms, the kilocalorie is commonly used. Food calories (given the notation Cal, and sometimes called “big calorie”) are actually kilocalories (1 kilocalorie = 1000 calories), a fact not easily determined from package labeling.
### Mechanical Equivalent of Heat
It is also possible to change the temperature of a substance by doing work. Work can transfer energy into or out of a system. This realization helped establish the fact that heat is a form of energy. James Prescott Joule (1818–1889) performed many experiments to establish the mechanical equivalent of heat—the work needed to produce the same effects as heat transfer. In terms of the units used for these two quantities, the best modern value for this equivalence is 4.186 J = 1.000 cal.
We consider this equation as the conversion between two different units of energy.
The figure above shows one of Joule’s most famous experimental setups for demonstrating the mechanical equivalent of heat. It demonstrated that work and heat can produce the same effects, and helped establish the principle of conservation of energy. Gravitational potential energy (PE) (work done by the gravitational force) is converted into kinetic energy (KE), and then randomized by viscosity and turbulence into increased average kinetic energy of atoms and molecules in the system, producing a temperature increase. His contributions to the field of thermodynamics were so significant that the SI unit of energy was named after him.
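The following Python sketch mimics the energy bookkeeping of such an experiment under simple assumptions: a falling mass does work mgh on a paddle wheel, and all of that work ends up as thermal energy in a small amount of water, raising its temperature according to Q = mcΔT. The masses, the drop height, and the assumption of no losses are illustrative only.

```python
# A rough sketch of Joule's paddle-wheel idea: gravitational potential energy
# m*g*h is converted into thermal energy of a mass of water, Q = m_w*c*dT.
# All numbers are assumed for illustration.

g = 9.80                 # m/s^2
c_water = 4186.0         # J/(kg*°C), specific heat of water

falling_mass = 10.0      # kg, assumed
drop_height = 2.0        # m, assumed
water_mass = 0.500       # kg of water being stirred, assumed

work_done = falling_mass * g * drop_height        # J of mechanical work
delta_T = work_done / (water_mass * c_water)      # resulting temperature rise

print(f"Work done: {work_done:.0f} J")
print(f"Temperature rise of the water: {delta_T:.3f} °C")   # roughly 0.09 °C
```

The tiny temperature rise per drop is consistent with the historical difficulty of the measurement: Joule had to repeat the drop many times and measure temperature very precisely.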
Heat added or removed from a system changes its internal energy and thus its temperature. Such a temperature increase is observed while cooking. However, adding heat does not necessarily increase the temperature. An example is melting of ice; that is, when a substance changes from one phase to another. Work done on the system or by the system can also change the internal energy of the system. Joule demonstrated that the temperature of a system can be increased by stirring. If an ice cube is rubbed against a rough surface, work is done by the frictional force. A system has a well-defined internal energy, but we cannot say that it has a certain “heat content” or “work content”. We use the phrase “heat transfer” to emphasize its nature.
### Test Prep for AP Courses
### Summary
1. Heat and work are the two distinct methods of energy transfer.
2. Heat is energy transferred solely due to a temperature difference.
3. Any energy unit can be used for heat transfer, and the most common are kilocalorie (kcal) and joule (J).
4. Kilocalorie is defined to be the energy needed to change the temperature of 1.00 kg of water between and .
5. The mechanical equivalent of this heat transfer is 1.000 kcal = 4186 J.
### Conceptual Questions
# Heat and Heat Transfer Methods
## Temperature Change and Heat Capacity
### Learning Objectives
By the end of this section, you will be able to:
1. Observe heat transfer and change in temperature and mass.
2. Calculate final temperature after heat transfer between two objects.
One of the major effects of heat transfer is temperature change: heating increases the temperature while cooling decreases it. We assume that there is no phase change and that no work is done on or by the system. Experiments show that the transferred heat depends on three factors—the change in temperature, the mass of the system, and the substance and phase of the substance.
The dependence on temperature change and mass is easily understood. Because the (average) kinetic energy of an atom or molecule is proportional to the absolute temperature, the internal energy of a system is proportional to the absolute temperature and the number of atoms or molecules. Because the transferred heat is equal to the change in the internal energy, the heat is proportional to the mass of the substance and the temperature change. The transferred heat also depends on the substance, so that, for example, the heat necessary to raise the temperature is less for alcohol than for water. For the same substance, the transferred heat also depends on the phase (gas, liquid, or solid).
Values of specific heat must generally be looked up in tables, because there is no simple way to calculate them. In general, the specific heat also depends on the temperature. lists representative values of specific heat for various substances. Except for gases, the temperature and volume dependence of the specific heat of most substances is weak. We see from this table that the specific heat of water is five times that of glass and ten times that of iron, which means that it takes five times as much heat to raise the temperature of water the same amount as for glass and ten times as much heat to raise the temperature of water as for iron. In fact, water has one of the largest specific heats of any material, which is important for sustaining life on Earth.
Note that is an illustration of the mechanical equivalent of heat. Alternatively, the temperature increase could be produced by a blow torch instead of mechanically.
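As a simple illustration of using the specific heat relationship Q = mcΔT (stated in the Summary below), the Python sketch below estimates the common final temperature when a hot piece of iron is dropped into water, assuming no heat loss to the surroundings, no phase change, and approximate specific heats; all numbers are assumed rather than taken from this text.

```python
# A minimal sketch of finding the common final temperature when a hot object
# is dropped into water, assuming no heat is lost to the surroundings and
# no phase change occurs. Energy balance: m1*c1*(T1 - Tf) = m2*c2*(Tf - T2).
# Specific heats and masses below are assumed illustrative values.

c_iron = 450.0      # J/(kg*°C), approximate
c_water = 4186.0    # J/(kg*°C)

m_iron, T_iron = 0.500, 300.0     # kg, °C (hot iron piece, assumed)
m_water, T_water = 1.00, 20.0     # kg, °C (assumed)

# Setting heat lost by the iron equal to heat gained by the water and
# solving for the final temperature Tf:
T_final = (m_iron * c_iron * T_iron + m_water * c_water * T_water) / \
          (m_iron * c_iron + m_water * c_water)
print(f"Final temperature: {T_final:.1f} °C")   # about 34 °C
```

Because the result stays well below the boiling point, the no-phase-change assumption is self-consistent; if it were not, the problem would have to be redone in steps, as described in the problem-solving strategy later in this chapter.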
### Summary
1. The transfer of heat Q that leads to a temperature change ΔT of a body with mass m is Q = mcΔT, where c is the specific heat of the material. This relationship can also be considered as the definition of specific heat.
### Conceptual Questions
### Problems & Exercises
# Heat and Heat Transfer Methods
## Phase Change and Latent Heat
### Learning Objectives
By the end of this section, you will be able to:
1. Examine heat transfer.
2. Calculate final temperature from heat transfer.
So far we have discussed temperature change due to heat transfer. No temperature change occurs from heat transfer if ice melts and becomes liquid water (i.e., during a phase change). For example, consider water dripping from icicles melting on a roof warmed by the Sun. Conversely, water freezes in an ice tray cooled by lower-temperature surroundings.
Energy is required to melt a solid because the cohesive bonds between the molecules in the solid must be broken apart such that, in the liquid, the molecules can move around at comparable kinetic energies; thus, there is no rise in temperature. Similarly, energy is needed to vaporize a liquid, because molecules in a liquid interact with each other via attractive forces. There is no temperature change until a phase change is complete. The temperature of a cup of soda initially at 0°C stays at 0°C until all the ice has melted. Conversely, energy is released during freezing and condensation, usually in the form of thermal energy. Work is done by cohesive forces when molecules are brought together. The corresponding energy must be given off (dissipated) to allow them to stay together.
The energy involved in a phase change depends on two major factors: the number and strength of bonds or force pairs. The number of bonds is proportional to the number of molecules and thus to the mass of the sample. The strength of forces depends on the type of molecules. The heat Q required to change the phase of a sample of mass m is given by Q = mL_f (melting or freezing) and Q = mL_v (vaporization or condensation),
where the latent heat of fusion, L_f, and latent heat of vaporization, L_v, are material constants that are determined experimentally. See ().
Latent heat is measured in units of J/kg. Both L_f and L_v depend on the substance, particularly on the strength of its molecular forces as noted earlier. L_f and L_v are collectively called latent heat coefficients. They are latent, or hidden, because in phase changes, energy enters or leaves a system without causing a temperature change in the system; so, in effect, the energy is hidden. lists representative values of L_f and L_v, together with melting and boiling points.
The table shows that significant amounts of energy are involved in phase changes. Let us look, for example, at how much energy is needed to melt a kilogram of ice at 0°C to produce a kilogram of water at 0°C. Using the equation for a phase change and the value of L_f for water from , we find that Q = mL_f = (1.0 kg)(334 kJ/kg) = 334 kJ is the energy to melt a kilogram of ice. This is a lot of energy, as it represents the same amount of energy needed to raise the temperature of 1 kg of liquid water from 0°C to 79.8°C. Even more energy is required to vaporize water; it would take 2256 kJ to change 1 kg of liquid water at the normal boiling point (100°C at atmospheric pressure) to steam (water vapor). This example shows that the energy for a phase change is enormous compared to energy associated with temperature changes without a phase change.
Phase changes can have a tremendous stabilizing effect even on temperatures that are not near the melting and boiling points, because evaporation and condensation (conversion of a gas into a liquid state) occur even at temperatures below the boiling point. Take, for example, the fact that air temperatures in humid climates rarely go above 35.0°C, which is because most heat transfer goes into evaporating water into the air. Similarly, temperatures in humid weather rarely fall below the dew point because enormous heat is released when water vapor condenses.
We examine the effects of phase change more precisely by considering adding heat into a sample of ice at −20°C (). The temperature of the ice rises linearly, absorbing heat at a constant rate of 0.50 cal/g⋅°C until it reaches 0°C. Once at this temperature, the ice begins to melt until all the ice has melted, absorbing 79.8 cal/g of heat. The temperature remains constant at 0°C during this phase change. Once all the ice has melted, the temperature of the liquid water rises, absorbing heat at a new constant rate of 1.00 cal/g⋅°C. At 100°C, the water begins to boil and the temperature again remains constant while the water absorbs 539 cal/g of heat during this phase change. When all the liquid has become steam vapor, the temperature rises again, absorbing heat at a rate of 0.482 cal/g⋅°C.
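A short Python sketch of this staged calculation is given below; it adds up the heat for each stage as 1.00 g of ice is taken from −20°C to steam at 120°C, using the per-gram values quoted above. The 120°C endpoint and the exact specific heat of steam are assumptions made for illustration.

```python
# A sketch of the staged calculation described above: heating 1.00 g of ice
# at -20 °C all the way to steam at 120 °C, using per-gram values
# (specific heats in cal/(g*°C), latent heats in cal/g).

m = 1.00          # grams
c_ice, c_water, c_steam = 0.50, 1.00, 0.482   # cal/(g*°C), approximate
L_fusion, L_vapor = 79.8, 539.0               # cal/g

stages = [
    ("warm ice from -20 to 0 °C", m * c_ice * 20),
    ("melt ice at 0 °C", m * L_fusion),
    ("warm water from 0 to 100 °C", m * c_water * 100),
    ("boil water at 100 °C", m * L_vapor),
    ("warm steam from 100 to 120 °C", m * c_steam * 20),
]

total = 0.0
for label, q in stages:
    total += q
    print(f"{label:32s} {q:7.1f} cal")
print(f"{'total heat':32s} {total:7.1f} cal")   # roughly 738 cal per gram
```

Note how the two phase changes dominate the total, echoing the point above that phase-change energies dwarf the energies of ordinary temperature changes.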
Water can evaporate at temperatures below the boiling point. More energy is required than at the boiling point, because the kinetic energy of water molecules at temperatures below 100°C is less than that at 100°C, hence less energy is available from random thermal motions. Take, for example, the fact that, at body temperature, perspiration from the skin requires a heat input of 2428 kJ/kg, which is about 10 percent higher than the latent heat of vaporization at 100°C. This heat comes from the skin, and thus provides an effective cooling mechanism in hot weather. High humidity inhibits evaporation, so that body temperature might rise, leaving unevaporated sweat on your brow.
We have seen that vaporization requires heat transfer to a liquid from the surroundings, so that energy is released by the surroundings. Condensation is the reverse process, increasing the temperature of the surroundings. This increase may seem surprising, since we associate condensation with cold objects—the glass in the figure, for example. However, energy must be removed from the condensing molecules to make a vapor condense. The energy is exactly the same as that required to make the phase change in the other direction, from liquid to vapor, and so it can be calculated from Q = mL_v.
Sublimation is the transition from solid to vapor phase. You may have noticed that snow can disappear into thin air without a trace of liquid water, or the disappearance of ice cubes in a freezer. The reverse is also true: Frost can form on very cold windows without going through the liquid stage. A popular effect is the making of “smoke” from dry ice, which is solid carbon dioxide. Sublimation occurs because the equilibrium vapor pressure of solids is not zero. Certain air fresheners use the sublimation of a solid to inject a perfume into the room. Moth balls are a slightly toxic example of a phenol (an organic compound) that sublimates, while some solids, such as osmium tetroxide, are so toxic that they must be kept in sealed containers to prevent human exposure to their sublimation-produced vapors.
All phase transitions involve heat. In the case of direct solid-vapor transitions, the energy required is given by the equation Q = mL_s, where L_s is the heat of sublimation, which is the energy required to change 1.00 kg of a substance from the solid phase to the vapor phase. L_s is analogous to L_f and L_v, and its value depends on the substance. Sublimation requires energy input, so that dry ice is an effective coolant, whereas the reverse process (i.e., frosting) releases energy. The amount of energy required for sublimation is of the same order of magnitude as that for other phase transitions.
The material presented in this section and the preceding section allows us to calculate any number of effects related to temperature and phase change. In each case, it is necessary to identify which temperature and phase changes are taking place and then to apply the appropriate equation. Keep in mind that heat transfer and work can cause both temperature and phase changes.
### Problem-Solving Strategies for the Effects of Heat Transfer
1. Examine the situation to determine that there is a change in the temperature or phase. Is there heat transfer into or out of the system? When the presence or absence of a phase change is not obvious, you may wish to first solve the problem as if there were no phase changes, and examine the temperature change obtained. If it is sufficient to take you past a boiling or melting point, you should then go back and do the problem in steps—temperature change, phase change, subsequent temperature change, and so on.
2. Identify and list all objects that change temperature and phase.
3. Identify exactly what needs to be determined in the problem (identify the unknowns). A written list is useful.
4. Make a list of what is given or what can be inferred from the problem as stated (identify the knowns).
5. Solve the appropriate equation for the quantity to be determined (the unknown). If there is a temperature change, the transferred heat depends on the specific heat (see ) whereas, for a phase change, the transferred heat depends on the latent heat. See .
6. Substitute the knowns along with their units into the appropriate equation and obtain numerical solutions complete with units. You will need to do this in steps if there is more than one stage to the process (such as a temperature change followed by a phase change).
7. Check the answer to see if it is reasonable: Does it make sense? As an example, be certain that the temperature change does not also cause a phase change that you have not taken into account.
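A minimal Python sketch of this strategy is shown below for one common case: ice at 0°C added to warm water. It first checks whether the available heat can melt all of the ice (step 1 above) before computing a final temperature; the masses, temperatures, and rounded material constants are assumed for illustration.

```python
# A small sketch of the strategy above: ice at 0 °C is added to warm water,
# and we check whether the available heat can melt all of the ice before
# computing a final temperature. Values are assumed for illustration.

c_water = 4186.0      # J/(kg*°C)
L_f = 334000.0        # J/kg, latent heat of fusion of water (approximate)

m_ice = 0.100                     # kg of ice at 0 °C (assumed)
m_water, T_water = 0.500, 30.0    # kg, °C (assumed)

heat_available = m_water * c_water * (T_water - 0.0)   # cooling the water to 0 °C
heat_to_melt = m_ice * L_f

if heat_available < heat_to_melt:
    melted = heat_available / L_f
    print(f"Only {melted*1000:.0f} g of ice melts; the mixture stays at 0 °C.")
else:
    # All ice melts; the leftover heat warms the combined mass above 0 °C.
    leftover = heat_available - heat_to_melt
    T_final = leftover / ((m_water + m_ice) * c_water)
    print(f"All ice melts; final temperature is {T_final:.1f} °C.")
```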
### Summary
1. Most substances can exist in solid, liquid, or gas forms, which are referred to as “phases.”
2. Phase changes occur at fixed temperatures for a given substance at a given pressure, and these temperatures are called boiling and freezing (or melting) points.
3. During phase changes, heat absorbed or released is given by Q = mL,
where L is the appropriate latent heat coefficient: L_f for melting or freezing, L_v for vaporization or condensation, and L_s for sublimation.
### Conceptual Questions
### Problems & Exercises
# Heat and Heat Transfer Methods
## Heat Transfer Methods
### Learning Objectives
By the end of this section, you will be able to:
1. Discuss the different methods of heat transfer.
Just as interesting as the effects of heat transfer on a system are the methods by which this occurs. Whenever there is a temperature difference, heat transfer occurs. Heat transfer may occur rapidly, such as through a cooking pan, or slowly, such as through the walls of a picnic ice chest. We can control rates of heat transfer by choosing materials (such as thick wool clothing for the winter), controlling air movement (such as the use of weather stripping around doors), or by choice of color (such as a white roof to reflect summer sunlight). So many processes involve heat transfer that it is hard to imagine a situation where no heat transfer occurs. Yet every process involving heat transfer takes place by only three methods:
1. Conduction is heat transfer through stationary matter by physical contact. (The matter is stationary on a macroscopic scale—we know there is thermal motion of the atoms and molecules at any temperature above absolute zero.) Heat transferred between the electric burner of a stove and the bottom of a pan is transferred by conduction.
2. Convection is the heat transfer by the macroscopic movement of a fluid. This type of transfer takes place in a forced-air furnace and in weather systems, for example.
3. Heat transfer by radiation occurs when microwaves, infrared radiation, visible light, or another form of electromagnetic radiation is emitted or absorbed. An obvious example is the warming of the Earth by the Sun. A less obvious example is thermal radiation from the human body.
We examine these methods in some detail in the three following modules. Each method has unique and interesting characteristics, but all three do have one thing in common: they transfer heat solely because of a temperature difference .
### Summary
1. Heat is transferred by three different methods: conduction, convection, and radiation.
### Conceptual Questions
# Heat and Heat Transfer Methods
## Conduction
### Learning Objectives
By the end of this section, you will be able to:
1. Calculate thermal conductivity.
2. Observe conduction of heat in collisions.
3. Study thermal conductivities of common substances.
Your feet feel cold as you walk barefoot across the living room carpet in your cold house and then step onto the kitchen tile floor. This result is intriguing, since the carpet and tile floor are both at the same temperature. The different sensation you feel is explained by the different rates of heat transfer: the heat loss during the same time interval is greater for skin in contact with the tiles than with the carpet, so the temperature drop is greater on the tiles.
Some materials conduct thermal energy faster than others. In general, good conductors of electricity (metals like copper, aluminum, gold, and silver) are also good heat conductors, whereas insulators of electricity (wood, plastic, and rubber) are poor heat conductors. shows molecules in two bodies at different temperatures. The (average) kinetic energy of a molecule in the hot body is higher than in the colder body. If two molecules collide, an energy transfer from the molecule with greater kinetic energy to the molecule with less kinetic energy occurs. The cumulative effect from all collisions results in a net flux of heat from the hot body to the colder body. The heat flux thus depends on the temperature difference . Therefore, you will get a more severe burn from boiling water than from hot tap water. Conversely, if the temperatures are the same, the net heat transfer rate falls to zero, and equilibrium is achieved. Owing to the fact that the number of collisions increases with increasing area, heat conduction depends on the cross-sectional area. If you touch a cold wall with your palm, your hand cools faster than if you just touch it with your fingertip.
A third factor in the mechanism of conduction is the thickness of the material through which heat transfers. The figure below shows a slab of material with different temperatures on either side. Suppose that is greater than , so that heat is transferred from left to right. Heat transfer from the left side to the right side is accomplished by a series of molecular collisions. The thicker the material, the more time it takes to transfer the same amount of heat. This model explains why thick clothing is warmer than thin clothing in winters, and why Arctic mammals protect themselves with thick blubber.
Lastly, the heat transfer rate depends on the material properties described by the coefficient of thermal conductivity. All four factors are included in a simple equation that was deduced from and is confirmed by experiments. The rate of conductive heat transfer through a slab of material, such as the one in , is given by Q/t = kA(T₂ − T₁)/d,
where Q/t is the rate of heat transfer in watts or kilocalories per second, k is the thermal conductivity of the material, A and d are its surface area and thickness, as shown in , and T₂ − T₁ is the temperature difference across the slab. gives representative values of thermal conductivity.
A combination of material and thickness is often manipulated to develop good insulators—the smaller the conductivity k and the larger the thickness d, the better. The ratio d/k will thus be large for a good insulator. This ratio is called the R factor. The rate of conductive heat transfer is inversely proportional to R. The larger the value of R, the better the insulation. R factors are most commonly quoted for household insulation, refrigerators, and the like—unfortunately, R is still quoted in non-metric units of ft²·°F·h/Btu, although the unit usually goes unstated (1 British thermal unit [Btu] is the amount of energy needed to change the temperature of 1.0 lb of water by 1.0 °F). A couple of representative values are an R factor of 11 for 3.5-in-thick fiberglass batts (pieces) of insulation and an R factor of 19 for 6.5-in-thick fiberglass batts. Walls are usually insulated with 3.5-in batts, while ceilings are usually insulated with 6.5-in batts. In cold climates, thicker batts may be used in ceilings and walls.
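The Python sketch below applies the conduction-rate formula and the ratio d/k to a single-pane window, using an approximate thermal conductivity for glass and assumed dimensions and temperatures; it is an illustrative estimate of pure conduction, not a worked example reproduced from this text.

```python
# A minimal sketch of the conduction-rate formula Q/t = k*A*(T2 - T1)/d and of
# the insulation ratio R = d/k, using assumed values for a single-pane window.

k_glass = 0.84        # W/(m*°C), approximate thermal conductivity of glass
area = 1.5            # m^2, assumed window area
thickness = 0.003     # m (3 mm pane, assumed)
T_inside, T_outside = 20.0, -5.0   # °C, assumed

rate = k_glass * area * (T_inside - T_outside) / thickness
print(f"Conductive heat loss: {rate:.0f} W")        # about 10,500 W

R_si = thickness / k_glass                          # m^2*°C/W (SI version of R)
print(f"R value of the pane: {R_si:.4f} m^2*°C/W")
```

The large loss rate reflects how poor a bare pane is as an insulator; doubling the effective thickness or trapping a low-conductivity air gap (as in double-paned windows, discussed in the convection module) raises R and cuts the rate accordingly.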
Note that in , the best thermal conductors—silver, copper, gold, and aluminum—are also the best electrical conductors, again related to the density of free electrons in them. Cooking utensils are typically made from good conductors.
### Test Prep for AP Courses
### Summary
1. Heat conduction is the transfer of heat between two objects in direct contact with each other.
2. The rate of heat transfer (energy per unit time) is proportional to the temperature difference and the contact area and inversely proportional to the distance between the objects: Q/t = kA(T₂ − T₁)/d.
### Conceptual Questions
### Problems & Exercises
# Heat and Heat Transfer Methods
## Convection
### Learning Objectives
By the end of this section, you will be able to:
1. Discuss the method of heat transfer by convection.
Convection is driven by large-scale flow of matter. In the case of Earth, the atmospheric circulation is caused by the flow of hot air from the tropics to the poles, and the flow of cold air from the poles toward the tropics. (Note that Earth’s rotation causes the observed easterly flow of air in the northern hemisphere). Car engines are kept cool by the flow of water in the cooling system, with the water pump maintaining a flow of cool water to the pistons. The circulatory system is used in the body: when the body overheats, the blood vessels in the skin expand (dilate), which increases the blood flow to the skin where it can be cooled by sweating. These vessels become smaller when it is cold outside and larger when it is hot (so more fluid flows, and more energy is transferred).
The body also loses a significant fraction of its heat through the breathing process.
While convection is usually more complicated than conduction, we can describe convection and do some straightforward, realistic calculations of its effects. Natural convection is driven by buoyant forces: hot air rises because density decreases as temperature increases. The house in is kept warm in this manner, as is the pot of water on the stove in . Ocean currents and large-scale atmospheric circulation transfer energy from one part of the globe to another. Both are examples of natural convection.
A cold wind is much more chilling than still cold air, because convection combines with conduction in the body to increase the rate at which energy is transferred away from the body. The table below gives approximate wind-chill factors, which are the temperatures of still air that produce the same rate of cooling as air of a given temperature and speed. Wind-chill factors are a dramatic reminder of convection’s ability to transfer heat faster than conduction. For example, a 15.0 m/s wind at 0°C has the chilling equivalent of still air at about −18°C.
Although air can transfer heat rapidly by convection, it is a poor conductor and thus a good insulator. The amount of available space for airflow determines whether air acts as an insulator or conductor. The space between the inside and outside walls of a house, for example, is about 9 cm (3.5 in) —large enough for convection to work effectively. The addition of wall insulation prevents airflow, so heat loss (or gain) is decreased. Similarly, the gap between the two panes of a double-paned window is about 1 cm, which prevents convection and takes advantage of air’s low conductivity to prevent greater loss. Fur, fiber, and fiberglass also take advantage of the low conductivity of air by trapping it in spaces too small to support convection, as shown in the figure. Fur and feathers are lightweight and thus ideal for the protection of animals.
Some interesting phenomena happen when convection is accompanied by a phase change. It allows us to cool off by sweating, even if the temperature of the surrounding air exceeds body temperature. Heat from the skin is required for sweat to evaporate from the skin, but without air flow, the air becomes saturated and evaporation stops. Air flow caused by convection replaces the saturated air by dry air and evaporation continues.
Another important example of the combination of phase change and convection occurs when water evaporates from the oceans. Heat is removed from the ocean when water evaporates. If the water vapor condenses in liquid droplets as clouds form, heat is released in the atmosphere. Thus, there is an overall transfer of heat from the ocean to the atmosphere. This process is the driving power behind thunderheads, those great cumulus clouds that rise as much as 20.0 km into the stratosphere. Water vapor carried in by convection condenses, releasing tremendous amounts of energy. This energy causes the air to expand and rise, where it is colder. More condensation occurs in these colder regions, which in turn drives the cloud even higher. Such a mechanism is called positive feedback, since the process reinforces and accelerates itself. These systems sometimes produce violent storms, with lightning and hail, and constitute the mechanism driving hurricanes.
The movement of icebergs is another example of convection accompanied by a phase change. Suppose an iceberg drifts from Greenland into warmer Atlantic waters. Heat is removed from the warm ocean water when the ice melts and heat is released to the land mass when the iceberg forms on Greenland.
### Test Prep for AP Courses
### Summary
1. Convection is heat transfer by the macroscopic movement of mass. Convection can be natural or forced and generally transfers thermal energy faster than conduction. gives wind-chill factors, indicating that moving air has the same chilling effect as much colder stationary air. Convection that occurs along with a phase change can transfer energy from cold regions to warm ones.
### Conceptual Questions
### Problems & Exercises
# Heat and Heat Transfer Methods
## Radiation
### Learning Objectives
By the end of this section, you will be able to:
1. Discuss heat transfer by radiation.
2. Explain the power of different materials.
You can feel the heat transfer from a fire and from the Sun. Similarly, you can sometimes tell that the oven is hot without touching its door or looking inside—it may just warm you as you walk by. The space between the Earth and the Sun is largely empty, without any possibility of heat transfer by convection or conduction. In these examples, heat is transferred by radiation. That is, the hot body emits electromagnetic waves that are absorbed by our skin: no medium is required for electromagnetic waves to propagate. Different names are used for electromagnetic waves of different wavelengths: radio waves, microwaves, infrared radiation, visible light, ultraviolet radiation, X-rays, and gamma rays.
The energy of electromagnetic radiation depends on the wavelength (color) and varies over a wide range: a smaller wavelength (or higher frequency) corresponds to a higher energy. Because more heat is radiated at higher temperatures, a temperature change is accompanied by a color change. Take, for example, an electrical element on a stove, which glows from red to orange, while the higher-temperature steel in a blast furnace glows from yellow to white. The radiation you feel is mostly infrared, which corresponds to a lower temperature than that of the electrical element and the steel. The radiated energy depends on its intensity, which is represented in the figure below by the height of the distribution.
Electromagnetic Waves explains more about the electromagnetic spectrum and Introduction to Quantum Physics discusses how the decrease in wavelength corresponds to an increase in energy.
All objects absorb and emit electromagnetic radiation. The rate of heat transfer by radiation is largely determined by the color of the object. Black is the most effective, and white is the least effective. People living in hot climates generally avoid wearing black clothing, for instance. Similarly, black asphalt in a parking lot will be hotter than adjacent gray sidewalk on a summer day, because black absorbs better than gray. The reverse is also true—black radiates better than gray. Thus, on a clear summer night, the asphalt will be colder than the gray sidewalk, because black radiates the energy more rapidly than gray. An ideal radiator is the same color as an ideal absorber, and captures all the radiation that falls on it. In contrast, white is a poor absorber and is also a poor radiator. A white object reflects all radiation, like a mirror. (A perfect, polished white surface is mirror-like in appearance, and a crushed mirror looks white.)
Gray objects have a uniform ability to absorb all parts of the electromagnetic spectrum. Colored objects behave in similar but more complex ways, which gives them a particular color in the visible range and may make them special in other ranges of the nonvisible spectrum. Take, for example, the strong absorption of infrared radiation by the skin, which allows us to be very sensitive to it.
The rate of heat transfer by emitted radiation is determined by the Stefan-Boltzmann law of radiation:
Q/t = σeAT⁴,
where σ = 5.67 × 10⁻⁸ J/(s·m²·K⁴) is the Stefan-Boltzmann constant, A is the surface area of the object, and T is its absolute temperature in kelvin. The symbol e stands for the emissivity of the object, which is a measure of how well it radiates. An ideal jet-black (or black body) radiator has e = 1, whereas a perfect reflector has e = 0. Real objects fall between these two values. Take, for example, tungsten light bulb filaments, which have an e of about 0.5, and carbon black (a material used in printer toner), which has the (greatest known) emissivity of about 0.99.
The radiation rate is directly proportional to the fourth power of the absolute temperature—a remarkably strong temperature dependence. Furthermore, the radiated heat is proportional to the surface area of the object. If you knock apart the coals of a fire, there is a noticeable increase in radiation due to an increase in radiating surface area.
Skin is a remarkably good absorber and emitter of infrared radiation, having an emissivity of 0.97 in the infrared spectrum. Thus, we are all nearly (jet) black in the infrared, in spite of the obvious variations in skin color. This high infrared emissivity is why we can so easily feel radiation on our skin. It is also the basis for the use of night scopes used by law enforcement and the military to detect human beings. Even small temperature variations can be detected because of the T⁴ dependence. Images, called thermographs, can be used medically to detect regions of abnormally high temperature in the body, perhaps indicative of disease. Similar techniques can be used to detect heat leaks in homes, optimize performance of blast furnaces, improve comfort levels in work environments, and even remotely map the Earth’s temperature profile.
All objects emit and absorb radiation. The net rate of heat transfer by radiation (absorption minus emission) is related to both the temperature of the object and the temperature of its surroundings. Assuming that an object with a temperature T₁ is surrounded by an environment with uniform temperature T₂, the net rate of heat transfer by radiation is Q_net/t = σeA(T₂⁴ − T₁⁴),
where e is the emissivity of the object alone. In other words, it does not matter whether the surroundings are white, gray, or black; the balance of radiation into and out of the object depends on how well it emits and absorbs radiation. When T₂ > T₁, the quantity Q_net/t is positive; that is, the net heat transfer is from hot to cold.
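The Python sketch below evaluates this net-rate expression for a person standing in a cool room, using the infrared emissivity of skin quoted earlier (about 0.97) and assumed values for the radiating area and the skin and room temperatures.

```python
# A sketch of the net radiation formula Q_net/t = sigma*e*A*(T2^4 - T1^4)
# for a person (the object, at T1) in a cooler room (the surroundings, at T2).
# The area and temperatures are assumed illustrative values.

sigma = 5.67e-8        # W/(m^2*K^4), Stefan-Boltzmann constant
e_skin = 0.97          # infrared emissivity of skin
area = 1.50            # m^2, assumed effective radiating area of a person
T_skin = 306.0         # K (about 33 °C), assumed
T_room = 295.0         # K (about 22 °C), assumed

q_net = sigma * e_skin * area * (T_room**4 - T_skin**4)
print(f"Net radiated power: {q_net:.0f} W")   # about -99 W; negative means the person loses energy
```

A net loss on the order of 100 W is comparable to a resting person’s metabolic rate, which is why radiation is a significant part of the body’s heat budget indoors.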
The Earth receives almost all its energy from radiation of the Sun and reflects some of it back into outer space. Because the Sun is hotter than the Earth, the net energy flux is from the Sun to the Earth. However, the rate of energy transfer is less than the equation for the radiative heat transfer would predict because the Sun does not fill the sky. The average emissivity () of the Earth is about 0.65, but the calculation of this value is complicated by the fact that the highly reflective cloud coverage varies greatly from day to day. There is a negative feedback (one in which a change produces an effect that opposes that change) between clouds and heat transfer; greater temperatures evaporate more water to form more clouds, which reflect more radiation back into space, reducing the temperature. The often mentioned greenhouse effect is directly related to the variation of the Earth’s emissivity with radiation type (see the figure given below). The greenhouse effect is a natural phenomenon responsible for providing temperatures suitable for life on Earth. The Earth’s relatively constant temperature is a result of the energy balance between the incoming solar radiation and the energy radiated from the Earth. Most of the infrared radiation emitted from the Earth is absorbed by carbon dioxide () and water () in the atmosphere and then re-radiated back to the Earth or into outer space. Re-radiation back to the Earth maintains its surface temperature about higher than it would be if there was no atmosphere, similar to the way glass increases temperatures in a greenhouse.
The greenhouse effect and its causes were first predicted by Eunice Newton Foote, who designed and conducted experiments on the heating of different gases. After filling flasks with carbon dioxide, hydrogen, and regular air, and also varying their moisture, she placed them in the sun and carefully measured their heating and, especially, their heat retention. She discovered that the CO2 flask gained the most temperature and held it the longest. After subsequent research, her paper "Circumstances affecting the Heat of the Sun’s Rays" concluded that an atmosphere containing more carbon dioxide would be warmer as a result of the gas trapping radiation.
The greenhouse effect is also central to the discussion of global warming due to emission of carbon dioxide and methane (and other so-called greenhouse gases) into the Earth’s atmosphere from industrial production and farming. Changes in global climate could lead to more intense storms, precipitation changes (affecting agriculture), reduction in rain forest biodiversity, and rising sea levels.
Heating and cooling are often significant contributors to energy use in individual homes. Mária Telkes, a Hungarian-born American scientist, was among the foremost developers of solar energy applications in industrial and community use. After inventing a widely deployed solar seawater distiller used on World War II life rafts, she partnered with architect Eleanor Raymond to design the first modern home to be completely heated by solar power. Air warmed on rooftop collectors transferred heat to salts, which stored the heat for later use. Telkes later worked with the Department of Energy to develop the first solar-electrically powered home. Current research efforts into developing environmentally friendly homes quite often focus on reducing conventional heating and cooling through better building materials, strategically positioning windows to optimize radiation gain from the Sun, and opening spaces to allow convection. It is possible to build a zero-energy house that allows for comfortable living in most parts of the United States with hot and humid summers and cold winters.
Conversely, dark space is very cold, about 3 K, so that the Earth radiates energy into the dark sky. Because clouds have lower emissivity than either oceans or land masses, they reflect some of the radiation back to the surface, greatly reducing heat transfer into dark space, just as they greatly reduce heat transfer into the atmosphere during the day. The rate of heat transfer from soil and grasses can be so rapid that frost may occur on clear summer evenings, even in warm latitudes.
### Test Prep for AP Courses
### Summary
1. Radiation is the rate of heat transfer through the emission or absorption of electromagnetic waves.
2. The rate of heat transfer depends on the surface area and the fourth power of the absolute temperature: Q/t = σeAT⁴,
where σ is the Stefan-Boltzmann constant and e is the emissivity of the body.
### Conceptual Questions
### Problems & Exercises
# Thermodynamics
## Connection for AP® Courses
Heat is energy in transit, and it can be used to do work. It can also be converted to any other form of energy. When a car engine burns fuel, for example, heat transfers into a gas. Work is done by the heated gas as it exerts a force through a distance (Essential Knowledge 5.B.5), converting its energy into a variety of other forms—into the car's kinetic or gravitational potential energy; into electrical energy to run the spark plugs, radio, and lights; and into stored energy in the car's battery. But most of the heat produced from burning fuel in the engine does not do work. Rather, the heat is released into the environment, implying that the engine is quite inefficient.
It is often said that modern gasoline engines cannot be made to be significantly more efficient. We hear the same about heat transfer to electrical energy in large power stations, whether they are coal, oil, natural gas, or nuclear powered. Why is that the case? Is the inefficiency caused by design problems that could be solved with better engineering and superior materials? Is it part of some money-making conspiracy by those who sell energy? Actually, the truth is more interesting, and reveals much about the nature of heat transfer. Basic physical laws govern how heat transfer for doing work takes place and place insurmountable limits on its efficiency. This chapter will explore these laws as well as many applications and concepts associated with them. These topics are part of thermodynamics—the study of heat transfer and its relationship to doing work.
This chapter discusses thermodynamics in practical contexts including heat engines, heat pumps, and refrigerators, which support Big Idea 4, and that interactions between systems can result in changes in those systems. As systems either do work or have work done on them, the total energy of a system can change (Enduring Understanding 4.C). These ideas are based on the previous understanding of heat as the process of energy transfer from a higher temperature system to a lower temperature system (Essential Knowledge 4.C.3). You will learn about the first law of thermodynamics, which supports Big Idea 5, that changes that occur as a result of interactions are constrained by conservation laws. The first law of thermodynamics is a special case of energy conservation that explains the relationship between changes in the internal energy of a system (Essential Knowledge 5.B.4) and energy transfer in the form of heat or work (Essential Knowledge 5.B.7). Note that the energy of a system is conserved (Enduring Understanding 5.B). You will also learn about the second law of thermodynamics and entropy. These are applications of Big Idea 7, that the mathematics of probability can be used to describe the behavior of complex systems. For example, an isolated system will reach thermal equilibrium (Enduring Understanding 7.B), a state with higher disorder. This process has a probabilistic nature (Essential Knowledge 7.B.1) and is described by the second law of thermodynamics. The second law of thermodynamics describes the change of entropy for reversible and irreversible processes (Essential Knowledge 7.B.2). Entropy is considered qualitatively at this level.
Big Idea 4 Interactions between systems can result in changes in those systems.
Enduring Understanding 4.C Interactions with other objects or systems can change the total energy of a system.
Essential Knowledge 4.C.3 Energy is transferred spontaneously from a higher temperature system to a lower temperature system. The process through which energy is transferred between systems at different temperatures is called heat.
Big Idea 5 Changes that occur as a result of interactions are constrained by conservation laws.
Enduring Understanding 5.B The energy of a system is conserved.
Essential Knowledge 5.B.4 The internal energy of a system includes the kinetic energy of the objects that make up the system and the potential energy of the configuration of the objects that make up the system.
Essential Knowledge 5.B.5 Energy can be transferred by an external force exerted on an object or system that moves the object or system through a distance; this energy transfer is called work. Energy transfer in mechanical or electrical systems may occur at different rates. Power is defined as the rate of energy transfer into, out of, or within a system. [A piston filled with gas getting compressed or expanded is treated in Physics 2 as a part of thermodynamics.]
Essential Knowledge 5.B.7 The first law of thermodynamics is a specific case of the law of conservation of energy involving the internal energy of a system and the possible transfer of energy through work and/or heat. Examples should include P–V diagrams — isochoric process, isothermal process, isobaric process, adiabatic process. No calculations of heat or internal energy from temperature change; and in this course, examples of these relationships are qualitative and/or semi–quantitative.
Big Idea 7 The mathematics of probability can be used to describe the behavior of complex systems and to interpret the behavior of quantum mechanical systems.
Enduring Understanding 7.B The tendency of isolated systems to move toward states with higher disorder is described by probability.
Essential Knowledge 7.B.1 The approach to thermal equilibrium is a probability process.
Essential Knowledge 7.B.2 The second law of thermodynamics describes the change in entropy for reversible and irreversible processes. Only a qualitative treatment is considered in this course.
# Thermodynamics
## The First Law of Thermodynamics
### Learning Objectives
By the end of this section, you will be able to:
1. Define the first law of thermodynamics.
2. Describe how conservation of energy relates to the first law of thermodynamics.
3. Identify instances of the first law of thermodynamics working in everyday situations, including biological metabolism.
4. Calculate changes in the internal energy of a system, after accounting for heat transfer and work done.
If we are interested in how heat transfer is converted into doing work, then the conservation of energy principle is important. The first law of thermodynamics applies the conservation of energy principle to systems where heat transfer and doing work are the methods of transferring energy into and out of the system. The first law of thermodynamics states that the change in internal energy of a system equals the net heat transfer into the system minus the net work done by the system. In equation form, the first law of thermodynamics is ΔU = Q − W.
Here ΔU is the change in internal energy of the system. Q is the net heat transferred into the system—that is, Q is the sum of all heat transfer into and out of the system. W is the net work done by the system—that is, W is the sum of all work done on or by the system. We use the following sign conventions: if Q is positive, then there is a net heat transfer into the system; if W is positive, then there is net work done by the system. So positive Q adds energy to the system and positive W takes energy from the system. Thus ΔU = Q − W. Note also that if more heat transfer into the system occurs than work done, the difference is stored as internal energy. Heat engines are a good example of this—heat transfer into them takes place so that they can do work. (See .) We will now examine Q, W, and ΔU further.
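As a minimal numerical check of these sign conventions, the Python sketch below computes ΔU = Q − W for two assumed scenarios; the numbers are made up purely for illustration.

```python
# A minimal numerical illustration of the first law, delta_U = Q - W, with the
# sign convention used in the text: Q > 0 for heat transfer into the system,
# W > 0 for work done by the system. The numbers are assumed, not from the text.

def internal_energy_change(q_in, w_by):
    """Return delta_U given net heat into the system and net work done by it."""
    return q_in - w_by

# A gas absorbs 1500 J of heat and does 600 J of work on a piston (assumed):
print(internal_energy_change(1500.0, 600.0))    # +900 J stored as internal energy

# The same gas loses 200 J of heat while 350 J of work is done ON it,
# so the work done BY the system is -350 J:
print(internal_energy_change(-200.0, -350.0))   # +150 J
```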
### Heat Q and Work W
Heat transfer (Q) and doing work (W) are the two everyday means of bringing energy into or taking energy out of a system. The processes are quite different. Heat transfer, a less organized process, is driven by temperature differences. Work, a quite organized process, involves a macroscopic force exerted through a distance. Nevertheless, heat and work can produce identical results. For example, both can cause a temperature increase. Heat transfer into a system, such as when the Sun warms the air in a bicycle tire, can increase its temperature, and so can work done on the system, as when the bicyclist pumps air into the tire. Once the temperature increase has occurred, it is impossible to tell whether it was caused by heat transfer or by doing work. This uncertainty is an important point. Heat transfer and work are both energy in transit—neither is stored as such in a system. However, both can change the internal energy of a system. Internal energy is a form of energy completely different from either heat or work.
### Internal Energy U
We can think about the internal energy of a system in two different but consistent ways. The first is the atomic and molecular view, which examines the system on the atomic and molecular scale. The internal energy of a system is the sum of the kinetic and potential energies of its atoms and molecules. Recall that kinetic plus potential energy is called mechanical energy. Thus internal energy is the sum of atomic and molecular mechanical energy. Because it is impossible to keep track of all individual atoms and molecules, we must deal with averages and distributions. A second way to view the internal energy of a system is in terms of its macroscopic characteristics, which are very similar to atomic and molecular average values.
Macroscopically, we define the change in internal energy $\Delta U$ to be that given by the first law of thermodynamics:

$\Delta U = Q - W.$

Many detailed experiments have verified that $\Delta U = Q - W$, where $\Delta U$ is the change in total kinetic and potential energy of all atoms and molecules in a system. It has also been determined experimentally that the internal energy $U$ of a system depends only on the state of the system and not how it reached that state. More specifically, $U$ is found to be a function of a few macroscopic quantities (pressure, volume, and temperature, for example), independent of past history such as whether there has been heat transfer or work done. This independence means that if we know the state of a system, we can calculate changes in its internal energy from a few macroscopic variables.
To get a better idea of how to think about the internal energy of a system, let us examine a system going from State 1 to State 2. The system has internal energy $U_1$ in State 1, and it has internal energy $U_2$ in State 2, no matter how it got to either state. So the change in internal energy $\Delta U = U_2 - U_1$ is independent of what caused the change. In other words, $\Delta U$ is independent of path. By path, we mean the method of getting from the starting point to the ending point. Why is this independence important? Note that $\Delta U = Q - W$. Both $Q$ and $W$ depend on path, but $\Delta U$ does not. This path independence means that internal energy $U$ is easier to consider than either heat transfer or work done.
### Human Metabolism and the First Law of Thermodynamics
Human metabolism is the conversion of food into heat transfer, work, and stored fat. Metabolism is an interesting example of the first law of thermodynamics in action. We now take another look at these topics via the first law of thermodynamics. Considering the body as the system of interest, we can use the first law to examine heat transfer, doing work, and internal energy in activities ranging from sleep to heavy exercise. What are some of the major characteristics of heat transfer, doing work, and energy in the body? For one, body temperature is normally kept constant by heat transfer to the surroundings. This means $Q$ is negative. Another fact is that the body usually does work on the outside world. This means $W$ is positive. In such situations, then, the body loses internal energy, since $\Delta U = Q - W$ is negative.
Now consider the effects of eating. Eating increases the internal energy of the body by adding chemical potential energy (this is an unromantic view of a good steak). The body metabolizes all the food we consume. Basically, metabolism is an oxidation process in which the chemical potential energy of food is released. This implies that food input is in the form of work. Food energy is reported in a special unit, known as the Calorie. This energy is measured by burning food in a calorimeter, which is how the units are determined.
In chemistry and biochemistry, one calorie (spelled with a lowercase c) is defined as the energy (or heat transfer) required to raise the temperature of one gram of pure water by one degree Celsius. Nutritionists and weight-watchers tend to use the dietary calorie, which is frequently called a Calorie (spelled with a capital C). One food Calorie is the energy needed to raise the temperature of one kilogram of water by one degree Celsius. This means that one dietary Calorie is equal to one kilocalorie for the chemist, and one must be careful to avoid confusion between the two.
Again, consider the internal energy the body has lost. There are three places this internal energy can go—to heat transfer, to doing work, and to stored fat (a tiny fraction also goes to cell repair and growth). Heat transfer and doing work take internal energy out of the body, and food puts it back. If you eat just the right amount of food, then your average internal energy remains constant. Whatever you lose to heat transfer and doing work is replaced by food, so that, in the long run, $\Delta U = 0$. If you overeat repeatedly, then $\Delta U$ is always positive, and your body stores this extra internal energy as fat. The reverse is true if you eat too little. If $\Delta U$ is negative for a few days, then the body metabolizes its own fat to maintain body temperature and do work that takes energy from the body. This process is how dieting produces weight loss.
Life is not always this simple, as any dieter knows. The body stores fat or metabolizes it only if energy intake changes for a period of several days. Once you have been on a major diet, the next one is less successful because your body alters the way it responds to low energy intake. Your basal metabolic rate (BMR) is the rate at which food is converted into heat transfer and work done while the body is at complete rest. The body adjusts its basal metabolic rate to partially compensate for over-eating or under-eating. The body will decrease the metabolic rate rather than eliminate its own fat to replace lost food intake. You will chill more easily and feel less energetic as a result of the lower metabolic rate, and you will not lose weight as fast as before. Exercise helps to lose weight, because it produces both heat transfer from your body and work, and raises your metabolic rate even when you are at rest. Weight loss is also aided by the quite low efficiency of the body in converting internal energy to work, so that the loss of internal energy resulting from doing work is much greater than the work done. It should be noted, however, that living systems are not in thermal equilibrium.
The body provides us with an excellent indication that many thermodynamic processes are irreversible. An irreversible process can go in one direction but not the reverse, under a given set of conditions. For example, although body fat can be converted to do work and produce heat transfer, work done on the body and heat transfer into it cannot be converted to body fat. Otherwise, we could skip lunch by sunning ourselves or by walking down stairs. Another example of an irreversible thermodynamic process is photosynthesis. This process is the intake of one form of energy—light—by plants and its conversion to chemical potential energy. Both applications of the first law of thermodynamics are illustrated in . One great advantage of conservation laws such as the first law of thermodynamics is that they accurately describe the beginning and ending points of complex processes, such as metabolism and photosynthesis, without regard to the complications in between. presents a summary of terms relevant to the first law of thermodynamics.
### Test Prep for AP Courses
### Section Summary
1. The first law of thermodynamics is given as $\Delta U = Q - W$, where $\Delta U$ is the change in internal energy of a system, $Q$ is the net heat transfer (the sum of all heat transfer into and out of the system), and $W$ is the net work done (the sum of all work done on or by the system).
2. Both $Q$ and $W$ are energy in transit; only $\Delta U$ represents an independent quantity capable of being stored.
3. The internal energy of a system depends only on the state of the system and not how it reached that state.
4. Metabolism of living organisms, and photosynthesis of plants, are specialized types of heat transfer, doing work, and internal energy of systems.
### Conceptual Questions
### Problems & Exercises
# Thermodynamics
## The First Law of Thermodynamics and Some Simple Processes
### Learning Objectives
By the end of this section, you will be able to:
1. Describe the processes of a simple heat engine.
2. Explain the differences among the simple thermodynamic processes—isobaric, isochoric, isothermal, and adiabatic.
3. Calculate total work done in a cyclical thermodynamic process.
One of the most important things we can do with heat transfer is to use it to do work for us. Such a device is called a heat engine. Car engines and steam turbines that generate electricity are examples of heat engines. shows schematically how the first law of thermodynamics applies to the typical heat engine.
The illustrations above show one of the ways in which heat transfer does work. Fuel combustion produces heat transfer to a gas in a cylinder, increasing the pressure of the gas and thereby the force it exerts on a movable piston. The gas does work on the outside world, as this force moves the piston through some distance. Heat transfer to the gas cylinder results in work being done. To repeat this process, the piston needs to be returned to its starting point. Heat transfer now occurs from the gas to the surroundings so that its pressure decreases, and a force is exerted by the surroundings to push the piston back through some distance. Variations of this process are employed daily in hundreds of millions of heat engines. We will examine heat engines in detail in the next section. In this section, we consider some of the simpler underlying processes on which heat engines are based.
### PV Diagrams and their Relationship to Work Done on or by a Gas
A process by which a gas does work on a piston at constant pressure is called an isobaric process. Since the pressure is constant, the force exerted is constant and the work done is given as

$W = Fd.$

See the symbols as shown in . Now $F = PA$, and so

$W = PAd.$

Because the volume of a cylinder is its cross-sectional area $A$ times its length $d$, we see that $Ad = \Delta V$, the change in volume; thus,

$W = P\Delta V\ \text{(isobaric process)}.$

Note that if $\Delta V$ is positive, then $W$ is positive, meaning that work is done by the gas on the outside world.
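The relation $W = P\Delta V$ is straightforward to evaluate numerically. The short sketch below, with assumed pressure and volume values, computes the work done by a gas expanding at constant pressure.

```python
def isobaric_work(P, V_initial, V_final):
    """Work done BY a gas at constant pressure: W = P * (V_final - V_initial)."""
    return P * (V_final - V_initial)

# Assumed values: atmospheric pressure, gas expands from 1.0 L to 1.5 L.
P = 1.013e5                              # Pa
W = isobaric_work(P, 1.0e-3, 1.5e-3)     # volumes in m^3
print(f"Work done by the gas: {W:.1f} J")   # about 51 J, positive (work on surroundings)
```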
(Note that the pressure involved in this work that we’ve called $P$ is the pressure of the gas inside the tank. If we call the pressure outside the tank $P_{\text{ext}}$, an expanding gas would be working against the external pressure; the work done would therefore be $W = -P_{\text{ext}}\Delta V$ (isobaric process). Many texts use this definition of work, and not the definition based on internal pressure, as the basis of the First Law of Thermodynamics. This definition reverses the sign conventions for work, and results in a statement of the first law that becomes $\Delta U = Q + W$.)
It is not surprising that $W = P\Delta V$, since we have already noted in our treatment of fluids that pressure is a type of potential energy per unit volume and that pressure in fact has units of energy divided by volume. We also noted in our discussion of the ideal gas law that $PV$ has units of energy. In this case, some of the energy associated with pressure becomes work.
shows a graph of pressure versus volume (that is, a $PV$ diagram) for an isobaric process. You can see in the figure that the work done is the area under the graph. This property of $PV$ diagrams is very useful and broadly applicable: the work done on or by a system in going from one state to another equals the area under the curve on a $PV$ diagram.
We can see where this leads by considering (a), which shows a more general process in which both pressure and volume change. The area under the curve is closely approximated by dividing it into strips, each having an average constant pressure $P_{i(\text{ave})}$. The work done is $W_i = P_{i(\text{ave})}\Delta V_i$ for each strip, and the total work done is the sum of the $W_i$. Thus the total work done is the total area under the curve. If the path is reversed, as in (b), then work is done on the system. The area under the curve in that case is negative, because $\Delta V$ is negative.
$PV$ diagrams clearly illustrate that the work done depends on the path taken and not just the endpoints. This path dependence is seen in (a), where more work is done in going from A to C by the path via point B than by the path via point D. The vertical paths, where volume is constant, are called isochoric processes. Since volume is constant, $\Delta V = 0$, and no work is done in an isochoric process. Now, if the system follows the cyclical path ABCDA, as in (b), then the total work done is the area inside the loop. The negative area below path CD subtracts, leaving only the area inside the rectangle. In fact, the work done in any cyclical process (one that returns to its starting point) is the area inside the loop it forms on a $PV$ diagram, as (c) illustrates for a general cyclical process. Note that the loop must be traversed in the clockwise direction for work to be positive—that is, for there to be a net work output.
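For a rectangular cycle like the one just described, the net work output is simply the enclosed area: the pressure difference times the volume difference. The sketch below uses assumed state values and labels (which need not match the figure) to illustrate both the path dependence of the work from A to C and the net work of one full clockwise loop.

```python
# Minimal sketch (assumed state values and labels) of path dependence and cyclical
# work on a PV diagram. States: A = (V1, P2), B = (V2, P2), C = (V2, P1), D = (V1, P1),
# with P2 > P1 and V2 > V1, so the loop A->B->C->D->A is traversed clockwise.
P1, P2 = 1.0e5, 3.0e5      # Pa  (assumed)
V1, V2 = 1.0e-3, 2.0e-3    # m^3 (assumed)

W_via_B = P2 * (V2 - V1)     # A->B isobaric expansion at P2; B->C isochoric, W = 0
W_via_D = P1 * (V2 - V1)     # A->D isochoric, W = 0; D->C isobaric expansion at P1
W_cycle = W_via_B - W_via_D  # net work of the clockwise loop = area enclosed

print(W_via_B, W_via_D, W_cycle)   # 300 J via B, 100 J via D, 200 J per cycle
```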
### Reversible Processes
Both isothermal and adiabatic processes such as shown in are reversible in principle. A reversible process is one in which both the system and its environment can return to exactly the states they were in by following the reverse path. The reverse isothermal and adiabatic paths are BA and CA, respectively. Real macroscopic processes are never exactly reversible. In the previous examples, our system is a gas (like that in ), and its environment is the piston, cylinder, and the rest of the universe. If there are any energy-dissipating mechanisms, such as friction or turbulence, then heat transfer to the environment occurs for either direction of the piston. So, for example, if the path BA is followed and there is friction, then the gas will be returned to its original state but the environment will not—it will have been heated in both directions. Reversibility requires the direction of heat transfer to reverse for the reverse path. Since dissipative mechanisms cannot be completely eliminated, real processes cannot be reversible.
There must be reasons that real macroscopic processes cannot be reversible. We can imagine them going in reverse. For example, heat transfer occurs spontaneously from hot to cold and never spontaneously in the reverse direction. Yet it would not violate the first law of thermodynamics for this to happen. In fact, spontaneous processes, such as bubbles bursting, never go in reverse. There is a second thermodynamic law that forbids them from going in reverse. When we study this law, we will learn something about nature and also find that such a law limits the efficiency of heat engines. We will find that heat engines with the greatest possible theoretical efficiency would have to use reversible processes, and even they cannot convert all heat transfer into doing work. summarizes the simpler thermodynamic processes and their definitions.
### Test Prep for AP Courses
### Section Summary
1. One of the important implications of the first law of thermodynamics is that machines can be harnessed to do work that humans previously did by hand or by external energy supplies such as running water or the heat of the Sun. A machine that uses heat transfer to do work is known as a heat engine.
2. There are several simple processes, used by heat engines, that flow from the first law of thermodynamics. Among them are the isobaric, isochoric, isothermal and adiabatic processes.
3. These processes differ from one another based on how they affect pressure, volume, temperature, and heat transfer.
4. If work is done by the system on the outside environment, the work ($W$) will be a positive value. If work is done on the heat engine system by the environment, the work ($W$) will be a negative value.
5. Some thermodynamic processes, including isothermal and adiabatic processes, are reversible in theory; that is, both the thermodynamic system and the environment can be returned to their initial states. However, because of dissipation, as described by the second law of thermodynamics, complete reversibility is not achieved in practice.
### Conceptual Questions
### Problem Exercises
# Thermodynamics
## Introduction to the Second Law of Thermodynamics: Heat Engines and Their Efficiency
### Learning Objectives
By the end of this section, you will be able to:
1. State the expressions of the second law of thermodynamics.
2. Calculate the efficiency and carbon dioxide emission of a coal-fired electricity plant, using second law characteristics.
3. Describe and define the Otto cycle.
The second law of thermodynamics deals with the direction taken by spontaneous processes. Many processes occur spontaneously in one direction only—that is, they are irreversible, under a given set of conditions. Although irreversibility is seen in day-to-day life—a broken glass does not resume its original state, for instance—complete irreversibility is a statistical statement that cannot be seen during the lifetime of the universe. More precisely, an irreversible process is one that depends on path. If the process can go in only one direction, then the reverse path differs fundamentally and the process cannot be reversible. For example, as noted in the previous section, heat involves the transfer of energy from higher to lower temperature. A cold object in contact with a hot one never gets colder, transferring heat to the hot object and making it hotter. Furthermore, mechanical energy, such as kinetic energy, can be completely converted to thermal energy by friction, but the reverse is impossible. A hot stationary object never spontaneously cools off and starts moving. Yet another example is the expansion of a puff of gas introduced into one corner of a vacuum chamber. The gas expands to fill the chamber, but it never regroups in the corner. The random motion of the gas molecules could take them all back to the corner, but this is never observed to happen. (See .)
The fact that certain processes never occur suggests that there is a law forbidding them to occur. The first law of thermodynamics would allow them to occur—none of those processes violate conservation of energy. The law that forbids these processes is called the second law of thermodynamics. We shall see that the second law can be stated in many ways that may seem different, but which in fact are equivalent. Like all natural laws, the second law of thermodynamics gives insights into nature, and its several statements imply that it is broadly applicable, fundamentally affecting many apparently disparate processes.
The already familiar direction of heat transfer from hot to cold is the basis of our first version of the second law of thermodynamics: Heat transfer occurs spontaneously from higher- to lower-temperature bodies but never spontaneously in the reverse direction.
Another way of stating this: It is impossible for any process to have as its sole result heat transfer from a cooler to a hotter object.
### Heat Engines
Now let us consider a device that uses heat transfer to do work. As noted in the previous section, such a device is called a heat engine, and one is shown schematically in (b). Gasoline and diesel engines, jet engines, and steam turbines are all heat engines that do work by using part of the heat transfer from some source. Heat transfer from the hot object (or hot reservoir) is denoted as $Q_h$, while heat transfer into the cold object (or cold reservoir) is $Q_c$, and the work done by the engine is $W$. The temperatures of the hot and cold reservoirs are $T_h$ and $T_c$, respectively.
Because the hot reservoir is heated externally, which is energy intensive, it is important that the work is done as efficiently as possible. In fact, we would like $W$ to equal $Q_h$, and for there to be no heat transfer to the environment ($Q_c = 0$). Unfortunately, this is impossible. The second law of thermodynamics also states, with regard to using heat transfer to do work (the second expression of the second law): It is impossible in any system for heat transfer from a reservoir to completely convert to work in a cyclical process in which the system returns to its initial state.
A cyclical process brings a system, such as the gas in a cylinder, back to its original state at the end of every cycle. Most heat engines, such as reciprocating piston engines and rotating turbines, use cyclical processes. The second law, just stated in its second form, clearly states that such engines cannot have perfect conversion of heat transfer into work done. Before going into the underlying reasons for the limits on converting heat transfer into work, we need to explore the relationships among $W$, $Q_h$, and $Q_c$, and to define the efficiency of a cyclical heat engine. As noted, a cyclical process brings the system back to its original condition at the end of every cycle. Such a system’s internal energy $U$ is the same at the beginning and end of every cycle—that is, $\Delta U = 0$. The first law of thermodynamics states that

$\Delta U = Q - W,$

where $Q$ is the net heat transfer during the cycle ($Q = Q_h - Q_c$) and $W$ is the net work done by the system. Since $\Delta U = 0$ for a complete cycle, we have

$0 = Q - W,$

so that

$W = Q.$

Thus the net work done by the system equals the net heat transfer into the system, or

$W = Q_h - Q_c\ \text{(cyclical process)},$

just as shown schematically in (b). The problem is that in all processes, there is some heat transfer $Q_c$ to the environment—and usually a very significant amount at that.
In the conversion of energy to work, we are always faced with the problem of getting less out than we put in. We define conversion efficiency to be the ratio of useful work output to the energy input (or, in other words, the ratio of what we get to what we spend). In that spirit, we define the efficiency of a heat engine to be its net work output $W$ divided by heat transfer to the engine $Q_h$; that is,

$Eff = \frac{W}{Q_h}.$

Since $W = Q_h - Q_c$ in a cyclical process, we can also express this as

$Eff = \frac{Q_h - Q_c}{Q_h} = 1 - \frac{Q_c}{Q_h}\ \text{(cyclical process)},$

making it clear that an efficiency of 1, or 100%, is possible only if there is no heat transfer to the environment ($Q_c = 0$). Note that all $Q$s are positive. The direction of heat transfer is indicated by a plus or minus sign. For example, $Q_c$ is out of the system and so is preceded by a minus sign in the expression for the net heat transfer.
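A heat engine's cycle-by-cycle bookkeeping is easy to script. The sketch below, with assumed heat-transfer values, computes the work output and efficiency for one cycle.

```python
def engine_efficiency(Q_h, Q_c):
    """Efficiency of a cyclical heat engine: Eff = W / Q_h = 1 - Q_c / Q_h."""
    W = Q_h - Q_c            # net work output per cycle (first law with delta_U = 0)
    return W, W / Q_h

# Assumed values: 2500 J in from the hot reservoir, 1800 J rejected to the cold one.
W, eff = engine_efficiency(Q_h=2500.0, Q_c=1800.0)
print(f"Work per cycle: {W:.0f} J, efficiency: {eff:.0%}")   # 700 J, 28%
```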
With the information given in , we can find characteristics such as the efficiency of a heat engine without any knowledge of how the heat engine operates, but looking further into the mechanism of the engine will give us greater insight. illustrates the operation of the common four-stroke gasoline engine. The four steps shown complete this heat engine’s cycle, bringing the gasoline-air mixture back to its original condition.
The Otto cycle shown in (a) is used in four-stroke internal combustion engines, although in fact the true Otto cycle paths do not correspond exactly to the strokes of the engine.
The adiabatic process AB corresponds to the nearly adiabatic compression stroke of the gasoline engine. In both cases, work is done on the system (the gas mixture in the cylinder), increasing its temperature and pressure. Along path BC of the Otto cycle, heat transfer into the gas occurs at constant volume, causing a further increase in pressure and temperature. This process corresponds to burning fuel in an internal combustion engine, and takes place so rapidly that the volume is nearly constant. Path CD in the Otto cycle is an adiabatic expansion that does work on the outside world, just as the power stroke of an internal combustion engine does in its nearly adiabatic expansion. The work done by the system along path CD is greater than the work done on the system along path AB, because the pressure is greater, and so there is a net work output. Along path DA in the Otto cycle, heat transfer from the gas at constant volume reduces its temperature and pressure, returning it to its original state. In an internal combustion engine, this process corresponds to the exhaust of hot gases and the intake of an air-gasoline mixture at a considerably lower temperature. In both cases, heat transfer into the environment occurs along this final path.
The net work done by a cyclical process is the area inside the closed path on a $PV$ diagram, such as that inside path ABCDA in . Note that in every imaginable cyclical process, it is absolutely necessary for heat transfer from the system to occur in order to get a net work output. In the Otto cycle, heat transfer occurs along path DA. If no heat transfer occurs, then the return path is the same, and the net work output is zero. The lower the temperature on the path AB, the less work has to be done to compress the gas. The area inside the closed path is then greater, and so the engine does more work and is thus more efficient. Similarly, the higher the temperature along path CD, the more work output there is. (See .) So efficiency is related to the temperatures of the hot and cold reservoirs. In the next section, we shall see what the absolute limit to the efficiency of a heat engine is, and how it is related to temperature.
### Section Summary
1. The two expressions of the second law of thermodynamics are: (i) Heat transfer occurs spontaneously from higher- to lower-temperature bodies but never spontaneously in the reverse direction; and (ii) It is impossible in any system for heat transfer from a reservoir to completely convert to work in a cyclical process in which the system returns to its initial state.
2. Irreversible processes depend on path and do not return to their original state. Cyclical processes are processes that return to their original state at the end of every cycle.
3. In a cyclical process, such as a heat engine, the net work done by the system equals the net heat transfer into the system, or $W = Q_h - Q_c$, where $Q_h$ is the heat transfer from the hot object (hot reservoir), and $Q_c$ is the heat transfer into the cold object (cold reservoir).
4. Efficiency can be expressed as $Eff = \frac{W}{Q_h}$, the ratio of work output divided by the amount of energy input.
5. The four-stroke gasoline engine is often explained in terms of the Otto cycle, which is a repeating sequence of processes that convert heat into work.
### Conceptual Questions
### Problem Exercises
# Thermodynamics
## Carnot’s Perfect Heat Engine: The Second Law of Thermodynamics Restated
### Learning Objectives
By the end of this section, you will be able to:
1. Identify a Carnot cycle.
2. Calculate maximum theoretical efficiency of a nuclear reactor.
3. Explain how dissipative processes affect the ideal Carnot engine.
We know from the second law of thermodynamics that a heat engine cannot be 100% efficient, since there must always be some heat transfer to the environment, which is often called waste heat. How efficient, then, can a heat engine be? This question was answered at a theoretical level in 1824 by a young French engineer, Sadi Carnot (1796–1832), in his study of the then-emerging heat engine technology crucial to the Industrial Revolution. He devised a theoretical cycle, now called the Carnot cycle, which is the most efficient cyclical process possible. The second law of thermodynamics can be restated in terms of the Carnot cycle, and so what Carnot actually discovered was this fundamental law. Any heat engine employing the Carnot cycle is called a Carnot engine.
What is crucial to the Carnot cycle—and, in fact, defines it—is that only reversible processes are used. Irreversible processes involve dissipative factors, such as friction and turbulence. These increase heat transfer to the environment and reduce the efficiency of the engine. Obviously, then, reversible processes are superior.
shows the $PV$ diagram for a Carnot cycle. The cycle comprises two isothermal and two adiabatic processes. Recall that both isothermal and adiabatic processes are, in principle, reversible.
Carnot also determined the efficiency of a perfect heat engine—that is, a Carnot engine. It is always true that the efficiency of a cyclical heat engine is given by:

$Eff = 1 - \frac{Q_c}{Q_h}.$

What Carnot found was that for a perfect heat engine, the ratio $Q_c/Q_h$ equals the ratio of the absolute temperatures of the heat reservoirs. That is, $\frac{Q_c}{Q_h} = \frac{T_c}{T_h}$ for a Carnot engine, so that the maximum or Carnot efficiency $Eff_C$ is given by

$Eff_C = 1 - \frac{T_c}{T_h},$

where $T_h$ and $T_c$ are in kelvins (or any other absolute temperature scale). No real heat engine can do as well as the Carnot efficiency. But the ideal Carnot engine, like the drinking bird above, while a fascinating novelty, has zero power. This makes it unrealistic for any applications.
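A minimal sketch of the Carnot efficiency formula, using assumed reservoir temperatures roughly typical of a steam power plant:

```python
def carnot_efficiency(T_h, T_c):
    """Maximum (Carnot) efficiency between reservoirs at absolute temperatures T_h, T_c (K)."""
    return 1.0 - T_c / T_h

# Assumed reservoir temperatures (in kelvins):
T_h = 550.0 + 273.15   # steam temperature
T_c = 20.0 + 273.15    # environment
print(f"Carnot efficiency: {carnot_efficiency(T_h, T_c):.1%}")   # about 64%
```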
Carnot’s interesting result implies that 100% efficiency would be possible only if $T_c = 0\ \text{K}$—that is, only if the cold reservoir were at absolute zero, a practical and theoretical impossibility. But the physical implication is this—the only way to have all heat transfer go into doing work is to remove all thermal energy, and this requires a cold reservoir at absolute zero.
It is also apparent that the greatest efficiencies are obtained when the ratio $T_c/T_h$ is as small as possible. Just as discussed for the Otto cycle in the previous section, this means that efficiency is greatest for the highest possible temperature of the hot reservoir and lowest possible temperature of the cold reservoir. (This setup increases the area inside the closed loop on the $PV$ diagram; also, it seems reasonable that the greater the temperature difference, the easier it is to divert the heat transfer to work.) The actual reservoir temperatures of a heat engine are usually related to the type of heat source and the temperature of the environment into which heat transfer occurs. Consider the following example.
Since all real processes are irreversible, the actual efficiency of a heat engine can never be as great as that of a Carnot engine, as illustrated in (a). Even with the best heat engine possible, there are always dissipative processes in peripheral equipment, such as electrical transformers or car transmissions. These further reduce the overall efficiency by converting some of the engine’s work output back into heat transfer, as shown in (b).
### Section Summary
1. The Carnot cycle is a theoretical cycle that is the most efficient cyclical process possible. Any engine using the Carnot cycle, which uses only reversible processes (adiabatic and isothermal), is known as a Carnot engine.
2. Any engine that uses the Carnot cycle enjoys the maximum theoretical efficiency.
3. While Carnot engines are ideal engines, in reality, no engine achieves Carnot’s theoretical maximum efficiency, since dissipative processes, such as friction, play a role. Carnot cycles without heat loss may be possible at absolute zero, but this has never been seen in nature.
### Conceptual Questions
### Problem Exercises
# Thermodynamics
## Applications of Thermodynamics: Heat Pumps and Refrigerators
### Learning Objectives
By the end of this section, you will be able to:
1. Describe the use of heat engines in heat pumps and refrigerators.
2. Demonstrate how a heat pump works to warm an interior space.
3. Explain the differences between heat pumps and refrigerators.
4. Calculate a heat pump’s coefficient of performance.
Heat pumps, air conditioners, and refrigerators utilize heat transfer from cold to hot. They are heat engines run backward. We say backward, rather than reverse, because except for Carnot engines, all heat engines, though they can be run backward, cannot truly be reversed. Heat transfer $Q_c$ occurs from a cold reservoir and into a hot one. This requires work input $W$, which is also converted to heat transfer. Thus the heat transfer to the hot reservoir is $Q_h = Q_c + W$. (Note that $Q_h$, $Q_c$, and $W$ are positive, with their directions indicated on schematics rather than by sign.) A heat pump’s mission is for heat transfer $Q_h$ to occur into a warm environment, such as a home in the winter. The mission of air conditioners and refrigerators is for heat transfer $Q_c$ to occur from a cool environment, such as chilling a room or keeping food at lower temperatures than the environment. (Actually, a heat pump can be used both to heat and cool a space. It is essentially an air conditioner and a heating unit all in one. In this section we will concentrate on its heating mode.)
### Heat Pumps
The great advantage of using a heat pump to keep your home warm, rather than just burning fuel, is that a heat pump supplies $Q_h = Q_c + W$. Heat transfer $Q_c$ is from the outside air, even at a temperature below freezing, to the indoor space. You only pay for $W$, and you get an additional heat transfer of $Q_c$ from the outside at no cost; in many cases, at least twice as much energy is transferred to the heated space as is used to run the heat pump. When you burn fuel to keep warm, you pay for all of it. The disadvantage is that the work input (required by the second law of thermodynamics) is sometimes more expensive than simply burning fuel, especially if the work is done by electrical energy.
The basic components of a heat pump in its heating mode are shown in . A working fluid such as a non-CFC refrigerant is used. In the outdoor coils (the evaporator), heat transfer occurs to the working fluid from the cold outdoor air, turning it into a gas.
The electrically driven compressor (work input $W$) raises the temperature and pressure of the gas and forces it into the condenser coils that are inside the heated space. Because the temperature of the gas is higher than the temperature inside the room, heat transfer to the room occurs and the gas condenses to a liquid. The liquid then flows back through a pressure-reducing valve to the outdoor evaporator coils, being cooled through expansion. (In a cooling cycle, the evaporator and condenser coils exchange roles and the flow direction of the fluid is reversed.)
The quality of a heat pump is judged by how much heat transfer $Q_h$ occurs into the warm space compared with how much work input $W$ is required. In the spirit of taking the ratio of what you get to what you spend, we define a heat pump’s coefficient of performance ($COP_{\text{hp}}$) to be

$COP_{\text{hp}} = \frac{Q_h}{W}.$

Since the efficiency of a heat engine is $Eff = W/Q_h$, we see that $COP_{\text{hp}} = 1/Eff$, an important and interesting fact. First, since the efficiency of any heat engine is less than 1, it means that $COP_{\text{hp}}$ is always greater than 1—that is, a heat pump always has more heat transfer $Q_h$ than work put into it. Second, it means that heat pumps work best when temperature differences are small. The efficiency of a perfect, or Carnot, engine is $Eff_C = 1 - T_c/T_h$; thus, the smaller the temperature difference, the smaller the efficiency and the greater the $COP_{\text{hp}}$ (because $COP_{\text{hp}} = 1/Eff$). In other words, heat pumps do not work as well in very cold climates as they do in more moderate climates.
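The relationship $COP_{\text{hp}} = 1/Eff$ makes the best-case performance easy to estimate. The sketch below assumes indoor and outdoor temperatures for illustration and computes the Carnot-limited coefficient of performance.

```python
def cop_heat_pump(T_h, T_c):
    """Best-case (Carnot) coefficient of performance of a heat pump: COP_hp = 1 / Eff_C."""
    eff_carnot = 1.0 - T_c / T_h
    return 1.0 / eff_carnot

# Assumed temperatures: 21 C indoors, -5 C outdoors, expressed in kelvins.
print(f"Ideal COP_hp: {cop_heat_pump(294.15, 268.15):.1f}")   # about 11
```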
Friction and other irreversible processes reduce heat engine efficiency, but they do not benefit the operation of a heat pump—instead, they reduce the work input by converting part of it to heat transfer back into the cold reservoir before it gets into the heat pump.
Real heat pumps do not perform quite as well as the ideal one in the previous example; their values of $COP_{\text{hp}}$ range from about 2 to 4. This range means that the heat transfer $Q_h$ from the heat pumps is 2 to 4 times as great as the work $W$ put into them. Their economical feasibility is still limited, however, since $W$ is usually supplied by electrical energy that costs more per joule than heat transfer by burning fuels like natural gas. Furthermore, the initial cost of a heat pump is greater than that of many furnaces, so that a heat pump must last longer for its cost to be recovered. Heat pumps are most likely to be economically superior where winter temperatures are mild, electricity is relatively cheap, and other fuels are relatively expensive. Also, since they can cool as well as heat a space, they have advantages where cooling in summer months is also desired. Thus some of the best locations for heat pumps are in warm summer climates with cool winters. shows a heat pump, called a “reverse cycle” or “split-system cooler” in some countries.
### Air Conditioners and Refrigerators
Air conditioners and refrigerators are designed to cool something down in a warm environment. As with heat pumps, work input is required for heat transfer from cold to hot, and this is expensive. The quality of air conditioners and refrigerators is judged by how much heat transfer $Q_c$ occurs from a cold environment compared with how much work input $W$ is required. What is considered the benefit in a heat pump is considered waste heat in a refrigerator. We thus define the coefficient of performance ($COP_{\text{ref}}$) of an air conditioner or refrigerator to be

$COP_{\text{ref}} = \frac{Q_c}{W}.$

Noting again that $Q_h = Q_c + W$, we can see that an air conditioner will have a lower coefficient of performance than a heat pump, because $COP_{\text{hp}} = Q_h/W$ and $Q_h$ is greater than $Q_c$. In this module’s Problems and Exercises, you will show that

$COP_{\text{ref}} = COP_{\text{hp}} - 1$

for a heat engine used as either an air conditioner or a heat pump operating between the same two temperatures. Real air conditioners and refrigerators typically do remarkably well, having values of $COP_{\text{ref}}$ ranging from 2 to 6. These numbers are better than the $COP_{\text{hp}}$ values for the heat pumps mentioned above, because the temperature differences are smaller, but they are less than those for Carnot engines operating between the same two temperatures.
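The bookkeeping behind $COP_{\text{ref}} = COP_{\text{hp}} - 1$ can be checked directly from $Q_h = Q_c + W$. The sketch below uses assumed heat and work values for a single device viewed both as a refrigerator and as a heat pump.

```python
def cop_refrigerator(Q_c, W):
    """Coefficient of performance of a refrigerator or air conditioner: COP_ref = Q_c / W."""
    return Q_c / W

# Assumed values: 3000 J removed from the cold space per 1000 J of work input.
Q_c, W = 3000.0, 1000.0
cop_ref = cop_refrigerator(Q_c, W)
cop_hp = (Q_c + W) / W          # same device viewed as a heat pump: Q_h = Q_c + W
print(cop_ref, cop_hp, cop_hp - cop_ref)   # 3.0, 4.0, 1.0 -> COP_hp - COP_ref = 1
```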
A type of rating system called the “energy efficiency rating” ($EER$) has been developed. This rating is an example where non-SI units are still used and relevant to consumers. To make it easier for the consumer, Australia, Canada, New Zealand, and the U.S. use an Energy Star Rating out of 5 stars—the more stars, the more energy efficient the appliance. $EER$s are expressed in mixed units of British thermal units (Btu) per hour of heating or cooling divided by the power input in watts. Room air conditioners are readily available with $EER$s ranging from 6 to 12. Although not the same as the $COP$s just described, these $EER$s are good for comparison purposes—the greater the $EER$, the cheaper an air conditioner is to operate (but the higher its purchase price is likely to be).
The $EER$ of an air conditioner or refrigerator can be expressed as

$EER = \frac{Q_c / t_1}{W / t_2},$

where $Q_c$ is the amount of heat transfer from a cold environment in British thermal units, $t_1$ is time in hours, $W$ is the work input in joules, and $t_2$ is time in seconds.
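Because the EER mixes Btu, hours, watts, and seconds, it is handy to convert it to a dimensionless COP. A minimal sketch, assuming the standard conversion 1 Btu ≈ 1055 J:

```python
BTU_IN_JOULES = 1055.0     # 1 Btu is about 1055 J
SECONDS_PER_HOUR = 3600.0

def eer_to_cop(eer):
    """Convert an EER (Btu of cooling per hour, per watt of input) to a dimensionless COP."""
    return eer * BTU_IN_JOULES / SECONDS_PER_HOUR

# A room air conditioner with an assumed EER of 10:
print(f"COP = {eer_to_cop(10.0):.2f}")   # about 2.9
```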
### Section Summary
1. An artifact of the second law of thermodynamics is the ability to heat an interior space using a heat pump. Heat pumps compress cold ambient air and, in so doing, heat it to room temperature without violation of conservation principles.
2. To calculate the heat pump’s coefficient of performance, use the equation $COP_{\text{hp}} = \frac{Q_h}{W}$.
3. A refrigerator is a heat pump; it takes warm ambient air and expands it to chill it.
### Conceptual Questions
### Problem Exercises
# Thermodynamics
## Entropy and the Second Law of Thermodynamics: Disorder and the Unavailability of Energy
### Learning Objectives
By the end of this section, you will be able to:
1. Define entropy and calculate the increase of entropy in a system with reversible and irreversible processes.
2. Explain the expected fate of the universe in entropic terms.
3. Calculate the increasing disorder of a system.
There is yet another way of expressing the second law of thermodynamics. This version relates to a concept called entropy. By examining it, we shall see that the directions associated with the second law—heat transfer from hot to cold, for example—are related to the tendency in nature for systems to become disordered and for less energy to be available for use as work. The entropy of a system can in fact be shown to be a measure of its disorder and of the unavailability of energy to do work.
We can see how entropy is defined by recalling our discussion of the Carnot engine. We noted that for a Carnot cycle, and hence for any reversible processes, $\frac{Q_c}{Q_h} = \frac{T_c}{T_h}$. Rearranging terms yields

$\frac{Q_c}{T_c} = \frac{Q_h}{T_h}$

for any reversible process. $Q_c$ and $Q_h$ are absolute values of the heat transfer at temperatures $T_c$ and $T_h$, respectively. This ratio of $Q/T$ is defined to be the change in entropy $\Delta S$ for a reversible process,

$\Delta S = \left(\frac{Q}{T}\right)_{\text{rev}},$

where $Q$ is the heat transfer, which is positive for heat transfer into the system and negative for heat transfer out of the system, and $T$ is the absolute temperature at which the reversible process takes place. The SI unit for entropy is joules per kelvin (J/K). If temperature changes during the process, then it is usually a good approximation (for small changes in temperature) to take $T$ to be the average temperature, avoiding the need to use integral calculus to find $\Delta S$.
The definition of $\Delta S$ is strictly valid only for reversible processes, such as used in a Carnot engine. However, we can find $\Delta S$ precisely even for real, irreversible processes. The reason is that the entropy $S$ of a system, like internal energy $U$, depends only on the state of the system and not how it reached that condition. Entropy is a property of state. Thus the change in entropy $\Delta S$ of a system between state 1 and state 2 is the same no matter how the change occurs. We just need to find or imagine a reversible process that takes us from state 1 to state 2 and calculate $\Delta S$ for that process. That will be the change in entropy for any process going from state 1 to state 2. (See .)
Now let us take a look at the change in entropy of a Carnot engine and its heat reservoirs for one full cycle. The hot reservoir has a loss of entropy $\Delta S_h = -Q_h/T_h$, because heat transfer occurs out of it (remember that when heat transfers out, then $Q$ has a negative sign). The cold reservoir has a gain of entropy $\Delta S_c = Q_c/T_c$, because heat transfer occurs into it. (We assume the reservoirs are sufficiently large that their temperatures are constant.) So the total change in entropy is

$\Delta S_{\text{tot}} = \Delta S_h + \Delta S_c.$

Thus, since we know that $\frac{Q_h}{T_h} = \frac{Q_c}{T_c}$ for a Carnot engine,

$\Delta S_{\text{tot}} = -\frac{Q_h}{T_h} + \frac{Q_c}{T_c} = 0.$
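The contrast between reversible and irreversible heat transfer is easy to see numerically. The sketch below uses a 600 K hot reservoir, a 250 K cold reservoir, and a 4000 J heat transfer (the same values that appear in the discussion later in this section) to compare the total entropy change in the two cases.

```python
def entropy_change(Q, T):
    """Entropy change for heat transfer Q (signed, in J) at absolute temperature T (K)."""
    return Q / T

Q, T_h, T_c = 4000.0, 600.0, 250.0

# Reversible (Carnot) transfer: only Q_c = Q * T_c / T_h reaches the cold reservoir.
dS_rev = entropy_change(-Q, T_h) + entropy_change(Q * T_c / T_h, T_c)

# Irreversible transfer: the full 4000 J flows directly from hot to cold.
dS_irrev = entropy_change(-Q, T_h) + entropy_change(Q, T_c)

print(f"Reversible: {dS_rev:.2f} J/K, irreversible: {dS_irrev:.2f} J/K")
# Reversible: essentially 0 J/K; irreversible: 9.33 J/K
```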
This result, which has general validity, means that the total change in entropy for a system in any reversible process is zero.
The entropy of various parts of the system may change, but the total change is zero. Furthermore, the system does not affect the entropy of its surroundings, since heat transfer between them does not occur. Thus the reversible process changes neither the total entropy of the system nor the entropy of its surroundings. Sometimes this is stated as follows: Reversible processes do not affect the total entropy of the universe. Real processes are not reversible, though, and they do change total entropy. We can, however, use hypothetical reversible processes to determine the value of entropy in real, irreversible processes. The following example illustrates this point.
It is reasonable that entropy increases for heat transfer from hot to cold. Since the change in entropy is $\Delta S = Q/T$, there is a larger change at lower temperatures. The decrease in entropy of the hot object is therefore less than the increase in entropy of the cold object, producing an overall increase, just as in the previous example. This result is very general:
There is an increase in entropy for any system undergoing an irreversible process.
With respect to entropy, there are only two possibilities: entropy is constant for a reversible process, and it increases for an irreversible process. There is a fourth version of the second law of thermodynamics stated in terms of entropy:
The total entropy of a system either increases or remains constant in any process; it never decreases.
For example, heat transfer cannot occur spontaneously from cold to hot, because entropy would decrease.
Entropy is very different from energy. Entropy is not conserved but increases in all real processes. Reversible processes (such as in Carnot engines) are the processes in which the most heat transfer to work takes place and are also the ones that keep entropy constant. Thus we are led to make a connection between entropy and the availability of energy to do work.
### Entropy and the Unavailability of Energy to Do Work
What does a change in entropy mean, and why should we be interested in it? One reason is that entropy is directly related to the fact that not all heat transfer can be converted into work. The next example gives some indication of how an increase in entropy results in less heat transfer into work.
When entropy increases, a certain amount of energy becomes permanently unavailable to do work. The energy is not lost, but its character is changed, so that some of it can never be converted to doing work—that is, to an organized force acting through a distance. For instance, in the previous example, 933 J less work was done after an increase in entropy of 9.33 J/K occurred in the 4000 J heat transfer from the 600 K reservoir to the 250 K reservoir. It can be shown that the amount of energy that becomes unavailable for work is

$W_{\text{unavail}} = \Delta S \cdot T_0,$

where $T_0$ is the lowest temperature utilized. In the previous example,

$W_{\text{unavail}} = (9.33\ \text{J/K})(100\ \text{K}) = 933\ \text{J},$

as found.
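A minimal sketch of this relation; the 9.33 J/K entropy increase comes from the discussion above, while the 100 K lowest-temperature reservoir is an assumption consistent with the 933 J figure.

```python
def unavailable_energy(delta_S, T_lowest):
    """Energy made permanently unavailable for work: W_unavail = delta_S * T_0."""
    return delta_S * T_lowest

delta_S = 9.33      # J/K, entropy increase from the 4000 J transfer (600 K -> 250 K)
T_0 = 100.0         # K, assumed lowest reservoir temperature available
print(f"Energy unavailable for work: {unavailable_energy(delta_S, T_0):.0f} J")   # 933 J
```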
### Heat Death of the Universe: An Overdose of Entropy
In the early, energetic universe, all matter and energy were easily interchangeable and identical in nature. Gravity played a vital role in the young universe. Although it may have seemed disorderly, and therefore, superficially entropic, in fact, there was enormous potential energy available to do work—all the future energy in the universe.
As the universe matured, temperature differences arose, which created more opportunity for work. Stars are hotter than planets, for example, which are warmer than icy asteroids, which are warmer still than the vacuum of the space between them.
Most of these are cooling down from their usually violent births, at which time they were provided with energy of their own—nuclear energy in the case of stars, volcanic energy on Earth and other planets, and so on. Without additional energy input, however, their days are numbered.
As entropy increases, less and less energy in the universe is available to do work. On Earth, we still have great stores of energy such as fossil and nuclear fuels; large-scale temperature differences, which can provide wind energy; geothermal energies due to differences in temperature in Earth’s layers; and tidal energies owing to our abundance of liquid water. As these are used, a certain fraction of the energy they contain can never be converted into doing work. Eventually, all fuels will be exhausted, all temperatures will equalize, and it will be impossible for heat engines to function, or for work to be done.
Entropy increases in a closed system, such as the universe. But parts of the universe, the Solar System for instance, are not locally closed systems. Energy flows from the Sun to the planets, replenishing Earth’s stores of energy. The Sun will continue to supply us with energy for about another five billion years. We will enjoy direct solar energy, as well as side effects of solar energy, such as wind power and biomass energy from photosynthetic plants. The energy from the Sun will keep our water at the liquid state, and the Moon’s gravitational pull will continue to provide tidal energy. But Earth’s geothermal energy will slowly run down and won’t be replenished.
But in terms of the universe, and the very long-term, very large-scale picture, the entropy of the universe is increasing, and so the availability of energy to do work is constantly decreasing. Eventually, when all stars have died, all forms of potential energy have been utilized, and all temperatures have equalized (depending on the mass of the universe, either at a very high temperature following a universal contraction, or a very low one, just before all activity ceases) there will be no possibility of doing work.
Either way, the universe is destined for thermodynamic equilibrium—maximum entropy. This is often called the heat death of the universe, and will mean the end of all activity. However, whether the universe contracts and heats up, or continues to expand and cools down, the end is not near. Calculations of black holes suggest that entropy can easily continue for at least years.
### Order to Disorder
Entropy is related not only to the unavailability of energy to do work—it is also a measure of disorder. This notion was initially postulated by Ludwig Boltzmann in the 1800s. For example, melting a block of ice means taking a highly structured and orderly system of water molecules and converting it into a disorderly liquid in which molecules have no fixed positions. (See .) There is a large increase in entropy in the process, as seen in the following example.
In another easily imagined example, suppose we mix equal masses of water originally at two different temperatures, say and . The result is water at an intermediate temperature of . Three outcomes have resulted: entropy has increased, some energy has become unavailable to do work, and the system has become less orderly. Let us think about each of these results.
First, entropy has increased for the same reason that it did in the example above. Mixing the two bodies of water has the same effect as heat transfer from the hot one and the same heat transfer into the cold one. The mixing decreases the entropy of the hot water but increases the entropy of the cold water by a greater amount, producing an overall increase in entropy.
Second, once the two masses of water are mixed, there is only one temperature—you cannot run a heat engine with them. The energy that could have been used to run a heat engine is now unavailable to do work.
Third, the mixture is less orderly, or to use another term, less structured. Rather than having two masses at different temperatures and with different distributions of molecular speeds, we now have a single mass with a uniform temperature.
These three results—entropy, unavailability of energy, and disorder—are not only related but are in fact essentially equivalent.
### Life, Evolution, and the Second Law of Thermodynamics
Some people misunderstand the second law of thermodynamics, stated in terms of entropy, to say that the process of the evolution of life violates this law. Over time, complex organisms evolved from much simpler ancestors, representing a large decrease in entropy of the Earth’s biosphere. It is a fact that living organisms have evolved to be highly structured, and much lower in entropy than the substances from which they grow. But it is always possible for the entropy of one part of the universe to decrease, provided the total change in entropy of the universe increases. In equation form, we can write this as

$\Delta S_{\text{tot}} = \Delta S_{\text{syst}} + \Delta S_{\text{envir}} > 0.$

Thus $\Delta S_{\text{syst}}$ can be negative as long as $\Delta S_{\text{envir}}$ is positive and greater in magnitude.
How is it possible for a system to decrease its entropy? Energy transfer is necessary. If I pick up marbles that are scattered about the room and put them into a cup, my work has decreased the entropy of that system. If I gather iron ore from the ground and convert it into steel and build a bridge, my work has decreased the entropy of that system. Energy coming from the Sun can decrease the entropy of local systems on Earth—that is, $\Delta S_{\text{syst}}$ is negative. But the overall entropy of the rest of the universe increases by a greater amount—that is, $\Delta S_{\text{envir}}$ is positive and greater in magnitude. Thus, $\Delta S_{\text{tot}} = \Delta S_{\text{syst}} + \Delta S_{\text{envir}} > 0$, and the second law of thermodynamics is not violated.
Every time a plant stores some solar energy in the form of chemical potential energy, or an updraft of warm air lifts a soaring bird, the Earth can be viewed as a heat engine operating between a hot reservoir supplied by the Sun and a cold reservoir supplied by dark outer space—a heat engine of high complexity, causing local decreases in entropy as it uses part of the heat transfer from the Sun into deep space. There is a large total increase in entropy resulting from this massive heat transfer. A small part of this heat transfer is stored in structured systems on Earth, producing much smaller local decreases in entropy. (See .)
### Test Prep for AP Courses
### Section Summary
1. Entropy is a measure of the energy in a system that is unavailable to do work.
2. Another form of the second law of thermodynamics states that the total entropy of a system either increases or remains constant; it never decreases.
3. The total change in entropy is zero in a reversible process; it increases in an irreversible process.
4. The ultimate fate of the universe is likely to be thermodynamic equilibrium, where the universal temperature is constant and no energy is available to do work.
5. Entropy is also associated with the tendency toward disorder in a closed system.
### Conceptual Questions
### Problem Exercises
# Thermodynamics
## Statistical Interpretation of Entropy and the Second Law of Thermodynamics: The Underlying Explanation
### Learning Objectives
By the end of this section, you will be able to:
1. Identify probabilities in entropy.
2. Analyze statistical probabilities in entropic systems.
The various ways of formulating the second law of thermodynamics tell what happens rather than why it happens. Why should heat transfer occur only from hot to cold? Why should energy become ever less available to do work? Why should the universe become increasingly disorderly? The answer is that it is a matter of overwhelming probability. Disorder is simply vastly more likely than order.
When you watch an emerging rain storm begin to wet the ground, you will notice that the drops fall in a disorganized manner both in time and in space. Some fall close together, some far apart, but they never fall in straight, orderly rows. It is not impossible for rain to fall in an orderly pattern, just highly unlikely, because there are many more disorderly ways than orderly ones. To illustrate this fact, we will examine some random processes, starting with coin tosses.
### Coin Tosses
What are the possible outcomes of tossing 5 coins? Each coin can land either heads or tails. On the large scale, we are concerned only with the total heads and tails and not with the order in which heads and tails appear. The following possibilities exist: 5 heads, 0 tails; 4 heads, 1 tail; 3 heads, 2 tails; 2 heads, 3 tails; 1 head, 4 tails; and 0 heads, 5 tails.
These are what we call macrostates. A macrostate is an overall property of a system. It does not specify the details of the system, such as the order in which heads and tails occur or which coins are heads or tails.
Using this nomenclature, a system of 5 coins has the 6 possible macrostates just listed. Some macrostates are more likely to occur than others. For instance, there is only one way to get 5 heads, but there are several ways to get 3 heads and 2 tails, making the latter macrostate more probable. lists all the ways in which 5 coins can be tossed, taking into account the order in which heads and tails occur. Each sequence is called a microstate—a detailed description of every element of a system.
The macrostate of 3 heads and 2 tails can be achieved in 10 ways and is thus 10 times more probable than the one having 5 heads. Not surprisingly, it is equally probable to have the reverse, 2 heads and 3 tails. Similarly, it is equally probable to get 5 tails as it is to get 5 heads. Note that all of these conclusions are based on the crucial assumption that each microstate is equally probable. With coin tosses, this requires that the coins not be asymmetric in a way that favors one side over the other, as with loaded dice. With any system, the assumption that all microstates are equally probable must be valid, or the analysis will be erroneous.
The two most orderly possibilities are 5 heads or 5 tails. (They are more structured than the others.) They are also the least likely, only 2 out of 32 possibilities. The most disorderly possibilities are 3 heads and 2 tails and its reverse. (They are the least structured.) The most disorderly possibilities are also the most likely, with 20 out of 32 possibilities for the 3 heads and 2 tails and its reverse. If we start with an orderly array like 5 heads and toss the coins, it is very likely that we will get a less orderly array as a result, since 30 out of the 32 possibilities are less orderly. So even if you start with an orderly state, there is a strong tendency to go from order to disorder, from low entropy to high entropy. The reverse can happen, but it is unlikely.
This result becomes dramatic for larger systems. Consider what happens if you have 100 coins instead of just 5. The most orderly arrangements (most structured) are 100 heads or 100 tails. The least orderly (least structured) is that of 50 heads and 50 tails. There is only 1 way (1 microstate) to get the most orderly arrangement of 100 heads. There are 100 ways (100 microstates) to get the next most orderly arrangement of 99 heads and 1 tail (also 100 to get its reverse). And there are about $1.0 \times 10^{29}$ ways to get 50 heads and 50 tails, the least orderly arrangement. is an abbreviated list of the various macrostates and the number of microstates for each macrostate. The total number of microstates—the total number of different ways 100 coins can be tossed—is an impressively large $2^{100} \approx 1.27 \times 10^{30}$. Now, if we start with an orderly macrostate like 100 heads and toss the coins, there is a virtual certainty that we will get a less orderly macrostate. If we keep tossing the coins, it is possible, but exceedingly unlikely, that we will ever get back to the most orderly macrostate. If you tossed the coins once each second, you could expect to get either 100 heads or 100 tails once in about $2 \times 10^{22}$ years! This period is 1 trillion ($10^{12}$) times longer than the age of the universe, and so the chances are essentially zero. In contrast, there is an 8% chance of getting 50 heads, a 73% chance of getting from 45 to 55 heads, and a 96% chance of getting from 40 to 60 heads. Disorder is highly likely.
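These probabilities are easy to verify with a few lines of Python (math.comb requires Python 3.8 or later):

```python
from math import comb

N = 100
total = 2 ** N                                           # total number of microstates

print(comb(N, 50))                                       # microstates with 50 heads, ~1.0e29
print(comb(N, 50) / total)                               # ~0.08 -> about an 8% chance
print(sum(comb(N, k) for k in range(45, 56)) / total)    # ~0.73, for 45 to 55 heads
print(sum(comb(N, k) for k in range(40, 61)) / total)    # ~0.96, for 40 to 60 heads
```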
### Disorder in a Gas
The fantastic growth in the odds favoring disorder that we see in going from 5 to 100 coins continues as the number of entities in the system increases. Let us now imagine applying this approach to perhaps a small sample of gas. Because counting microstates and macrostates involves statistics, this is called statistical analysis. The macrostates of a gas correspond to its macroscopic properties, such as volume, temperature, and pressure; and its microstates correspond to the detailed description of the positions and velocities of its atoms. Even a small amount of gas has a huge number of atoms: of an ideal gas at 1.0 atm and has atoms. So each macrostate has an immense number of microstates. In plain language, this means that there are an immense number of ways in which the atoms in a gas can be arranged, while still having the same pressure, temperature, and so on.
The most likely conditions (or macrostates) for a gas are those we see all the time—a random distribution of atoms in space with a Maxwell-Boltzmann distribution of speeds in random directions, as predicted by kinetic theory. This is the most disorderly and least structured condition we can imagine. In contrast, one type of very orderly and structured macrostate has all of the atoms in one corner of a container with identical velocities. There are very few ways to accomplish this (very few microstates corresponding to it), and so it is exceedingly unlikely ever to occur. (See (b).) Indeed, it is so unlikely that we have a law saying that it is impossible, which has never been observed to be violated—the second law of thermodynamics.
The disordered condition is one of high entropy, and the ordered one has low entropy. With a transfer of energy from another system, we could force all of the atoms into one corner and have a local decrease in entropy, but at the cost of an overall increase in entropy of the universe. If the atoms start out in one corner, they will quickly disperse and become uniformly distributed and will never return to the orderly original state ((b)). Entropy will increase. With such a large sample of atoms, it is possible—but unimaginably unlikely—for entropy to decrease. Disorder is vastly more likely than order.
The arguments that disorder and high entropy are the most probable states are quite convincing. The great Austrian physicist Ludwig Boltzmann (1844–1906)—who, along with Maxwell, made so many contributions to kinetic theory—proved that the entropy of a system in a given state (a macrostate) can be written as S = k ln W,
where k = 1.38×10⁻²³ J/K is Boltzmann’s constant, and ln W is the natural logarithm of the number of microstates W corresponding to the given macrostate. W is proportional to the probability that the macrostate will occur. Thus entropy is directly related to the probability of a state—the more likely the state, the greater its entropy. Boltzmann proved that this expression for S is equivalent to the definition ΔS = Q/T, which we have used extensively.
Thus the second law of thermodynamics is explained on a very basic level: entropy either remains the same or increases in every process. A decrease is extraordinarily improbable because systems with greater entropy have vastly more microstates. Entropy can decrease, but for any macroscopic system this outcome is so unlikely that it will never be observed.
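As a minimal illustration of S = k ln W (added here, not part of the original text), the sketch below applies Boltzmann’s expression to the coin-toss macrostates discussed above; the function name and values are ours.

```python
from math import comb, log

k_B = 1.38e-23  # Boltzmann's constant, J/K

def coin_entropy(n_coins: int, n_heads: int) -> float:
    """Entropy S = k ln W of the macrostate with n_heads heads out of n_coins coins."""
    W = comb(n_coins, n_heads)   # number of microstates in this macrostate
    return k_B * log(W)

# The most orderly macrostate (100 heads) has W = 1, so S = 0;
# the most disordered macrostate (50 heads) has the largest entropy.
print(coin_entropy(100, 100))   # 0.0
print(coin_entropy(100, 50))    # about 9.2e-22 J/K
```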
### Test Prep for AP Courses
### Section Summary
1. Disorder is far more likely than order, which can be seen statistically.
2. The entropy of a system in a given state (a macrostate) can be written as S = k ln W,
where k is Boltzmann’s constant and ln W is the natural logarithm of the number of microstates W corresponding to the given macrostate.
### Conceptual Questions
### Problem Exercises
|
# Oscillatory Motion and Waves
## Connection for AP® Courses
In this chapter, students are introduced to oscillation, the regular variation in the position of a system about a central point accompanied by transfer of energy and momentum, and to waves. A child’s swing, a pendulum, a spring, and a vibrating string are all examples of oscillations. This chapter will address simple harmonic motion and periods of vibration, aspects of oscillation that produce waves, a common phenomenon in everyday life. Waves carry energy from one place to another. This chapter will show how harmonic oscillations produce waves that transport energy across space and through time. The information and examples presented support Big Ideas 3, 4, 5, and 6 of the AP® Physics Curriculum Framework.
The chapter opens by discussing the forces that govern oscillations and waves. It goes on to discuss important concepts such as simple harmonic motion, uniform circular motion, and damped harmonic motion. You will also learn how the energy in simple harmonic motion changes between kinetic and potential forms while their sum, the mechanical energy of the oscillator, remains constant. The chapter also discusses characteristics of waves, such as their frequency, period of oscillation, and the forms in which they can exist, i.e., transverse or longitudinal. It ends by discussing what happens when two or more waves overlap and how the amplitude of the resultant wave changes, leading to the phenomena of superposition and interference.
The concepts in this chapter support:
Big Idea 3 The interactions of an object with other objects can be described by forces.
Enduring Understanding 3.B Classically, the acceleration of an object interacting with other objects can be predicted by using a = ΣF/m.
Essential Knowledge 3.B.3 Restoring forces can result in oscillatory motion. When a linear restoring force is exerted on an object displaced from an equilibrium position, the object will undergo a special type of motion called simple harmonic motion. Examples should include gravitational force exerted by the Earth on a simple pendulum and a mass-spring oscillator.
Big Idea 4 Interactions between systems can result in changes in those systems.
Enduring Understanding 4.C Interactions with other objects or systems can change the total energy of a system.
Essential Knowledge 4.C.1 The energy of a system includes its kinetic energy, potential energy, and microscopic internal energy. Examples should include gravitational potential energy, elastic potential energy, and kinetic energy.
Essential Knowledge 4.C.2 Mechanical energy (the sum of kinetic and potential energy) is transferred into or out of a system when an external force is exerted on a system such that a component of the force is parallel to its displacement. The process through which the energy is transferred is called work.
Big Idea 5 Changes that occur as a result of interactions are constrained by conservation laws.
Enduring Understanding 5.B The energy of a system is conserved.
Essential Knowledge 5.B.2 A system with internal structure can have internal energy, and changes in a system’s internal structure can result in changes in internal energy. [Physics 1: includes mass-spring oscillators and simple pendulums. Physics 2: includes charged object in electric fields and examining changes in internal energy with changes in configuration.]
Big Idea 6 Waves can transfer energy and momentum from one location to another without the permanent transfer of mass and serve as a mathematical model for the description of other phenomena.
Enduring Understanding 6.A A wave is a traveling disturbance that transfers energy and momentum.
Essential Knowledge 6.A.1 Waves can propagate via different oscillation modes such as transverse and longitudinal.
Essential Knowledge 6.A.2 For propagation, mechanical waves require a medium, while electromagnetic waves do not require a physical medium. Examples should include light traveling through a vacuum and sound not traveling through a vacuum.
Essential Knowledge 6.A.3 The amplitude is the maximum displacement of a wave from its equilibrium value.
Essential Knowledge 6.A.4 Classically, the energy carried by a wave depends on and increases with amplitude. Examples should include sound waves.
Enduring Understanding 6.B A periodic wave is one that repeats as a function of both time and position and can be described by its amplitude, frequency, wavelength, speed, and energy.
Essential Knowledge 6.B.1 The period is the repeat time of the wave. The frequency is the number of repetitions over a period of time.
Essential Knowledge 6.B.2 The wavelength is the repeat distance of the wave.
Essential Knowledge 6.B.3 A simple wave can be described by an equation involving one sine or cosine function involving the wavelength, amplitude, and frequency of the wave.
Essential Knowledge 6.B.4 The wavelength is the ratio of speed over frequency.
Enduring Understanding 6.C Only waves exhibit interference and diffraction.
Essential Knowledge 6.C.1 When two waves cross, they travel through each other; they do not bounce off each other. Where the waves overlap, the resulting displacement can be determined by adding the displacements of the two waves. This is called superposition.
Enduring Understanding 6.D Interference and superposition lead to standing waves and beats.
Essential Knowledge 6.D.1 Two or more wave pulses can interact in such a way as to produce amplitude variations in the resultant wave. When two pulses cross, they travel through each other; they do not bounce off each other. Where the pulses overlap, the resulting displacement can be determined by adding the displacements of the two pulses. This is called superposition.
Essential Knowledge 6.D.2 Two or more traveling waves can interact in such a way as to produce amplitude variations in the resultant wave.
Essential Knowledge 6.D.3 Standing waves are the result of the addition of incident and reflected waves that are confined to a region and have nodes and antinodes. Examples should include waves on a fixed length of string, and sound waves in both closed and open tubes.
Essential Knowledge 6.D.4 The possible wavelengths of a standing wave are determined by the size of the region to which it is confined.
Essential Knowledge 6.D.5 Beats arise from the addition of waves of slightly different frequency. |
# Oscillatory Motion and Waves
## Hooke’s Law: Stress and Strain Revisited
### Learning Objectives
By the end of this section, you will be able to:
1. Explain Newton’s third law of motion with respect to stress and deformation.
2. Describe the restoration of force and displacement.
3. Calculate the energy in Hooke’s Law of deformation, and the stored energy in a spring.
Newton’s first law implies that an object oscillating back and forth is experiencing forces. Without force, the object would move in a straight line at a constant speed rather than oscillate. Consider, for example, plucking a plastic ruler to the left as shown in . The deformation of the ruler creates a force in the opposite direction, known as a restoring force. Once released, the restoring force causes the ruler to move back toward its stable equilibrium position, where the net force on it is zero. However, by the time the ruler gets there, it gains momentum and continues to move to the right, producing the opposite deformation. It is then forced to the left, back through equilibrium, and the process is repeated until dissipative forces dampen the motion. These forces remove mechanical energy from the system, gradually reducing the motion until the ruler comes to rest.
The simplest oscillations occur when the restoring force is directly proportional to displacement. When stress and strain were covered in Newton’s Third Law of Motion, the name given to this relationship between force and displacement was Hooke’s law: F = −kx.
Here, F is the restoring force, x is the displacement from equilibrium or deformation, and k is a constant related to the difficulty in deforming the system. The minus sign indicates that the restoring force is in the direction opposite to the displacement.
The force constant k is related to the rigidity (or stiffness) of a system—the larger the force constant, the greater the restoring force, and the stiffer the system. The units of k are newtons per meter (N/m). For example, k is directly related to Young’s modulus when we stretch a string. shows a graph of the absolute value of the restoring force versus the displacement for a system that can be described by Hooke’s law—a simple spring in this case. The slope of the graph equals the force constant k in newtons per meter. A common physics laboratory exercise is to measure the restoring forces created by springs, determine if they follow Hooke’s law, and calculate their force constants if they do.
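The slope-of-the-graph idea translates directly into a calculation. The following Python sketch (added here for illustration; the data values are hypothetical) estimates a force constant by fitting F = kx through the origin.

```python
# Hypothetical lab data: displacement (m) and measured restoring-force magnitude (N).
displacements = [0.010, 0.020, 0.030, 0.040, 0.050]
forces        = [0.52,  0.98,  1.51,  2.03,  2.49]

# For F = k x through the origin, least squares gives k = sum(F*x) / sum(x^2),
# i.e., the slope of the force-versus-displacement graph.
k = sum(F * x for F, x in zip(forces, displacements)) / sum(x * x for x in displacements)
print(f"force constant k = {k:.1f} N/m")   # roughly 50 N/m for this made-up data
```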
### Energy in Hooke’s Law of Deformation
In order to produce a deformation, work must be done. That is, a force must be exerted through a distance, whether you pluck a guitar string or compress a car spring. If the only result is deformation, and no work goes into thermal, sound, or kinetic energy, then all the work is initially stored in the deformed object as some form of potential energy. The potential energy stored in a spring is PE_s = (1/2)kx². Here, we generalize the idea to elastic potential energy for a deformation of any system that can be described by Hooke’s law. Hence, PE_el = (1/2)kx²,
where PE_el is the elastic potential energy stored in any deformed system that obeys Hooke’s law and has a displacement x from equilibrium and a force constant k.
It is possible to find the work done in deforming a system in order to find the energy stored. This work is performed by an applied force F_app. The applied force is exactly opposite to the restoring force (action-reaction), and so F_app = kx. shows a graph of the applied force versus deformation x for a system that can be described by Hooke’s law. Work done on the system is force multiplied by distance, which equals the area under the curve, or W = (1/2)kx² (Method A in the figure). Another way to determine the work is to note that the force increases linearly from 0 to kx, so that the average force is (1/2)kx, the distance moved is x, and thus W = (1/2)kx·x = (1/2)kx² (Method B in the figure).
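A quick numerical check of the two methods (a sketch added here, not part of the original text; the spring values are hypothetical):

```python
k = 50.0    # force constant, N/m (hypothetical spring)
x = 0.050   # deformation, m

pe_elastic  = 0.5 * k * x**2       # Method A: area under the force-deformation curve
work_method = (k * x / 2.0) * x    # Method B: average force (kx/2) times distance x

print(pe_elastic, work_method)     # both 0.0625 J - the two methods agree
```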
### Test Prep for AP Courses
### Section Summary
1. An oscillation is a back and forth motion of an object between two points of deformation.
2. An oscillation may create a wave, which is a disturbance that propagates from where it was created.
3. The simplest oscillations and waves are related to systems that can be described by Hooke’s law, F = −kx,
where F is the restoring force, x is the displacement from equilibrium or deformation, and k is the force constant of the system.
4. Elastic potential energy PE_el stored in the deformation of a system that can be described by Hooke’s law is given by PE_el = (1/2)kx².
### Conceptual Questions
### Problems & Exercises
|
# Oscillatory Motion and Waves
## Period and Frequency in Oscillations
### Learning Objectives
By the end of this section, you will be able to:
1. Observe the vibrations of a guitar string.
2. Determine the frequency of oscillations.
When you pluck a guitar string, the resulting sound has a steady tone and lasts a long time. Each successive vibration of the string takes the same time as the previous one. We define periodic motion to be a motion that repeats itself at regular time intervals, such as exhibited by the guitar string or by an object on a spring moving up and down. The time to complete one oscillation remains constant and is called the period T. Its units are usually seconds, but may be any convenient unit of time. The word period refers to the time for some event whether repetitive or not; but we shall be primarily interested in periodic motion, which is by definition repetitive. A concept closely related to period is the frequency of an event. For example, if you get a paycheck twice a month, the frequency of payment is two per month and the period between checks is half a month. Frequency f is defined to be the number of events per unit time. For periodic motion, frequency is the number of oscillations per unit time. The relationship between frequency and period is f = 1/T.
The SI unit for frequency is the cycle per second, which is defined to be a hertz (Hz): 1 Hz = 1 cycle/s = 1/s.
A cycle is one complete oscillation. Note that a vibration can be a single or multiple event, whereas oscillations are usually repetitive for a significant number of cycles.
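The reciprocal relationship between period and frequency is easy to check numerically. The sketch below (added for illustration; not part of the original text) applies f = 1/T to the two examples in this section.

```python
def frequency(period: float) -> float:
    """Number of events per unit time, f = 1/T."""
    return 1.0 / period

# A middle-C guitar string completes one vibration in about 3.8 ms:
print(frequency(3.8e-3))   # about 263 Hz

# Getting paid twice a month: a period of half a month gives a frequency of 2 per month.
print(frequency(0.5))      # 2.0 events per month
```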
### Test Prep for AP Courses
### Section Summary
1. Periodic motion is a repetitious oscillation.
2. The time for one oscillation is the period T.
3. The number of oscillations per unit time is the frequency f.
4. These quantities are related by f = 1/T.
### Problems & Exercises
|
# Oscillatory Motion and Waves
## Simple Harmonic Motion: A Special Periodic Motion
### Learning Objectives
By the end of this section, you will be able to:
1. Describe a simple harmonic oscillator.
2. Explain the link between simple harmonic motion and waves.
The oscillations of a system in which the net force can be described by Hooke’s law are of special importance, because they are very common. They are also the simplest oscillatory systems. Simple Harmonic Motion (SHM) is the name given to oscillatory motion for a system where the net force can be described by Hooke’s law, and such a system is called a simple harmonic oscillator. If the net force can be described by Hooke’s law and there is no damping (by friction or other non-conservative forces), then a simple harmonic oscillator will oscillate with equal displacement on either side of the equilibrium position, as shown for an object on a spring in . The maximum displacement from equilibrium is called the amplitude . The units for amplitude and displacement are the same, but depend on the type of oscillation. For the object on the spring, the units of amplitude and displacement are meters; whereas for sound oscillations, they have units of pressure (and other types of oscillations have yet other units). Because amplitude is the maximum displacement, it is related to the energy in the oscillation.
What is so significant about simple harmonic motion? One special thing is that the period and frequency of a simple harmonic oscillator are independent of amplitude. The string of a guitar, for example, will oscillate with the same frequency whether plucked gently or hard. Because the period is constant, a simple harmonic oscillator can be used as a clock.
Two important factors do affect the period of a simple harmonic oscillator. The period is related to how stiff the system is. A very stiff object has a large force constant , which causes the system to have a smaller period. For example, you can adjust a diving board’s stiffness—the stiffer it is, the faster it vibrates, and the shorter its period. Period also depends on the mass of the oscillating system. The more massive the system is, the longer the period. For example, a heavy person on a diving board bounces up and down more slowly than a light one.
In fact, the mass and the force constant are the only factors that affect the period and frequency of simple harmonic motion.
### The Link between Simple Harmonic Motion and Waves
If a time-exposure photograph of the bouncing car were taken as it drove by, the headlight would make a wavelike streak, as shown in . Similarly, shows an object bouncing on a spring as it leaves a wavelike "trace" of its position on a moving strip of paper. Both waves are sine functions. All simple harmonic motion is intimately related to sine and cosine waves.
The displacement x as a function of time t in any simple harmonic motion—that is, one in which the net restoring force can be described by Hooke’s law—is given by x(t) = X cos(2πt/T),
where X is the amplitude. At t = 0, the initial position is x₀ = X, and the displacement oscillates back and forth with a period T. (When t = T, we get x = X again because cos 2π = 1.) Furthermore, from this expression for x, the velocity v as a function of time is given by v(t) = −v_max sin(2πt/T),
where v_max = 2πX/T = X√(k/m). The object has zero velocity at maximum displacement—for example, v = 0 when t = 0, and at that time x = X. The minus sign in the equation for v(t) gives the correct direction for the velocity. Just after the start of the motion, for instance, the velocity is negative because the system is moving back toward the equilibrium point. Finally, we can get an expression for acceleration using Newton’s second law. [Then we have x(t), v(t), t, and a(t), the quantities needed for kinematics and a description of simple harmonic motion.] According to Newton’s second law, the acceleration is a = F/m = −kx/m. So a(t) is also a cosine function: a(t) = −(kX/m) cos(2πt/T).
Hence, a(t) is directly proportional to and in the opposite direction to x(t).
shows the simple harmonic motion of an object on a spring and presents graphs of and versus time.
The most important point here is that these equations are mathematically straightforward and are valid for all simple harmonic motion. They are very useful in visualizing waves associated with simple harmonic motion, including visualizing how waves add with one another.
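The sketch below (added here for illustration; the mass, force constant, and amplitude are hypothetical values of ours) evaluates the displacement, velocity, and acceleration of a mass-spring oscillator at a few instants to show how the cosine and sine expressions work together.

```python
from math import cos, sin, pi, sqrt

# Hypothetical mass-spring oscillator (values are ours, for illustration only).
m = 0.50     # kg
k = 50.0     # N/m
X = 0.020    # amplitude, m

T = 2 * pi * sqrt(m / k)        # period of a simple harmonic oscillator
v_max = 2 * pi * X / T          # equals X * sqrt(k / m)

def x(t):  return X * cos(2 * pi * t / T)          # displacement
def v(t):  return -v_max * sin(2 * pi * t / T)     # velocity
def a(t):  return -(k / m) * x(t)                  # acceleration, a = -kx/m

for t in (0.0, T / 4, T / 2):
    print(f"t={t:.3f} s  x={x(t):+.4f} m  v={v(t):+.4f} m/s  a={a(t):+.4f} m/s^2")
```

At t = 0 the object sits at x = X with zero velocity and maximum (negative) acceleration; a quarter period later it passes equilibrium at its maximum speed, matching the description above.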
### Test Prep for AP Courses
### Section Summary
1. Simple harmonic motion is oscillatory motion for a system that can be described only by Hooke’s law. Such a system is also called a simple harmonic oscillator.
2. Maximum displacement is the amplitude X. The period T and frequency f of a simple harmonic oscillator are given by T = 2π√(m/k) and f = (1/(2π))√(k/m), where m is the mass of the system.
3. Displacement in simple harmonic motion as a function of time is given by x(t) = X cos(2πt/T).
4. The velocity is given by v(t) = −v_max sin(2πt/T), where v_max = √(k/m) X.
5. The acceleration is found to be a(t) = −(kX/m) cos(2πt/T).
### Conceptual Questions
### Problems & Exercises
|
# Oscillatory Motion and Waves
## The Simple Pendulum
### Learning Objectives
By the end of this section, you will be able to:
1. Measure acceleration due to gravity.
Pendulums are in common usage. Some have crucial uses, such as in clocks; some are for fun, such as a child’s swing; and some are just there, such as the sinker on a fishing line. For small displacements, a pendulum is a simple harmonic oscillator. A simple pendulum is defined to have an object that has a small mass, also known as the pendulum bob, which is suspended from a light wire or string, such as shown in . Exploring the simple pendulum a bit further, we can discover the conditions under which it performs simple harmonic motion, and we can derive an interesting expression for its period.
We begin by defining the displacement to be the arc length s. We see from that the net force on the bob is tangent to the arc and equals −mg sin θ. (The weight mg has components mg cos θ along the string and mg sin θ tangent to the arc.) Tension in the string exactly cancels the component mg cos θ parallel to the string. This leaves a net restoring force back toward the equilibrium position at θ = 0.
Now, if we can show that the restoring force is directly proportional to the displacement, then we have a simple harmonic oscillator. In trying to determine if we have a simple harmonic oscillator, we should note that for small angles (less than about 15º), sin θ ≈ θ (sin θ and θ differ by about 1% or less at smaller angles). Thus, for angles less than about 15º, the restoring force F is
F ≈ −mgθ.
The displacement s is directly proportional to θ. When θ is expressed in radians, the arc length in a circle is related to its radius (L in this instance) by s = Lθ,
so that θ = s/L.
For small angles, then, the expression for the restoring force is F ≈ −(mg/L)s.
This expression is of the form F = −kx,
where the force constant is given by k = mg/L and the displacement is given by x = s. For angles less than about 15º, the restoring force is directly proportional to the displacement, and the simple pendulum is a simple harmonic oscillator.
Using this equation, we can find the period of a pendulum for amplitudes less than about 15º. For the simple pendulum, T = 2π√(m/k) = 2π√(m/(mg/L)).
Thus, T = 2π√(L/g)
for the period of a simple pendulum. This result is interesting because of its simplicity. The only things that affect the period of a simple pendulum are its length and the acceleration due to gravity. The period is completely independent of other factors, such as mass. As with simple harmonic oscillators, the period T for a pendulum is nearly independent of amplitude, especially if θ is less than about 15º. Even simple pendulum clocks can be finely adjusted and accurate.
Note the dependence of T on g. If the length of a pendulum is precisely known, it can actually be used to measure the acceleration due to gravity. Consider the following example.
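A minimal sketch of that measurement (added here for illustration; the timing data are hypothetical) solves T = 2π√(L/g) for g.

```python
from math import pi

# Hypothetical measurement: a 0.75000 m pendulum is timed for 50 complete swings.
L = 0.75000          # length, m
t_total = 86.90      # total time for 50 oscillations, s
T = t_total / 50     # period of one oscillation

# From T = 2*pi*sqrt(L/g), solving for g:
g = 4 * pi**2 * L / T**2
print(f"g = {g:.2f} m/s^2")   # about 9.80 m/s^2 for this made-up data
```

Timing many oscillations and dividing, as done here, reduces the effect of reaction-time error on the measured period.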
### Test Prep for AP Courses
### Section Summary
1. A mass m suspended by a wire of length L is a simple pendulum and undergoes simple harmonic motion for amplitudes less than about 15º.
The period of a simple pendulum is T = 2π√(L/g),
where L is the length of the string and g is the acceleration due to gravity.
### Conceptual Questions
### Problems & Exercises
As usual, the acceleration due to gravity in these problems is taken to be g = 9.80 m/s², unless otherwise specified.
|
# Oscillatory Motion and Waves
## Energy and the Simple Harmonic Oscillator
### Learning Objectives
By the end of this section, you will be able to:
1. Determine the maximum speed of an oscillating system.
To study the energy of a simple harmonic oscillator, we first consider all the forms of energy it can have. We know from Hooke’s Law: Stress and Strain Revisited that the energy stored in the deformation of a simple harmonic oscillator is a form of potential energy given by PE_el = (1/2)kx².
Because a simple harmonic oscillator has no dissipative forces, the other important form of energy is kinetic energy KE. Conservation of energy for these two forms is KE + PE_el = constant,
or (1/2)mv² + (1/2)kx² = constant.
This statement of conservation of energy is valid for all simple harmonic oscillators, including ones where the gravitational force plays a role.
Namely, for a simple pendulum we replace the velocity with v = Lω, the spring constant with k = mg/L, and the displacement term with x = Lθ. Thus (1/2)mL²ω² + (1/2)mgLθ² = constant.
In the case of undamped simple harmonic motion, the energy oscillates back and forth between kinetic and potential, going completely from one to the other as the system oscillates. So for the simple example of an object on a frictionless surface attached to a spring, as shown again in , the motion starts with all of the energy stored in the spring. As the object starts to move, the elastic potential energy is converted to kinetic energy, becoming entirely kinetic energy at the equilibrium position. It is then converted back into elastic potential energy by the spring, the velocity becomes zero when the kinetic energy is completely converted, and so on. This concept provides extra insight here and in later applications of simple harmonic motion, such as alternating current circuits.
The conservation of energy principle can be used to derive an expression for velocity v. If we start our simple harmonic motion with zero velocity and maximum displacement (x = X), then the total energy is (1/2)kX².
This total energy is constant and is shifted back and forth between kinetic energy and potential energy, at most times being shared by each. The conservation of energy for this system in equation form is thus (1/2)mv² + (1/2)kx² = (1/2)kX².
Solving this equation for v yields v = ±√[(k/m)(X² − x²)].
Manipulating this expression algebraically gives v = ±√(k/m) X √(1 − x²/X²),
and so v = ±v_max √(1 − x²/X²),
where v_max = √(k/m) X.
From this expression, we see that the velocity is a maximum (v_max) at x = 0, as stated earlier in . Notice that the maximum velocity depends on three factors. Maximum velocity is directly proportional to amplitude. As you might guess, the greater the maximum displacement, the greater the maximum velocity. Maximum velocity is also greater for stiffer systems, because they exert greater force for the same displacement. This observation is seen in the expression for v_max; it is proportional to the square root of the force constant k. Finally, the maximum velocity is smaller for objects that have larger masses, because the maximum velocity is inversely proportional to the square root of m. For a given force, objects that have larger masses accelerate more slowly.
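A short numerical sketch of the energy-conservation result (added here for illustration; the oscillator values are hypothetical) evaluates the speed at several displacements.

```python
from math import sqrt

# Hypothetical mass-spring oscillator (values are ours).
m, k, X = 0.50, 50.0, 0.020      # kg, N/m, m

v_max = sqrt(k / m) * X          # speed as the object passes x = 0

def speed(x):
    """Speed at displacement x from (1/2)mv^2 + (1/2)kx^2 = (1/2)kX^2."""
    return v_max * sqrt(1 - (x / X) ** 2)

print(speed(0.0))      # 0.2 m/s, the maximum speed
print(speed(0.010))    # about 0.173 m/s at half the amplitude
print(speed(X))        # 0.0 at maximum displacement
```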
A similar calculation for the simple pendulum produces a similar result, namely ω_max = √(g/L) θ_max.
### Test Prep for AP Courses
### Section Summary
1. Energy in the simple harmonic oscillator is shared between elastic potential energy and kinetic energy, with the total being constant: (1/2)mv² + (1/2)kx² = constant.
2. Maximum velocity depends on three factors: it is directly proportional to amplitude, it is greater for stiffer systems, and it is smaller for objects that have larger masses: v_max = √(k/m) X.
### Conceptual Questions
### Problems & Exercises
|
# Oscillatory Motion and Waves
## Uniform Circular Motion and Simple Harmonic Motion
### Learning Objectives
By the end of this section, you will be able to:
1. Compare simple harmonic motion with uniform circular motion.
There is an easy way to produce simple harmonic motion by using uniform circular motion. shows one way of using this method. A ball is attached to a uniformly rotating vertical turntable, and its shadow is projected on the floor as shown. The shadow undergoes simple harmonic motion. Hooke’s law usually describes uniform circular motions (ω constant) rather than systems that have large visible displacements. So observing the projection of uniform circular motion, as in , is often easier than observing a precise large-scale simple harmonic oscillator. If studied in sufficient depth, simple harmonic motion produced in this manner can give considerable insight into many aspects of oscillations and waves and is very useful mathematically. In our brief treatment, we shall indicate some of the major features of this relationship and how they might be useful.
shows the basic relationship between uniform circular motion and simple harmonic motion. The point P travels around the circle at constant angular velocity ω. The point P is analogous to an object on the merry-go-round. The projection of the position of P onto a fixed axis undergoes simple harmonic motion and is analogous to the shadow of the object. At the time shown in the figure, the projection has position x and moves to the left with velocity v. The velocity of the point P around the circle equals v_max. The projection of v_max on the x-axis is the velocity v of the simple harmonic motion along the x-axis.
To see that the projection undergoes simple harmonic motion, note that its position x is given by x = X cos θ,
where θ = ωt, ω is the constant angular velocity, and X is the radius of the circular path. Thus, x = X cos ωt.
The angular velocity ω is in radians per unit time; in this case 2π radians is swept out in the time for one revolution T. That is, ω = 2π/T. Substituting this expression for ω, we see that the position x is given by x(t) = X cos(2πt/T).
This expression is the same one we had for the position of a simple harmonic oscillator in Simple Harmonic Motion: A Special Periodic Motion. If we make a graph of position versus time as in , we see again the wavelike character (typical of simple harmonic motion) of the projection of uniform circular motion onto the x-axis.
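A brief sketch of the projection (added here for illustration; the radius and period are hypothetical) shows the shadow tracing out the same cosine found for the simple harmonic oscillator.

```python
from math import cos, pi

# The shadow (projection onto the x-axis) of a point moving on a circle of radius X
# at constant angular velocity omega follows x = X cos(omega * t) = X cos(2*pi*t/T).
X = 0.10                 # radius of the circular path, m (hypothetical)
T = 2.0                  # time for one revolution, s
omega = 2 * pi / T       # angular velocity, rad/s

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    x_projection = X * cos(omega * t)
    print(f"t = {t:.2f} s   x = {x_projection:+.4f} m")
```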
Now let us use to do some further analysis of uniform circular motion as it relates to simple harmonic motion. The triangle formed by the velocities in the figure and the triangle formed by the displacements (X, x, and √(X² − x²)) are similar right triangles. Taking ratios of similar sides, we see that v/v_max = √(X² − x²)/X = √(1 − x²/X²).
We can solve this equation for the speed v, or v = v_max √(1 − x²/X²).
This expression for the speed of a simple harmonic oscillator is exactly the same as the equation obtained from conservation of energy considerations in Energy and the Simple Harmonic Oscillator. You can begin to see that it is possible to get all of the characteristics of simple harmonic motion from an analysis of the projection of uniform circular motion.
Finally, let us consider the period T of the motion of the projection. This period is the time it takes the point P to complete one revolution. That time is the circumference of the circle, 2πX, divided by the velocity around the circle, v_max. Thus, the period is T = 2πX/v_max.
We know from conservation of energy considerations that v_max = √(k/m) X.
Solving this equation for X/v_max gives X/v_max = √(m/k).
Substituting this expression into the equation for T yields T = 2π√(m/k).
Thus, the period of the motion is the same as for a simple harmonic oscillator. We have determined the period for any simple harmonic oscillator using the relationship between uniform circular motion and simple harmonic motion.
Some modules occasionally refer to the connection between uniform circular motion and simple harmonic motion. Moreover, if you carry your study of physics and its applications to greater depths, you will find this relationship useful. It can, for example, help to analyze how waves add when they are superimposed.
### Test Prep for AP Courses
### Section Summary
A projection of uniform circular motion undergoes simple harmonic oscillation.
### Problems & Exercises
|
# Oscillatory Motion and Waves
## Damped Harmonic Motion
### Learning Objectives
By the end of this section, you will be able to:
1. Compare and discuss underdamped and overdamped oscillating systems.
2. Explain critically damped system.
A guitar string stops oscillating a few seconds after being plucked. To keep a child happy on a swing, you must keep pushing. Although we can often make friction and other non-conservative forces negligibly small, completely undamped motion is rare. In fact, we may even want to damp oscillations, such as with car shock absorbers.
For a system that has a small amount of damping, the period and frequency are nearly the same as for simple harmonic motion, but the amplitude gradually decreases as shown in . This occurs because the non-conservative damping force removes energy from the system, usually in the form of thermal energy. In general, energy removal by non-conservative forces is described as W_nc = Δ(KE + PE),
where W_nc is the work done by a non-conservative force (here the damping force). For a damped harmonic oscillator, W_nc is negative because it removes mechanical energy (KE + PE) from the system.
If you gradually increase the amount of damping in a system, the period and frequency begin to be affected, because damping opposes and hence slows the back and forth motion. (The net force is smaller in both directions.) If there is very large damping, the system does not even oscillate—it slowly moves toward equilibrium. shows the displacement of a harmonic oscillator for different amounts of damping. When we want to damp out oscillations, such as in the suspension of a car, we may want the system to return to equilibrium as quickly as possible. Critical damping is defined as the condition in which the damping of an oscillator results in it returning as quickly as possible to its equilibrium position. The critically damped system may overshoot the equilibrium position, but if it does, it will do so only once. Critical damping is represented by Curve A in . With less-than-critical damping, the system will return to equilibrium faster but will overshoot and cross over one or more times. Such a system is underdamped; its displacement is represented by the curve in . Curve B in represents an overdamped system. As with critical damping, it too may overshoot the equilibrium position, but will reach equilibrium over a longer period of time.
Critical damping is often desired, because such a system returns to equilibrium rapidly and remains at equilibrium as well. In addition, a constant force applied to a critically damped system moves the system to a new equilibrium position in the shortest time possible without overshooting or oscillating about the new position. For example, when you stand on bathroom scales that have a needle gauge, the needle moves to its equilibrium position without oscillating. It would be quite inconvenient if the needle oscillated about the new equilibrium position for a long time before settling. Damping forces can vary greatly in character. Friction, for example, is sometimes independent of velocity (as assumed in most places in this text). But many damping forces depend on velocity—sometimes in complex ways, sometimes simply being proportional to velocity.
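The three damping regimes can be compared numerically. The sketch below (added here for illustration; it assumes a linear, velocity-proportional damping force, and all values are hypothetical) integrates the motion and counts how many times each oscillator crosses equilibrium: the underdamped case crosses repeatedly, while the critically damped and overdamped cases do not (or at most once).

```python
from math import sqrt

def simulate(b, m=1.0, k=1.0, x0=1.0, dt=0.001, t_end=20.0):
    """Integrate m*x'' = -k*x - b*v (linear, velocity-proportional damping)."""
    x, v, t = x0, 0.0, 0.0
    crossings = 0
    while t < t_end:
        a = (-k * x - b * v) / m
        v += a * dt              # semi-implicit Euler keeps the oscillation stable
        x_new = x + v * dt
        if x * x_new < 0:
            crossings += 1       # count passes through the equilibrium position
        x, t = x_new, t + dt
    return crossings

b_critical = 2 * sqrt(1.0 * 1.0)   # critical damping for m = k = 1
for label, b in [("underdamped", 0.2), ("critically damped", b_critical), ("overdamped", 5.0)]:
    print(f"{label:18s} b = {b:4.1f}  equilibrium crossings: {simulate(b)}")
```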
### Test Prep for AP Courses
### Section Summary
1. Damped harmonic oscillators have non-conservative forces that dissipate their energy.
2. Critical damping returns the system to equilibrium as fast as possible without overshooting.
3. An underdamped system will oscillate through the equilibrium position.
4. An overdamped system moves more slowly toward equilibrium than one that is critically damped.
### Conceptual Questions
### Problems & Exercises
|
# Oscillatory Motion and Waves
## Forced Oscillations and Resonance
### Learning Objectives
By the end of this section, you will be able to:
1. Observe resonance of a paddle ball on a string.
2. Observe amplitude of a damped harmonic oscillator.
Sit in front of a piano sometime and sing a loud brief note at it with the dampers off its strings. It will sing the same note back at you—the strings, having the same frequencies as your voice, are resonating in response to the forces from the sound waves that you sent to them. Your voice and a piano’s strings are a good example of the fact that objects—in this case, piano strings—can be forced to oscillate but oscillate best at their natural frequency. In this section, we shall briefly explore applying a periodic driving force acting on a simple harmonic oscillator. The driving force puts energy into the system at a certain frequency, not necessarily the same as the natural frequency of the system. The natural frequency is the frequency at which a system would oscillate if there were no driving and no damping force.
Most of us have played with toys involving an object supported on an elastic band, something like the paddle ball suspended from a finger in . Imagine the finger in the figure is your finger. At first you hold your finger steady, and the ball bounces up and down with a small amount of damping. If you move your finger up and down slowly, the ball will follow along without bouncing much on its own. As you increase the frequency at which you move your finger up and down, the ball will respond by oscillating with increasing amplitude. When you drive the ball at its natural frequency, the ball’s oscillations increase in amplitude with each oscillation for as long as you drive it. The phenomenon of driving a system with a frequency equal to its natural frequency is called resonance. A system being driven at its natural frequency is said to resonate. As the driving frequency gets progressively higher than the resonant or natural frequency, the amplitude of the oscillations becomes smaller, until the oscillations nearly disappear and your finger simply moves up and down with little effect on the ball.
shows a graph of the amplitude of a damped harmonic oscillator as a function of the frequency of the periodic force driving it. There are three curves on the graph, each representing a different amount of damping. All three curves peak at the point where the frequency of the driving force equals the natural frequency of the harmonic oscillator. The highest peak, or greatest response, is for the least amount of damping, because less energy is removed by the damping force.
It is interesting that the widths of the resonance curves shown in depend on damping: the less the damping, the narrower the resonance. The message is that if you want a driven oscillator to resonate at a very specific frequency, you need as little damping as possible. Little damping is the case for piano strings and many other musical instruments. Conversely, if you want small-amplitude oscillations, such as in a car’s suspension system, then you want heavy damping. Heavy damping reduces the amplitude, but the tradeoff is that the system responds at more frequencies.
These features of driven harmonic oscillators apply to a huge variety of systems. When you tune a radio, for example, you are adjusting its resonant frequency so that it only oscillates to the desired station’s broadcast (driving) frequency. The more selective the radio is in discriminating between stations, the smaller its damping. Magnetic resonance imaging (MRI) is a widely used medical diagnostic tool in which atomic nuclei (mostly hydrogen nuclei) are made to resonate by incoming radio waves (on the order of 100 MHz). A child on a swing is driven by a parent at the swing’s natural frequency to achieve maximum amplitude. In all of these cases, the efficiency of energy transfer from the driving force into the oscillator is best at resonance. Speed bumps and gravel roads prove that even a car’s suspension system is not immune to resonance. In spite of finely engineered shock absorbers, which ordinarily convert mechanical energy to thermal energy almost as fast as it comes in, speed bumps still cause a large-amplitude oscillation. On gravel roads that are corrugated, you may have noticed that if you travel at the “wrong” speed, the bumps are very noticeable whereas at other speeds you may hardly feel the bumps at all. shows a photograph of a famous example (the Tacoma Narrows Bridge) of the destructive effects of a driven harmonic oscillation. The Millennium Bridge in London was closed for a short period of time for the same reason while inspections were carried out.
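The shape of the resonance curves can be sketched numerically. The example below (added here for illustration) uses the standard steady-state amplitude of a sinusoidally driven, linearly damped oscillator, a result not derived in this section; the parameter values are hypothetical. Less damping gives a taller, narrower peak near the natural frequency, as described above.

```python
from math import sqrt, pi

def amplitude(f_drive, m=1.0, k=1.0, b=0.1, F0=1.0):
    """Steady-state amplitude of a sinusoidally driven, linearly damped oscillator."""
    w = 2 * pi * f_drive
    return F0 / sqrt((k - m * w**2) ** 2 + (b * w) ** 2)

f_natural = sqrt(1.0 / 1.0) / (2 * pi)        # natural frequency for m = k = 1

for b in (0.05, 0.2, 0.8):                    # light, moderate, heavy damping
    peak = max(amplitude(f, b=b) for f in [f_natural * s / 100 for s in range(10, 300)])
    print(f"damping b = {b:4.2f}   peak response = {peak:.1f}")
```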
In our bodies, the chest cavity is a clear example of a system at resonance. The diaphragm and chest wall drive the oscillations of the chest cavity which result in the lungs inflating and deflating. The system is critically damped and the muscular diaphragm oscillates at the resonant value for the system, making it highly efficient.
### Test Prep for AP Courses
### Section Summary
1. A system’s natural frequency is the frequency at which the system will oscillate if not affected by driving or damping forces.
2. A periodic force driving a harmonic oscillator at its natural frequency produces resonance. The system is said to resonate.
3. The less damping a system has, the higher the amplitude of the forced oscillations near resonance. The more damping a system has, the broader response it has to varying driving frequencies.
### Conceptual Questions
### Problems & Exercises
|
# Oscillatory Motion and Waves
## Waves
### Learning Objectives
By the end of this section, you will be able to:
1. State the characteristics of a wave.
2. Calculate the velocity of wave propagation.
What do we mean when we say something is a wave? The most intuitive and easiest wave to imagine is the familiar water wave. More precisely, a wave is a disturbance that propagates, or moves from the place it was created. For water waves, the disturbance is in the surface of the water, perhaps created by a rock thrown into a pond or by a swimmer splashing the surface repeatedly. For sound waves, the disturbance is a change in air pressure, perhaps created by the oscillating cone inside a speaker. For earthquakes, there are several types of disturbances, including disturbance of Earth’s surface and pressure disturbances under the surface. Even radio waves are most easily understood using an analogy with water waves. Visualizing water waves is useful because there is more to it than just a mental image. Water waves exhibit characteristics common to all waves, such as amplitude, period, frequency and energy. All wave characteristics can be described by a small set of underlying principles.
A wave is a disturbance that propagates, or moves from the place it was created. The simplest waves repeat themselves for several cycles and are associated with simple harmonic motion. Let us start by considering the simplified water wave in . The wave is an up and down disturbance of the water surface. It causes a sea gull to move up and down in simple harmonic motion as the wave crests and troughs (peaks and valleys) pass under the bird. The time for one complete up and down motion is the wave’s period . The wave’s frequency is , as usual. The wave itself moves to the right in the figure. This movement of the wave is actually the disturbance moving to the right, not the water itself (or the bird would move to the right). We define wave velocity to be the speed at which the disturbance moves. Wave velocity is sometimes also called the propagation velocity or propagation speed, because the disturbance propagates from one location to another.
The water wave in the figure also has a length associated with it, called its wavelength λ, the distance between adjacent identical parts of a wave. (λ is the distance parallel to the direction of propagation.) The speed of propagation v_w is the distance the wave travels in a given time, which is one wavelength in the time of one period. In equation form, that is v_w = λ/T,
or v_w = fλ.
This fundamental relationship holds for all types of waves. For water waves, v_w is the speed of a surface wave; for sound, v_w is the speed of sound; and for visible light, v_w is the speed of light, for example.
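A short sketch applying v_w = fλ (added here for illustration; the water-wave numbers are hypothetical):

```python
def wave_speed(frequency_hz: float, wavelength_m: float) -> float:
    """Propagation speed v_w = f * lambda: one wavelength travels by in one period."""
    return frequency_hz * wavelength_m

# An ocean wave with a 10 m wavelength passing the sea gull every 5 s (f = 0.2 Hz):
print(wave_speed(0.2, 10.0))        # 2.0 m/s

# A 261.6 Hz sound wave (middle C) with a wavelength of about 1.31 m in room-temperature air:
print(wave_speed(261.6, 1.31))      # about 343 m/s, the speed of sound
```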
### Transverse and Longitudinal Waves
A simple wave consists of a periodic disturbance that propagates from one place to another. The wave in propagates in the horizontal direction while the surface is disturbed in the vertical direction. Such a wave is called a transverse wave or shear wave; in such a wave, the disturbance is perpendicular to the direction of propagation. In contrast, in a longitudinal wave or compressional wave, the disturbance is parallel to the direction of propagation. shows an example of a longitudinal wave. The size of the disturbance is its amplitude X and is completely independent of the speed of propagation .
Waves may be transverse, longitudinal, or a combination of the two. (Water waves are actually a combination of transverse and longitudinal. The simplified water wave illustrated in
shows no longitudinal motion of the bird.) The waves on the strings of musical instruments are transverse—so are electromagnetic waves, such as visible light.
Sound waves in air and water are longitudinal. Their disturbances are periodic variations in pressure that are transmitted in fluids. Fluids do not have appreciable shear strength, and thus the sound waves in them must be longitudinal or compressional. Sound in solids can be both longitudinal and transverse.
Earthquake waves under Earth’s surface also have both longitudinal and transverse components (called compressional or P-waves and shear or S-waves, respectively). These components have important individual characteristics—they propagate at different speeds, for example. Earthquakes also have surface waves that are similar to surface waves on water.
### Test Prep for AP Courses
### Section Summary
1. A wave is a disturbance that moves from the point of creation with a wave velocity v_w.
2. A wave has a wavelength λ, which is the distance between adjacent identical parts of the wave.
3. Wave velocity and wavelength are related to the wave’s frequency and period by v_w = λ/T
or v_w = fλ.
4. A transverse wave has a disturbance perpendicular to its direction of propagation, whereas a longitudinal wave has a disturbance parallel to its direction of propagation.
### Conceptual Questions
### Problems & Exercises
|
# Oscillatory Motion and Waves
## Superposition and Interference
### Learning Objectives
By the end of this section, you will be able to:
1. Explain standing waves.
2. Describe the mathematical representation of overtones and beat frequency.
Most waves do not look very simple. They look more like the waves in than like the simple water wave considered in Waves. (Simple waves may be created by a simple harmonic oscillation, and thus have a sinusoidal shape). Complex waves are more interesting, even beautiful, but they look formidable. Most waves appear complex because they result from several simple waves adding together. Luckily, the rules for adding waves are quite simple.
When two or more waves arrive at the same point, they superimpose themselves on one another. More specifically, the disturbances of waves are superimposed when they come together—a phenomenon called superposition. Each disturbance corresponds to a force, and forces add. If the disturbances are along the same line, then the resulting wave is a simple addition of the disturbances of the individual waves—that is, their amplitudes add. and illustrate superposition in two special cases, both of which produce simple results.
shows two identical waves that arrive at the same point exactly in phase. The crests of the two waves are precisely aligned, as are the troughs. This superposition produces pure constructive interference. Because the disturbances add, pure constructive interference produces a wave that has twice the amplitude of the individual waves, but has the same wavelength.
shows two identical waves that arrive exactly out of phase—that is, precisely aligned crest to trough—producing pure destructive interference. Because the disturbances are in the opposite direction for this superposition, the resulting amplitude is zero for pure destructive interference—the waves completely cancel.
While pure constructive and pure destructive interference do occur, they require precisely aligned identical waves. The superposition of most waves produces a combination of constructive and destructive interference and can vary from place to place and time to time. Sound from a stereo, for example, can be loud in one spot and quiet in another. Varying loudness means the sound waves add partially constructively and partially destructively at different locations. A stereo has at least two speakers creating sound waves, and waves can reflect from walls. All these waves superimpose. An example of sounds that vary over time from constructive to destructive is found in the combined whine of airplane jets heard by a stationary passenger. The combined sound can fluctuate up and down in volume as the sound from the two engines varies in time from constructive to destructive. These examples are of waves that are similar.
An example of the superposition of two dissimilar waves is shown in . Here again, the disturbances add and subtract, producing a more complicated looking wave.
### Standing Waves
Sometimes waves do not seem to move; rather, they just vibrate in place. Unmoving waves can be seen on the surface of a glass of milk in a refrigerator, for example. Vibrations from the refrigerator motor create waves on the milk that oscillate up and down but do not seem to move across the surface. These waves are formed by the superposition of two or more moving waves, such as illustrated in for two identical waves moving in opposite directions. The waves move through each other with their disturbances adding as they go by. If the two waves have the same amplitude and wavelength, then they alternate between constructive and destructive interference. The resultant looks like a wave standing in place and, thus, is called a standing wave. Waves on the glass of milk are one example of standing waves. There are other standing waves, such as on guitar strings and in organ pipes. With the glass of milk, the two waves that produce standing waves may come from reflections from the side of the glass.
A closer look at earthquakes provides evidence for conditions appropriate for resonance, standing waves, and constructive and destructive interference. A building may be vibrated for several seconds with a driving frequency matching that of the natural frequency of vibration of the building—producing a resonance resulting in one building collapsing while neighboring buildings do not. Often buildings of a certain height are devastated while other taller buildings remain intact. The building height matches the condition for setting up a standing wave for that particular height. As the earthquake waves travel along the surface of Earth and reflect off denser rocks, constructive interference occurs at certain points. Often areas closer to the epicenter are not damaged while areas farther away are damaged.
Standing waves are also found on the strings of musical instruments and are due to reflections of waves from the ends of the string. and show three standing waves that can be created on a string that is fixed at both ends. Nodes are the points where the string does not move; more generally, nodes are where the wave disturbance is zero in a standing wave. The fixed ends of strings must be nodes, too, because the string cannot move there. The word antinode is used to denote the location of maximum amplitude in standing waves. Standing waves on strings have a frequency that is related to the propagation speed of the disturbance on the string. The wavelength is determined by the distance between the points where the string is fixed in place.
The lowest frequency, called the fundamental frequency, is thus for the longest wavelength, which is seen to be λ_1 = 2L. Therefore, the fundamental frequency is f_1 = v_w/λ_1 = v_w/(2L). In this case, the overtones or harmonics are multiples of the fundamental frequency. As seen in , the first harmonic can easily be calculated since λ_2 = L. Thus, f_2 = v_w/λ_2 = v_w/L = 2f_1. Similarly, f_3 = 3f_1, and so on. All of these frequencies can be changed by adjusting the tension in the string. The greater the tension, the greater is v_w and the higher the frequencies. This observation is familiar to anyone who has ever observed a string instrument being tuned. We will see in later chapters that standing waves are crucial to many resonance phenomena, such as in sounding boxes on string instruments.
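A minimal sketch of the harmonic series on a string fixed at both ends (added here for illustration; the string length and wave speed are hypothetical):

```python
def harmonic_frequencies(v_w: float, L: float, n_max: int = 4):
    """Standing-wave frequencies on a string fixed at both ends.

    The longest wavelength is 2L, so the fundamental is f1 = v_w / (2L)
    and the harmonics are integer multiples n * f1.
    """
    f1 = v_w / (2 * L)
    return [n * f1 for n in range(1, n_max + 1)]

# Hypothetical guitar string: 0.65 m long with a wave speed of 143 m/s on the string.
print(harmonic_frequencies(143.0, 0.65))    # [110.0, 220.0, 330.0, 440.0] Hz
```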
### Beats
Striking two adjacent keys on a piano produces a warbling combination usually considered to be unpleasant. The superposition of two waves of similar but not identical frequencies is the culprit. Another example is often noticeable in jet aircraft, particularly the two-engine variety, while taxiing. The combined sound of the engines goes up and down in loudness. This varying loudness happens because the sound waves have similar but not identical frequencies. The discordant warbling of the piano and the fluctuating loudness of the jet engine noise are both due to alternately constructive and destructive interference as the two waves go in and out of phase. illustrates this graphically.
The wave resulting from the superposition of two similar-frequency waves has a frequency that is the average of the two. This wave fluctuates in amplitude, or beats, with a frequency called the beat frequency. We can determine the beat frequency by adding two waves together mathematically. Note that a wave can be represented at one point in space as x = X cos(2πt/T) = X cos(2πft),
where f = 1/T is the frequency of the wave. Adding two waves that have different frequencies but identical amplitudes produces a resultant x = x_1 + x_2.
More specifically, x = X cos(2πf_1 t) + X cos(2πf_2 t).
Using a trigonometric identity, it can be shown that x = 2X cos(πf_B t) cos(2πf_ave t),
where f_B = |f_1 − f_2|
is the beat frequency, and f_ave is the average of f_1 and f_2. These results mean that the resultant wave has twice the amplitude and the average frequency of the two superimposed waves, but it also fluctuates in overall amplitude at the beat frequency f_B. The first cosine term in the expression effectively causes the amplitude to go up and down. The second cosine term is the wave with frequency f_ave. This result is valid for all types of waves. However, if it is a sound wave, provided the two frequencies are similar, then what we hear is an average frequency that gets louder and softer (or warbles) at the beat frequency.
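The identity can be verified numerically. The sketch below (added here for illustration; the two frequencies are hypothetical tuning-fork values) compares the direct sum of the two cosines with the beat-envelope form at a few instants.

```python
from math import cos, pi

f1, f2 = 256.0, 260.0            # two similar frequencies, Hz (hypothetical tuning forks)
f_beat = abs(f1 - f2)            # beat frequency, 4 Hz
f_ave = (f1 + f2) / 2            # perceived pitch, 258 Hz

def resultant(t, X=1.0):
    """Superposition x = X cos(2*pi*f1*t) + X cos(2*pi*f2*t) at one point in space."""
    return X * cos(2 * pi * f1 * t) + X * cos(2 * pi * f2 * t)

def envelope_form(t, X=1.0):
    """Equivalent form 2X cos(pi*f_beat*t) cos(2*pi*f_ave*t) from the trig identity."""
    return 2 * X * cos(pi * f_beat * t) * cos(2 * pi * f_ave * t)

# The two expressions agree, and the amplitude swells and fades f_beat times per second.
for t in (0.0, 0.05, 0.125, 0.25):
    print(f"t = {t:5.3f} s   sum = {resultant(t):+.3f}   identity = {envelope_form(t):+.3f}")
```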
While beats may sometimes be annoying in audible sounds, we will find that beats have many applications. Observing beats is a very useful way to compare similar frequencies. There are applications of beats as apparently disparate as in ultrasonic imaging and radar speed traps.
### Test Prep for AP Courses
### Section Summary
1. Superposition is the combination of two waves at the same location.
2. Constructive interference occurs when two identical waves are superimposed in phase.
3. Destructive interference occurs when two identical waves are superimposed exactly out of phase.
4. A standing wave is one in which two waves superimpose to produce a wave that varies in amplitude but does not propagate.
5. Nodes are points of no motion in standing waves.
6. An antinode is the location of maximum amplitude of a standing wave.
7. Waves on a string are resonant standing waves with a fundamental frequency and can occur at higher multiples of the fundamental, called overtones or harmonics.
8. Beats occur when waves of similar frequencies f_1 and f_2 are superimposed. The resulting amplitude oscillates with a beat frequency given by f_B = |f_1 − f_2|.
### Conceptual Questions
### Problems & Exercises
|
# Oscillatory Motion and Waves
## Energy in Waves: Intensity
### Learning Objectives
By the end of this section, you will be able to:
1. Calculate the intensity and the power of rays and waves.
All waves carry energy. The energy of some waves can be directly observed. Earthquakes can shake whole cities to the ground, performing the work of thousands of wrecking balls.
Loud sounds pulverize nerve cells in the inner ear, causing permanent hearing loss. Ultrasound is used for deep-heat treatment of muscle strains. A laser beam can burn away a malignancy. Water waves chew up beaches.
The amount of energy in a wave is related to its amplitude. Large-amplitude earthquakes produce large ground displacements. Loud sounds have higher pressure amplitudes and come from larger-amplitude source vibrations than soft sounds. Large ocean breakers churn up the shore more than small ones. More quantitatively, a wave is a displacement that is resisted by a restoring force. The larger the displacement x, the larger the force needed to create it. Because work is related to force multiplied by distance (W = Fx) and energy is put into the wave by the work done to create it, the energy in a wave is related to amplitude. In fact, a wave’s energy is directly proportional to its amplitude squared, because the work done against a restoring force proportional to x scales as W ∝ Fx = kx·x = kx².
The energy effects of a wave depend on time as well as amplitude. For example, the longer deep-heat ultrasound is applied, the more energy it transfers. Waves can also be concentrated or spread out. Sunlight, for example, can be focused to burn wood. Earthquakes spread out, so they do less damage the farther they get from the source. In both cases, changing the area the waves cover has important effects. All these pertinent factors are included in the definition of intensity I as power per unit area: I = P/A,
where P is the power carried by the wave through area A. The definition of intensity is valid for any energy in transit, including that carried by waves. The SI unit for intensity is watts per square meter (W/m²). For example, infrared and visible energy from the Sun impinge on Earth at an intensity of about 1300 W/m² just above the atmosphere. There are other intensity-related units in use, too. The most common is the decibel. For example, a 90 decibel sound level corresponds to an intensity of 10⁻³ W/m². (This quantity is not much power per unit area considering that 90 decibels is a relatively high sound level. Decibels will be discussed in some detail in a later chapter.)
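A quick sketch of the definition I = P/A (added here for illustration; the window and eardrum numbers are hypothetical):

```python
def intensity(power_w: float, area_m2: float) -> float:
    """Intensity I = P / A, in watts per square meter."""
    return power_w / area_m2

# Sunlight delivering 650 W through a 0.50 m^2 window (hypothetical numbers):
print(intensity(650.0, 0.50))    # 1300 W/m^2

# Power delivered by a 90 dB sound (about 1e-3 W/m^2) to an eardrum of area ~5e-5 m^2:
print(1e-3 * 5e-5)               # 5e-08 W - a tiny power, yet a loud sound
```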
### Section Summary
Intensity is defined to be the power per unit area:
$$I = \frac{P}{A},$$
and has units of $\text{W/m}^2$.
### Conceptual Questions
### Problems & Exercises
# Physics of Hearing
## Connection for AP® Courses
In this chapter, the concept of waves is specifically applied to the phenomena of sound. As such, Big Idea 6 continues to be supported, as sound waves carry energy and momentum from one location to another without the permanent transfer of mass. This energy is carried through vibrations caused by disturbances in air pressure (Enduring Understanding 6.A). As air pressure increases, amplitudes of vibration and energy transfer do as well. This idea (Enduring Understanding 6.A.4) explains why a very loud sound can break glass.
The chapter continues the fundamental analysis of waves addressed in Chapter 16. Sound waves are periodic, and can therefore be expressed as a function of position and time. Furthermore, sound waves are described by amplitude, frequency, wavelength, and speed (Enduring Understanding 6.B). The relationship between speed and frequency is analyzed further in Section 17.4, as the frequency of sound depends upon the relative motion between the source and observer. This concept, known as the Doppler effect, supports Essential Knowledge 6.B.5.
Like all other waves, sound waves can overlap. When they do so, their interaction will produce an amplitude variation within the resultant wave. This amplitude can be determined by adding the displacement of the two pulses, through a process called superposition. This process, covered in Section 17.5, reinforces the content in Enduring Understanding 6.D.1.
In situations where the interfering waves are confined, such as on a fixed length of string or in a tube, standing waves can result. These waves are the result of interference between the incident and reflecting wave. Standing waves are described using nodes and antinodes, and their wavelengths are determined by the size of the region to which they are confined. This chapter’s description of both standing waves and the concept of beats strongly support Enduring Understanding 6.D, as well as Essential Knowledge 6.D.1, 6.D.3, and 6.D.4.
The concepts in this chapter support:
Big Idea 6 Waves can transfer energy and momentum from one location to another without the permanent transfer of mass and serve as a mathematical model for the description of other phenomena.
Enduring Understanding 6.B A periodic wave is one that repeats as a function of both time and position and can be described by its amplitude, frequency, wavelength, speed, and energy.
Essential Knowledge 6.B.5 The observed frequency of a wave depends on the relative motion of the source and the observer. This is a qualitative measurement only.
Enduring Understanding 6.D Interference and superposition lead to standing waves and beats.
Essential Knowledge 6.D.1 Two or more wave pulses can interact in such a way as to produce amplitude variations in the resultant wave. When two pulses cross, they travel through each other; they do not bounce off each other. Where the pulses overlap, the resulting displacement can be determined by adding the displacements of the two pulses. This is called superposition.
Essential Knowledge 6.D.3 Standing waves are the result of the addition of incident and reflected waves that are confined to a region and have nodes and antinodes. Examples should include waves on a fixed length of string, and sound waves in both closed and open tubes.
Essential Knowledge 6.D.4 The possible wavelengths of a standing wave are determined by the size of the region in which it is confined.
# Physics of Hearing
## Sound
### Learning Objectives
By the end of this section, you will be able to:
1. Define sound and hearing.
2. Describe sound as a longitudinal wave.
Sound can be used as a familiar illustration of waves. Because hearing is one of our most important senses, it is interesting to see how the physical properties of sound correspond to our perceptions of it. Hearing is the perception of sound, just as vision is the perception of visible light. But sound has important applications beyond hearing. Ultrasound, for example, is not heard but can be employed to form medical images and is also used in treatment.
The physical phenomenon of sound is defined to be a disturbance of matter that is transmitted from its source outward. Sound is a wave. On the atomic scale, it is a disturbance of atoms that is far more ordered than their thermal motions. In many instances, sound is a periodic wave, and the atoms undergo simple harmonic motion. In this text, we shall explore such periodic sound waves.
A vibrating string produces a sound wave as illustrated in , , and . As the string oscillates back and forth, it transfers energy to the air, mostly as thermal energy created by turbulence. But a small part of the string’s energy goes into compressing and expanding the surrounding air, creating slightly higher and lower local pressures. These compressions (high pressure regions) and rarefactions (low pressure regions) move out as longitudinal pressure waves having the same frequency as the string—they are the disturbance that is a sound wave. (Sound waves in air and most fluids are longitudinal, because fluids have almost no shear strength. In solids, sound waves can be both transverse and longitudinal.) shows a graph of gauge pressure versus distance from the vibrating string.
The amplitude of a sound wave decreases with distance from its source, because the energy of the wave is spread over a larger and larger area. But it is also absorbed by objects, such as the eardrum in , and converted to thermal energy by the viscosity of air. In addition, during each compression a little heat transfers to the air and during each rarefaction even less heat transfers from the air, so that the heat transfer reduces the organized disturbance into random thermal motions. (These processes can be viewed as a manifestation of the second law of thermodynamics presented in Introduction to the Second Law of Thermodynamics: Heat Engines and Their Efficiency.) Whether the heat transfer from compression to rarefaction is significant depends on how far apart they are—that is, it depends on wavelength. Wavelength, frequency, amplitude, and speed of propagation are important for sound, as they are for all waves.
### Section Summary
1. Sound is a disturbance of matter that is transmitted from its source outward.
2. Sound is one type of wave.
3. Hearing is the perception of sound.
# Physics of Hearing
## Speed of Sound, Frequency, and Wavelength
### Learning Objectives
By the end of this section, you will be able to:
1. Define pitch.
2. Describe the relationship between the speed of sound, its frequency, and its wavelength.
3. Describe the effects on the speed of sound as it travels through various media.
4. Describe the effects of temperature on the speed of sound.
Sound, like all waves, travels at a certain speed and has the properties of frequency and wavelength. You can observe direct evidence of the speed of sound while watching a fireworks display. The flash of an explosion is seen well before its sound is heard, implying both that sound travels at a finite speed and that it is much slower than light. You can also directly sense the frequency of a sound. Perception of frequency is called pitch. The wavelength of sound is not directly sensed, but indirect evidence is found in the correlation of the size of musical instruments with their pitch. Small instruments, such as a piccolo, typically make high-pitch sounds, while large instruments, such as a tuba, typically make low-pitch sounds. High pitch means small wavelength, and the size of a musical instrument is directly related to the wavelengths of sound it produces. So a small instrument creates short-wavelength sounds. Similar arguments hold that a large instrument creates long-wavelength sounds.
The relationship of the speed of sound, its frequency, and wavelength is the same as for all waves:
$$v_w = f\lambda,$$
where $v_w$ is the speed of sound, $f$ is its frequency, and $\lambda$ is its wavelength. The wavelength of a sound is the distance between adjacent identical parts of a wave—for example, between adjacent compressions as illustrated in . The frequency is the same as that of the source and is the number of waves that pass a point per unit time.
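A short, illustrative Python sketch of this relationship follows. It is not part of the text; the 343 m/s value assumes air at roughly 20ºC, and the frequencies are simply representative points across the audible range:

```python
def wavelength(speed_m_s, frequency_hz):
    """Wavelength lambda = v_w / f for any wave."""
    return speed_m_s / frequency_hz

v_air = 343.0  # speed of sound in air near 20 degrees C, m/s
for f in (20.0, 1000.0, 20000.0):  # illustrative audible frequencies
    print(f"{f:>8.0f} Hz -> {wavelength(v_air, f):.4f} m")
```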
makes it apparent that the speed of sound varies greatly in different media. The speed of sound in a medium is determined by a combination of the medium’s rigidity (or compressibility in gases) and its density. The more rigid (or less compressible) the medium, the faster the speed of sound. For materials that have similar rigidities, sound will travel faster through the one with the lower density because the sound energy is more easily transferred from particle to particle. The speed of sound in air is low, because air is compressible. Because liquids and solids are relatively rigid and very difficult to compress, the speed of sound in such media is generally greater than in gases.
Earthquakes, essentially sound waves in Earth’s crust, are an interesting example of how the speed of sound depends on the rigidity of the medium. Earthquakes have both longitudinal and transverse components, and these travel at different speeds. The bulk modulus of granite is greater than its shear modulus. For that reason, the speed of longitudinal or pressure waves (P-waves) in earthquakes in granite is significantly higher than the speed of transverse or shear waves (S-waves). Both components of earthquakes travel slower in less rigid material, such as sediments. P-waves have speeds of 4 to 7 km/s, and S-waves correspondingly range in speed from 2 to 5 km/s, both being faster in more rigid material. The P-wave gets progressively farther ahead of the S-wave as they travel through Earth’s crust. The time between the P- and S-waves is routinely used to determine the distance to their source, the epicenter of the earthquake. The time and nature of these wave differences also provides the evidence for the nature of Earth's core. Through careful analysis of seismographic records of large earthquakes whose waves could be clearly detected around the world, Richard Dixon Oldham established that waves passing through the center of the Earth behaved as if they were moving through a different medium: a liquid. Later on, Inge Lehmann used more precise observations (partly based on a better coordinated network of seismographs she helped set up) to better define the nature of the core: that it was a solid inner core surrounded by a liquid outer core.
The speed of sound is affected by temperature in a given medium. For air at sea level, the speed of sound is given by
$$v_w = (331\ \text{m/s})\sqrt{\frac{T}{273\ \text{K}}},$$
where the temperature (denoted as $T$) is in units of kelvin. The speed of sound in gases is related to the average speed of particles in the gas, $v_{\text{rms}}$, and that
$$v_{\text{rms}} = \sqrt{\frac{3kT}{m}},$$
where $k$ is the Boltzmann constant ($1.38\times 10^{-23}\ \text{J/K}$) and $m$ is the mass of each (identical) particle in the gas. So, it is reasonable that the speed of sound in air and other gases should depend on the square root of temperature. While not negligible, this is not a strong dependence. At $0\text{ºC}$, the speed of sound is 331 m/s, whereas at $20.0\text{ºC}$ it is 343 m/s, less than a 4% increase. shows a use of the speed of sound by a bat to sense distances. Echoes are also used in medical imaging.
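The temperature dependence is easy to tabulate. The short Python sketch below assumes the sea-level air formula given above; the temperatures chosen are illustrative:

```python
import math

def speed_of_sound_air(temp_celsius):
    """v_w = (331 m/s) * sqrt(T / 273 K), with T converted to kelvin."""
    T = temp_celsius + 273.15
    return 331.0 * math.sqrt(T / 273.0)

for t in (0.0, 20.0, 30.0):  # illustrative air temperatures
    print(f"{t:5.1f} C -> {speed_of_sound_air(t):.1f} m/s")  # ~331, ~343, ~349 m/s
```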
One of the more important properties of sound is that its speed is nearly independent of frequency. This independence is certainly true in open air for sounds in the audible range of 20 to 20,000 Hz. If this independence were not true, you would certainly notice it for music played by a marching band in a football stadium, for example. Suppose that high-frequency sounds traveled faster—then the farther you were from the band, the more the sound from the low-pitch instruments would lag that from the high-pitch ones. But the music from all instruments arrives in cadence independent of distance, and so all frequencies must travel at nearly the same speed. Recall that
$$v_w = f\lambda.$$
In a given medium under fixed conditions, $v_w$ is constant, so that there is a relationship between $f$ and $\lambda$; the higher the frequency, the smaller the wavelength. See and consider the following example.
The speed of sound can change when sound travels from one medium to another. However, the frequency usually remains the same because it is like a driven oscillation and has the frequency of the original source. If $v_w$ changes and $f$ remains the same, then the wavelength $\lambda$ must change. That is, because $v_w = f\lambda$, the higher the speed of a sound, the greater its wavelength for a given frequency.
### Test Prep for AP Courses
### Section Summary
The relationship of the speed of sound $v_w$, its frequency $f$, and its wavelength $\lambda$ is given by
$$v_w = f\lambda,$$
which is the same relationship given for all waves.
In air, the speed of sound is related to air temperature $T$ by
$$v_w = (331\ \text{m/s})\sqrt{\frac{T}{273\ \text{K}}}.$$
$v_w$ is the same for all frequencies and wavelengths.
### Conceptual Questions
### Problems & Exercises
# Physics of Hearing
## Sound Intensity and Sound Level
### Learning Objectives
By the end of this section, you will be able to:
1. Define intensity, sound intensity, and sound pressure level.
2. Calculate sound intensity levels in decibels (dB).
In a quiet forest, you can sometimes hear a single leaf fall to the ground. After settling into bed, you may hear your blood pulsing through your ears. But when a passing motorist has his stereo turned up, you cannot even hear what the person next to you in your car is saying. We are all very familiar with the loudness of sounds and aware that they are related to how energetically the source is vibrating. In cartoons depicting a screaming person (or an animal making a loud noise), the cartoonist often shows an open mouth with a vibrating uvula, the hanging tissue at the back of the mouth, to suggest a loud sound coming from the throat . High noise exposure is hazardous to hearing, and it is common for musicians to have hearing losses that are sufficiently severe that they interfere with the musicians’ abilities to perform. The relevant physical quantity is sound intensity, a concept that is valid for all sounds whether or not they are in the audible range.
Intensity is defined to be the power per unit area carried by a wave. Power is the rate at which energy is transferred by the wave. In equation form, intensity $I$ is
$$I = \frac{P}{A},$$
where $P$ is the power through an area $A$. The SI unit for $I$ is $\text{W/m}^2$. The intensity of a sound wave is related to its amplitude squared by the following relationship:
$$I = \frac{(\Delta p)^2}{2\rho v_w}.$$
Here $\Delta p$ is the pressure variation or pressure amplitude (half the difference between the maximum and minimum pressure in the sound wave) in units of pascals (Pa) or $\text{N/m}^2$. (We are using a lower case $p$ for pressure to distinguish it from power, denoted by $P$ above.) The energy (as kinetic energy $\tfrac{1}{2}mv^2$) of an oscillating element of air due to a traveling sound wave is proportional to its amplitude squared. In this equation, $\rho$ is the density of the material in which the sound wave travels, in units of $\text{kg/m}^3$, and $v_w$ is the speed of sound in the medium, in units of m/s. The pressure variation is proportional to the amplitude of the oscillation, and so $I$ varies as $(\Delta p)^2$. This relationship is consistent with the fact that the sound wave is produced by some vibration; the greater its pressure amplitude, the more the air is compressed in the sound it creates.
Sound intensity levels are quoted in decibels (dB) much more often than sound intensities in watts per meter squared. Decibels are the unit of choice in the scientific literature as well as in the popular media. The reasons for this choice of units are related to how we perceive sounds. How our ears perceive sound can be more accurately described by the logarithm of the intensity rather than by the intensity directly. The sound intensity level $\beta$ in decibels of a sound having an intensity $I$ in watts per meter squared is defined to be
$$\beta\ (\text{dB}) = 10\log_{10}\left(\frac{I}{I_0}\right),$$
where $I_0 = 10^{-12}\ \text{W/m}^2$ is a reference intensity. In particular, $I_0$ is the lowest or threshold intensity of sound a person with normal hearing can perceive at a frequency of 1000 Hz. Sound intensity level is not the same as intensity. Because $\beta$ is defined in terms of a ratio, it is a unitless quantity telling you the level of the sound relative to a fixed standard ($10^{-12}\ \text{W/m}^2$, in this case). The units of decibels (dB) are used to indicate this ratio is multiplied by 10 in its definition. The bel, upon which the decibel is based, is named for Alexander Graham Bell, the inventor of the telephone.
The decibel level of a sound having the threshold intensity of $10^{-12}\ \text{W/m}^2$ is $\beta = 0\ \text{dB}$, because $\log_{10}1 = 0$. That is, the threshold of hearing is 0 decibels. gives levels in decibels and intensities in watts per meter squared for some familiar sounds.
One of the more striking things about the intensities in is that the intensity in watts per meter squared is quite small for most sounds. The ear is sensitive to as little as a trillionth of a watt per meter squared—even more impressive when you realize that the area of the eardrum is only about $1\ \text{cm}^2$, so that only about $10^{-16}$ W falls on it at the threshold of hearing! Air molecules in a sound wave of this intensity vibrate over a distance of less than one molecular diameter, and the gauge pressures involved are less than $10^{-9}$ atm.
Another impressive feature of the sounds in is their numerical range. Sound intensity varies by a factor of $10^{12}$ from threshold to a sound that causes damage in seconds. You are unaware of this tremendous range in sound intensity because how your ears respond can be described approximately as the logarithm of intensity. Thus, sound intensity levels in decibels fit your experience better than intensities in watts per meter squared. The decibel scale is also easier to relate to because most people are more accustomed to dealing with numbers such as 0, 53, or 120 than numbers such as $1.00\times 10^{-11}$.
One more observation readily verified by examining or using is that each factor of 10 in intensity corresponds to 10 dB. For example, a 90 dB sound compared with a 60 dB sound is 30 dB greater, or three factors of 10 (that is, $10^3$ times) as intense. Another example is that if one sound is $10^7$ times as intense as another, it is 70 dB higher. See .
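These decibel relationships are straightforward to verify numerically. The Python sketch below assumes the definition of $\beta$ given above and the standard threshold intensity; the sample values are illustrative:

```python
import math

I0 = 1.0e-12  # threshold intensity of hearing, W/m^2

def sound_level_db(intensity_w_m2):
    """beta (dB) = 10 * log10(I / I0)."""
    return 10.0 * math.log10(intensity_w_m2 / I0)

def intensity_from_db(beta_db):
    """Invert the definition: I = I0 * 10**(beta / 10)."""
    return I0 * 10.0 ** (beta_db / 10.0)

print(sound_level_db(1.0e-3))                         # 90 dB for 10^-3 W/m^2
print(intensity_from_db(90) / intensity_from_db(60))  # 1000: three factors of 10 for 30 dB
```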
It should be noted at this point that there is another decibel scale in use, called the sound pressure level, based on the ratio of the pressure amplitude to a reference pressure. This scale is used particularly in applications where sound travels in water. It is beyond the scope of most introductory texts to treat this scale because it is not commonly used for sounds in air, but it is important to note that very different decibel levels may be encountered when sound pressure levels are quoted. For example, ocean noise pollution produced by ships may be as great as 200 dB expressed in the sound pressure level, where the more familiar sound intensity level we use here would be something under 140 dB for the same sound.
### Test Prep for AP Courses
### Section Summary
1. Intensity $I$ is the same for a sound wave as was defined for all waves; it is
$$I = \frac{P}{A},$$
where $P$ is power and $A$ is area.
2. Sound intensity level in units of decibels (dB) is
$$\beta\ (\text{dB}) = 10\log_{10}\left(\frac{I}{I_0}\right),$$
where $I_0 = 10^{-12}\ \text{W/m}^2$ is the threshold intensity of hearing.
### Conceptual Questions
### Problems & Exercises
# Physics of Hearing
## Doppler Effect and Sonic Booms
### Learning Objectives
By the end of this section, you will be able to:
1. Define Doppler effect, Doppler shift, and sonic boom.
2. Calculate the frequency of a sound heard by someone observing Doppler shift.
3. Describe the sounds produced by objects moving faster than the speed of sound.
The characteristic sound of a motorcycle buzzing by is an example of the Doppler effect. The high-pitch scream shifts dramatically to a lower-pitch roar as the motorcycle passes by a stationary observer. The closer the motorcycle brushes by, the more abrupt the shift. The faster the motorcycle moves, the greater the shift. We also hear this characteristic shift in frequency for passing race cars, airplanes, and trains. It is so familiar that it is used to imply motion and children often mimic it in play.
The Doppler effect is an alteration in the observed frequency of a sound due to motion of either the source or the observer. Although less familiar, this effect is easily noticed for a stationary source and moving observer. For example, if you ride a train past a stationary warning bell, you will hear the bell’s frequency shift from high to low as you pass by. The actual change in frequency due to relative motion of source and observer is called a Doppler shift. The Doppler effect and Doppler shift are named for the Austrian physicist and mathematician Christian Johann Doppler (1803–1853), who did experiments with both moving sources and moving observers. Doppler, for example, had musicians play on a moving open train car and also play standing next to the train tracks as a train passed by. Their music was observed both on and off the train, and changes in frequency were measured.
What causes the Doppler shift? , , and compare sound waves emitted by stationary and moving sources in a stationary air mass. Each disturbance spreads out spherically from the point where the sound was emitted. If the source is stationary, then all of the spheres representing the air compressions in the sound wave are centered on the same point, and the stationary observers on either side see the same wavelength and frequency as emitted by the source, as in . If the source is moving, as in , then the situation is different. Each compression of the air moves out in a sphere from the point where it was emitted, but the point of emission moves. This moving emission point causes the air compressions to be closer together on one side and farther apart on the other. Thus, the wavelength is shorter in the direction the source is moving (on the right in ), and longer in the opposite direction (on the left in ). Finally, if the observers move, as in , the frequency at which they receive the compressions changes. The observer moving toward the source receives them at a higher frequency, and the person moving away from the source receives them at a lower frequency.
We know that wavelength and frequency are related by $v_w = f\lambda$, where $v_w$ is the fixed speed of sound. The sound moves in a medium and has the same speed in that medium whether the source is moving or not. Thus $f$ multiplied by $\lambda$ is a constant. Because the observer on the right in receives a shorter wavelength, the frequency she receives must be higher. Similarly, the observer on the left receives a longer wavelength, and hence he hears a lower frequency. The same thing happens in . A higher frequency is received by the observer moving toward the source, and a lower frequency is received by an observer moving away from the source. In general, then, relative motion of source and observer toward one another increases the received frequency. Relative motion apart decreases frequency. The greater the relative speed is, the greater the effect.
For a stationary observer and a moving source, the frequency $f_{\text{obs}}$ received by the observer can be shown to be
$$f_{\text{obs}} = f_s\left(\frac{v_w}{v_w \pm v_s}\right),$$
where $f_s$ is the frequency of the source, $v_s$ is the speed of the source along a line joining the source and observer, and $v_w$ is the speed of sound. The minus sign is used for motion toward the observer and the plus sign for motion away from the observer, producing the appropriate shifts up and down in frequency. Note that the greater the speed of the source, the greater the effect. Similarly, for a stationary source and moving observer, the frequency received by the observer is given by
$$f_{\text{obs}} = f_s\left(\frac{v_w \pm v_{\text{obs}}}{v_w}\right),$$
where $v_{\text{obs}}$ is the speed of the observer along a line joining the source and observer. Here the plus sign is for motion toward the source, and the minus is for motion away from the source.
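The two Doppler formulas can be wrapped in a small calculator. The sketch below is illustrative only; the function names and the 343 m/s default are assumptions, and the 500 Hz siren moving at 30 m/s is a hypothetical example:

```python
def doppler_moving_source(f_source, v_source, v_sound=343.0, approaching=True):
    """f_obs = f_s * v_w / (v_w -/+ v_s): minus sign for approach, plus for recession."""
    sign = -1.0 if approaching else +1.0
    return f_source * v_sound / (v_sound + sign * v_source)

def doppler_moving_observer(f_source, v_observer, v_sound=343.0, approaching=True):
    """f_obs = f_s * (v_w +/- v_obs) / v_w: plus sign for approach, minus for recession."""
    sign = +1.0 if approaching else -1.0
    return f_source * (v_sound + sign * v_observer) / v_sound

# Illustrative: a 500 Hz siren on a vehicle moving at 30 m/s
print(doppler_moving_source(500.0, 30.0, approaching=True))   # ~548 Hz approaching
print(doppler_moving_source(500.0, 30.0, approaching=False))  # ~460 Hz receding
```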
### Sonic Booms to Bow Wakes
What happens to the sound produced by a moving source, such as a jet airplane, that approaches or even exceeds the speed of sound? The answer to this question applies not only to sound but to all other waves as well.
Suppose a jet airplane is coming nearly straight at you, emitting a sound of frequency $f_s$. The greater the plane’s speed $v_s$, the greater the Doppler shift and the greater the value observed for $f_{\text{obs}}$. Now, as $v_s$ approaches the speed of sound, $f_{\text{obs}}$ approaches infinity, because the denominator in
$$f_{\text{obs}} = f_s\left(\frac{v_w}{v_w - v_s}\right)$$
approaches zero. At the speed of sound, this result means that in front of the source, each successive wave is superimposed on the previous one because the source moves forward at the speed of sound. The observer gets them all at the same instant, and so the frequency is infinite. (Before airplanes exceeded the speed of sound, some people argued it would be impossible because such constructive superposition would produce pressures great enough to destroy the airplane.) If the source exceeds the speed of sound, no sound is received by the observer until the source has passed, so that the sounds from the approaching source are mixed with those from it when receding. This mixing appears messy, but something interesting happens—a sonic boom is created. (See .)
There is constructive interference along the lines shown (a cone in three dimensions) from similar sound waves arriving there simultaneously. This superposition forms a disturbance called a sonic boom, a constructive interference of sound created by an object moving faster than sound. Inside the cone, the interference is mostly destructive, and so the sound intensity there is much less than on the shock wave. An aircraft creates two sonic booms, one from its nose and one from its tail. (See .) During television coverage of space shuttle landings, two distinct booms could often be heard. These were separated by exactly the time it would take the shuttle to pass by a point. Observers on the ground often do not see the aircraft creating the sonic boom, because it has passed by before the shock wave reaches them, as seen in . If the aircraft flies close by at low altitude, pressures in the sonic boom can be destructive and break windows as well as rattle nerves. Because of how destructive sonic booms can be, supersonic flights are banned over populated areas of the United States.
Sonic booms are one example of a broader phenomenon called bow wakes. A bow wake, such as the one in , is created when the wave source moves faster than the wave propagation speed. Water waves spread out in circles from the point where created, and the bow wake is the familiar V-shaped wake trailing the source. A more exotic bow wake is created when a subatomic particle travels through a medium faster than light travels in that medium. (In a vacuum, the maximum speed of light is $c = 3.00\times 10^{8}\ \text{m/s}$; in the medium of water, the speed of light is closer to $0.75c$.) If the particle creates light in its passage, that light spreads on a cone with an angle indicative of the speed of the particle, as illustrated in . Such a bow wake is called Cerenkov radiation and is commonly observed in particle physics.
Doppler shifts and sonic booms are interesting sound phenomena that occur in all types of waves. They can be of considerable use. For example, the Doppler shift in ultrasound can be used to measure blood velocity, while police use the Doppler shift in radar (a microwave) to measure car velocities. In meteorology, the Doppler shift is used to track the motion of storm clouds; such “Doppler Radar” can give velocity and direction and rain or snow potential of imposing weather fronts. In astronomy, we can examine the light emitted from distant galaxies and determine their speed relative to ours. As galaxies move away from us, their light is shifted to a lower frequency, and so to a longer wavelength—the so-called red shift. Such information from galaxies far, far away has allowed us to estimate the age of the universe (from the Big Bang) as about 14 billion years.
### Test Prep for AP Courses
### Section Summary
1. The Doppler effect is an alteration in the observed frequency of a sound due to motion of either the source or the observer.
2. The actual change in frequency is called the Doppler shift.
3. A sonic boom is constructive interference of sound created by an object moving faster than sound.
4. A sonic boom is a type of bow wake created when any wave source moves faster than the wave propagation speed.
5. For a stationary observer and a moving source, the observed frequency is:
$$f_{\text{obs}} = f_s\left(\frac{v_w}{v_w \pm v_s}\right),$$
where $f_s$ is the frequency of the source, $v_s$ is the speed of the source, and $v_w$ is the speed of sound. The minus sign is used for motion toward the observer and the plus sign for motion away.
6. For a stationary source and moving observer, the observed frequency is:
$$f_{\text{obs}} = f_s\left(\frac{v_w \pm v_{\text{obs}}}{v_w}\right),$$
where $v_{\text{obs}}$ is the speed of the observer.
### Conceptual Questions
### Problems & Exercises
# Physics of Hearing
## Sound Interference and Resonance: Standing Waves in Air Columns
### Learning Objectives
By the end of this section, you will be able to:
1. Define antinode, node, fundamental, overtones, and harmonics.
2. Identify instances of sound interference in everyday situations.
3. Describe how sound interference occurring inside open and closed tubes changes the characteristics of the sound, and how this applies to sounds produced by musical instruments.
4. Calculate the length of a tube using sound wave measurements.
Interference is the hallmark of waves, all of which exhibit constructive and destructive interference exactly analogous to that seen for water waves. In fact, one way to prove something “is a wave” is to observe interference effects. So, sound being a wave, we expect it to exhibit interference; we have already mentioned a few such effects, such as the beats from two similar notes played simultaneously.
shows a clever use of sound interference to cancel noise. Larger-scale applications of active noise reduction by destructive interference are contemplated for entire passenger compartments in commercial aircraft. To obtain destructive interference, a fast electronic analysis is performed, and a second sound is introduced with its maxima and minima exactly reversed from the incoming noise. Sound waves in fluids are pressure waves and consistent with Pascal’s principle; pressures from two different sources add and subtract like simple numbers; that is, positive and negative gauge pressures add to a much smaller pressure, producing a lower-intensity sound. Although completely destructive interference is possible only under the simplest conditions, it is possible to reduce noise levels by 30 dB or more using this technique.
Where else can we observe sound interference? All sound resonances, such as in musical instruments, are due to constructive and destructive interference. Only the resonant frequencies interfere constructively to form standing waves, while others interfere destructively and are absent. From the toot made by blowing over a bottle, to the characteristic flavor of a violin’s sounding box, to the recognizability of a great singer’s voice, resonance and standing waves play a vital role.
Suppose we hold a tuning fork near the end of a tube that is closed at the other end, as shown in , , , and . If the tuning fork has just the right frequency, the air column in the tube resonates loudly, but at most frequencies it vibrates very little. This observation just means that the air column has only certain natural frequencies. The figures show how a resonance at the lowest of these natural frequencies is formed. A disturbance travels down the tube at the speed of sound and bounces off the closed end. If the tube is just the right length, the reflected sound arrives back at the tuning fork exactly half a cycle later, and it interferes constructively with the continuing sound produced by the tuning fork. The incoming and reflected sounds form a standing wave in the tube as shown.
The standing wave formed in the tube has its maximum air displacement (an antinode) at the open end, where motion is unconstrained, and no displacement (a node) at the closed end, where air movement is halted. The distance from a node to an antinode is one-fourth of a wavelength, and this equals the length of the tube; thus, $\lambda = 4L$. This same resonance can be produced by a vibration introduced at or near the closed end of the tube, as shown in . It is best to consider this a natural vibration of the air column independently of how it is induced.
Given that maximum air displacements are possible at the open end and none at the closed end, there are other, shorter wavelengths that can resonate in the tube, such as the one shown in . Here the standing wave has three-fourths of its wavelength in the tube, or $L = (3/4)\lambda'$, so that $\lambda' = 4L/3$. Continuing this process reveals a whole series of shorter-wavelength and higher-frequency sounds that resonate in the tube. We use specific terms for the resonances in any system. The lowest resonant frequency is called the fundamental, while all higher resonant frequencies are called overtones. All resonant frequencies are integral multiples of the fundamental, and they are collectively called harmonics. The fundamental is the first harmonic, the first overtone is the second harmonic, and so on. shows the fundamental and the first three overtones (the first four harmonics) in a tube closed at one end.
The fundamental and overtones can be present simultaneously in a variety of combinations. For example, middle C on a trumpet has a sound distinctively different from middle C on a clarinet, both instruments being modified versions of a tube closed at one end. The fundamental frequency is the same (and usually the most intense), but the overtones and their mix of intensities are different and subject to shading by the musician. This mix is what gives various musical instruments (and human voices) their distinctive characteristics, whether they have air columns, strings, sounding boxes, or drumheads. In fact, much of our speech is determined by shaping the cavity formed by the throat and mouth and positioning the tongue to adjust the fundamental and combination of overtones. Simple resonant cavities can be made to resonate with the sound of the vowels, for example. (See .) In males, at puberty, the larynx grows and the shape of the resonant cavity changes giving rise to the difference in predominant frequencies in speech between different sexes.
Now let us look for a pattern in the resonant frequencies for a simple tube that is closed at one end. The fundamental has $\lambda = 4L$, and frequency is related to wavelength and the speed of sound as given by:
$$v_w = f\lambda.$$
Solving for $f$ in this equation gives
$$f = \frac{v_w}{\lambda} = \frac{v_w}{4L},$$
where $v_w$ is the speed of sound in air. Similarly, the first overtone has $\lambda' = 4L/3$ (see ), so that
$$f' = 3\frac{v_w}{4L} = 3f.$$
Because $f' = 3f$, we call the first overtone the third harmonic. Continuing this process, we see a pattern that can be generalized in a single expression. The resonant frequencies of a tube closed at one end are
$$f_n = n\frac{v_w}{4L},\quad n = 1, 3, 5,\ldots,$$
where $f_1$ is the fundamental, $f_3$ is the first overtone, and so on. It is interesting that the resonant frequencies depend on the speed of sound and, hence, on temperature. This dependence poses a noticeable problem for organs in old unheated cathedrals, and it is also the reason why musicians commonly bring their wind instruments to room temperature before playing them.
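A brief Python sketch can generate this closed-tube series. It assumes the formula just derived; the 0.50 m tube length and 343 m/s speed of sound are illustrative values, not taken from the text:

```python
def closed_tube_resonances(length_m, v_sound=343.0, count=4):
    """f_n = n * v_w / (4 L) for n = 1, 3, 5, ... (odd harmonics only)."""
    return [n * v_sound / (4.0 * length_m) for n in range(1, 2 * count, 2)]

# Illustrative: a 0.50 m tube closed at one end
print(closed_tube_resonances(0.50))  # [171.5, 514.5, 857.5, 1200.5] Hz
```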
Another type of tube is one that is open at both ends. Examples are some organ pipes, flutes, and oboes. The resonances of tubes open at both ends can be analyzed in a very similar fashion to those for tubes closed at one end. The air columns in tubes open at both ends have maximum air displacements at both ends, as illustrated in . Standing waves form as shown.
Based on the fact that a tube open at both ends has maximum air displacements at both ends, and using as a guide, we can see that the resonant frequencies of a tube open at both ends are:
$$f_n = n\frac{v_w}{2L},\quad n = 1, 2, 3,\ldots,$$
where $f_1$ is the fundamental, $f_2$ is the first overtone, $f_3$ is the second overtone, and so on. Note that a tube open at both ends has a fundamental frequency twice what it would have if closed at one end. It also has a different spectrum of overtones than a tube closed at one end. So if you had two tubes with the same fundamental frequency but one was open at both ends and the other was closed at one end, they would sound different when played because they have different overtones. Middle C, for example, would sound richer played on an open tube, because it has even multiples of the fundamental as well as odd. A closed tube has only odd multiples.
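For comparison, here is the corresponding sketch for a tube open at both ends (same illustrative length and speed of sound as above); note the doubled fundamental and the presence of even harmonics:

```python
def open_tube_resonances(length_m, v_sound=343.0, count=4):
    """f_n = n * v_w / (2 L) for n = 1, 2, 3, ... (all harmonics)."""
    return [n * v_sound / (2.0 * length_m) for n in range(1, count + 1)]

L = 0.50  # illustrative tube length in meters
print(open_tube_resonances(L))  # [343.0, 686.0, 1029.0, 1372.0] Hz
# The same length closed at one end gives half the fundamental and only odd
# multiples: 171.5, 514.5, 857.5, ... Hz.
```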
Wind instruments use resonance in air columns to amplify tones made by lips or vibrating reeds. Other instruments also use air resonance in clever ways to amplify sound. shows a violin and a guitar, both of which have sounding boxes but with different shapes, resulting in different overtone structures. The vibrating string creates a sound that resonates in the sounding box, greatly amplifying the sound and creating overtones that give the instrument its characteristic flavor. The more complex the shape of the sounding box, the greater its ability to resonate over a wide range of frequencies. The marimba, like the one shown in uses pots or gourds below the wooden slats to amplify their tones. The resonance of the pot can be adjusted by adding water.
We have emphasized sound applications in our discussions of resonance and standing waves, but these ideas apply to any system that has wave characteristics. Vibrating strings, for example, are actually resonating and have fundamentals and overtones similar to those for air columns. More subtle are the resonances in atoms due to the wave character of their electrons. Their orbitals can be viewed as standing waves, which have a fundamental (ground state) and overtones (excited states). It is fascinating that wave characteristics apply to such a wide range of physical systems.
### Test Prep for AP Courses
### Section Summary
1. Sound interference and resonance have the same properties as defined for all waves.
2. In air columns, the lowest-frequency resonance is called the fundamental, whereas all higher resonant frequencies are called overtones. Collectively, they are called harmonics.
3. The resonant frequencies of a tube closed at one end are:
$$f_n = n\frac{v_w}{4L},\quad n = 1, 3, 5,\ldots,$$
where $f_1$ is the fundamental and $L$ is the length of the tube.
4. The resonant frequencies of a tube open at both ends are:
$$f_n = n\frac{v_w}{2L},\quad n = 1, 2, 3,\ldots$$
### Conceptual Questions
### Problems & Exercises
# Physics of Hearing
## Hearing
### Learning Objectives
By the end of this section, you will be able to:
1. Define hearing, pitch, loudness, timbre, note, tone, phon, ultrasound, and infrasound.
2. Compare loudness to frequency and intensity of a sound.
3. Identify structures of the inner ear and explain how they relate to sound perception.
The human ear has a tremendous range and sensitivity. It can give us a wealth of simple information—such as pitch, loudness, and direction. And from its input we can detect musical quality and nuances of voiced emotion. How is our hearing related to the physical qualities of sound, and how does the hearing mechanism work?
Hearing is the perception of sound. (Perception is commonly defined to be awareness through the senses, a typically circular definition of higher-level processes in living organisms.) Normal human hearing encompasses frequencies from 20 to 20,000 Hz, an impressive range. Sounds below 20 Hz are called infrasound, whereas those above 20,000 Hz are ultrasound. Neither is perceived by the ear, although infrasound can sometimes be felt as vibrations. When we do hear low-frequency vibrations, such as the sounds of a diving board, we hear the individual vibrations only because there are higher-frequency sounds in each. Other animals have hearing ranges different from that of humans. Dogs can hear sounds as high as 30,000 Hz, whereas bats and dolphins can hear up to 100,000-Hz sounds. You may have noticed that dogs respond to the sound of a dog whistle which produces sound out of the range of human hearing. Elephants are known to respond to frequencies below 20 Hz.
The perception of frequency is called pitch. Most of us have excellent relative pitch, which means that we can tell whether one sound has a different frequency from another. Typically, we can discriminate between two sounds if their frequencies differ by 0.3% or more. For example, 500.0 and 501.5 Hz are noticeably different. Pitch perception is directly related to frequency and is not greatly affected by other physical quantities such as intensity. Musical notes are particular sounds that can be produced by most instruments and in Western music have particular names. Combinations of notes constitute music. Some people can identify musical notes, such as A-sharp, C, or E-flat, just by listening to them. This uncommon ability is called perfect pitch.
The ear is remarkably sensitive to low-intensity sounds. The lowest audible intensity or threshold is about $10^{-12}\ \text{W/m}^2$ or 0 dB. Sounds as much as $10^{12}$ times more intense can be briefly tolerated. Very few measuring devices are capable of observations over a range of a trillion. The perception of intensity is called loudness. At a given frequency, it is possible to discern differences of about 1 dB, and a change of 3 dB is easily noticed. But loudness is not related to intensity alone. Frequency has a major effect on how loud a sound seems. The ear has its maximum sensitivity to frequencies in the range of 2000 to 5000 Hz, so that sounds in this range are perceived as being louder than, say, those at 500 or 10,000 Hz, even when they all have the same intensity. Sounds near the high- and low-frequency extremes of the hearing range seem even less loud, because the ear is even less sensitive at those frequencies. gives the dependence of certain human hearing perceptions on physical quantities.
When a violin plays middle C, there is no mistaking it for a piano playing the same note. The reason is that each instrument produces a distinctive set of frequencies and intensities. We call our perception of these combinations of frequencies and intensities tone quality, or more commonly the timbre of the sound. It is more difficult to correlate timbre perception to physical quantities than it is for loudness or pitch perception. Timbre is more subjective. Terms such as dull, brilliant, warm, cold, pure, and rich are employed to describe the timbre of a sound. So the consideration of timbre takes us into the realm of perceptual psychology, where higher-level processes in the brain are dominant. This is true for other perceptions of sound, such as music and noise. We shall not delve further into them; rather, we will concentrate on the question of loudness perception.
A unit called a phon is used to express loudness numerically. Phons differ from decibels because the phon is a unit of loudness perception, whereas the decibel is a unit of physical intensity. shows the relationship of loudness to intensity (or intensity level) and frequency for persons with normal hearing. The curved lines are equal-loudness curves. Each curve is labeled with its loudness in phons. Any sound along a given curve will be perceived as equally loud by the average person. The curves were determined by having large numbers of people compare the loudness of sounds at different frequencies and sound intensity levels. At a frequency of 1000 Hz, phons are taken to be numerically equal to decibels. The following example helps illustrate how to use the graph:
Further examination of the graph in reveals some interesting facts about human hearing. First, sounds below the 0-phon curve are not perceived by most people. So, for example, a 60 Hz sound at 40 dB is inaudible. The 0-phon curve represents the threshold of normal hearing. We can hear some sounds at intensity levels below 0 dB. For example, a 3-dB, 5000-Hz sound is audible, because it lies above the 0-phon curve. The loudness curves all have dips in them between about 2000 and 5000 Hz. These dips mean the ear is most sensitive to frequencies in that range. For example, a 15-dB sound at 4000 Hz has a loudness of 20 phons, the same as a 20-dB sound at 1000 Hz. The curves rise at both extremes of the frequency range, indicating that a greater-intensity level sound is needed at those frequencies to be perceived to be as loud as at middle frequencies. For example, a sound at 10,000 Hz must have an intensity level of 30 dB to seem as loud as a 20 dB sound at 1000 Hz. Sounds above 120 phons are painful as well as damaging.
We do not often utilize our full range of hearing. This is particularly true for frequencies above 8000 Hz, which are rare in the environment and are unnecessary for understanding conversation or appreciating music. In fact, people who have lost the ability to hear such high frequencies are usually unaware of their loss until tested. The shaded region in is the frequency and intensity region where most conversational sounds fall. The curved lines indicate what effect hearing losses of 40 and 60 phons will have. A 40-phon hearing loss at all frequencies still allows a person to understand conversation, although it will seem very quiet. A person with a 60-phon loss at all frequencies will hear only the lowest frequencies and will not be able to understand speech unless it is much louder than normal. Even so, speech may seem indistinct, because higher frequencies are not as well perceived. The conversational speech region also has a gender component, in that female voices are usually characterized by higher frequencies. So the person with a 60-phon hearing impediment might have difficulty understanding the normal conversation of a woman.
Hearing tests are performed over a range of frequencies, usually from 250 to 8000 Hz, and can be displayed graphically in an audiogram like that in . The hearing threshold is measured in dB relative to the normal threshold, so that normal hearing registers as 0 dB at all frequencies. Hearing loss caused by noise typically shows a dip near the 4000 Hz frequency, irrespective of the frequency that caused the loss and often affects both ears. The most common form of hearing loss comes with age and is called presbycusis—literally elder ear. Such loss is increasingly severe at higher frequencies, and interferes with music appreciation and speech recognition.
The outer ear, or ear canal, carries sound to the recessed protected eardrum. The air column in the ear canal resonates and is partially responsible for the sensitivity of the ear to sounds in the 2000 to 5000 Hz range. The middle ear converts sound into mechanical vibrations and applies these vibrations to the cochlea. The lever system of the middle ear takes the force exerted on the eardrum by sound pressure variations, amplifies it and transmits it to the inner ear via the oval window, creating pressure waves in the cochlea approximately 40 times greater than those impinging on the eardrum. (See .) Two muscles in the middle ear (not shown) protect the inner ear from very intense sounds. They react to intense sound in a few milliseconds and reduce the force transmitted to the cochlea. This protective reaction can also be triggered by your own voice, so that humming while shooting a gun, for example, can reduce noise damage.
shows the middle and inner ear in greater detail. Pressure waves moving through the cochlea cause the tectorial membrane to vibrate, rubbing cilia (called hair cells), which stimulate nerves that send electrical signals to the brain. The membrane resonates at different positions for different frequencies, with high frequencies stimulating nerves at the near end and low frequencies at the far end. The complete operation of the cochlea is still not understood, but several mechanisms for sending information to the brain are known to be involved. For sounds below about 1000 Hz, the nerves send signals at the same frequency as the sound. For frequencies greater than about 1000 Hz, the nerves signal frequency by position. There is a structure to the cilia, and there are connections between nerve cells that perform signal processing before information is sent to the brain. Intensity information is partly indicated by the number of nerve signals and by volleys of signals. The brain processes the cochlear nerve signals to provide additional information such as source direction (based on time and intensity comparisons of sounds from both ears). Higher-level processing produces many nuances, such as music appreciation.
Hearing losses can occur because of problems in the middle or inner ear. Conductive losses in the middle ear can be partially overcome by sending sound vibrations to the cochlea through the skull. Hearing aids for this purpose usually press against the bone behind the ear, rather than simply amplifying the sound sent into the ear canal as many hearing aids do. Damage to the nerves in the cochlea is not repairable, but amplification can partially compensate. There is a risk that amplification will produce further damage. Another common failure in the cochlea is damage or loss of the cilia but with nerves remaining functional. Cochlear implants that stimulate the nerves directly are now available and widely accepted. Over 100,000 implants are in use, in about equal numbers of adults and children.
The cochlear implant was pioneered in Melbourne, Australia, by Graeme Clark in the 1970s for his deaf father. The implant consists of three external components and two internal components. The external components are a microphone for picking up sound and converting it into an electrical signal, a speech processor to select certain frequencies and a transmitter to transfer the signal to the internal components through electromagnetic induction. The internal components consist of a receiver/transmitter secured in the bone beneath the skin, which converts the signals into electric impulses and sends them through an internal cable to the cochlea and an array of about 24 electrodes wound through the cochlea. These electrodes in turn send the impulses directly into the brain. The electrodes basically emulate the cilia.
### Section Summary
1. The range of audible frequencies is 20 to 20,000 Hz.
2. Those sounds above 20,000 Hz are ultrasound, whereas those below 20 Hz are infrasound.
3. The perception of frequency is pitch.
4. The perception of intensity is loudness.
5. Loudness has units of phons.
### Conceptual Questions
### Problems & Exercises
# Physics of Hearing
## Ultrasound
### Learning Objectives
By the end of this section, you will be able to:
1. Define acoustic impedance and intensity reflection coefficient.
2. Describe medical and other uses of ultrasound technology.
3. Calculate acoustic impedance using density values and the speed of ultrasound.
4. Calculate the velocity of a moving object using Doppler-shifted ultrasound.
Any sound with a frequency above 20,000 Hz (or 20 kHz)—that is, above the highest audible frequency—is defined to be ultrasound. In practice, it is possible to create ultrasound frequencies up to more than a gigahertz. (Higher frequencies are difficult to create; furthermore, they propagate poorly because they are very strongly absorbed.) Ultrasound has a tremendous number of applications, which range from burglar alarms to use in cleaning delicate objects to the guidance systems of bats. We begin our discussion of ultrasound with some of its applications in medicine, in which it is used extensively both for diagnosis and for therapy.
### Ultrasound in Medical Therapy
Ultrasound, like any wave, carries energy that can be absorbed by the medium carrying it, producing effects that vary with intensity. When focused to intensities of to , ultrasound can be used to shatter gallstones or pulverize cancerous tissue in surgical procedures. (See .) Intensities this great can damage individual cells, variously causing their protoplasm to stream inside them, altering their permeability, or rupturing their walls through cavitation. Cavitation is the creation of vapor cavities in a fluid—the longitudinal vibrations in ultrasound alternatively compress and expand the medium, and at sufficient amplitudes the expansion separates molecules. Most cavitation damage is done when the cavities collapse, producing even greater shock pressures.
Most of the energy carried by high-intensity ultrasound in tissue is converted to thermal energy. In fact, intensities of to are commonly used for deep-heat treatments called ultrasound diathermy. Frequencies of 0.8 to 1 MHz are typical. In both athletics and physical therapy, ultrasound diathermy is most often applied to injured or overworked muscles to relieve pain and improve flexibility. Skill is needed by the therapist to avoid “bone burns” and other tissue damage caused by overheating and cavitation, sometimes made worse by reflection and focusing of the ultrasound by joint and bone tissue.
In some instances, you may encounter a different decibel scale, called the sound pressure level, when ultrasound travels in water or in human and other biological tissues. We shall not use the scale here, but it is notable that numbers for sound pressure levels range 60 to 70 dB higher than you would quote for , the sound intensity level used in this text. Should you encounter a sound pressure level of 220 decibels, then, it is not an astronomically high intensity, but equivalent to about 155 dB—high enough to destroy tissue, but not as unreasonably high as it might seem at first.
### Ultrasound in Medical Diagnostics
When used for imaging, ultrasonic waves are emitted from a transducer, a crystal exhibiting the piezoelectric effect (the expansion and contraction of a substance when a voltage is applied across it, causing a vibration of the crystal). These high-frequency vibrations are transmitted into any tissue in contact with the transducer. Similarly, if a pressure is applied to the crystal (in the form of a wave reflected off tissue layers), a voltage is produced which can be recorded. The crystal therefore acts as both a transmitter and a receiver of sound. Ultrasound is also partially absorbed by tissue on its path, both on its journey away from the transducer and on its return journey. From the time between when the original signal is sent and when the reflections from various boundaries between media are received, (as well as a measure of the intensity loss of the signal), the nature and position of each boundary between tissues and organs may be deduced.
Reflections at boundaries between two different media occur because of differences in a characteristic known as the acoustic impedance $Z$ of each substance. Impedance is defined as
$$Z = \rho v,$$
where $\rho$ is the density of the medium (in $\text{kg/m}^3$) and $v$ is the speed of sound through the medium (in m/s). The units for $Z$ are therefore $\text{kg/(m}^2\cdot\text{s)}$.
shows the density and speed of sound through various media (including various soft tissues) and the associated acoustic impedances. Note that the acoustic impedances for soft tissue do not vary much but that there is a big difference between the acoustic impedance of soft tissue and air and also between soft tissue and bone.
At the boundary between media of different acoustic impedances, some of the wave energy is reflected and some is transmitted. The greater the difference in acoustic impedance between the two media, the greater the reflection and the smaller the transmission.
The intensity reflection coefficient $a$ is defined as the ratio of the intensity of the reflected wave relative to the incident (transmitted) wave. This statement can be written mathematically as
$$a = \frac{(Z_2 - Z_1)^2}{(Z_1 + Z_2)^2},$$
where $Z_1$ and $Z_2$ are the acoustic impedances of the two media making up the boundary. A reflection coefficient of zero (corresponding to total transmission and no reflection) occurs when the acoustic impedances of the two media are the same. An impedance “match” (no reflection) provides an efficient coupling of sound energy from one medium to another. The image formed in an ultrasound is made by tracking reflections (as shown in ) and mapping the intensity of the reflected sound waves in a two-dimensional plane.
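Putting the impedance and reflection-coefficient definitions together in a short Python sketch makes the air-tissue mismatch concrete. The density and sound-speed values below are illustrative round numbers, not taken from the table referenced in the text:

```python
def acoustic_impedance(density_kg_m3, speed_m_s):
    """Z = rho * v, in kg/(m^2*s)."""
    return density_kg_m3 * speed_m_s

def intensity_reflection_coeff(z1, z2):
    """a = (Z2 - Z1)^2 / (Z1 + Z2)^2."""
    return (z2 - z1) ** 2 / (z1 + z2) ** 2

# Illustrative round-number values for air and soft tissue
Z_air = acoustic_impedance(1.29, 331.0)        # ~4.3e2 kg/(m^2*s)
Z_tissue = acoustic_impedance(1050.0, 1540.0)  # ~1.6e6 kg/(m^2*s)

print(f"Air/tissue reflection coefficient: {intensity_reflection_coeff(Z_air, Z_tissue):.4f}")
# Nearly 1: almost all the energy reflects, which is why coupling gel is used.
```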
The applications of ultrasound in medical diagnostics have produced untold benefits with no known risks. Diagnostic intensities are too low (about ) to cause thermal damage. More significantly, ultrasound has been in use for several decades and detailed follow-up studies do not show evidence of ill effects, quite unlike the case for x-rays.
The most common ultrasound applications produce an image like that shown in . The speaker-microphone broadcasts a directional beam, sweeping the beam across the area of interest. This is accomplished by having multiple ultrasound sources in the probe’s head, which are phased to interfere constructively in a given, adjustable direction. Echoes are measured as a function of position as well as depth. A computer constructs an image that reveals the shape and density of internal structures.
How much detail can ultrasound reveal? The image in is typical of low-cost systems, but that in shows the remarkable detail possible with more advanced systems, including 3D imaging. Ultrasound today is commonly used in prenatal care. Such imaging can be used to see if the fetus is developing at a normal rate, and help in the determination of serious problems early in the pregnancy. Ultrasound is also in wide use to image the chambers of the heart and the flow of blood within the beating heart, using the Doppler effect (echocardiology).
Whenever a wave is used as a probe, it is very difficult to detect details smaller than its wavelength $\lambda$. Indeed, current technology cannot do quite this well. Abdominal scans may use a 7-MHz frequency, and the speed of sound in tissue is about 1540 m/s—so the wavelength limit to detail would be $\lambda = \frac{v_w}{f} = \frac{1540\ \text{m/s}}{7 \times 10^{6}\ \text{Hz}} = 0.22\ \text{mm}$. In practice, 1-mm detail is attainable, which is sufficient for many purposes. Higher-frequency ultrasound would allow greater detail, but it does not penetrate as well as lower frequencies do. The accepted rule of thumb is that you can effectively scan to a depth of about $500\lambda$ into tissue. For 7 MHz, this penetration limit is $500 \times 0.22\ \text{mm}$, which is 0.11 m. Higher frequencies may be employed in smaller organs, such as the eye, but are not practical for looking deep into the body.
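The detail/penetration trade-off described above can be checked directly. The sketch below assumes the $500\lambda$ rule of thumb and the values quoted in the paragraph (7 MHz, 1540 m/s); it is an illustration, not a clinical guideline.

```python
# Minimal sketch of the wavelength limit and the 500*lambda penetration rule.

speed_in_tissue = 1540.0      # m/s, speed of sound in soft tissue
frequency = 7.0e6             # Hz, typical abdominal-scan frequency

wavelength = speed_in_tissue / frequency          # smallest resolvable detail ~ lambda
penetration_depth = 500 * wavelength              # rule-of-thumb scanning depth

print(f"wavelength  = {wavelength * 1e3:.2f} mm")   # about 0.22 mm
print(f"penetration = {penetration_depth:.2f} m")   # about 0.11 m
```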
In addition to shape information, ultrasonic scans can produce density information superior to that found in X-rays, because the intensity of a reflected sound is related to changes in density. Sound is most strongly reflected at places where density changes are greatest.
Another major use of ultrasound in medical diagnostics is to detect motion and determine velocity through the Doppler shift of an echo, known as Doppler-shifted ultrasound. This technique is used to monitor fetal heartbeat, measure blood velocity, and detect occlusions in blood vessels, for example. (See .) The magnitude of the Doppler shift in an echo is directly proportional to the velocity of whatever reflects the sound. Because an echo is involved, there is actually a double shift. The first occurs because the reflector (say a fetal heart) is a moving observer and receives a Doppler-shifted frequency. The reflector then acts as a moving source, producing a second Doppler shift.
A clever technique is used to measure the Doppler shift in an echo. The frequency of the echoed sound is superimposed on the broadcast frequency, producing beats. The beat frequency is $F_B = \left| f_1 - f_2 \right|$, and so it is directly proportional to the Doppler shift ($f_1 - f_2$) and hence to the reflector’s velocity. The advantage of this technique is that the Doppler shift is small (because the reflector’s velocity is small), so that great accuracy would be needed to measure the shift directly. But measuring the beat frequency is easy, and it is not affected if the broadcast frequency varies somewhat. Furthermore, the beat frequency is in the audible range and can be amplified for audio feedback to the medical observer.
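A short sketch of this double shift, using an assumed broadcast frequency of 2.5 MHz and an assumed blood speed of 0.20 m/s (neither value comes from the text), shows that the resulting beat frequency indeed lands in the audible range.

```python
# Minimal sketch (assumed values): the double Doppler shift for an echo off
# blood moving toward the transducer, and the beat frequency F_B = |f_echo - f0|.

def echo_frequency(f_broadcast, v_wave, v_reflector):
    """Reflector first acts as a moving observer, then as a moving source."""
    f_received = f_broadcast * (v_wave + v_reflector) / v_wave   # moving observer
    return f_received * v_wave / (v_wave - v_reflector)          # moving source

f0 = 2.5e6        # Hz, assumed broadcast frequency
v_w = 1540.0      # m/s, speed of sound in tissue
v_blood = 0.20    # m/s, assumed blood speed toward the transducer

beat = echo_frequency(f0, v_w, v_blood) - f0
print(f"beat frequency ~ {beat:.0f} Hz")   # a few hundred hertz: audible
```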
### Section Summary
1. The acoustic impedance is defined as $Z = \rho v$, where $\rho$ is the density of a medium through which the sound travels and $v$ is the speed of sound through that medium.
2. The intensity reflection coefficient $a$, a measure of the ratio of the intensity of the wave reflected off a boundary between two media relative to the intensity of the incident wave, is given by $a = \frac{\left(Z_2 - Z_1\right)^2}{\left(Z_1 + Z_2\right)^2}$.
3. The intensity reflection coefficient is a unitless quantity.
### Conceptual Questions
### Problems & Exercises
Unless otherwise indicated, for problems in this section, assume that the speed of sound through human tissues is 1540 m/s.
# Electric Charge and Electric Field
## Connection for AP® Courses
The image of American politician and scientist Benjamin Franklin (1706–1790) flying a kite in a thunderstorm (shown in ) is familiar to many schoolchildren. In this experiment, Franklin demonstrated a connection between lightning and static electricity. Sparks were drawn from a key hung on a kite string during an electrical storm. These sparks were like those produced by static electricity, such as the spark that jumps from your finger to a metal doorknob after you walk across a wool carpet. Much has been written about Franklin. His experiments were only part of the life of a man who was a scientist, inventor, revolutionary, statesman, and writer. Franklin's experiments were not performed in isolation, nor were they the only ones to reveal connections.
When Benjamin Franklin demonstrated that lightning was related to static electricity, he made a connection that is now part of the evidence that all directly experienced forces (except gravitational force) are manifestations of the electromagnetic force. For example, the Italian scientist Luigi Galvani (1737–1798) performed a series of experiments in which static electricity was used to stimulate contractions of leg muscles of dead frogs, an effect already known in humans subjected to static discharges. But Galvani also found that if he joined one end of two metal wires (say copper and zinc) and touched the other ends of the wires to muscles, he produced the same effect in frogs as static discharge. Alessandro Volta (1745–1827), partly inspired by Galvani's work, experimented with various combinations of metals and developed the battery.
During the same era, other scientists made progress in discovering fundamental connections. The periodic table was developed as systematic properties of the elements were discovered. This influenced the development and refinement of the concept of atoms as the basis of matter. Such submicroscopic descriptions of matter also help explain a great deal more. Atomic and molecular interactions, such as the forces of friction, cohesion, and adhesion, are now known to be manifestations of the electromagnetic force.
Static electricity is just one aspect of the electromagnetic force, which also includes moving electricity and magnetism. All the macroscopic forces that we experience directly, such as the sensations of touch and the tension in a rope, are due to the electromagnetic force, one of the four fundamental forces in nature. The gravitational force, another fundamental force, is actually sensed through the electromagnetic interaction of molecules, such as between those in our feet and those on the top of a bathroom scale. (The other two fundamental forces, the strong nuclear force and the weak nuclear force, cannot be sensed on the human scale.)
This chapter begins the study of electromagnetic phenomena at a fundamental level. The next several chapters will cover static electricity, moving electricity, and magnetism – collectively known as electromagnetism. In this chapter, we begin with the study of electric phenomena due to charges that are at least temporarily stationary, called electrostatics, or static electricity.
The chapter introduces several very important concepts of charge, electric force, and electric field, as well as defining the relationships between these concepts. The charge is defined as a property of a system (Big Idea 1) that can affect its interaction with other charged systems (Enduring Understanding 1.B). The law of conservation of electric charge is also discussed (Essential Knowledge 1.B.1). The two kinds of electric charge are defined as positive and negative, providing an explanation for having positively charged, negatively charged, or neutral objects (containing equal quantities of positive and negative charges) (Essential Knowledge 1.B.2). The discrete nature of the electric charge is introduced in this chapter by defining the elementary charge as the smallest observed unit of charge that can be isolated, which is the electron charge (Essential Knowledge 1.B.3). The concepts of a system (having internal structure) and of an object (having no internal structure) are implicitly introduced to explain charges carried by the electron and proton (Enduring Understanding 1.A, Essential Knowledge 1.A.1).
An electric field is caused by the presence of charged objects (Enduring Understanding 2.C) and can be used to explain interactions between electrically charged objects (Big Idea 2). The electric force represents the effect of an electric field on a charge placed in the field. The magnitude and direction of the electric force are defined by the magnitude and direction of the electric field and magnitude and sign of the charge (Essential Knowledge 2.C.1). The magnitude of the electric field is proportional to the net charge of the objects that created that field (Essential Knowledge 2.C.2). For the special case of a spherically symmetric charged object, the electric field outside the object is radial, and its magnitude varies as the inverse square of the radial distance from the center of that object (Essential Knowledge 2.C.3). The chapter provides examples of vector field maps for various charged systems, including point charges, spherically symmetric charge distributions, and uniformly charged parallel plates (Essential Knowledge 2.C.1, Essential Knowledge 2.C.2). For multiple point charges, the chapter explains how to find the vector field map by adding the electric field vectors of each individual object, including the special case of two equal charges having opposite signs, known as an electric dipole (Essential Knowledge 2.C.4). The special case of two oppositely charged parallel plates with uniformly distributed electric charge when the electric field is perpendicular to the plates and is constant in both magnitude and direction is described in detail, providing many opportunities for problem solving and applications (Essential Knowledge 2.C.5).
The idea that interactions can be described by forces is also reinforced in this chapter (Big Idea 3). Like all other forces that you have learned about so far, electric force is a vector that affects the motion according to Newton's laws (Enduring Understanding 3.A). It is clearly stated in the chapter that electric force appears as a result of interactions between two charged objects (Essential Knowledge 3.A.3, Essential Knowledge 3.C.2). At the macroscopic level the electric force is a long-range force (Enduring Understanding 3.C); however, at the microscopic level many contact forces, such as friction, can be explained by interatomic electric forces (Essential Knowledge 3.C.4). This understanding of friction is helpful when considering properties of conductors and insulators and the transfer of charge by conduction.
Interactions between systems can result in changes in those systems (Big Idea 4). In the case of charged systems, such interactions can lead to changes of electric properties (Enduring Understanding 4.E), such as charge distribution (Essential Knowledge 4.E.3). Any changes are governed by conservation laws (Big Idea 5). Depending on whether the system is closed or open, certain quantities of the system remain the same or changes in those quantities are equal to the amount of transfer of this quantity from or to the system (Enduring Understanding 5.A). The electric charge is one of these quantities (Essential Knowledge 5.A.2). Therefore, the electric charge of a system is conserved (Enduring Understanding 5.C) and the exchange of electric charge between objects in a system does not change the total electric charge of the system (Essential Knowledge 5.C.2).
Big Idea 1 Objects and systems have properties such as mass and charge. Systems may have internal structure.
Enduring Understanding 1.A The internal structure of a system determines many properties of the system.
Essential Knowledge 1.A.1 A system is an object or a collection of objects. Objects are treated as having no internal structure.
Enduring Understanding 1.B Electric charge is a property of an object or system that affects its interactions with other objects or systems containing charge.
Essential Knowledge 1.B.1 Electric charge is conserved. The net charge of a system is equal to the sum of the charges of all the objects in the system.
Essential Knowledge 1.B.2 There are only two kinds of electric charge. Neutral objects or systems contain equal quantities of positive and negative charge, with the exception of some fundamental particles that have no electric charge.
Essential Knowledge 1.B.3 The smallest observed unit of charge that can be isolated is the electron charge, also known as the elementary charge.
Big Idea 2 Fields existing in space can be used to explain interactions.
Enduring Understanding 2.C An electric field is caused by an object with electric charge.
Essential Knowledge 2.C.1 The magnitude of the electric force $F$ exerted on an object with electric charge $q$ by an electric field $\mathbf{E}$ is $F = \left| q \right| E$. The direction of the force is determined by the direction of the field and the sign of the charge, with positively charged objects accelerating in the direction of the field and negatively charged objects accelerating in the direction opposite the field. This should include a vector field map for positive point charges, negative point charges, spherically symmetric charge distribution, and uniformly charged parallel plates.
Essential Knowledge 2.C.2 The magnitude of the electric field vector is proportional to the net electric charge of the object(s) creating that field. This includes positive point charges, negative point charges, spherically symmetric charge distributions, and uniformly charged parallel plates.
Essential Knowledge 2.C.3 The electric field outside a spherically symmetric charged object is radial, and its magnitude varies as the inverse square of the radial distance from the center of that object. Electric field lines are not in the curriculum. Students will be expected to rely only on the rough intuitive sense underlying field lines, wherein the field is viewed as analogous to something emanating uniformly from a source.
Essential Knowledge 2.C.4 The electric field around dipoles and other systems of electrically charged objects (that can be modeled as point objects) is found by vector addition of the field of each individual object. Electric dipoles are treated qualitatively in this course as a teaching analogy to facilitate student understanding of magnetic dipoles.
Essential Knowledge 2.C.5 Between two oppositely charged parallel plates with uniformly distributed electric charge, at points far from the edges of the plates, the electric field is perpendicular to the plates and is constant in both magnitude and direction.
Big Idea 3 The interactions of an object with other objects can be described by forces.
Enduring Understanding 3.A All forces share certain common characteristics when considered by observers in inertial reference frames.
Essential Knowledge 3.A.3 A force exerted on an object is always due to the interaction of that object with another object.
Enduring Understanding 3.C At the macroscopic level, forces can be categorized as either long-range (action-at-a-distance) forces or contact forces.
Essential Knowledge 3.C.2 Electric force results from the interaction of one object that has an electric charge with another object that has an electric charge.
Essential Knowledge 3.C.4 Contact forces result from the interaction of one object touching another object, and they arise from interatomic electric forces. These forces include tension, friction, normal, spring (Physics 1), and buoyant (Physics 2).
Big Idea 4 Interactions between systems can result in changes in those systems.
Enduring Understanding 4.E The electric and magnetic properties of a system can change in response to the presence of, or changes in, other objects or systems.
Essential Knowledge 4.E.3 The charge distribution in a system can be altered by the effects of electric forces produced by a charged object.
Big Idea 5 Changes that occur as a result of interactions are constrained by conservation laws.
Enduring Understanding 5.A Certain quantities are conserved, in the sense that the changes of those quantities in a given system are always equal to the transfer of that quantity to or from the system by all possible interactions with other systems.
Essential Knowledge 5.A.2 For all systems under all circumstances, energy, charge, linear momentum, and angular momentum are conserved.
Enduring Understanding 5.C The electric charge of a system is conserved.
Essential Knowledge 5.C.2 The exchange of electric charges among a set of objects in a system conserves electric charge.
# Electric Charge and Electric Field
## Static Electricity and Charge: Conservation of Charge
### Learning Objectives
By the end of this section, you will be able to:
1. Define electric charge, and describe how the two types of charge interact.
2. Describe three common situations that generate static electricity.
3. State the law of conservation of charge.
What makes plastic wrap cling? Static electricity. Not only are applications of static electricity common these days, its existence has been known since ancient times. The first record of its effects dates to ancient Greeks who noted more than 500 years B.C. that polishing amber temporarily enabled it to attract bits of straw (see ). The very word electric derives from the Greek word for amber (electron).
Many of the characteristics of static electricity can be explored by rubbing things together. Rubbing creates the spark you get from walking across a wool carpet, for example. Static cling generated in a clothes dryer and the attraction of straw to recently polished amber also result from rubbing. Similarly, lightning results from air movements under certain weather conditions. You can also rub a balloon on your hair, and the static electricity created can then make the balloon cling to a wall. We also have to be cautious of static electricity, especially in dry climates. When we pump gasoline, we are warned to discharge ourselves (after sliding across the seat) on a metal surface before grabbing the gas nozzle. Attendants in hospital operating rooms must wear booties with a conductive strip of aluminum foil on the bottoms to avoid creating sparks which may ignite flammable anesthesia gases combined with the oxygen being used.
Some of the most basic characteristics of static electricity include:
1. The effects of static electricity are explained by a physical quantity not previously introduced, called electric charge.
2. There are only two types of charge, one called positive and the other called negative.
3. Like charges repel, whereas unlike charges attract.
4. The force between charges decreases with distance.
How do we know there are two types of electric charge? When various materials are rubbed together in controlled ways, certain combinations of materials always produce one type of charge on one material and the opposite type on the other. By convention, we call one type of charge “positive”, and the other type “negative.” For example, when glass is rubbed with silk, the glass becomes positively charged and the silk negatively charged. Since the glass and silk have opposite charges, they attract one another like clothes that have rubbed together in a dryer. Two glass rods rubbed with silk in this manner will repel one another, since each rod has positive charge on it. Similarly, two silk cloths so rubbed will repel, since both cloths have negative charge. shows how these simple materials can be used to explore the nature of the force between charges.
More sophisticated questions arise. Where do these charges come from? Can you create or destroy charge? Is there a smallest unit of charge? Exactly how does the force depend on the amount of charge and the distance between charges? Such questions obviously occurred to Benjamin Franklin and other early researchers, and they interest us even today.
### Charge Carried by Electrons and Protons
Franklin wrote in his letters and books that he could see the effects of electric charge but did not understand what caused the phenomenon. Today we have the advantage of knowing that normal matter is made of atoms, and that atoms contain positive and negative charges, usually in equal amounts.
shows a simple model of an atom with negative electrons orbiting its positive nucleus. The nucleus is positive due to the presence of positively charged protons. Nearly all charge in nature is due to electrons and protons, which are two of the three building blocks of most matter. (The third is the neutron, which is neutral, carrying no charge.) Other charge-carrying particles are observed in cosmic rays and nuclear decay, and are created in particle accelerators. All but the electron and proton survive only a short time and are quite rare by comparison.
The charges of electrons and protons are identical in magnitude but opposite in sign. Furthermore, the charges of all charged objects in nature are integral multiples of this basic quantity of charge, meaning that all charges are made of combinations of a basic unit of charge. Usually, charges are formed by combinations of electrons and protons. The magnitude of this basic charge is

$$\left| q_e \right| = 1.60 \times 10^{-19}\ \text{C}.$$

The symbol $q$ is commonly used for charge and the subscript $e$ indicates the charge of a single electron (or proton).
The SI unit of charge is the coulomb (C). The number of protons needed to make a charge of 1.00 C is

$$\frac{1.00\ \text{C}}{1.60 \times 10^{-19}\ \text{C/proton}} = 6.25 \times 10^{18}\ \text{protons}.$$

Similarly, $6.25 \times 10^{18}$ electrons have a combined charge of −1.00 coulomb. Just as there is a smallest bit of an element (an atom), there is a smallest bit of charge. There is no directly observed charge smaller than $\left| q_e \right|$ (see Things Great and Small: The Submicroscopic Origin of Charge), and all observed charges are integral multiples of $\left| q_e \right|$.
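The sketch below reproduces this arithmetic and also illustrates charge quantization for an assumed measured charge (the value is purely illustrative).

```python
# Minimal sketch: how many elementary charges make up one coulomb, and a check
# that a given charge is an integer multiple of q_e.

Q_E = 1.60e-19                     # C, magnitude of the elementary charge

protons_per_coulomb = 1.00 / Q_E
print(f"{protons_per_coulomb:.2e} protons carry +1.00 C")   # about 6.25e18

measured_charge = -4.80e-19        # C, assumed value for illustration
n = measured_charge / (-Q_E)
print(f"that charge corresponds to {n:.1f} electrons")      # 3.0 -> quantized
```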
shows a person touching a Van de Graaff generator and receiving excess positive charge. The expanded view of a hair shows the existence of both types of charges but an excess of positive. The repulsion of these positive like charges causes the strands of hair to repel other strands of hair and to stand up. The further blowup shows an artist’s conception of an electron and a proton perhaps found in an atom in a strand of hair.
The electron seems to have no substructure; in contrast, when the substructure of protons is explored by scattering extremely energetic electrons from them, it appears that there are point-like particles inside the proton. These sub-particles, named quarks, have never been directly observed, but they are believed to carry fractional charges as seen in . Charges on electrons and protons and all other directly observable particles are unitary, but these quark substructures carry charges of either $-\tfrac{1}{3}$ or $+\tfrac{2}{3}$ of the unitary charge. There are continuing attempts to observe fractional charge directly and to learn of the properties of quarks, which are perhaps the ultimate substructure of matter.
### Separation of Charge in Atoms
Charges in atoms and molecules can be separated—for example, by rubbing materials together. Some atoms and molecules have a greater affinity for electrons than others and will become negatively charged by close contact in rubbing, leaving the other material positively charged. (See .) Positive charge can similarly be induced by rubbing. Methods other than rubbing can also separate charges. Batteries, for example, use combinations of substances that interact in such a way as to separate charges. Chemical interactions may transfer negative charge from one substance to the other, making one battery terminal negative and leaving the other positive.
No charge is actually created or destroyed when charges are separated as we have been discussing. Rather, existing charges are moved about. In fact, in all situations the total amount of charge is always constant. This universally obeyed law of nature is called the law of conservation of charge.
In more exotic situations, such as in particle accelerators, mass, $m$, can be created from energy in the amount $m = \frac{E}{c^{2}}$. Sometimes, the created mass is charged, such as when an electron is created. Whenever a charged particle is created, another having an opposite charge is always created along with it, so that the total charge created is zero. Usually, the two particles are “matter-antimatter” counterparts. For example, an antielectron would usually be created at the same time as an electron. The antielectron has a positive charge (it is called a positron), and so the total charge created is zero. (See .) All particles have antimatter counterparts with opposite signs. When matter and antimatter counterparts are brought together, they completely annihilate one another. By annihilate, we mean that the mass of the two particles is converted to energy $E$, again obeying the relationship $E = mc^{2}$. Since the two particles have equal and opposite charge, the total charge is zero before and after the annihilation; thus, total charge is conserved.
The law of conservation of charge is absolute—it has never been observed to be violated. Charge, then, is a special physical quantity, joining a very short list of other quantities in nature that are always conserved. Other conserved quantities include energy, momentum, and angular momentum.
### Test Prep for AP Courses
### Section Summary
1. There are only two types of charge, which we call positive and negative.
2. Like charges repel, unlike charges attract, and the force between charges decreases with the square of the distance.
3. The vast majority of positive charge in nature is carried by protons, while the vast majority of negative charge is carried by electrons.
4. The electric charge of one electron is equal in magnitude and opposite in sign to the charge of one proton.
5. An ion is an atom or molecule that has nonzero total charge due to having unequal numbers of electrons and protons.
6. The SI unit for charge is the coulomb (C), with protons and electrons having charges of opposite sign but equal magnitude; the magnitude of this basic charge is $\left| q_e \right| = 1.60 \times 10^{-19}\ \text{C}$.
7. Whenever charge is created or destroyed, equal amounts of positive and negative are involved.
8. Most often, existing charges are separated from neutral objects to obtain some net charge.
9. Both positive and negative charges exist in neutral objects and can be separated by rubbing one object with another. For macroscopic objects, negatively charged means an excess of electrons and positively charged means a depletion of electrons.
10. The law of conservation of charge ensures that whenever a charge is created, an equal charge of the opposite sign is created at the same time.
### Conceptual Questions
### Problems & Exercises
# Electric Charge and Electric Field
## Conductors and Insulators
### Learning Objectives
By the end of this section, you will be able to:
1. Define conductor and insulator, explain the difference, and give examples of each.
2. Describe three methods for charging an object.
3. Explain what happens to an electric force as you move farther from the source.
4. Define polarization.
Some substances, such as metals and salty water, allow charges to move through them with relative ease. Some of the electrons in metals and similar conductors are not bound to individual atoms or sites in the material. These free electrons can move through the material much as air moves through loose sand. Any substance that has free electrons and allows charge to move relatively freely through it is called a conductor. The moving electrons may collide with fixed atoms and molecules, losing some energy, but they can move in a conductor. Superconductors allow the movement of charge without any loss of energy. Salty water and other similar conducting materials contain free ions that can move through them. An ion is an atom or molecule having a positive or negative (nonzero) total charge. In other words, the total number of electrons is not equal to the total number of protons.
Other substances, such as glass, do not allow charges to move through them. These are called insulators. Electrons and ions in insulators are bound in the structure and cannot move easily—in some materials, charge moves many orders of magnitude more slowly than in a conductor. Pure water and dry table salt are insulators, for example, whereas molten salt and salty water are conductors.
### Charging by Contact
shows an electroscope being charged by touching it with a positively charged glass rod. Because the glass rod is an insulator, it must actually touch the electroscope to transfer charge to or from it. (Note that the extra positive charges reside on the surface of the glass rod as a result of rubbing it with silk before starting the experiment.) Since only electrons move in metals, we see that they are attracted to the top of the electroscope. There, some are transferred to the positive rod by touch, leaving the electroscope with a net positive charge.
Electrostatic repulsion in the leaves of the charged electroscope separates them. The electrostatic force has a horizontal component that results in the leaves moving apart as well as a vertical component that is balanced by the gravitational force. Similarly, the electroscope can be negatively charged by contact with a negatively charged object.
### Charging by Induction
It is not necessary to transfer excess charge directly to an object in order to charge it. shows a method of induction wherein a charge is created in a nearby object, without direct contact. Here we see two neutral metal spheres in contact with one another but insulated from the rest of the world. A positively charged rod is brought near one of them, attracting negative charge to that side, leaving the other sphere positively charged.
This is an example of induced polarization of neutral objects. Polarization is the separation of charges in an object that remains neutral. If the spheres are now separated (before the rod is pulled away), each sphere will have a net charge. Note that the object closest to the charged rod receives an opposite charge when charged by induction. Note also that no charge is removed from the charged rod, so that this process can be repeated without depleting the supply of excess charge.
Another method of charging by induction is shown in . The neutral metal sphere is polarized when a charged rod is brought near it. The sphere is then grounded, meaning that a conducting wire is run from the sphere to the ground. Since the earth is large and most ground is a good conductor, it can supply or accept excess charge easily. In this case, electrons are attracted to the sphere through a wire called the ground wire, because it supplies a conducting path to the ground. The ground connection is broken before the charged rod is removed, leaving the sphere with an excess charge opposite to that of the rod. Again, an opposite charge is achieved when charging by induction and the charged rod loses none of its excess charge.
Neutral objects can be attracted to any charged object. The pieces of straw attracted to polished amber are neutral, for example. If you run a plastic comb through your hair, the charged comb can pick up neutral pieces of paper. shows how the polarization of atoms and molecules in neutral objects results in their attraction to a charged object.
When a charged rod is brought near a neutral substance, an insulator in this case, the distribution of charge in atoms and molecules is shifted slightly. Opposite charge is attracted nearer the external charged rod, while like charge is repelled. Since the electrostatic force decreases with distance, the repulsion of like charges is weaker than the attraction of unlike charges, and so there is a net attraction. Thus a positively charged glass rod attracts neutral pieces of paper, as will a negatively charged rubber rod. Some molecules, like water, are polar molecules. Polar molecules have a natural or inherent separation of charge, although they are neutral overall. Polar molecules are particularly affected by other charged objects and show greater polarization effects than molecules with naturally uniform charge distributions.
### Test Prep for AP Courses
### Section Summary
1. Polarization is the separation of positive and negative charges in a neutral object.
2. A conductor is a substance that allows charge to flow freely through its atomic structure.
3. An insulator holds charge within its atomic structure.
4. Objects with like charges repel each other, while those with unlike charges attract each other.
5. A conducting object is said to be grounded if it is connected to the Earth through a conductor. Grounding allows transfer of charge to and from the earth’s large reservoir.
6. Objects can be charged by contact with another charged object and obtain the same sign charge.
7. If an object is temporarily grounded, it can be charged by induction, and obtains the opposite sign charge.
8. Polarized objects have their positive and negative charges concentrated in different areas, giving them a non-symmetrical charge.
9. Polar molecules have an inherent separation of charge.
### Conceptual Questions
### Problems & Exercises
# Electric Charge and Electric Field
## Conductors and Electric Fields in Static Equilibrium
### Learning Objectives
By the end of this section, you will be able to:
1. List the three properties of a conductor in electrostatic equilibrium.
2. Explain the effect of an electric field on free charges in a conductor.
3. Explain why no electric field may exist inside a conductor.
4. Describe the electric field surrounding Earth.
5. Explain what happens to an electric field applied to an irregular conductor.
6. Describe how a lightning rod works.
7. Explain how a metal car may protect passengers inside from the dangerous electric fields caused by a downed line touching the car.
Conductors contain free charges that move easily. When excess charge is placed on a conductor or the conductor is put into a static electric field, charges in the conductor quickly respond to reach a steady state called electrostatic equilibrium.
shows the effect of an electric field on free charges in a conductor. The free charges move until the field is perpendicular to the conductor’s surface. There can be no component of the field parallel to the surface in electrostatic equilibrium, since, if there were, it would produce further movement of charge. A positive free charge is shown, but free charges can be either positive or negative and are, in fact, negative in metals. The motion of a positive charge is equivalent to the motion of a negative charge in the opposite direction.
A conductor placed in an electric field will be polarized. shows the result of placing a neutral conductor in an originally uniform electric field. The field becomes stronger near the conductor but entirely disappears inside it.
The properties of a conductor are consistent with the situations already discussed and can be used to analyze any conductor in electrostatic equilibrium. This can lead to some interesting new insights, such as described below.
How can a very uniform electric field be created? Consider a system of two metal plates with opposite charges on them, as shown in . The properties of conductors in electrostatic equilibrium indicate that the electric field between the plates will be uniform in strength and direction. Except near the edges, the excess charges distribute themselves uniformly, producing field lines that are uniformly spaced (hence uniform in strength) and perpendicular to the surfaces (hence uniform in direction, since the plates are flat). The edge effects are less important when the plates are close together.
### Earth’s Electric Field
A near uniform electric field of approximately 150 N/C, directed downward, surrounds Earth, with the magnitude increasing slightly as we get closer to the surface. What causes the electric field? At around 100 km above the surface of Earth we have a layer of charged particles, called the ionosphere. The ionosphere is responsible for a range of phenomena including the electric field surrounding Earth. In fair weather the ionosphere is positive and the Earth largely negative, maintaining the electric field ((a)).
In storm conditions clouds form and localized electric fields can be larger and reversed in direction ((b)). The exact charge distributions depend on the local conditions, and variations of (b) are possible.
If the electric field is sufficiently large, the insulating properties of the surrounding material break down and it becomes conducting. For air this occurs at around $3 \times 10^{6}$ N/C. The air ionizes, ions and electrons recombine, and we get discharge in the form of lightning sparks and corona discharge.
### Electric Fields on Uneven Surfaces
So far we have considered excess charges on a smooth, symmetrical conductor surface. What happens if a conductor has sharp corners or is pointed? Excess charges on a nonuniform conductor become concentrated at the sharpest points. Additionally, excess charge may move on or off the conductor at the sharpest points.
To see how and why this happens, consider the charged conductor in . The electrostatic repulsion of like charges is most effective in moving them apart on the flattest surface, and so they become least concentrated there. This is because the forces between identical pairs of charges at either end of the conductor are identical, but the components of the forces parallel to the surfaces are different. The component parallel to the surface is greatest on the flattest surface and, hence, more effective in moving the charge.
The same effect is produced on a conductor by an externally applied electric field, as seen in (c). Since the field lines must be perpendicular to the surface, more of them are concentrated on the most curved parts.
### Applications of Conductors
On a very sharply curved surface, such as shown in , the charges are so concentrated at the point that the resulting electric field can be great enough to remove them from the surface. This can be useful.
Lightning rods work best when they are most pointed. The large charges created in storm clouds induce an opposite charge on a building that can result in a lightning bolt hitting the building. The induced charge is bled away continually by a lightning rod, preventing the more dramatic lightning strike.
Of course, we sometimes wish to prevent the transfer of charge rather than to facilitate it. In that case, the conductor should be very smooth and have as large a radius of curvature as possible. (See .) Smooth surfaces are used on high-voltage transmission lines, for example, to avoid leakage of charge into the air.
Another device that makes use of some of these principles is a Faraday cage. This is a metal shield that encloses a volume. All electrical charges will reside on the outside surface of this shield, and there will be no electrical field inside. A Faraday cage is used to prohibit stray electrical fields in the environment from interfering with sensitive measurements, such as the electrical signals inside a nerve cell.
During electrical storms if you are driving a car, it is best to stay inside the car as its metal body acts as a Faraday cage with zero electrical field inside. If in the vicinity of a lightning strike, its effect is felt on the outside of the car and the inside is unaffected, provided you remain totally inside. This is also true if an active (“hot”) electrical wire was broken (in a storm or an accident) and fell on your car.
### Test Prep for AP Courses
### Section Summary
1. A conductor allows free charges to move about within it.
2. The electrical forces around a conductor will cause free charges to move around inside the conductor until static equilibrium is reached.
3. Any excess charge will collect along the surface of a conductor.
4. Conductors with sharp corners or points will collect more charge at those points.
5. A lightning rod is a conductor with sharply pointed ends that collect excess charge on the building caused by an electrical storm and allow it to dissipate back into the air.
6. Electrical storms result when the electrical field of Earth’s surface in certain locations becomes more strongly charged, due to changes in the insulating effect of the air.
7. A Faraday cage acts like a shield around an object, preventing electric charge from penetrating inside.
### Conceptual Questions
### Problems & Exercises
# Electric Charge and Electric Field
## Coulomb’s Law
### Learning Objectives
By the end of this section, you will be able to:
1. State Coulomb’s law in terms of how the electrostatic force changes with the distance between two objects.
2. Calculate the electrostatic force between two point charges, such as electrons or protons.
3. Compare the electrostatic force to the gravitational attraction for a proton and an electron; for a human and the Earth.
Through the work of scientists in the late 18th century, the main features of the electrostatic force—the existence of two types of charge, the observation that like charges repel, unlike charges attract, and the decrease of force with distance—were eventually refined, and expressed as a mathematical formula. The mathematical formula for the electrostatic force is called Coulomb’s law after the French physicist Charles Coulomb (1736–1806), who performed experiments and first proposed a formula to calculate it.
Although the formula for Coulomb’s law is simple, it was no mean task to prove it. The experiments Coulomb did, with the primitive equipment then available, were difficult. Modern experiments have verified Coulomb’s law to great precision. For example, it has been shown that the force is inversely proportional to the square of the distance between two objects to extremely high accuracy. No exceptions have ever been found, even at the small distances within the atom.
As the example implies, gravitational force is completely negligible on a small scale, where the interactions of individual charged particles are important. On a large scale, such as between the Earth and a person, the reverse is true. Most objects are nearly electrically neutral, and so attractive and repulsive Coulomb forces nearly cancel. Gravitational force on a large scale dominates interactions between large objects because it is always attractive, while Coulomb forces tend to cancel.
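For a concrete comparison, the sketch below evaluates both forces for an electron and a proton separated by a hydrogen-atom-scale distance of $0.53 \times 10^{-10}$ m (a separation assumed here for illustration); because both forces fall off as $1/r^{2}$, the ratio is independent of the separation chosen.

```python
# Minimal sketch: Coulomb force vs. gravitational force between an electron
# and a proton. The separation r is an assumed illustrative value.

K = 8.99e9        # N*m^2/C^2, Coulomb constant
G = 6.67e-11      # N*m^2/kg^2, gravitational constant
Q_E = 1.60e-19    # C, elementary charge
M_E, M_P = 9.11e-31, 1.67e-27   # kg, electron and proton masses
r = 0.53e-10      # m, roughly the Bohr radius

f_coulomb = K * Q_E**2 / r**2
f_gravity = G * M_E * M_P / r**2
print(f"Coulomb force      : {f_coulomb:.2e} N")
print(f"gravitational force: {f_gravity:.2e} N")
print(f"ratio              : {f_coulomb / f_gravity:.2e}")   # ~2e39
```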
### Test Prep for AP Courses
### Section Summary
1. Frenchman Charles Coulomb was the first to publish the mathematical equation that describes the electrostatic force between two objects.
2. Coulomb’s law gives the magnitude of the force between point charges. It is $F = k\frac{\left| q_1 q_2 \right|}{r^{2}}$, where $q_1$ and $q_2$ are the two point charges, $r$ is the distance between them, and the Coulomb constant is $k \approx 8.99 \times 10^{9}\ \text{N} \cdot \text{m}^{2}/\text{C}^{2}$.
3. This Coulomb force is extremely basic, since most charges are due to point-like particles. It is responsible for all electrostatic effects and underlies most macroscopic forces.
4. The Coulomb force is extraordinarily strong compared with the gravitational force, another basic force—but unlike gravitational force it can cancel, since it can be either attractive or repulsive.
5. The electrostatic force between two subatomic particles is far greater than the gravitational force between the same two particles.
### Conceptual Questions
### Problems & Exercises
# Electric Charge and Electric Field
## Electric Field: Concept of a Field Revisited
### Learning Objectives
By the end of this section, you will be able to:
1. Describe a force field and calculate the strength of an electric field due to a point charge.
2. Calculate the force exerted on a test charge by an electric field.
3. Explain the relationship between electrical force (F) on a test charge and electrical field strength (E).
Contact forces, such as between a baseball and a bat, are explained on the small scale by the interaction of the charges in atoms and molecules in close proximity. They interact through forces that include the Coulomb force. Action at a distance is a force between objects that are not close enough for their atoms to “touch.” That is, they are separated by more than a few atomic diameters.
For example, a charged rubber comb attracts neutral bits of paper from a distance via the Coulomb force. It is very useful to think of an object being surrounded in space by a force field. The force field carries the force to another object (called a test object) some distance away.
### Concept of a Field
A field is a way of conceptualizing and mapping the force that surrounds any object and acts on another object at a distance without apparent physical connection. For example, the gravitational field surrounding the earth (and all other masses) represents the gravitational force that would be experienced if another mass were placed at a given point within the field.
In the same way, the Coulomb force field surrounding any charge extends throughout space. Using Coulomb’s law, $F = k\frac{\left| q_1 q_2 \right|}{r^{2}}$, its magnitude is given by the equation $F = k\frac{\left| qQ \right|}{r^{2}}$, for a point charge (a particle having a charge $Q$) acting on a test charge $q$ at a distance $r$ (see ). Both the magnitude and direction of the Coulomb force field depend on $Q$ and the test charge $q$.
To simplify things, we would prefer to have a field that depends only on $Q$ and not on the test charge $q$. The electric field $\mathbf{E}$ is defined in such a manner that it represents only the charge creating it and is unique at every point in space. Specifically, the electric field is defined to be the ratio of the Coulomb force to the test charge:

$$\mathbf{E} = \frac{\mathbf{F}}{q},$$

where $\mathbf{F}$ is the electrostatic force (or Coulomb force) exerted on a positive test charge $q$. It is understood that $\mathbf{E}$ is in the same direction as $\mathbf{F}$. It is also assumed that $q$ is so small that it does not alter the charge distribution creating the electric field. The units of electric field are newtons per coulomb (N/C). If the electric field is known, then the electrostatic force on any charge $q$ is simply obtained by multiplying charge times electric field, or $\mathbf{F} = q\mathbf{E}$. Consider the electric field due to a point charge $Q$. According to Coulomb’s law, the force it exerts on a test charge $q$ is $F = k\frac{\left| qQ \right|}{r^{2}}$. Thus the magnitude of the electric field, $E$, for a point charge is

$$E = \frac{F}{q} = k\frac{\left| qQ \right|}{qr^{2}}.$$

Since the test charge cancels, we see that

$$E = k\frac{\left| Q \right|}{r^{2}}.$$

The electric field is thus seen to depend only on the charge $Q$ and the distance $r$; it is completely independent of the test charge $q$.
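The following sketch evaluates these definitions numerically for an assumed 2.0-nC point charge at a distance of 5.0 cm (values chosen only for illustration), and shows that while the force scales with the test charge, the field itself does not.

```python
# Minimal sketch of the definitions above: E = k|Q|/r^2 for a point charge,
# and F = qE for any charge placed in that field. Values are assumed.

K = 8.99e9                       # N*m^2/C^2, Coulomb constant

def field_of_point_charge(Q, r):
    """Magnitude of E a distance r from point charge Q: E = k|Q|/r^2."""
    return K * abs(Q) / r**2

def force_on_charge(q, E):
    """Magnitude of the electrostatic force on charge q in field E."""
    return abs(q) * E

Q = 2.0e-9                       # C, source charge (assumed)
r = 0.050                        # m, distance from the charge
E = field_of_point_charge(Q, r)
print(f"E = {E:.1f} N/C")        # ~7190 N/C

# The field does not depend on the test charge: the force simply scales with q.
for q_test in (1.0e-12, 1.0e-9, 1.0e-6):
    print(f"q = {q_test:.0e} C -> F = {force_on_charge(q_test, E):.2e} N")
```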
### Test Prep for AP Courses
### Section Summary
1. The electrostatic force field surrounding a charged object extends out into space in all directions.
2. The electrostatic force exerted by a point charge $Q$ on a test charge $q$ at a distance $r$ depends on the magnitude of both charges, as well as the distance between the two.
3. The electric field $\mathbf{E}$ is defined to be $\mathbf{E} = \frac{\mathbf{F}}{q}$, where $\mathbf{F}$ is the Coulomb or electrostatic force exerted on a small positive test charge $q$.
4. The magnitude of the electric field created by a point charge $Q$ is $E = k\frac{\left| Q \right|}{r^{2}}$, where $r$ is the distance from $Q$ and $k \approx 8.99 \times 10^{9}\ \text{N} \cdot \text{m}^{2}/\text{C}^{2}$.
### Conceptual Questions
### Problem Exercises
# Electric Charge and Electric Field
## Electric Field Lines: Multiple Charges
### Learning Objectives
By the end of this section, you will be able to:
1. Calculate the total force (magnitude and direction) exerted on a test charge from more than one charge
2. Describe an electric field diagram of a positive point charge; of a negative point charge with twice the magnitude of positive charge
3. Draw the electric field lines between two points of the same charge; between two points of opposite charge.
Drawings using lines to represent electric fields around charged objects are very useful in visualizing field strength and direction. Since the electric field has both magnitude and direction, it is a vector. Like all vectors, the electric field can be represented by an arrow that has length proportional to its magnitude and that points in the correct direction. (We have used arrows extensively to represent force vectors, for example.)
shows two pictorial representations of the same electric field created by a positive point charge . (b) shows the standard representation using continuous lines. (a) shows numerous individual arrows with each arrow representing the force on a test charge . Field lines are essentially a map of infinitesimal force vectors.
Note that the electric field is defined for a positive test charge $q$, so that the field lines point away from a positive charge and toward a negative charge. (See .) The electric field strength is exactly proportional to the number of field lines per unit area, since the magnitude of the electric field for a point charge is $E = k\frac{\left| Q \right|}{r^{2}}$ and the area of a sphere of radius $r$ is proportional to $r^{2}$. This pictorial representation, in which field lines represent the direction and their closeness (that is, their areal density or the number of lines crossing a unit area) represents strength, is used for all fields: electrostatic, gravitational, magnetic, and others.
In many situations, there are multiple charges. The total electric field created by multiple charges is the vector sum of the individual fields created by each charge. The following example shows how to add electric field vectors.
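A minimal numerical sketch of this superposition (not the worked example referenced above, and with assumed charge values and positions) adds the field vectors of a +3 nC and a −3 nC charge at a point above the midpoint between them.

```python
# Minimal sketch of the superposition principle: the total field at a point is
# the vector sum of the fields from each charge. Positions/charges are assumed.
import math

K = 8.99e9   # N*m^2/C^2, Coulomb constant

def field_at(point, charge_pos, Q):
    """E-field vector (Ex, Ey) at `point` due to point charge Q at `charge_pos`."""
    dx, dy = point[0] - charge_pos[0], point[1] - charge_pos[1]
    r = math.hypot(dx, dy)
    magnitude = K * Q / r**2          # sign of Q sets the direction below
    return (magnitude * dx / r, magnitude * dy / r)

P = (0.0, 0.10)                                   # field point, 10 cm above origin
charges = [((-0.05, 0.0), +3.0e-9),               # +3 nC at x = -5 cm
           ((+0.05, 0.0), -3.0e-9)]               # -3 nC at x = +5 cm (a dipole)

Ex = sum(field_at(P, pos, Q)[0] for pos, Q in charges)
Ey = sum(field_at(P, pos, Q)[1] for pos, Q in charges)
print(f"E = ({Ex:.1f}, {Ey:.1f}) N/C, |E| = {math.hypot(Ex, Ey):.1f} N/C")
# By symmetry the total field here points from the positive toward the
# negative charge, parallel to the line joining them.
```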
shows how the electric field from two point charges can be drawn by finding the total field at representative points and drawing electric field lines consistent with those points. While the electric fields from multiple charges are more complex than those of single charges, some simple features are easily noticed.
For example, the field is weaker between like charges, as shown by the lines being farther apart in that region. (This is because the fields from each charge exert opposing forces on any charge placed between them.) (See and (a).) Furthermore, at a great distance from two like charges, the field becomes identical to the field from a single, larger charge.
(b) shows the electric field of two unlike charges. The field is stronger between the charges. In that region, the fields from each charge are in the same direction, and so their strengths add. The field of two unlike charges is weak at large distances, because the fields of the individual charges are in opposite directions and so their strengths subtract. At very large distances, the field of two unlike charges looks like that of a smaller single charge.
We use electric field lines to visualize and analyze electric fields (the lines are a pictorial tool, not a physical entity in themselves). The properties of electric field lines for any charge distribution can be summarized as follows:
1. Field lines must begin on positive charges and terminate on negative charges, or at infinity in the hypothetical case of isolated charges.
2. The number of field lines leaving a positive charge or entering a negative charge is proportional to the magnitude of the charge.
3. The strength of the field is proportional to the closeness of the field lines—more precisely, it is proportional to the number of lines per unit area perpendicular to the lines.
4. The direction of the electric field is tangent to the field line at any point in space.
5. Field lines can never cross.
The last property means that the field is unique at any point. The field line represents the direction of the field; so if they crossed, the field would have two directions at that location (an impossibility if the field is unique).
### Test Prep for AP Courses
### Section Summary
1. Drawings of electric field lines are useful visual tools. The properties of electric field lines for any charge distribution are that:
2. Field lines must begin on positive charges and terminate on negative charges, or at infinity in the hypothetical case of isolated charges.
3. The number of field lines leaving a positive charge or entering a negative charge is proportional to the magnitude of the charge.
4. The strength of the field is proportional to the closeness of the field lines—more precisely, it is proportional to the number of lines per unit area perpendicular to the lines.
5. The direction of the electric field is tangent to the field line at any point in space.
6. Field lines can never cross.
### Conceptual Questions
### Problem Exercises
# Electric Charge and Electric Field
## Electric Forces in Biology
### Learning Objectives
By the end of this section, you will be able to:
1. Describe how a water molecule is polar.
2. Explain electrostatic screening by a water molecule within a living cell.
Classical electrostatics has an important role to play in modern molecular biology. Large molecules such as proteins, nucleic acids, and so on—so important to life—are usually electrically charged. DNA itself is highly charged; it is the electrostatic force that not only holds the molecule together but gives the molecule structure and strength. is a schematic of the DNA double helix.
The four nucleotide bases are given the symbols A (adenine), C (cytosine), G (guanine), and T (thymine). The order of the four bases varies in each strand, but the pairing between bases is always the same. C and G are always paired and A and T are always paired, which helps to preserve the order of bases in cell division (mitosis) so as to pass on the correct genetic information. Since the Coulomb force drops with distance ($F \propto 1/r^{2}$), the distances between the base pairs must be small enough that the electrostatic force is sufficient to hold them together.
DNA is a highly charged molecule, carrying about $2q_e$ (where $q_e$ is the fundamental charge) per $0.3 \times 10^{-9}$ m of its length. The distance separating the two strands that make up the DNA structure is about 1 nm, while the distance separating the individual atoms within each base is about 0.3 nm.
One might wonder why electrostatic forces do not play a larger role in biology than they do if we have so many charged molecules. The reason is that the electrostatic force is “diluted” due to screening between molecules. This is due to the presence of other charges in the cell.
### Polarity of Water Molecules
The best example of this charge screening is the water molecule, represented as $\text{H}_2\text{O}$. Water is a strongly polar molecule. Its 10 electrons (8 from the oxygen atom and 2 from the two hydrogen atoms) tend to remain closer to the oxygen nucleus than the hydrogen nuclei. This creates two centers of equal and opposite charges—what is called a dipole, as illustrated in . The magnitude of the dipole is called the dipole moment.
These two centers of charge will terminate some of the electric field lines coming from a free charge, as on a DNA molecule. This results in a reduction in the strength of the Coulomb interaction. One might say that screening makes the Coulomb force a short range force rather than long range.
### Cell Membranes
Other ions of importance in biology that can reduce or screen Coulomb interactions are $\text{Na}^{+}$, $\text{K}^{+}$, and $\text{Cl}^{-}$. These ions are located both inside and outside of living cells. The movement of these ions through cell membranes is crucial to the motion of nerve impulses through nerve axons.
Recent studies of electrostatics in biology seem to show that electric fields in cells can be extended over larger distances, in spite of screening, by “microtubules” within the cell. These microtubules are hollow tubes composed of proteins that guide the movement of chromosomes when cells divide and the motion of organelles within the cell, and provide mechanisms for the motion of some cells (acting as motors).
You are likely familiar with the role of electrical signals in nerve conduction and the importance of charges in cardiac and related activity. Changes in electrical properties are also essential in core biological processes. Ernest Everett Just, whose expertise in understanding and handling egg cells led to a number of critical experimental discoveries, investigated the role of the cell membrane in reproductive fertilization. In one key experiment, Just established that the egg membrane undergoes a depolarizing "wave of negativity" the moment it fuses with a sperm cell. This change in charge is now known as the "fast block" that ensures that only one sperm cell fuses with an egg cell and is critical for embryonic development.
### Bioelectricity and Wound Healing
Just as electrical forces drive activities in healthy cells and systems, they are also critical in damaged ones. Scientists have long known that injuries or infections are managed by the body through various responses, including increased white blood cell concentrations, swelling, and tissue repair. For example, human cells damaged by wounds heal through a complex process. But what triggers it?
Physicists and biologists working together at Vanderbilt University used an ultra-precise laser to uncover the processes organisms use to repair damage. Lead researchers Andrea Page-Degraw and Shane Hutson and study author Erica Shannon discovered that immediately upon damage, cells release calcium ions and eventually other molecules, driving an electrochemical response that initiates the healing process. Shannon notes that different types of damage lead to different chemical releases, demonstrating how organisms may initiate specific responses to best address the injury.
While far more research is required to understand the triggering and response method, other research indicates that bioelectricity is highly involved in wound healing. Several studies have indicated that precise and low-level electrical stimulation of wounds (such as those from surgeries) leads to faster healing. While the mechanisms are not fully understood, electrical stimulation is a growing area of research and practice in medicine.
### Section Summary
1. Many molecules in living organisms, such as DNA, carry a charge.
2. An uneven distribution of the positive and negative charges within a polar molecule produces a dipole.
3. The effect of a Coulomb field generated by a charged object may be reduced or blocked by other nearby charged objects.
4. Biological systems contain water, and because water molecules are polar, they have a strong effect on other molecules in living systems.
### Conceptual Question
# Electric Charge and Electric Field
## Applications of Electrostatics
### Learning Objectives
By the end of this section, you will be able to:
1. Name several real-world applications of the study of electrostatics.
The study of electrostatics has proven useful in many areas. This module covers just a few of the many applications of electrostatics.
### The Van de Graaff Generator
Van de Graaff generators (or Van de Graaffs) are not only spectacular devices used to demonstrate high voltage due to static electricity—they are also used for serious research. The first was built by Robert Van de Graaff in 1931 (based on original suggestions by Lord Kelvin) for use in nuclear physics research. shows a schematic of a large research version. Van de Graaffs utilize both smooth and pointed surfaces, and conductors and insulators to generate large static charges and, hence, large voltages.
A very large excess charge can be deposited on the sphere, because it moves quickly to the outer surface. Practical limits arise because the large electric fields polarize and eventually ionize surrounding materials, creating free charges that neutralize excess charge or allow it to escape. Nevertheless, voltages of 15 million volts are well within practical limits.
### Xerography
Most copy machines use an electrostatic process called xerography—a word coined from the Greek words xeros for dry and graphos for writing. The heart of the process is shown in simplified form in .
A selenium-coated aluminum drum is sprayed with positive charge from points on a device called a corotron. Selenium is a substance with an interesting property—it is a photoconductor. That is, selenium is an insulator when in the dark and a conductor when exposed to light.
In the first stage of the xerography process, the conducting aluminum drum is grounded so that a negative charge is induced under the thin layer of uniformly positively charged selenium. In the second stage, the surface of the drum is exposed to the image of whatever is to be copied. Where the image is light, the selenium becomes conducting, and the positive charge is neutralized. In dark areas, the positive charge remains, and so the image has been transferred to the drum.
The third stage takes a dry black powder, called toner, and sprays it with a negative charge so that it will be attracted to the positive regions of the drum. Next, a blank piece of paper is given a greater positive charge than on the drum so that it will pull the toner from the drum. Finally, the paper and electrostatically held toner are passed through heated pressure rollers, which melt and permanently adhere the toner within the fibers of the paper.
### Laser Printers
Laser printers use the xerographic process to make high-quality images on paper, employing a laser to produce an image on the photoconducting drum as shown in . In its most common application, the laser printer receives output from a computer, and it can achieve high-quality output because of the precision with which laser light can be controlled. Many laser printers do significant information processing, such as making sophisticated letters or fonts, and may contain a computer more powerful than the one giving them the raw data to be printed.
### Ink Jet Printers and Electrostatic Painting
The ink jet printer, commonly used to print computer-generated text and graphics, also employs electrostatics. A nozzle makes a fine spray of tiny ink droplets, which are then given an electrostatic charge. (See .)
Once charged, the droplets can be directed, using pairs of charged plates, with great precision to form letters and images on paper. Ink jet printers can produce color images by using a black jet and three other jets with primary colors, usually cyan, magenta, and yellow, much as a color television produces color. (This is more difficult with xerography, requiring multiple drums and toners.)
Electrostatic painting employs electrostatic charge to spray paint onto odd-shaped surfaces. Mutual repulsion of like charges causes the paint to fly away from its source. Surface tension forms drops, which are then attracted by unlike charges to the surface to be painted. Electrostatic painting can reach hard-to-get-at places, applying an even coat in a controlled manner. If the object is a conductor, the electric field is perpendicular to the surface, tending to bring the drops in perpendicularly. Corners and points on conductors will receive extra paint. Felt can similarly be applied.
### Smoke Precipitators and Electrostatic Air Cleaning
Another important application of electrostatics is found in air cleaners, both large and small. The electrostatic part of the process places excess (usually positive) charge on smoke, dust, pollen, and other particles in the air and then passes the air through an oppositely charged grid that attracts and retains the charged particles. (See .)
Large electrostatic precipitators are used industrially to remove over 99% of the particles from stack gas emissions associated with the burning of coal and oil. Home precipitators, often in conjunction with the home heating and air conditioning system, are very effective in removing polluting particles, irritants, and allergens.
### Integrated Concepts
The Integrated Concepts exercises for this module involve concepts such as electric charges, electric fields, and several other topics. Physics is most interesting when applied to general situations involving more than a narrow set of physical principles. The electric field exerts force on charges, for example, and hence the relevance of Dynamics: Force and Newton’s Laws of Motion. The following topics are involved in some or all of the problems labeled “Integrated Concepts”:
The following worked example illustrates how this strategy is applied to an Integrated Concept problem:
### Section Summary
1. Electrostatics is the study of electric fields in static equilibrium.
2. In addition to research using equipment such as a Van de Graaff generator, many practical applications of electrostatics exist, including photocopiers, laser printers, ink-jet printers and electrostatic air filters.
### Problems & Exercises
# Electric Potential and Electric Field
## Connection for AP® Courses
In Electric Charge and Electric Field, we just scratched the surface (or at least rubbed it) of electrical phenomena. Two of the most familiar aspects of electricity are its energy and voltage. We know, for example, that great amounts of electrical energy can be stored in batteries, are transmitted cross-country through power lines, and may jump from clouds to explode the sap of trees. In a similar manner, at molecular levels, ions cross cell membranes and transfer information. We also know about voltages associated with electricity. Batteries are typically a few volts, the outlets in your home produce 120 volts, and power lines can be as high as hundreds of thousands of volts. But energy and voltage are not the same thing. A motorcycle battery, for example, is small and would not be very successful in replacing the much larger battery in a car, yet each has the same voltage. In this chapter, we shall examine the relationship between voltage and electrical energy and begin to explore some of the many applications of electricity. We do so by introducing the concept of electric potential and describing the relationship between electric field and electric potential.
This chapter presents the concept of equipotential lines (lines of equal potential) as a way to visualize the electric field (Enduring Understanding 2.E, Essential Knowledge 2.E.2). An analogy between the isolines on topographic maps for gravitational field and equipotential lines for the electric field is used to develop a conceptual understanding of equipotential lines (Essential Knowledge 2.E.1). The relationship between the magnitude of an electric field, change in electric potential, and displacement is stated for a uniform field and extended for the more general case using the concept of the “average value” of the electric field (Essential Knowledge 2.E.3).
The concept that an electric field is caused by charged objects (Enduring Understanding 2.C) supports Big Idea 2, that fields exist in space and can be used to explain interactions. The relationship between the electric field, electric charge, and electric force (Essential Knowledge 2.C.1) is used to describe the behavior of charged particles. The uniformity of the electric field between two oppositely charged parallel plates with uniformly distributed electric charge (Essential Knowledge 2.C.5), as well as the properties of materials and their geometry, are used to develop understanding of the capacitance of a capacitor (Essential Knowledge 4.E.4).
This chapter also supports Big Idea 4, that interactions between systems result in changes in those systems. This idea is applied to electric properties of various systems of charged objects, demonstrating the effect of electric interactions on electric properties of systems (Enduring Understanding 4.E). This fact in turn supports Big Idea 5, that changes due to interactions are governed by conservation laws. In particular, the energy of a system is conserved (Enduring Understanding 5.B). Any system that has internal structure can have internal energy. For a system of charged objects, internal energy can change as a result of changes in the arrangement of charges and their geometric configuration as long as work is done on, or by, the system (Essential Knowledge 5.B.2). When objects within the system interact with conservative forces, such as electric forces, the internal energy is defined by the potential energy of that interaction (Essential Knowledge 5.B.3). In general, the internal energy of a system is the sum of the kinetic energies of all its objects and the potential energy of interaction between the objects within the system (Essential Knowledge 5.B.4).
The concepts in this chapter support:
Big Idea 1 Objects and systems have properties such as mass and charge. Systems may have internal structure.
Enduring Understanding 1.E Materials have many macroscopic properties that result from the arrangement and interactions of the atoms and molecules that make up the material.
Essential Knowledge 1.E.4 Matter has a property called electric permittivity.
Big Idea 2 Fields existing in space can be used to explain interactions.
Enduring Understanding 2.C An electric field is caused by an object with electric charge.
Essential Knowledge 2.C.1 The magnitude of the electric force F exerted on an object with electric charge q by an electric field E is F = qE. The direction of the force is determined by the direction of the field and the sign of the charge, with positively charged objects accelerating in the direction of the field and negatively charged objects accelerating in the direction opposite the field. This should include a vector field map for positive point charges, negative point charges, spherically symmetric charge distribution, and uniformly charged parallel plates.
Essential Knowledge 2.C.5 Between two oppositely charged parallel plates with uniformly distributed electric charge, at points far from the edges of the plates, the electric field is perpendicular to the plates and is constant in both magnitude and direction.
Enduring Understanding 2.E Physicists often construct a map of isolines connecting points of equal value for some quantity related to a field and use these maps to help visualize the field.
Essential Knowledge 2.E.1 Isolines on a topographic (elevation) map describe lines of approximately equal gravitational potential energy per unit mass (gravitational equipotential). As the distance between two different isolines decreases, the steepness of the surface increases. [Contour lines on topographic maps are useful teaching tools for introducing the concept of equipotential lines. Students are encouraged to use the analogy in their answers when explaining gravitational and electrical potential and potential differences.]
Essential Knowledge 2.E.2 Isolines in a region where an electric field exists represent lines of equal electric potential, referred to as equipotential lines.
Essential Knowledge 2.E.3 The average value of the electric field in a region equals the change in electric potential across that region divided by the change in position (displacement) in the relevant direction.
Big Idea 4 Interactions between systems can result in changes in those systems.
Enduring Understanding 4.E The electric and magnetic properties of a system can change in response to the presence of, or changes in, other objects or systems.
Essential Knowledge 4.E.4 The resistance of a resistor, and the capacitance of a capacitor, can be understood from the basic properties of electric fields and forces, as well as the properties of materials and their geometry.
Big Idea 5 Changes that occur as a result of interactions are constrained by conservation laws.
Enduring Understanding 5.B The energy of a system is conserved.
Essential Knowledge 5.B.2 A system with internal structure can have internal energy, and changes in a system’s internal structure can result in changes in internal energy. [Physics 1: includes mass–spring oscillators and simple pendulums. Physics 2: charged object in electric fields and examining changes in internal energy with changes in configuration.]
Essential Knowledge 5.B.3 A system with internal structure can have potential energy. Potential energy exists within a system if the objects within that system interact with conservative forces.
Essential Knowledge 5.B.4 The internal energy of a system includes the kinetic energy of the objects that make up the system and the potential energy of the configuration of the objects that make up the system.
# Electric Potential and Electric Field
## Electric Potential Energy: Potential Difference
### Learning Objectives
By the end of this section, you will be able to:
1. Define electric potential and electric potential energy.
2. Describe the relationship between potential difference and electrical potential energy.
3. Explain electron volt and its usage in submicroscopic process.
4. Determine electric potential energy given potential difference and amount of charge.
When a free positive charge is accelerated by an electric field, such as shown in , it is given kinetic energy. The process is analogous to an object being accelerated by a gravitational field. It is as if the charge is going down an electrical hill where its electric potential energy is converted to kinetic energy. Let us explore the work done on a charge by the electric field in this process, so that we may develop a definition of electric potential energy.
The electrostatic or Coulomb force is conservative, which means that the work done on is independent of the path taken. This is exactly analogous to the gravitational force in the absence of dissipative forces such as friction. When a force is conservative, it is possible to define a potential energy associated with the force, and it is usually easier to deal with the potential energy (because it depends only on position) than to calculate the work directly.
We use the letters PE to denote electric potential energy, which has units of joules (J). The change in potential energy, ΔPE, is crucial, since the work done by a conservative force is the negative of the change in potential energy; that is, W = −ΔPE. For example, work W done to accelerate a positive charge from rest is positive and results from a loss in PE, or a negative ΔPE. There must be a minus sign in front of ΔPE to make W positive. PE can be found at any point by taking one point as a reference and calculating the work needed to move a charge to the other point.
Gravitational potential energy and electric potential energy are quite analogous. Potential energy accounts for work done by a conservative force and gives added insight regarding energy and energy transformation without the necessity of dealing with the force directly. It is much more common, for example, to use the concept of voltage (related to electric potential energy) than to deal with the Coulomb force directly.
Calculating the work directly is generally difficult, since W = Fd cos θ and the direction and magnitude of F can be complex for multiple charges, for odd-shaped objects, and along arbitrary paths. But we do know that, since F = qE, the work, and hence ΔPE, is proportional to the test charge q. To have a physical quantity that is independent of test charge, we define electric potential V (or simply potential, since electric is understood) to be the potential energy per unit charge: V = PE/q.
Since PE is proportional to q, the dependence on q cancels. Thus V does not depend on q. The change in potential energy ΔPE is crucial, and so we are concerned with the difference in potential or potential difference ΔV between two points, where ΔV = V_B − V_A = ΔPE/q.
The potential difference between points A and B, V_B − V_A, is thus defined to be the change in potential energy of a charge q moved from A to B, divided by the charge. Units of potential difference are joules per coulomb, given the name volt (V) after Alessandro Volta: 1 V = 1 J/C.
The familiar term voltage is the common name for potential difference. Keep in mind that whenever a voltage is quoted, it is understood to be the potential difference between two points. For example, every battery has two terminals, and its voltage is the potential difference between them. More fundamentally, the point you choose to be zero volts is arbitrary. This is analogous to the fact that gravitational potential energy has an arbitrary zero, such as sea level or perhaps a lecture hall floor.
In summary, the relationship between potential difference (or voltage) and electrical potential energy is given by ΔV = ΔPE/q and ΔPE = qΔV.
Voltage is not the same as energy. Voltage is the energy per unit charge. Thus a motorcycle battery and a car battery can both have the same voltage (more precisely, the same potential difference between battery terminals), yet one stores much more energy than the other since ΔPE = qΔV. The car battery can move more charge than the motorcycle battery, although both are 12 V batteries.
Note that the energies calculated in the previous example are absolute values. The change in potential energy for the battery is negative, since it loses energy. These batteries, like many electrical systems, actually move negative charge—electrons in particular. The batteries repel electrons from their negative terminals (A) through whatever circuitry is involved and attract them to their positive terminals (B) as shown in . The change in potential is ΔV = +12 V and the charge q is negative, so that ΔPE = qΔV is negative, meaning the potential energy of the battery has decreased when q has moved from A to B.
### The Electron Volt
The energy per electron is very small in macroscopic situations like that in the previous example—a tiny fraction of a joule. But on a submicroscopic scale, such energy per particle (electron, proton, or ion) can be of great importance. For example, even a tiny fraction of a joule can be great enough for these particles to destroy organic molecules and harm living tissue. The particle may do its damage by direct collision, or it may create harmful x rays, which can also inflict damage. It is useful to have an energy unit related to submicroscopic effects. shows a situation related to the definition of such an energy unit. An electron is accelerated between two charged metal plates as it might be in an old-model television tube or oscilloscope. The electron is given kinetic energy that is later converted to another form—light in the television tube, for example. (Note that downhill for the electron is uphill for a positive charge.) Since energy is related to voltage by ΔPE = qΔV, we can think of the joule as a coulomb-volt.
On the submicroscopic scale, it is more convenient to define an energy unit called the electron volt (eV), which is the energy given to a fundamental charge accelerated through a potential difference of 1 V. In equation form, 1 eV = (1.60 × 10⁻¹⁹ C)(1 V) = 1.60 × 10⁻¹⁹ J.
An electron accelerated through a potential difference of 1 V is given an energy of 1 eV. It follows that an electron accelerated through 50 V is given 50 eV. A potential difference of 100,000 V (100 kV) will give an electron an energy of 100,000 eV (100 keV), and so on. Similarly, an ion with a double positive charge accelerated through 100 V will be given 200 eV of energy. These simple relationships between accelerating voltage and particle charges make the electron volt a simple and convenient energy unit in such circumstances.
The electron volt is commonly employed in submicroscopic processes—chemical valence energies and molecular and nuclear binding energies are among the quantities often expressed in electron volts. For example, about 5 eV of energy is required to break up certain organic molecules. If a proton is accelerated from rest through a potential difference of 30 kV, it is given an energy of 30 keV (30,000 eV) and it can break up as many as 6000 of these molecules (30,000 eV ÷ 5 eV per molecule = 6000 molecules). Nuclear decay energies are on the order of 1 MeV (1,000,000 eV) per event and can, thus, produce significant biological damage.
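As a quick check of these unit relationships, here is a minimal Python sketch (not part of the original text); the particle charges and accelerating voltages below are illustrative values only.

```python
# Energy gained by a charge accelerated through a potential difference V:
# PE lost = q * V (in joules); a charge expressed in units of e gives eV directly.
E_CHARGE = 1.60e-19  # magnitude of the fundamental charge, in coulombs

def energy_gained(charge_in_e, voltage_v):
    """Return (energy in joules, energy in eV) for a charge given in units of e."""
    joules = charge_in_e * E_CHARGE * voltage_v
    electron_volts = charge_in_e * voltage_v  # charge in e times volts gives eV
    return joules, electron_volts

# An electron through 50 V: 8.0e-18 J, i.e. 50 eV
print(energy_gained(1, 50))
# A doubly charged ion through 100 V: 3.2e-17 J, i.e. 200 eV
print(energy_gained(2, 100))
```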
### Conservation of Energy
The total energy of a system is conserved if there is no net addition (or subtraction) of work or heat transfer. For conservative forces, such as the electrostatic force, conservation of energy states that mechanical energy is a constant.
Mechanical energy is the sum of the kinetic energy and potential energy of a system; that is, KE + PE = constant. A loss of PE of a charged particle becomes an increase in its KE. Here PE is the electric potential energy. Conservation of energy is stated in equation form as
KE + PE = constant
or
KE_i + PE_i = KE_f + PE_f,
where i and f stand for initial and final conditions. As we have found many times before, considering energy can give us insights and facilitate problem solving.
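To see how the conservation statement KE_i + PE_i = KE_f + PE_f is used in practice, the short sketch below solves qV = (1/2)mv² for the final speed of a charge released from rest; the electron constants are standard values, and the 100 V potential difference is simply an example.

```python
import math

E_CHARGE = 1.60e-19       # C
ELECTRON_MASS = 9.11e-31  # kg

def final_speed(charge, mass, voltage):
    """Speed gained by a particle starting from rest after falling through `voltage`.
    Energy conservation: (1/2) m v^2 = q V."""
    return math.sqrt(2 * charge * voltage / mass)

# Electron accelerated from rest through 100 V: roughly 5.9e6 m/s
print(final_speed(E_CHARGE, ELECTRON_MASS, 100.0))
```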
### Test Prep for AP Courses
### Section Summary
1. Electric potential is potential energy per unit charge.
2. The potential difference between points A and B, ΔV = V_B − V_A, defined to be the change in potential energy of a charge q moved from A to B, is equal to the change in potential energy divided by the charge: ΔV = ΔPE/q. Potential difference is commonly called voltage, represented by the symbol ΔV.
3. An electron volt is the energy given to a fundamental charge accelerated through a potential difference of 1 V. In equation form, 1 eV = (1.60 × 10⁻¹⁹ C)(1 V) = 1.60 × 10⁻¹⁹ J.
4. Mechanical energy is the sum of the kinetic energy and potential energy of a system, that is, KE + PE. This sum is a constant.
### Conceptual Questions
### Problems & Exercises
# Electric Potential and Electric Field
## Electric Potential in a Uniform Electric Field
### Learning Objectives
By the end of this section, you will be able to:
1. Describe the relationship between voltage and electric field.
2. Derive an expression for the electric potential and electric field.
3. Calculate electric field strength given distance and voltage.
In the previous section, we explored the relationship between voltage and energy. In this section, we will explore the relationship between voltage and electric field. For example, a uniform electric field is produced by placing a potential difference (or voltage) across two parallel metal plates, labeled A and B. (See .) Examining this will tell us what voltage is needed to produce a certain electric field strength; it will also reveal a more fundamental relationship between electric potential and electric field. From a physicist’s point of view, either V or E can be used to describe any charge distribution. V is most closely tied to energy, whereas E is most closely related to force. V is a scalar quantity and has no direction, while E is a vector quantity, having both magnitude and direction. (Note that the magnitude of the electric field strength, a scalar quantity, is represented by E below.) The relationship between V and E is revealed by calculating the work done by the force in moving a charge from point A to point B. But, as noted in Electric Potential Energy: Potential Difference, this is complex for arbitrary charge distributions, requiring calculus. We therefore look at a uniform electric field as an interesting special case.
The work done by the electric field in moving a positive charge q from A, the positive plate (higher potential), to B, the negative plate (lower potential), is W = −ΔPE = −qΔV.
The potential difference between points A and B is −ΔV = −(V_B − V_A) = V_A − V_B = V_AB.
Entering this into the expression for work yields W = qV_AB.
Work is W = Fd cos θ; here cos θ = 1, since the path is parallel to the field, and so W = Fd. Since F = qE, we see that W = qEd. Substituting this expression for work into the previous equation gives qEd = qV_AB.
The charge cancels, and so the voltage between points A and B is seen to be V_AB = Ed and E = V_AB/d (uniform E-field only),
where d is the distance from A to B, or the distance between the plates in . Note that the above equation implies the units for electric field are volts per meter. We already know the units for electric field are newtons per coulomb; thus the following relation among units is valid: 1 N/C = 1 V/m.
In more general situations, regardless of whether the electric field is uniform, it points in the direction of decreasing potential, because the force on a positive charge is in the direction of E and also in the direction of lower potential V. Furthermore, the magnitude of E equals the rate of decrease of V with distance. The faster V decreases over distance, the greater the electric field. In equation form, the general relationship between voltage and electric field is E = −ΔV/Δs,
where Δs is the distance over which the change in potential, ΔV, takes place. The minus sign tells us that E points in the direction of decreasing potential. The electric field is said to be the gradient (as in grade or slope) of the electric potential.
For continually changing potentials, ΔV and Δs become infinitesimals and differential calculus must be employed to determine the electric field.
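A minimal numeric check of E = V_AB/d for a uniform field; the plate voltage and separation below are arbitrary example values.

```python
def uniform_field_strength(voltage, separation):
    """Electric field magnitude between parallel plates: E = V / d, in V/m."""
    return voltage / separation

# 25 V across plates 1.0 mm apart gives 25,000 V/m (25 kV/m):
print(uniform_field_strength(25.0, 1.0e-3))
```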
### Test Prep for AP Courses
### Section Summary
1. The voltage between points A and B is V_AB = Ed and E = V_AB/d (uniform E-field only),
where d is the distance from A to B, or the distance between the plates.
2. In equation form, the general relationship between voltage and electric field is E = −ΔV/Δs,
where Δs is the distance over which the change in potential, ΔV, takes place. The minus sign tells us that E points in the direction of decreasing potential. The electric field is said to be the gradient (as in grade or slope) of the electric potential.
### Conceptual Questions
### Problems & Exercises
# Electric Potential and Electric Field
## Electrical Potential Due to a Point Charge
### Learning Objectives
By the end of this section, you will be able to:
1. Explain point charges and express the equation for electric potential of a point charge.
2. Distinguish between electric potential and electric field.
3. Determine the electric potential of a point charge given charge and distance.
Point charges, such as electrons, are among the fundamental building blocks of matter. Furthermore, spherical charge distributions (like on a metal sphere) create external electric fields exactly like a point charge. The electric potential due to a point charge is, thus, a case we need to consider. Using calculus to find the work needed to move a test charge q from a large distance away to a distance of r from a point charge Q, and noting the connection between work and potential (W = −qΔV), it can be shown that the electric potential V of a point charge is
V = kQ/r (point charge),
where k is a constant equal to 8.99 × 10⁹ N·m²/C².
The potential at infinity is chosen to be zero. Thus V for a point charge decreases with distance, whereas E for a point charge decreases with distance squared: E = F/q = kQ/r².
Recall that the electric potential V is a scalar and has no direction, whereas the electric field E is a vector. To find the voltage due to a combination of point charges, you add the individual voltages as numbers. To find the total electric field, you must add the individual fields as vectors, taking magnitude and direction into account. This is consistent with the fact that V is closely associated with energy, a scalar, whereas E is closely associated with force, a vector.
The voltages in both of these examples could be measured with a meter that compares the measured potential with ground potential. Ground potential is often taken to be zero (instead of taking the potential at infinity to be zero). It is the potential difference between two points that is of importance, and very often there is a tacit assumption that some reference point, such as Earth or a very distant point, is at zero potential. As noted in Electric Potential Energy: Potential Difference, this is analogous to taking sea level as h = 0 when considering gravitational potential energy, PE_g = mgh.
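Because potentials add as plain numbers, superposition is a one-line sum. The sketch below (with made-up charges and distances) evaluates V = kQ/r for a single charge and for a pair of charges.

```python
K = 8.99e9  # Coulomb constant, N*m^2/C^2

def potential_point_charge(q, r):
    """Electric potential V = kQ/r of a point charge, with V = 0 at infinity."""
    return K * q / r

def total_potential(charges_and_distances):
    """Potentials are scalars, so the total is a simple sum of kQ/r terms."""
    return sum(potential_point_charge(q, r) for q, r in charges_and_distances)

# A 3.0 nC charge seen from 5.0 cm away (about 539 V),
# then combined with a -1.0 nC charge 10 cm away (about 449 V total):
print(potential_point_charge(3.0e-9, 0.05))
print(total_potential([(3.0e-9, 0.05), (-1.0e-9, 0.10)]))
```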
### Section Summary
1. Electric potential of a point charge is V = kQ/r.
2. Electric potential is a scalar, and electric field is a vector. Addition of voltages as numbers gives the voltage due to a combination of point charges, whereas addition of individual fields as vectors gives the total electric field.
### Conceptual Questions
### Problems & Exercises
# Electric Potential and Electric Field
## Equipotential Lines
### Learning Objectives
By the end of this section, you will be able to:
1. Explain equipotential lines and equipotential surfaces.
2. Describe the action of grounding an electrical appliance.
3. Compare electric field and equipotential lines.
We can represent electric potentials (voltages) pictorially, just as we drew pictures to illustrate electric fields. Of course, the two are related. Consider , which shows an isolated positive point charge and its electric field lines. Electric field lines radiate out from a positive charge and terminate on negative charges. While we use blue arrows to represent the magnitude and direction of the electric field, we use green lines to represent places where the electric potential is constant. These are called equipotential lines in two dimensions, or equipotential surfaces in three dimensions. The term equipotential is also used as a noun, referring to an equipotential line or surface. The potential for a point charge is the same anywhere on an imaginary sphere of radius r surrounding the charge. This is true since the potential for a point charge is given by V = kQ/r and, thus, has the same value at any point that is a given distance r from the charge. An equipotential sphere is a circle in the two-dimensional view of . Since the electric field lines point radially away from the charge, they are perpendicular to the equipotential lines.
It is important to note that equipotential lines are always perpendicular to electric field lines. No work is required to move a charge along an equipotential, since ΔV = 0. Thus the work is
W = −ΔPE = −qΔV = 0.
Work is zero if force is perpendicular to motion. Force is in the same direction as E, so that motion along an equipotential must be perpendicular to E. More precisely, work is related to the electric field by
W = Fd cos θ = qEd cos θ = 0.
Note that in the above equation, E and F symbolize the magnitudes of the electric field strength and force, respectively. Neither q nor E nor d is zero, and so cos θ must be 0, meaning θ must be 90°. In other words, motion along an equipotential is perpendicular to E.
One of the rules for static electric fields and conductors is that the electric field must be perpendicular to the surface of any conductor. This implies that a conductor is an equipotential surface in static situations. There can be no voltage difference across the surface of a conductor, or charges will flow. One of the uses of this fact is that a conductor can be fixed at zero volts by connecting it to the earth with a good conductor—a process called grounding. Grounding can be a useful safety tool. For example, grounding the metal case of an electrical appliance ensures that it is at zero volts relative to the earth.
Because a conductor is an equipotential, it can replace any equipotential surface. For example, in a charged spherical conductor can replace the point charge, and the electric field and potential surfaces outside of it will be unchanged, confirming the contention that a spherical charge distribution is equivalent to a point charge at its center.
shows the electric field and equipotential lines for two equal and opposite charges. Given the electric field lines, the equipotential lines can be drawn simply by making them perpendicular to the electric field lines. Conversely, given the equipotential lines, as in (a), the electric field lines can be drawn by making them perpendicular to the equipotentials, as in (b).
One of the most important cases is that of the familiar parallel conducting plates shown in . Between the plates, the equipotentials are evenly spaced and parallel. The same field could be maintained by placing conducting plates at the equipotential lines at the potentials shown.
An important application of electric fields and equipotential lines involves the heart. The heart relies on electrical signals to maintain its rhythm. The movement of electrical signals causes the chambers of the heart to contract and relax. When a person has a heart attack, the movement of these electrical signals may be disturbed. An artificial pacemaker and a defibrillator can be used to initiate the rhythm of electrical signals. The equipotential lines around the heart, the thoracic region, and the axis of the heart are useful ways of monitoring the structure and functions of the heart. An electrocardiogram (ECG) measures the small electric signals being generated during the activity of the heart. More about the relationship between electric fields and the heart is discussed in Energy Stored in Capacitors.
### Test Prep for AP Courses
### Section Summary
1. An equipotential line is a line along which the electric potential is constant.
2. An equipotential surface is a three-dimensional version of equipotential lines.
3. Equipotential lines are always perpendicular to electric field lines.
4. The process by which a conductor can be fixed at zero volts by connecting it to the earth with a good conductor is called grounding.
### Conceptual Questions
### Problems & Exercises
# Electric Potential and Electric Field
## Capacitors and Dielectrics
### Learning Objectives
By the end of this section, you will be able to:
1. Describe the action of a capacitor and define capacitance.
2. Explain parallel plate capacitors and their capacitances.
3. Discuss the process of increasing the capacitance of a dielectric.
4. Determine capacitance given charge and voltage.
A capacitor is a device used to store electric charge. Capacitors have applications ranging from filtering static out of radio reception to energy storage in heart defibrillators. Typically, commercial capacitors have two conducting parts close to one another, but not touching, such as those in . (Most of the time an insulator is used between the two plates to provide separation—see the discussion on dielectrics below.) When battery terminals are connected to an initially uncharged capacitor, equal amounts of positive and negative charge, +Q and −Q, are separated into its two plates. The capacitor remains neutral overall, but we refer to it as storing a charge Q in this circumstance.
The amount of charge a capacitor can store depends on two major factors—the voltage applied and the capacitor’s physical characteristics, such as its size.
A system composed of two identical, parallel conducting plates separated by a distance, as in , is called a parallel plate capacitor. It is easy to see the relationship between the voltage and the stored charge for a parallel plate capacitor, as shown in . Each electric field line starts on an individual positive charge and ends on a negative one, so that there will be more field lines if there is more charge. (Drawing a single field line per charge is a convenience, only. We can draw many field lines for each charge, but the total number is proportional to the number of charges.) The electric field strength is, thus, directly proportional to Q.
The field is proportional to the charge: E ∝ Q, where the symbol ∝ means “proportional to.” From the discussion in Electric Potential in a Uniform Electric Field, we know that the voltage across parallel plates is V = Ed. Thus, V ∝ E.
It follows, then, that V ∝ Q, and conversely, Q ∝ V.
This is true in general: The greater the voltage applied to any capacitor, the greater the charge stored in it.
Different capacitors will store different amounts of charge for the same applied voltage, depending on their physical characteristics. We define their capacitance C to be such that the charge Q stored in a capacitor is proportional to C. The charge stored in a capacitor is given by
Q = CV.
This equation expresses the two major factors affecting the amount of charge stored. Those factors are the physical characteristics of the capacitor, C, and the voltage, V. Rearranging the equation, we see that capacitance C is the amount of charge stored per volt, or
C = Q/V.
The unit of capacitance is the farad (F), named for Michael Faraday (1791–1867), an English scientist who contributed to the fields of electromagnetism and electrochemistry. Since capacitance is charge per unit voltage, we see that a farad is a coulomb per volt, or 1 F = 1 C/1 V.
A 1-farad capacitor would be able to store 1 coulomb (a very large amount of charge) with the application of only 1 volt. One farad is, thus, a very large capacitance. Typical capacitors range from fractions of a picofarad (1 pF = 10⁻¹² F) to millifarads (1 mF = 10⁻³ F).
shows some common capacitors. Capacitors are primarily made of ceramic, glass, or plastic, depending upon purpose and size. Insulating materials, called dielectrics, are commonly used in their construction, as discussed below.
### Parallel Plate Capacitor
The parallel plate capacitor shown in has two identical conducting plates, each having a surface area , separated by a distance (with no material between the plates). When a voltage is applied to the capacitor, it stores a charge , as shown. We can see how its capacitance depends on and by considering the characteristics of the Coulomb force. We know that like charges repel, unlike charges attract, and the force between charges decreases with distance. So it seems quite reasonable that the bigger the plates are, the more charge they can store—because the charges can spread out more. Thus should be greater for larger . Similarly, the closer the plates are together, the greater the attraction of the opposite charges on them. So should be greater for smaller .
It can be shown that for a parallel plate capacitor there are only two factors (A and d) that affect its capacitance C. The capacitance of a parallel plate capacitor in equation form is given by
C = ε₀A/d (parallel plate capacitor).
A is the area of one plate in square meters, and d is the distance between the plates in meters. The constant ε₀ is the permittivity of free space; its numerical value in SI units is ε₀ = 8.85 × 10⁻¹² F/m.
The units of F/m are equivalent to C²/(N·m²). The small numerical value of ε₀ is related to the large size of the farad. A parallel plate capacitor must have a large area to have a capacitance approaching a farad. (Note that the above equation is valid when the parallel plates are separated by air or free space. When another material is placed between the plates, the equation is modified, as discussed below.)
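The dependence C = ε₀A/d is easy to check numerically. The sketch below (not from the original text) uses an assumed plate area of 1.00 m² and a 1.00 mm separation purely as example values.

```python
EPSILON_0 = 8.85e-12  # permittivity of free space, in F/m

def parallel_plate_capacitance(area, separation):
    """Capacitance of an air/vacuum-filled parallel plate capacitor: C = epsilon_0 * A / d."""
    return EPSILON_0 * area / separation

# 1.00 m^2 plates spaced 1.00 mm apart: about 8.85e-9 F (8.85 nF)
print(parallel_plate_capacitance(1.00, 1.00e-3))

# Plate area needed for a full farad at the same 1.00 mm spacing (~1.1e8 m^2),
# showing how large 1 F really is:
print(1.0 * 1.00e-3 / EPSILON_0)
```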
Another interesting biological example dealing with electric potential is found in the cell’s plasma membrane. The membrane sets a cell off from its surroundings and also allows ions to selectively pass in and out of the cell. There is a potential difference across the membrane of about −70 mV. This is due to the mainly negatively charged ions in the cell and the predominance of positively charged sodium (Na⁺) ions outside. Things change when a nerve cell is stimulated. Na⁺ ions are allowed to pass through the membrane into the cell, producing a positive membrane potential—the nerve signal. The cell membrane is about 7 to 10 nm thick. An approximate value of the electric field across it is given by E = V/d = (−70 × 10⁻³ V)/(8 × 10⁻⁹ m) ≈ −9 × 10⁶ V/m.
This electric field is enough to cause a breakdown in air.
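Treating the membrane like a pair of closely spaced plates, the field estimate above can be reproduced with the one-line calculation below; the 8 nm thickness is an assumed value within the 7–10 nm range quoted.

```python
# Approximate field across a cell membrane, treated as a parallel-plate geometry.
membrane_voltage = -70e-3   # V (about -70 mV resting potential)
membrane_thickness = 8e-9   # m (roughly 7-10 nm; 8 nm used here)

field = membrane_voltage / membrane_thickness
print(field)  # about -9e6 V/m, well above the ~3e6 V/m breakdown field of air
```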
### Dielectric
The previous example highlights the difficulty of storing a large amount of charge in capacitors. If d is made smaller to produce a larger capacitance, then the maximum voltage must be reduced proportionally to avoid breakdown (since E = V/d). An important solution to this difficulty is to put an insulating material, called a dielectric, between the plates of a capacitor and allow d to be as small as possible. Not only does the smaller d make the capacitance greater, but many insulators can withstand greater electric fields than air before breaking down.
There is another benefit to using a dielectric in a capacitor. Depending on the material used, the capacitance is greater than that given by the equation C = ε₀A/d by a factor κ, called the dielectric constant. A parallel plate capacitor with a dielectric between its plates has a capacitance given by C = κε₀A/d (parallel plate capacitor with dielectric).
Values of the dielectric constant κ for various materials are given in . Note that κ for vacuum is exactly 1, and so the above equation is valid in that case, too. If a dielectric is used, perhaps by placing Teflon between the plates of the capacitor in , then the capacitance is greater by the factor κ, which for Teflon is 2.1.
Note also that the dielectric constant for air is very close to 1, so that air-filled capacitors act much like those with vacuum between their plates except that the air can become conductive if the electric field strength becomes too great. (Recall that E = V/d for a parallel plate capacitor.) Also shown in are maximum electric field strengths in V/m, called dielectric strengths, for several materials. These are the fields above which the material begins to break down and conduct. The dielectric strength imposes a limit on the voltage that can be applied for a given plate separation. For instance, in , the separation is 1.00 mm, and so the voltage limit for air is V = E·d = (3 × 10⁶ V/m)(1.00 × 10⁻³ m) = 3000 V.
However, the limit for a 1.00 mm separation filled with Teflon is 60,000 V, since the dielectric strength of Teflon is 60 × 10⁶ V/m. So the same capacitor filled with Teflon has a greater capacitance and can be subjected to a much greater voltage. Using the capacitance we calculated in the above example for the air-filled parallel plate capacitor, we find that the Teflon-filled capacitor can store a maximum charge 42 times that of the same air-filled capacitor.
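The comparison above can be reproduced with a short calculation. The sketch below assumes a 1.00 m² plate area and 1.00 mm separation (illustrative values, since the worked example referred to in the text is not reproduced here) and uses dielectric strengths of roughly 3 MV/m for air and 60 MV/m for Teflon.

```python
EPSILON_0 = 8.85e-12  # F/m

def max_charge(area, separation, kappa, dielectric_strength):
    """Largest charge a parallel-plate capacitor can hold before breakdown:
    Q_max = C * V_max, with C = kappa*eps0*A/d and V_max = (dielectric strength)*d."""
    capacitance = kappa * EPSILON_0 * area / separation
    v_max = dielectric_strength * separation
    return capacitance * v_max

# Assumed geometry: 1.00 m^2 plates, 1.00 mm apart.
q_air = max_charge(1.00, 1.00e-3, 1.00, 3.0e6)   # air: kappa ~ 1, ~3 MV/m
q_teflon = max_charge(1.00, 1.00e-3, 2.1, 60e6)  # Teflon: kappa = 2.1, 60 MV/m
print(q_air, q_teflon, q_teflon / q_air)         # the ratio works out to 42
```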
Microscopically, how does a dielectric increase capacitance? Polarization of the insulator is responsible. The more easily it is polarized, the greater its dielectric constant . Water, for example, is a polar molecule because one end of the molecule has a slight positive charge and the other end has a slight negative charge. The polarity of water causes it to have a relatively large dielectric constant of 80. The effect of polarization can be best explained in terms of the characteristics of the Coulomb force. shows the separation of charge schematically in the molecules of a dielectric material placed between the charged plates of a capacitor. The Coulomb force between the closest ends of the molecules and the charge on the plates is attractive and very strong, since they are very close together. This attracts more charge onto the plates than if the space were empty and the opposite charges were a distance away.
Another way to understand how a dielectric increases capacitance is to consider its effect on the electric field inside the capacitor. (b) shows the electric field lines with a dielectric in place. Since the field lines end on charges in the dielectric, there are fewer of them going from one side of the capacitor to the other. So the electric field strength is less than if there were a vacuum between the plates, even though the same charge is on the plates. The voltage between the plates is V = Ed, so it too is reduced by the dielectric. Thus there is a smaller voltage V for the same charge Q; since C = Q/V, the capacitance C is greater.
The dielectric constant κ is generally defined to be κ = E₀/E, or the ratio of the electric field in a vacuum to that in the dielectric material, and is intimately related to the polarizability of the material.
We will find in Atomic Physics that the orbits of electrons are more properly viewed as electron clouds with the density of the cloud related to the probability of finding an electron in that location (as opposed to the definite locations and paths of planets in their orbits around the Sun). This cloud is shifted by the Coulomb force so that the atom on average has a separation of charge. Although the atom remains neutral, it can now be the source of a Coulomb force, since a charge brought near the atom will be closer to one type of charge than the other.
Some molecules, such as those of water, have an inherent separation of charge and are thus called polar molecules. illustrates the separation of charge in a water molecule, which has two hydrogen atoms and one oxygen atom . The water molecule is not symmetric—the hydrogen atoms are repelled to one side, giving the molecule a boomerang shape. The electrons in a water molecule are more concentrated around the more highly charged oxygen nucleus than around the hydrogen nuclei. This makes the oxygen end of the molecule slightly negative and leaves the hydrogen ends slightly positive. The inherent separation of charge in polar molecules makes it easier to align them with external fields and charges. Polar molecules therefore exhibit greater polarization effects and have greater dielectric constants. Those who study chemistry will find that the polar nature of water has many effects. For example, water molecules gather ions much more effectively because they have an electric field and a separation of charge to attract charges of both signs. Also, as brought out in the previous chapter, polar water provides a shield or screening of the electric fields in the highly charged molecules of interest in biological systems.
### Test Prep for AP Courses
### Section Summary
1. A capacitor is a device used to store charge.
2. The amount of charge a capacitor can store depends on two major factors—the voltage applied and the capacitor’s physical characteristics, such as its size.
3. The capacitance C is the amount of charge stored per volt, or C = Q/V.
4. The capacitance of a parallel plate capacitor is C = ε₀A/d, when the plates are separated by air or free space; ε₀ is called the permittivity of free space.
5. A parallel plate capacitor with a dielectric between its plates has a capacitance given by C = κε₀A/d, where κ is the dielectric constant of the material.
6. The maximum electric field strength above which an insulating material begins to break down and conduct is called dielectric strength.
### Conceptual Questions
### Problems & Exercises
# Electric Potential and Electric Field
## Capacitors in Series and Parallel
### Learning Objectives
By the end of this section, you will be able to:
1. Derive expressions for total capacitance in series and in parallel.
2. Identify series and parallel parts in the combination of connection of capacitors.
3. Calculate the effective capacitance in series and parallel given individual capacitances.
Several capacitors may be connected together in a variety of applications. Multiple connections of capacitors act like a single equivalent capacitor. The total capacitance of this equivalent single capacitor depends both on the individual capacitors and how they are connected. There are two simple and common types of connections, called series and parallel, for which we can easily calculate the total capacitance. Certain more complicated connections can also be related to combinations of series and parallel.
### Capacitance in Series
(a) shows a series connection of three capacitors with a voltage applied. As for any capacitor, the capacitance of the combination is related to charge and voltage by C = Q/V.
Note in that opposite charges of magnitude flow to either side of the originally uncharged combination of capacitors when the voltage is applied. Conservation of charge requires that equal-magnitude charges be created on the plates of the individual capacitors, since charge is only being separated in these originally neutral devices. The end result is that the combination resembles a single capacitor with an effective plate separation greater than that of the individual capacitors alone. (See (b).) Larger plate separation means smaller capacitance. It is a general feature of series connections of capacitors that the total capacitance is less than any of the individual capacitances.
We can find an expression for the total capacitance by considering the voltage across the individual capacitors shown in . Solving C = Q/V for V gives V = Q/C. The voltages across the individual capacitors are thus V₁ = Q/C₁, V₂ = Q/C₂, and V₃ = Q/C₃. The total voltage is the sum of the individual voltages:
V = V₁ + V₂ + V₃.
Now, calling the total capacitance C_S for series capacitance, consider that
V = Q/C_S = V₁ + V₂ + V₃.
Entering the expressions for V₁, V₂, and V₃, we get
Q/C_S = Q/C₁ + Q/C₂ + Q/C₃.
Canceling the Qs, we obtain the equation for the total capacitance in series to be
1/C_S = 1/C₁ + 1/C₂ + 1/C₃ + ...,
where “...” indicates that the expression is valid for any number of capacitors connected in series. An expression of this form always results in a total capacitance C_S that is less than any of the individual capacitances C₁, C₂, ..., as the next example illustrates.
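A small sketch of the series formula; the three capacitances are example values only, chosen to show that the result is smaller than any individual capacitance.

```python
def series_capacitance(*capacitances):
    """Total series capacitance from 1/C_S = 1/C_1 + 1/C_2 + ...; returns C_S."""
    return 1.0 / sum(1.0 / c for c in capacitances)

# Three example capacitors (values in farads): ~0.755e-6 F, less than any one of them
print(series_capacitance(1.000e-6, 5.000e-6, 8.000e-6))
```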
### Capacitors in Parallel
(a) shows a parallel connection of three capacitors with a voltage applied. Here the total capacitance is easier to find than in the series case. To find the equivalent total capacitance C_p, we first note that the voltage across each capacitor is V, the same as that of the source, since they are connected directly to it through a conductor. (Conductors are equipotentials, and so the voltage across the capacitors is the same as that across the voltage source.) Thus the capacitors have the same charges on them as they would have if connected individually to the voltage source. The total charge Q is the sum of the individual charges:
Q = Q₁ + Q₂ + Q₃.
Using the relationship Q = CV, we see that the total charge is Q = C_pV, and the individual charges are Q₁ = C₁V, Q₂ = C₂V, and Q₃ = C₃V. Entering these into the previous equation gives
C_pV = C₁V + C₂V + C₃V.
Canceling V from the equation, we obtain the equation for the total capacitance in parallel C_p:
C_p = C₁ + C₂ + C₃ + ....
Total capacitance in parallel is simply the sum of the individual capacitances. (Again the “...” indicates the expression is valid for any number of capacitors connected in parallel.) So, for example, if the capacitors in the example above were connected in parallel, their capacitance would be
The equivalent capacitor for a parallel connection has an effectively larger plate area and, thus, a larger capacitance, as illustrated in (b).
More complicated connections of capacitors can sometimes be combinations of series and parallel. (See .) To find the total capacitance of such combinations, we identify series and parallel parts, compute their capacitances, and then find the total.
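For combined series-parallel networks, the same two formulas can be applied piecewise, reducing the network one group at a time. The sketch below uses assumed values and an assumed arrangement (C₁ and C₂ in series, with that pair in parallel with C₃) purely for illustration.

```python
def series_capacitance(*cs):
    """1/C_S = 1/C_1 + 1/C_2 + ..."""
    return 1.0 / sum(1.0 / c for c in cs)

def parallel_capacitance(*cs):
    """C_p = C_1 + C_2 + ..."""
    return sum(cs)

# Assumed network: C1 and C2 in series, and that pair in parallel with C3.
C1, C2, C3 = 1.0e-6, 5.0e-6, 8.0e-6  # farads (example values)
total = parallel_capacitance(series_capacitance(C1, C2), C3)
print(total)  # about 8.83e-6 F
```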
### Section Summary
1. Total capacitance in series: 1/C_S = 1/C₁ + 1/C₂ + 1/C₃ + ...
2. Total capacitance in parallel: C_p = C₁ + C₂ + C₃ + ...
3. If a circuit contains a combination of capacitors in series and parallel, identify series and parallel parts, compute their capacitances, and then find the total.
### Conceptual Questions
### Problems & Exercises
# Electric Potential and Electric Field
## Energy Stored in Capacitors
### Learning Objectives
By the end of this section, you will be able to:
1. List some uses of capacitors.
2. Express in equation form the energy stored in a capacitor.
3. Explain the function of a defibrillator.
Most of us have seen dramatizations in which medical personnel use a defibrillator to pass an electric current through a patient’s heart to get it to beat normally. (Review .) Often realistic in detail, the person applying the shock directs another person to “make it 400 joules this time.” The energy delivered by the defibrillator is stored in a capacitor and can be adjusted to fit the situation. SI units of joules are often employed. Less dramatic is the use of capacitors in microelectronics, such as certain handheld calculators, to supply energy when batteries are charged. (See .) Capacitors are also used to supply energy for flash lamps on cameras.
Energy stored in a capacitor is electrical potential energy, and it is thus related to the charge Q and voltage V on the capacitor. We must be careful when applying the equation for electrical potential energy ΔPE = qΔV to a capacitor. Remember that ΔPE is the potential energy of a charge q going through a voltage ΔV. But the capacitor starts with zero voltage and gradually comes up to its full voltage as it is charged. The first charge placed on a capacitor experiences a change in voltage ΔV = 0, since the capacitor has zero voltage when uncharged. The final charge placed on a capacitor experiences ΔV = V, since the capacitor now has its full voltage V on it. The average voltage on the capacitor during the charging process is V/2, and so the average voltage experienced by the full charge q is V/2. Thus the energy stored in a capacitor, E_cap, is
E_cap = QV/2,
where Q is the charge on a capacitor with a voltage V applied. (Note that the energy is not QV, but QV/2.) Charge and voltage are related to the capacitance C of a capacitor by Q = CV, and so the expression for E_cap can be algebraically manipulated into three equivalent expressions:
E_cap = QV/2 = CV²/2 = Q²/2C,
where Q is the charge and V the voltage on a capacitor C. The energy is in joules for a charge in coulombs, voltage in volts, and capacitance in farads.
In a defibrillator, the delivery of a large charge in a short burst to a set of paddles across a person’s chest can be a lifesaver. The person’s heart attack might have arisen from the onset of fast, irregular beating of the heart—cardiac or ventricular fibrillation. The application of a large shock of electrical energy can terminate the arrhythmia and allow the body’s pacemaker to resume normal patterns. Today it is common for ambulances to carry a defibrillator, which also uses an electrocardiogram to analyze the patient’s heartbeat pattern. Automated external defibrillators (AED) are found in many public places (). These are designed to be used by lay persons. The device automatically diagnoses the patient’s heart condition and then applies the shock with appropriate energy and waveform. CPR is recommended in many cases before use of an AED.
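Any of the three equivalent energy expressions can be evaluated directly. The sketch below picks whichever pair of quantities is supplied; the 8.0 µF and 10 kV defibrillator-style values are illustrative, chosen only because they give the 400 J mentioned above.

```python
def energy_stored(capacitance=None, voltage=None, charge=None):
    """Energy stored in a capacitor from any two of C, V, Q:
    E_cap = Q*V/2 = C*V**2/2 = Q**2/(2*C)."""
    if capacitance is not None and voltage is not None:
        return 0.5 * capacitance * voltage ** 2
    if charge is not None and voltage is not None:
        return 0.5 * charge * voltage
    if charge is not None and capacitance is not None:
        return charge ** 2 / (2.0 * capacitance)
    raise ValueError("need two of capacitance, voltage, charge")

# An illustrative defibrillator capacitor: 8.0 uF charged to 10 kV stores 400 J.
print(energy_stored(capacitance=8.0e-6, voltage=1.0e4))
```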
### Test Prep for AP Courses
### Section Summary
1. Capacitors are used in a variety of devices, including defibrillators, microelectronics such as calculators, and flash lamps, to supply energy.
2. The energy stored in a capacitor can be expressed in three ways:
E_cap = QV/2 = CV²/2 = Q²/2C,
where Q is the charge, V is the voltage, and C is the capacitance of the capacitor. The energy is in joules when the charge is in coulombs, voltage is in volts, and capacitance is in farads.
### Conceptual Questions
### Problems & Exercises
# Electric Current, Resistance, and Ohm's Law
## Connection for AP® Courses
In our daily lives, we see and experience many examples of electricity which involve electric current, the movement of charge. These include the flicker of numbers on a handheld calculator, nerve impulses carrying signals of vision to the brain, an ultrasound device sending a signal to a computer screen, the brain sending a message for a baby to twitch its toes, an electric train pulling its load over a mountain pass, and a hydroelectric plant sending energy to metropolitan and rural users.
Humankind has indeed harnessed electricity, the basis of technology, to improve the quality of life. While the previous two chapters concentrated on static electricity and the fundamental force underlying its behavior, the next few chapters will be devoted to electric and magnetic phenomena involving electric current. In addition to exploring applications of electricity, we shall gain new insights into its nature – in particular, the fact that all magnetism results from electric current.
This chapter supports learning objectives covered under Big Ideas 1, 4, and 5 of the AP Physics Curriculum Framework. Electric charge is a property of a system (Big Idea 1) that affects its interaction with other charged systems (Enduring Understanding 1.B), whereas electric current is fundamentally the movement of charge through a conductor and is based on the fact that electric charge is conserved within a system (Essential Knowledge 1.B.1). The conservation of charge also leads to the concept of an electric circuit as a closed loop of electrical current. In addition, this chapter discusses examples showing that the current in a circuit is resisted by the elements of the circuit and the strength of the resistance depends on the material of the elements. The macroscopic properties of materials, including resistivity, depend on their molecular and atomic structure (Enduring Understanding 1.E). In addition, resistivity depends on the temperature of the material (Essential Knowledge 1.E.2).
The chapter also describes how the interaction of systems of objects can result in changes in those systems (Big Idea 4). For example, electric properties of a system of charged objects can change in response to the presence of, or changes in, other charged objects or systems (Enduring Understanding 4.E). A simple circuit with a resistor and an energy source is an example of such a system. The current through the resistor in the circuit is equal to the difference of potentials across the resistor divided by its resistance (Essential Knowledge 4.E.4).
The unifying theme of the physics curriculum is that any changes in the systems due to interactions are governed by laws of conservation (Big Idea 5). This chapter applies the idea of energy conservation (Enduring Understanding 5.B) to electric circuits and connects concepts of electric energy and electric power as rates of energy use (Essential Knowledge 5.B.5). While the laws of conservation of energy in electric circuits are fully described by Kirchhoff's rules, which are introduced in the next chapter (Essential Knowledge 5.B.9), the specific definition of power (based on Essential Knowledge 5.B.9) is that it is the rate at which energy is transferred from a resistor as the product of the electric potential difference across the resistor and the current through the resistor.
Big Idea 1 Objects and systems have properties such as mass and charge. Systems may have internal structure.
Enduring Understanding 1.B Electric charge is a property of an object or system that affects its interactions with other objects or systems containing charge.
Essential Knowledge 1.B.1 Electric charge is conserved. The net charge of a system is equal to the sum of the charges of all the objects in the system.
Enduring Understanding 1.E Materials have many macroscopic properties that result from the arrangement and interactions of the atoms and molecules that make up the material.
Essential Knowledge 1.E.2 Matter has a property called resistivity.
Big Idea 4 Interactions between systems can result in changes in those systems.
Enduring Understanding 4.E The electric and magnetic properties of a system can change in response to the presence of, or changes in, other objects or systems.
Essential Knowledge 4.E.4 The resistance of a resistor, and the capacitance of a capacitor, can be understood from the basic properties of electric fields and forces, as well as the properties of materials and their geometry.
Big Idea 5: Changes that occur as a result of interactions are constrained by conservation laws.
Enduring Understanding 5.B The energy of a system is conserved.
Essential Knowledge 5.B.5 Energy can be transferred by an external force exerted on an object or system that moves the object or system through a distance; this energy transfer is called work. Energy transfer in mechanical or electrical systems may occur at different rates. Power is defined as the rate of energy transfer into, out of, or within a system. [A piston filled with gas getting compressed or expanded is treated in Physics 2 as a part of thermodynamics.]
Essential Knowledge 5.B.9 Kirchhoff's loop rule describes conservation of energy in electrical circuits. [The application of Kirchhoff's laws to circuits is introduced in Physics 1 and further developed in Physics 2 in the context of more complex circuits, including those with capacitors.]
# Electric Current, Resistance, and Ohm's Law
## Current
### Learning Objectives
By the end of this section, you will be able to:
1. Define electric current, ampere, and drift velocity
2. Describe the direction of charge flow in conventional current.
3. Use drift velocity to calculate current and vice versa.
### Electric Current
Electric current is defined to be the rate at which charge flows. A large current, such as that used to start a truck engine, moves a large amount of charge in a small time, whereas a small current, such as that used to operate a hand-held calculator, moves a small amount of charge over a long period of time. In equation form, electric current $I$ is defined to be

$$I = \frac{\Delta Q}{\Delta t},$$

where $\Delta Q$ is the amount of charge passing through a given area in time $\Delta t$. (As in previous chapters, initial time is often taken to be zero, in which case $\Delta t = t$.) (See .) The SI unit for current is the ampere (A), named for the French physicist André-Marie Ampère (1775–1836). Since $I = \Delta Q / \Delta t$, we see that an ampere is one coulomb per second:

$$1 \ \text{A} = 1 \ \text{C/s}.$$
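A minimal numerical sketch of this definition in Python follows; the charge and time values are illustrative assumptions, not values from the text.

```python
# Current as the rate of charge flow: I = delta_Q / delta_t (1 A = 1 C/s)
delta_Q = 720.0   # charge passing a cross-section, in coulombs (illustrative)
delta_t = 60.0    # elapsed time, in seconds (illustrative)

I = delta_Q / delta_t
print(f"I = {I:.1f} A")   # -> I = 12.0 A

# The same relation rearranged gives the charge moved by a known current:
Q_moved = 0.300 * 5.0     # a 0.300-A flashlight-scale current for 5 s -> 1.5 C
```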
Not only are fuses and circuit breakers rated in amperes (or amps), so are many electrical appliances.
shows a simple circuit and the standard schematic representation of a battery, conducting path, and load (a resistor). Schematics are very useful in visualizing the main features of a circuit. A single schematic can represent a wide variety of situations. The schematic in (b), for example, can represent anything from a truck battery connected to a headlight lighting the street in front of the truck to a small battery connected to a penlight lighting a keyhole in a door. Such schematics are useful because the analysis is the same for a wide variety of situations. We need to understand a few schematics to apply the concepts and analysis to many more situations.
Note that the direction of current flow in is from positive to negative. The direction of conventional current is the direction that positive charge would flow. Depending on the situation, positive charges, negative charges, or both may move. In metal wires, for example, current is carried by electrons—that is, negative charges move. In ionic solutions, such as salt water, both positive and negative charges move. This is also true in nerve cells. A Van de Graaff generator used for nuclear research can produce a current of pure positive charges, such as protons. illustrates the movement of charged particles that compose a current. The fact that conventional current is taken to be in the direction that positive charge would flow can be traced back to American politician and scientist Benjamin Franklin in the 1700s. He named the type of charge associated with electrons negative, long before they were known to carry current in so many situations. Franklin, in fact, was totally unaware of the small-scale structure of electricity.
It is important to realize that there is an electric field in conductors responsible for producing the current, as illustrated in . Unlike static electricity, where a conductor in equilibrium cannot have an electric field in it, conductors carrying a current have an electric field and are not in static equilibrium. An electric field is needed to supply energy to move the charges.
### Drift Velocity
Electrical signals are known to move very rapidly. Telephone conversations carried by currents in wires cover large distances without noticeable delays. Lights come on as soon as a switch is flicked. Most electrical signals carried by currents travel at speeds on the order of $10^{8} \ \text{m/s}$, a significant fraction of the speed of light. Interestingly, the individual charges that make up the current move much more slowly on average, typically drifting at speeds on the order of $10^{-4} \ \text{m/s}$. How do we reconcile these two speeds, and what does this difference tell us about standard conductors?
The high speed of electrical signals results from the fact that the force between charges acts rapidly at a distance. Thus, when a free charge is forced into a wire, as in , the incoming charge pushes other charges ahead of it, which in turn push on charges farther down the line. The density of charge in a system cannot easily be increased, and so the signal is passed on rapidly. The resulting electrical shock wave moves through the system at nearly the speed of light. To be precise, this rapidly moving signal or shock wave is a rapidly propagating change in electric field.
Good conductors have large numbers of free charges in them. In metals, the free charges are free electrons. shows how free electrons move through an ordinary conductor. The distance that an individual electron can move between collisions with atoms or other electrons is quite small. The electron paths thus appear nearly random, like the motion of atoms in a gas. But there is an electric field in the conductor that causes the electrons to drift in the direction shown (opposite to the field, since they are negative). The drift velocity is the average velocity of the free charges. Drift velocity is quite small, since there are so many free charges. If we have an estimate of the density of free electrons in a conductor, we can calculate the drift velocity for a given current. The larger the density, the lower the velocity required for a given current.
The free-electron collisions transfer energy to the atoms of the conductor. The electric field does work in moving the electrons through a distance, but that work does not increase the kinetic energy (nor speed, therefore) of the electrons. The work is transferred to the conductor’s atoms, possibly increasing temperature. Thus a continuous power input is required to keep a current flowing. An exception, of course, is found in superconductors, for reasons we shall explore in a later chapter. Superconductors can have a steady current without a continual supply of energy—a great energy savings. In contrast, the supply of energy can be useful, such as in a lightbulb filament. The supply of energy is necessary to increase the temperature of the tungsten filament, so that the filament glows.
We can obtain an expression for the relationship between current and drift velocity by considering the number of free charges in a segment of wire, as illustrated in . The number of free charges per unit volume is given the symbol $n$ and depends on the material. The shaded segment has a volume $Ax$, so that the number of free charges in it is $nAx$. The charge $\Delta Q$ in this segment is thus $qnAx$, where $q$ is the amount of charge on each carrier. (Recall that for electrons, $q$ is $-1.60\times 10^{-19} \ \text{C}$.) Current is charge moved per unit time; thus, if all the original charges move out of this segment in time $\Delta t$, the current is

$$I = \frac{\Delta Q}{\Delta t} = \frac{qnAx}{\Delta t}.$$

Note that $x/\Delta t$ is the magnitude of the drift velocity, $v_d$, since the charges move an average distance $x$ in a time $\Delta t$. Rearranging terms gives

$$I = nqAv_d,$$

where $I$ is the current through a wire of cross-sectional area $A$ made of a material with a free charge density $n$. The carriers of the current each have charge $q$ and move with a drift velocity of magnitude $v_d$.
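The following Python sketch solves this relationship for the drift speed; the current, wire diameter, and free-electron density are illustrative assumptions (roughly one free electron per copper atom), not values from the text's examples.

```python
# Drift speed from I = n * q * A * v_d, solved for v_d.
# Assumed values: 20 A in 2.05-mm-diameter copper wire.
import math

I = 20.0          # current, A
q = 1.60e-19      # magnitude of the electron charge, C
n = 8.5e28        # free-electron density of copper, m^-3 (approximate)
d = 2.05e-3       # wire diameter, m
A = math.pi * (d / 2)**2   # cross-sectional area, m^2

v_d = I / (n * q * A)
print(f"v_d = {v_d:.1e} m/s")   # on the order of 1e-4 m/s, as the text states
```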
Note that simple drift velocity is not the entire story. The speed of an electron is much greater than its drift velocity. In addition, not all of the electrons in a conductor can move freely, and those that do might move somewhat faster or slower than the drift velocity. So what do we mean by free electrons? Atoms in a metallic conductor are packed in the form of a lattice structure. Some electrons are far enough away from the atomic nuclei that they do not experience the attraction of the nuclei as much as the inner electrons do. These are the free electrons. They are not bound to a single atom but can instead move freely among the atoms in a “sea” of electrons. These free electrons respond by accelerating when an electric field is applied. Of course as they move they collide with the atoms in the lattice and other electrons, generating thermal energy, and the conductor gets warmer. In an insulator, the organization of the atoms and the structure do not allow for such free electrons.
### Test Prep for AP Courses
### Section Summary
1. Electric current $I$ is the rate at which charge flows, given by $I = \frac{\Delta Q}{\Delta t}$, where $\Delta Q$ is the amount of charge passing through an area in time $\Delta t$.
2. The direction of conventional current is taken as the direction in which positive charge moves.
3. The SI unit for current is the ampere (A), where $1 \ \text{A} = 1 \ \text{C/s}$.
4. Current is the flow of free charges, such as electrons and ions.
5. Drift velocity $v_d$ is the average speed at which these charges move.
6. Current $I$ is proportional to drift velocity $v_d$, as expressed in the relationship $I = nqAv_d$. Here, $I$ is the current through a wire of cross-sectional area $A$. The wire’s material has a free-charge density $n$, and each carrier has charge $q$ and a drift velocity $v_d$.
7. Electrical signals travel at speeds about $10^{12}$ times greater than the drift velocity of free electrons.
### Conceptual Questions
### Problems & Exercises
# Electric Current, Resistance, and Ohm's Law
## Ohm’s Law: Resistance and Simple Circuits
### Learning Objectives
By the end of this section, you will be able to:
1. Explain the origin of Ohm’s law.
2. Calculate voltages, currents, or resistances with Ohm’s law.
3. Explain what an ohmic material is.
4. Describe a simple circuit.
What drives current? We can think of various devices—such as batteries, generators, wall outlets, and so on—which are necessary to maintain a current. All such devices create a potential difference and are loosely referred to as voltage sources. When a voltage source is connected to a conductor, it applies a potential difference that creates an electric field. The electric field in turn exerts force on charges, causing current.
### Ohm’s Law
The current that flows through most substances is directly proportional to the voltage applied to it. The German physicist Georg Simon Ohm (1787–1854) was the first to demonstrate experimentally that the current in a metal wire is directly proportional to the voltage applied:

$$I \propto V.$$
This important relationship is known as Ohm’s law. It can be viewed as a cause-and-effect relationship, with voltage the cause and current the effect. This is an empirical law like that for friction—an experimentally observed phenomenon. Such a linear relationship doesn’t always occur.
### Resistance and Simple Circuits
If voltage drives current, what impedes it? The electric property that impedes current (crudely similar to friction and air resistance) is called resistance $R$. Collisions of moving charges with atoms and molecules in a substance transfer energy to the substance and limit current. Resistance is defined as inversely proportional to current, or

$$I \propto \frac{1}{R}.$$

Thus, for example, current is cut in half if resistance doubles. Combining the relationships of current to voltage and current to resistance gives

$$I = \frac{V}{R}.$$

This relationship is also called Ohm’s law. Ohm’s law in this form really defines resistance for certain materials. Ohm’s law (like Hooke’s law) is not universally valid. The many substances for which Ohm’s law holds are called ohmic. These include good conductors like copper and aluminum, and some poor conductors under certain circumstances. Ohmic materials have a resistance $R$ that is independent of voltage $V$ and current $I$. An object that has simple resistance is called a resistor, even if its resistance is small. The unit for resistance is an ohm and is given the symbol $\Omega$ (upper case Greek omega). Rearranging $I = V/R$ gives $R = V/I$, and so the units of resistance are 1 ohm = 1 volt per ampere:

$$1 \ \Omega = 1 \ \frac{\text{V}}{\text{A}}.$$
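A minimal Python sketch of these rearranged forms follows; the voltage and current values are illustrative assumptions, not from the text.

```python
# Ohm's law in its rearranged forms (values are illustrative).
V = 12.0    # potential difference across a resistor, volts
I = 2.5     # current through it, amperes

R = V / I            # resistance in ohms: 1 ohm = 1 V/A   -> 4.8 ohm
V_drop = I * R       # the IR drop across the resistor      -> 12.0 V
I_check = V / R      # current from voltage and resistance  -> 2.5 A
print(R, V_drop, I_check)
```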
shows the schematic for a simple circuit. A simple circuit has a single voltage source and a single resistor. The wires connecting the voltage source to the resistor can be assumed to have negligible resistance, or their resistance can be included in .
Resistances range over many orders of magnitude. Some ceramic insulators, such as those used to support power lines, have resistances of $10^{12} \ \Omega$ or more. A dry person may have a hand-to-foot resistance of $10^{5} \ \Omega$, whereas the resistance of the human heart is about $10^{3} \ \Omega$. A meter-long piece of large-diameter copper wire may have a resistance of $10^{-5} \ \Omega$, and superconductors have no resistance at all (they are non-ohmic). Resistance is related to the shape of an object and the material of which it is composed, as will be seen in Resistance and Resistivity.
Additional insight is gained by solving $I = V/R$ for $V$, yielding

$$V = IR.$$

This expression for $V$ can be interpreted as the voltage drop across a resistor produced by the flow of current $I$. The phrase $IR$ drop is often used for this voltage. For instance, the headlight in has an $IR$ drop of 12.0 V. If voltage is measured at various points in a circuit, it will be seen to increase at the voltage source and decrease at the resistor. Voltage is similar to fluid pressure. The voltage source is like a pump, creating a pressure difference, causing current—the flow of charge. The resistor is like a pipe that reduces pressure and limits flow because of its resistance. Conservation of energy has important consequences here. The voltage source supplies energy (causing an electric field and a current), and the resistor converts it to another form (such as thermal energy). In a simple circuit (one with a single simple resistor), the voltage supplied by the source equals the voltage drop across the resistor, since $V = IR$, and the same current $I$ flows through each. Thus the energy supplied by the voltage source and the energy converted by the resistor are equal. (See .)
### Test Prep for AP Courses
### Section Summary
1. A simple circuit is one in which there is a single voltage source and a single resistance.
2. One statement of Ohm’s law gives the relationship between current $I$, voltage $V$, and resistance $R$ in a simple circuit to be $I = \frac{V}{R}$.
3. Resistance has units of ohms ($\Omega$), related to volts and amperes by $1 \ \Omega = 1 \ \text{V/A}$.
4. There is a voltage or $IR$ drop across a resistor, caused by the current flowing through it, given by $V = IR$.
### Conceptual Questions
### Problems & Exercises
# Electric Current, Resistance, and Ohm's Law
## Resistance and Resistivity
### Learning Objectives
By the end of this section, you will be able to:
1. Explain the concept of resistivity.
2. Use resistivity to calculate the resistance of specified configurations of material.
3. Use the thermal coefficient of resistivity to calculate the change of resistance with temperature.
### Material and Shape Dependence of Resistance
The resistance of an object depends on its shape and the material of which it is composed. The cylindrical resistor in is easy to analyze, and, by so doing, we can gain insight into the resistance of more complicated shapes. As you might expect, the cylinder’s electric resistance $R$ is directly proportional to its length $L$, similar to the resistance of a pipe to fluid flow. The longer the cylinder, the more collisions charges will make with its atoms. The greater the diameter of the cylinder, the more current it can carry (again similar to the flow of fluid through a pipe). In fact, $R$ is inversely proportional to the cylinder’s cross-sectional area $A$.
For a given shape, the resistance depends on the material of which the object is composed. Different materials offer different resistance to the flow of charge. We define the resistivity $\rho$ of a substance so that the resistance $R$ of an object is directly proportional to $\rho$. Resistivity is an intrinsic property of a material, independent of its shape or size. The resistance $R$ of a uniform cylinder of length $L$, of cross-sectional area $A$, and made of a material with resistivity $\rho$, is

$$R = \frac{\rho L}{A}.$$
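A minimal Python sketch of this formula follows; the wire length and diameter are illustrative assumptions, and the copper resistivity used is the commonly tabulated value.

```python
# Resistance of a uniform cylinder: R = rho * L / A
# Assumed values: 1.0 m of 2.05-mm-diameter copper wire.
import math

rho = 1.72e-8              # resistivity of copper, ohm*m (tabulated value)
L = 1.0                    # length, m
d = 2.05e-3                # diameter, m
A = math.pi * (d / 2)**2   # cross-sectional area, m^2

R = rho * L / A
print(f"R = {R:.1e} ohm")  # roughly 5e-3 ohm: a metre of household-gauge
                           # copper wire has only a few milliohms of resistance
```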
gives representative values of $\rho$. The materials listed in the table are separated into categories of conductors, semiconductors, and insulators, based on broad groupings of resistivities. Conductors have the smallest resistivities, and insulators have the largest; semiconductors have intermediate resistivities. Conductors have varying but large free charge densities, whereas most charges in insulators are bound to atoms and are not free to move. Semiconductors are intermediate, having far fewer free charges than conductors, but having properties that make the number of free charges depend strongly on the type and amount of impurities in the semiconductor. These unique properties of semiconductors are put to use in modern electronics, as will be explored in later chapters.
### Temperature Variation of Resistance
The resistivity of all materials depends on temperature. Some even become superconductors (zero resistivity) at very low temperatures. (See .) Conversely, the resistivity of conductors increases with increasing temperature. Since the atoms vibrate more rapidly and over larger distances at higher temperatures, the electrons moving through a metal make more collisions, effectively making the resistivity higher. Over relatively small temperature changes (about $100^\circ\text{C}$ or less), resistivity $\rho$ varies with temperature change $\Delta T$ as expressed in the following equation

$$\rho = \rho_0 (1 + \alpha \Delta T),$$

where $\rho_0$ is the original resistivity and $\alpha$ is the temperature coefficient of resistivity. (See the values of $\alpha$ in below.) For larger temperature changes, $\alpha$ may vary or a nonlinear equation may be needed to find $\rho$. Note that $\alpha$ is positive for metals, meaning their resistivity increases with temperature. Some alloys have been developed specifically to have a small temperature dependence. Manganin (which is made of copper, manganese and nickel), for example, has $\alpha$ close to zero (to three digits on the scale in ), and so its resistivity varies only slightly with temperature. This is useful for making a temperature-independent resistance standard, for example.
Note also that $\alpha$ is negative for the semiconductors listed in , meaning that their resistivity decreases with increasing temperature. They become better conductors at higher temperature, because increased thermal agitation increases the number of free charges available to carry current. This property of decreasing $\rho$ with temperature is also related to the type and amount of impurities present in the semiconductors.
The resistance of an object also depends on temperature, since $R$ is directly proportional to $\rho$. For a cylinder we know $R = \rho L / A$, and so, if $L$ and $A$ do not change greatly with temperature, $R$ will have the same temperature dependence as $\rho$. (Examination of the coefficients of linear expansion shows them to be about two orders of magnitude less than typical temperature coefficients of resistivity, and so the effect of temperature on $L$ and $A$ is about two orders of magnitude less than on $\rho$.) Thus,

$$R = R_0 (1 + \alpha \Delta T)$$

is the temperature dependence of the resistance of an object, where $R_0$ is the original resistance and $R$ is the resistance after a temperature change $\Delta T$. Numerous thermometers are based on the effect of temperature on resistance. (See .) One of the most common is the thermistor, a semiconductor crystal with a strong temperature dependence, the resistance of which is measured to obtain its temperature. The device is small, so that it quickly comes into thermal equilibrium with the part of a person it touches.
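A minimal Python sketch of this temperature dependence follows; the starting resistance and temperature rise are illustrative assumptions kept within the modest range over which the linear formula applies, and the coefficient used is the commonly tabulated value for copper.

```python
# Temperature dependence of resistance: R = R0 * (1 + alpha * delta_T)
# Assumed values: a copper winding warming from 20 C to 100 C.
R0 = 0.500       # resistance at the reference temperature, ohms (illustrative)
alpha = 3.9e-3   # temperature coefficient of resistivity of copper, per degC
delta_T = 80.0   # temperature change, degC

R = R0 * (1 + alpha * delta_T)
print(f"R = {R:.3f} ohm")   # -> 0.656 ohm, about a 31% increase
```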
### Test Prep for AP Courses
### Section Summary
1. The resistance $R$ of a cylinder of length $L$ and cross-sectional area $A$ is $R = \frac{\rho L}{A}$, where $\rho$ is the resistivity of the material.
2. Values of $\rho$ in show that materials fall into three groups—conductors, semiconductors, and insulators.
3. Temperature affects resistivity; for relatively small temperature changes $\Delta T$, resistivity is $\rho = \rho_0 (1 + \alpha \Delta T)$, where $\rho_0$ is the original resistivity and $\alpha$ is the temperature coefficient of resistivity.
4. gives values for $\alpha$, the temperature coefficient of resistivity.
5. The resistance $R$ of an object also varies with temperature: $R = R_0 (1 + \alpha \Delta T)$, where $R_0$ is the original resistance, and $R$ is the resistance after the temperature change.
### Conceptual Questions
### Problems & Exercises