Let us recap where we are with atoms, at least from a chemistry perspective:
• Atoms interact electrostatically with each other in a variety of ways, from transient interactions that result in weak (easy to overcome) attractions to strong (bonding) interactions that are much more difficult to break.
• When atoms interact they form more stable systems, where the attractive and repulsive interactions are balanced. The potential energy of the system decreases while the total energy of the system remains constant. The total energy of the interacting atoms (the system) can decrease only if it is transferred to the surroundings, usually by collisions with other molecules or atoms, although the emission of a photon is also possible.
• Whether weak or strong, all types of interactions require energy to overcome. Typically this energy is derived from collisions with surrounding molecules, although absorption of a photon can also overcome interactions.
• The ways that atoms interact depend upon the arrangements of the electrons within them. Different types of atoms have different “internal” arrangements of electrons.
• When atoms bond to form new materials (compounds), the properties of those compounds are emergent—that is, they are quite different from the properties of the isolated component atoms.
• The macroscopic properties of materials depend upon the types of bonds present and their spatial organization, which influences molecular shape, the distribution of charges within the molecule, and intermolecular interactions.
• Some materials are continuous (diamond, metals, ionic compounds), whereas others are composed of discrete molecular units (water, methane, lipids, proteins).
• If you know the temperature at which phase changes occur in a material (solid to liquid, liquid to gas, etc.), you can make predictions about how much energy is required to overcome the interactions between the particles that make up the material.
Now we are ready to draw all these ideas together and make connections between the macroscopic and molecular levels. Understanding these connections allows us to predict how and when chemical changes will occur, which is the heart of chemistry.
05: Systems Thinking
5.1: Temperature

Up to now the major types of change we have considered are phase changes (solid to liquid, liquid to gas, etc.). Now we will look at the elements of a phase change in greater detail, starting with temperature. If you look up the definition of temperature you will probably find something like “the degree of heat of an object” and think to yourself, “Well, that’s not very illuminating, is it?” It is actually quite difficult to give a simple definition of temperature (typically abbreviated as $\mathrm{T}$). If you were already taught about temperature in physics courses, please bear with us (a chemist and a cell and molecular biologist) as we work our way through it; sometimes it is helpful to think about things you already know in new ways!
A useful macroscopic way of thinking about temperature is that it tells you in which direction thermal energy (often called heat) will move—energy always moves from a hotter (higher-temperature) object to a cooler (lower-temperature) one. This may seem like an obvious statement about how the physical world works, but do you really know why it must be the case? Why doesn’t heat flow from cooler to warmer? Is there some principle that will allow us to explain why? We will be coming back to these questions later on in this chapter.
Students often confuse temperature and thermal energy and before we go on we need to have a good grasp of the difference between them. The temperature of an object is independent of the size of the object, at least until we get down to the atomic/molecular level where temperature begins to lose its meaning as a concept.[1] The temperature of a drop of boiling water is the same as the temperature of a pan (or an ocean) of boiling water: $100 { }^{\circ}\mathrm{C}$ at sea level. At the same time the total amount of thermal energy in a drop of water is much less than that in a large pot of water at the same temperature. A drop of boiling water may sting for a moment if it lands on you, but a pan of boiling water will cause serious damage if it splashes over you. Why? Even though the two are at the same temperature, one has relatively little thermal energy and the other has a lot; the amount of energy is related to the size of the system. In addition, the amount of thermal energy depends on the type, that is, the composition of the material. Different amounts of different substances can have different amounts of thermal energy, even if they are at the same temperature (weird but true).
Kinetic Energy and Temperature
Another way of thinking about temperature is that it is related to the energy of the particles in the sample: the faster the particles are moving, the higher the temperature. It may well take different amounts of energy to bring different particles to the same average kinetic energy. For a simple monoatomic gas, like helium or neon, the only motion available to the atoms is to move from one place to another in a straight line until they bump into something else, such as another atom or molecule.[2] This kind of motion is called translational motion and is directly linked to the kinetic energy of the atom or molecule through the relationship $\overline{\mathrm{KE}} = \frac{1}{2} m \bar{v}^{2} = \frac{3}{2} k \mathrm{T}$, where $\bar{v}$ is the average velocity of all of the molecules in the population[3], $m$ is the mass, $k$ is a constant, known as the Boltzmann constant, and $\mathrm{T}$ is the temperature. That is, the average kinetic energy of a gas is directly related to the temperature. In any given gaseous sample of moving atoms there are many collisions per unit time, but these collisions do not alter the total energy of the system (it is conserved).[4] What these collisions can, and often do, alter is the relative kinetic energies of the two (or more) colliding atoms: if one slows down, the other will speed up (remember, we are now talking only about monoatomic species; things get more complicated with more complex molecules).
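To make this relationship concrete, here is a minimal Python sketch (our own illustration, not part of the original text) that solves $\frac{1}{2} m \bar{v}^{2} = \frac{3}{2} k \mathrm{T}$ for the root-mean-square speed of a monoatomic gas:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro's number, 1/mol

def rms_speed(molar_mass_g_mol: float, T: float) -> float:
    """Root-mean-square speed (m/s) from (1/2) m v^2 = (3/2) k T."""
    m = molar_mass_g_mol / 1000 / N_A  # mass of one particle, kg
    return math.sqrt(3 * k_B * T / m)

print(f"He at 298 K: {rms_speed(4.00, 298):.0f} m/s")   # ~1360 m/s
print(f"Ne at 298 K: {rms_speed(20.18, 298):.0f} m/s")  # ~610 m/s
```

Note how the heavier neon atoms, at the same temperature (and so the same average kinetic energy), move markedly more slowly than helium atoms.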
Any single atom or molecule has kinetic energy, but not a temperature. This is an important distinction. A population of molecules has a temperature related to the average velocity of its members, but the concept of temperature is not relevant to individual molecules: they have kinetic energy but not a temperature. Temperature is a characteristic of a system, not of its individual components. While a system has a unique temperature, the individual molecules that make up the system can have quite different kinetic energies. Because of collisions between molecules, an individual molecule’s kinetic energy can be changing rapidly, even though the temperature of the system is constant. When it comes to chemical reactions, it is individual kinetic energies that will be critical (we consider this point in greater detail in Chapter $7$).

5.2: Thinking About Populations of Molecules
Within a population of atoms and molecules, the many collisions that occur per second lead to a range of speeds and directions (that is, velocities) of the atoms/molecules. When large numbers of particles are involved in a phenomenon, their individual actions are not important, for example when measuring temperature or pressure (although they are when individual molecules collide, that is, take part in chemical reactions). We treat large numbers of molecules as a population. A population is characterized by the distribution of the number or probability of molecules moving with various velocities.[5] This makes it possible to use statistical methods to characterize the behavior of the population. Although any particular molecule behaves differently from one moment to the next, depending upon whether it collides with other molecules or not, the behavior of the population is quite predictable.[6]
From this population perspective, it is the distribution of kinetic energies of atoms or molecules that depends upon the temperature of the system. We will not concern ourselves with deriving the equations that describe these relationships, but rather focus on a general description of the behavior of the motions of atoms and molecules in various states of matter.
Let us think about a population of molecules at a particular temperature in the gas phase. Because of their constant collisions with one another, the population of molecules has a distribution of speeds. We can calculate the probability of a particular molecule moving at a particular speed. This relationship is known as the Maxwell–Boltzmann distribution, shown in the graph. Its shape is a function of the temperature of the system; typically the curve rises fairly steeply from zero (all of the curves begin at zero – why do you think that is?) to a maximum, then decreases and tails off at higher velocities (which correspond to higher kinetic energies). Because we are plotting probability versus kinetic energy (or rms velocity or speed), we can set the area under the curve to be equal to one (or any other constant). As the temperature changes, the area under the curve stays constant. Why? Because we are completely certain that each particle has some defined amount of kinetic energy (or velocity or speed), even if it is zero and even if we could not possibly know it (remember the uncertainty principle). As the temperature is increased, the relative number of particles that are moving at higher speeds and with more kinetic energy increases. The shape of the curve flattens out and becomes broader. There are still molecules moving very slowly, but there are relatively fewer of them. The most probable speed (the peak of the curve) and the average speed (which is a little higher, since the curve is not symmetrical) increase as the temperature increases.
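For readers who want to see the distribution itself, here is a short Python sketch (ours, not from the text) of the Maxwell–Boltzmann speed distribution; it checks numerically that the area under the curve stays at one while the peak shifts to higher speeds as $\mathrm{T}$ rises. The choice of neon is arbitrary:

```python
import math

k_B = 1.380649e-23   # J/K
N_A = 6.02214076e23  # 1/mol

def mb_speed_density(v, molar_mass_g_mol, T):
    """Maxwell-Boltzmann probability density for speed v (m/s)."""
    m = molar_mass_g_mol / 1000 / N_A
    a = m / (2 * math.pi * k_B * T)
    return 4 * math.pi * a**1.5 * v**2 * math.exp(-m * v**2 / (2 * k_B * T))

dv = 1.0  # bin width, m/s
for T in (300, 600):
    # area under the curve: should come out ~1 at every temperature
    area = sum(mb_speed_density(v, 20.18, T) for v in range(0, 5000)) * dv
    v_mp = math.sqrt(2 * k_B * T * N_A / 20.18e-3)  # most probable speed (peak)
    print(f"T = {T} K: area = {area:.3f}, most probable speed = {v_mp:.0f} m/s")
```

Running this gives a peak near 500 m/s at 300 K and near 700 m/s at 600 K, with the area fixed at one — exactly the behavior described above.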
Questions
Questions to Answer
• What happens to the average speed of molecules as temperature increases?
• When molecules collide, why don’t they stick together?
• What do you think happens to the average speed as molecular weight increases (assuming the temperature stays the same)?
• Imagine a system composed of two different types of molecules, one much heavier than the other. At a particular temperature, how do their average kinetic energies compare? Which, on average, is moving faster?
Questions to Ponder
• How large does a system have to be to have a temperature, $10$ molecules or $10,000,000$?
• If one considers the uncertainty principle, what is the slowest velocity at which a molecule can move?
• If you place a thermometer into a solution, why does it take time for the reading on the thermometer to correspond to the temperature of the solution?
Temperature, Kinetic Energy and Gases
Now here is an unexpected fact: the average kinetic energies of the molecules of any gas at the same temperature are equal (since $\mathrm{KE} = \frac{3}{2} k \mathrm{T}$, the identity of the gas does not matter). Let us think about how that could be true and what it implies about gases. Under most circumstances the molecules in a gas do not significantly interact with each other; all they do is collide with one another like billiard balls. So when two gases are at the same temperature, their molecules have the same average kinetic energy. However, the masses of the molecules of different gases differ. Given that the average kinetic energies are the same but the molecular masses are different, the average velocities of molecules in the two gases must be different. For example, let us compare molecular hydrogen ($\mathrm{H}_{2}$) gas (molecular weight $= 2 \mathrm{~g/mol}$) with molecular oxygen ($\mathrm{O}_{2}$) gas (molecular weight $= 32 \mathrm{~g/mol}$), at the same temperature. Since the average kinetic energy of $\mathrm{H}_{2}$ must be equal to the average kinetic energy of $\mathrm{O}_{2}$, the $\mathrm{H}_{2}$ molecules must be moving, on average, faster than the $\mathrm{O}_{2}$ molecules.[7]
So the average speed at which an atom or molecule moves depends on its mass. Heavier particles move more slowly, on average, which makes perfect sense. Consider a plot of the behavior of the noble (monoatomic) gases, all at the same temperature. On average helium atoms move much faster than xenon atoms, which are over 30 times heavier. As a side note, gas molecules tend to move very fast. At $0 { }^{\circ}\mathrm{C}$ the average $\mathrm{H}_{2}$ molecule is moving at about $2000 \mathrm{~m/s}$, which is more than a mile per second, and the average $\mathrm{O}_{2}$ molecule is moving at approximately $500 \mathrm{~m/s}$. This explains why smells spread relatively quickly: if someone spills perfume on one side of a room, you can soon smell it across the room. It also explains why you can’t smell something unless it is a gas. We will return to this idea later.
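These figures are easy to check with the same $\frac{3}{2} k \mathrm{T}$ relationship (our sketch; the computed rms values come out just below the rounded numbers quoted above):

```python
import math

k_B, N_A = 1.380649e-23, 6.02214076e23  # J/K, 1/mol

def rms_speed(molar_mass_g_mol, T):
    m = molar_mass_g_mol / 1000 / N_A  # kg per particle
    return math.sqrt(3 * k_B * T / m)

T = 273.15  # 0 degrees C
for name, M in [("H2", 2.016), ("O2", 32.00), ("He", 4.00), ("Xe", 131.29)]:
    print(f"{name:3s}: {rms_speed(M, T):5.0f} m/s")
# H2 ~1840 m/s, O2 ~460 m/s: same temperature (same average KE),
# very different speeds; He comes out ~5.7x faster than Xe.
```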
Questions
Questions to Answer
• Why don’t all gas particles move with the same speed at a given temperature?
• Where would krypton appear on the plot above? Why?
• Consider air, a gas composed primarily of $\mathrm{N}_{2}$, $\mathrm{O}_{2}$, and $\mathrm{CO}_{2}$. At a particular temperature, how do the average kinetic energies of these molecules compare to one another?
• What would a plot of kinetic energy versus probability look like for the same gas at different temperatures?
• What would a plot of kinetic energy (rather than speed) versus probability look like for different gases (e.g., the noble gases) at the same temperature?
Questions to Ponder
• If gas molecules are moving so fast (around $500 \mathrm{~m/s}$), why do most smells travel at significantly less than that?
• Why does it not matter much if we use speed, velocity, or kinetic energy to present the distribution of motion of particles in a system (assuming the particles are all the same)?

5.3: Vibrating, Bending, and Rotating Molecules
As we have already seen, the average kinetic energy of a gas sample can be directly related to temperature by the equation $\overline{E_{k}} = \frac{1}{2} m \bar{v}^{2} = \frac{3}{2} k \mathrm{T}$, where $\bar{v}$ is the average velocity and $k$ is a constant, known as the Boltzmann constant. So, you might reasonably conclude that when the temperature is $0 \mathrm{~K}$, all movement stops. However, if a molecule stopped moving we would be able to tell exactly where it is, right? Oh no! That would violate the uncertainty principle, which means there must be some uncertainty in its energy! At $0 \mathrm{~K}$ (a temperature that cannot be reached, even in theory) the system will have what is called zero point energy: the energy that remains when all the other energy is removed from a system (a quantum mechanical concept completely irrelevant to normal life).
https://www.youtube.com/watch?v=FnUGeYkFCCw
For monoatomic gases, temperature is a measure of the average kinetic energy of molecules. But for systems made up of more complex molecules composed of multiple atoms, there are other ways to store energy besides translation (that is, moving through space). In these situations energy added to a system can not only speed up the movement of molecules but also make them vibrate, bend, and rotate (recall we discussed this briefly in Chapter $4$). These vibrations, bends, and rotations are distinct for each type of molecule; they depend upon molecular shape and composition. Perhaps not surprisingly, they are quantized. This means that only certain packets of energy can be absorbed or released, depending on which vibrations or rotations are involved.[8] Because of that, we can use these molecule-specific energy states to identify molecules and determine their structure at the atomic level. Just as we can identify atoms of elements by their electronic spectra (how their electrons absorb and emit photons as they move from one quantum level to another), we can identify molecules by the way they absorb or emit photons as the molecule moves from one vibrational or rotational state to another. Because it takes less energy to move between vibrational or rotational states than between electronic states, photons of infrared (vibrations) or microwave (rotations) frequencies are typically involved in this analysis. This is the basis for infrared spectroscopy, a topic that we will return to in a separate work.
As materials become more complex in structure, more energy is needed to increase their temperature because there are more ways for a complex molecule to vibrate, bend, and rotate; some of the added energy is used up in vibrations and rotations as well as translations. The amount of energy required to raise the temperature of a particular amount of substance is determined by the molecular-level structure of the material. We can do experiments to determine how adding energy to a substance affects its temperature. Although the word heat is sometimes used to describe thermal energy, in the world of physics it is specifically used to describe the transfer of thermal energy from one thing to another. So, we will stick with thermal energy here.
The units of thermal energy are joules ($\mathrm{J}$).[9] Thermal energy is the sum of the kinetic and potential energies of the particles in a system. There are two commonly used measures of how much energy it takes to change the temperature of a substance and, conversely, how much energy a substance can store at a given temperature: specific heat capacity ($\mathrm{J/g} { }^{\circ}\mathrm{C}$) and molar heat capacity ($\mathrm{J/mol} { }^{\circ}\mathrm{C}$). The specific heat of a substance tells you how much energy is required to raise the temperature of a mass ($1 \mathrm{~g}$) of material by $1 { }^{\circ}\mathrm{C}$; the molar heat capacity tells you how much energy is required to raise the temperature of a mole of particles by $1 { }^{\circ}\mathrm{C}$. The specific heat and molar heat capacity of a substance depend on both the molecular structure and intermolecular interactions (for solids and liquids, but not gases). Usually, more complex substances have a higher molar heat capacity because larger molecules have more possible ways to vibrate, bend, and rotate. Substances with strong IMFs tend to have higher heat capacities than those with weaker IMFs because energy must be used to overcome the interactions between molecules, rather than making the molecules move faster, which is what increases the temperature.
Heat Capacity and Molecular Structure
It takes $4.18 \mathrm{~J}$ to raise 1 gram of water by $1 { }^{\circ}\mathrm{C}$ (or $1 \mathrm{~K}$). If you add energy to a pan of water by heating it on a stovetop, energy is transferred to the molecules of water by collisions with the pan, which in turn has heated up from contact with the heating element.[10] The addition of energy to the system results in the faster movement of molecules, which includes moving from place to place, rotating, bending, and vibrating. Each type of movement adds to the overall thermal energy of the material. Although the molecules in a gas very rarely interact with one another, those in a solid and liquid interact constantly. The increase in temperature as a function of added energy is relatively simple to calculate for a gas; it is much more complicated for liquids and solids, where it depends upon molecular structure and intramolecular (within a molecule) as well as intermolecular (between molecules) interactions.
Consider the molar heat capacities and specific heats of water and the alcohols (which contain an $\mathrm{-OH}$ group) methanol, ethanol, and propanol. As you can see in the table below, water has an unusually high specific heat, even though its molecules are smaller than those of the other substances. The alcohols’ specific heats are pretty much constant, but their molar heat capacities increase with molar mass.
| Name | Formula | Molar Mass ($\mathrm{g/mol}$) | Molar Heat Capacity ($\mathrm{J/mol} { }^{\circ}\mathrm{C}$) | Specific Heat ($\mathrm{J/g} { }^{\circ}\mathrm{C}$) |
|---|---|---|---|---|
| Water | $\mathrm{H}_{2}\mathrm{O}$ | $18$ | $75.4$ | $4.18$ |
| Methanol | $\mathrm{CH}_{3}\mathrm{OH}$ | $32$ | $81.0$ | $2.53$ |
| Ethanol | $\mathrm{CH}_{3}\mathrm{CH}_{2}\mathrm{OH}$ | $46$ | $112$ | $2.44$ |
| Propanol | $\mathrm{CH}_{3}\mathrm{CH}_{2}\mathrm{CH}_{2}\mathrm{OH}$ | $60$ | $144$ | $2.39$ |
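The two measures in the table are linked through the molar mass: heating one mole takes the specific heat multiplied by the number of grams in a mole. A quick Python check of the table’s numbers (our own sketch, not part of the original text):

```python
# Check: molar heat capacity ~ specific heat x molar mass.
# (name, molar mass g/mol, specific heat J/(g C), reported molar heat capacity J/(mol C))
data = [
    ("water",    18, 4.18, 75.4),
    ("methanol", 32, 2.53, 81.0),
    ("ethanol",  46, 2.44, 112.0),
    ("propanol", 60, 2.39, 144.0),
]
for name, M, c_specific, C_reported in data:
    computed = c_specific * M  # J per mole per degree C
    print(f"{name:9s}: {c_specific} x {M} = {computed:6.1f} J/(mol C), table: {C_reported}")
```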
So an obvious question is, why is the specific heat of water so much higher than that of these alcohols? The reasons for this (apparent) anomaly are:
1. Water molecules are smaller so there are more molecules per gram than there are in the larger, more complex substances.
2. Each water molecule can form up to four hydrogen bonds, but the alcohols can only form a maximum of two hydrogen bonds each (why is this?). As thermal energy is added to the system some of that energy must be used to overcome the attractive forces between molecules (that is, hydrogen bonds) before it can be used to increase the average speed of the molecules. Because there are more hydrogen bonds forming attractions between water molecules, it takes more energy to overcome those interactions and raise the kinetic energy of the water molecules. The end result is a smaller increase in temperature for the same amount of energy added to water compared to methanol, ethanol, and propanol.
The relatively high specific heat of water has important ramifications for us. About 70% of the Earth’s surface is covered with water. Because of water’s high specific heat, changes in the amounts of solar energy falling on an area between day and night are “evened out” by the large amount of water in the oceans. During the day, the water absorbs much of the energy radiated from the sun, but without a drastic temperature increase. At night, as the temperature falls, the oceans release some of this stored energy, thus keeping the temperature fluctuations relatively small. This effect moderates what would otherwise be dramatic daily changes in surface temperature. In contrast, surface temperatures of waterless areas (like deserts), planets (like Mars), and the Moon fluctuate much more dramatically, because there is no water to absorb and release thermal energy.[11] This moderation of day–night temperature change is likely to be one of the factors that made it possible for life to originate, survive, and evolve on the early Earth. As we go on, we will see other aspects of water’s behavior that are critical to life.
Removing Thermal Energy from a Gas
Now that we have been formally introduced to the concepts of heat, thermal energy, and temperature, we can examine what happens when energy is added to or removed from matter. We begin with a gas because it is the simplest form of matter. We can observe a gas system by looking at a sealed container of water vapor. We can reduce the temperature by cooling the walls of the container; as gas molecules collide with the walls, some of their energy is transferred to the wall and then removed by the cooling system. Over time, the average kinetic energy of the molecules (temperature) decreases. We know that all molecules are attracted to one another by London dispersion forces. In the case of water molecules, there are also interactions mediated by the ability to make hydrogen bonds and dipole–dipole interactions. At higher temperatures, these relatively weak interactions are not strong enough to keep molecules stuck together; they are broken during molecular collisions. As the temperature drops, and the average kinetic energy decreases, more and more of these interactions persist for longer and longer times. This enables groups of molecules to form increasingly larger and heavier aggregates. Assuming that our container is on the surface of the Earth, molecules fall out, or condense, out of the gaseous phase to form a liquid. Because the molecules in the liquid are interacting closely with one another, the volume occupied by these aggregates is much smaller than the volume occupied by the same number of molecules in a gas. The density (mass/volume) of the liquid is higher, and eventually these drops of liquid become large enough to feel the effect of gravity and are attracted towards the Earth. As the drops of liquid fall to the bottom of the container they merge with one another, and the liquid phase below separates from the gaseous phase above. The temperature at which the liquid phase first appears is the boiling (or condensation) point of the material (for water it is $100 { }^{\circ}\mathrm{C}$ under atmospheric pressure at sea level). If we continue to remove energy from the system at a fairly slow, steady rate, the temperature will not change until almost all the water vapor has condensed into liquid. Why do you think this is so? It may be easier to think about the reverse process: when water boils, the temperature of the water does not change until almost all the water in the liquid phase has vaporized, even though energy is being added to the system. What is that energy being used for?
Even at temperatures well below the boiling point there are still some molecules in the gaseous phase. Why? Because within the liquid, some molecules are moving fast enough (and are located close enough to the liquid–gas boundary) to break the interactions holding them in the liquid. When they leave the liquid phase, the average kinetic energy of the liquid drops (the molecules that leave have higher than average kinetic energy) and some of the kinetic energy of the escaping molecules is used to break free of the interactions holding them together in the liquid phase. The escaping molecules now have lower kinetic energy. This is the basis of the process known as evaporative cooling. The same process explains how the evaporation of sweat cools your body.
Questions
Questions to Answer
• Can you measure thermal energy directly? Why or why not?
• What can we measure changes in? How does that allow us to figure out changes in thermal energy of a system?
• Draw a graph of the change in temperature when equal amounts of thermal energy are added at the same rate to equal masses of water, ethanol, and propanol.
• Does each sample reach the same temperature? Why or why not?
• Plot the temperature change versus time as a sample of water vapor moves from a temperature of $110 { }^{\circ}\mathrm{C}$ to $90 { }^{\circ}\mathrm{C}$.
• Draw a molecular-level picture of what the sample looks like at $110 { }^{\circ}\mathrm{C}$ and $90 { }^{\circ}\mathrm{C}$. Explain what is happening in each different part of your graph.
• When energy is added and the water boils, the temperature stays at $100 { }^{\circ}\mathrm{C}$ until almost all the water is gone. What is the energy being used for?
Questions to Ponder
• What would life be like if we lived on a planet with no water, but instead the oceans were filled with methanol or ammonia (or filled with hydrocarbons as on Titan, a moon of Saturn)?
• After it’s just finished raining, why do pools of water disappear even when the temperature is below the boiling point of water?
• Clouds are made from small droplets of water, why don’t they fall to Earth?
Liquids to Solids and Back Again
Within a liquid, molecules move with respect to one another. That is why liquids flow. What does that mean at the molecular level? It means that the molecules are (on average) moving fast enough to break some, but not all, of the interactions linking them to their neighbors. But let us consider what happens as we remove more and more energy from the system through interactions of the molecules with the container’s walls. With less energy in the system, there is a decrease in the frequency with which molecules have sufficient energy to break the interactions between them, and as a result interactions become more stable. Once most interactions are stable the substance becomes a solid. The temperature at which the material goes from solid to liquid is termed the melting point; a liquid becomes a solid at the freezing point. For water at atmospheric pressure, this is $0 { }^{\circ}\mathrm{C}$ (or $273.15 \mathrm{~K}$). Just like the boiling/condensation point, the temperature does not change appreciably until all the liquid has solidified into ice, or all the ice has melted.
Molecular shape and the geometry of the interactions between molecules determine what happens when water (or any other liquid) is cooled and eventually freezes. In the case of frozen water (ice) there are more than 15 types of arrangements of the molecules, ranging from amorphous to various types of crystalline ice. In amorphous ice, the molecules occupy positions that are more or less random with respect to their neighbors; in contrast, the molecules in crystalline ice have very specific orientations to one another. The form of ice we are most familiar with is known as Ice Ih, in which the water molecules are organized in a hexagonal, three-dimensional array. Each molecule is linked to four neighboring molecules through hydrogen bonds. This molecular-level structure is reflected at the macroscopic level, which is why snowflakes are hexagonal. Once frozen, the molecules can no longer move with respect to one another because of the bonds between them; the ice is solid and retains its shape, at both the visible and the invisible (molecular) level. However, because we are not at absolute zero ($0 \mathrm{~K}$ or $-273.15 { }^{\circ}\mathrm{C}$), the molecules are still vibrating in place.
Now, what would happen if we heated our container, transferring energy from the surroundings into the system (the ice)? As energy is added to the ice the water molecules vibrate more and more vigorously, and eventually the hydrogen bonding interactions holding the molecules in place are overcome and the molecules become free to move relative to one another. The ice melts. At this temperature ($0 { }^{\circ}\mathrm{C}$, $273.15 \mathrm{~K}$) all the energy entering the system is used to overcome intermolecular attractions, rather than increase the speed of molecular motion. If the system is well mixed, the temperature stays at $0 { }^{\circ}\mathrm{C}$ until all of the ice has melted. Then the temperature starts to rise again as the water molecules, now free to move relative to each other, increase in kinetic energy.
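The plateaus described here can be made quantitative using $q = mc\Delta \mathrm{T}$ for each warming stage plus the enthalpies of fusion and vaporization for the phase changes. A short Python sketch, using approximate handbook-style values that are our assumption rather than data from this chapter:

```python
# Energy to take 100 g of ice at -10 C to steam at 110 C, in five stages.
# Approximate values (our assumption): specific heats of ice/water/steam
# in J/(g C) and enthalpies of fusion/vaporization of water in J/g.
m = 100.0  # grams
stages = [
    ("warm ice   -10 ->   0 C", m * 2.09 * 10),   # q = m c dT
    ("melt ice at 0 C        ", m * 334),         # q = m * dH_fus
    ("warm water   0 -> 100 C", m * 4.18 * 100),
    ("boil water at 100 C    ", m * 2260),        # q = m * dH_vap
    ("warm steam 100 -> 110 C", m * 2.00 * 10),
]
total = 0.0
for label, q in stages:
    total += q
    print(f"{label}: {q/1000:7.1f} kJ")
print(f"total: {total/1000:.1f} kJ  (note the flat stages at 0 C and 100 C)")
```

Notice that vaporization dominates the total: far more energy goes into separating the molecules completely than into any of the warming stages.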
Because of the arrangement of water molecules in Ice Ih, the hexagonal “cages” of water molecules within the crystal have empty space within them. As the hydrogen bonds break, some of the water molecules can now move closer together to fill in these open spaces. The structure of the ice collapses in on itself. This open network of molecules, which is not present in liquid water, means that Ice Ih is less dense than liquid water, which is why it floats on liquid water. We don’t think much of this commonplace observation, but it is quite rare for a solid to be less dense than the corresponding liquid. More typically, materials (particularly gases, but also liquids and solids) expand when heated as a consequence of the increased kinetic energy, making the particles vibrate more vigorously and take up more space.

5.4: Open Versus Closed Systems
In our discussion, the container of water vapor (gas) is our system: the part of the universe we are observing. It is separated from the rest of the universe (its surroundings) by the walls of the container (its boundary).[12] When we remove energy from the system or add energy to it, that energy goes to or comes from the surroundings. Our system is not an isolated system. If it were, neither energy nor matter would move between the system and the surroundings. In practice it is difficult to construct a perfectly isolated system (although an insulated or styrofoam coffee cup with a lid on is not a bad approximation). We can also distinguish between open and closed systems: in an open system both matter and energy can enter or leave (we can keep track of both), whereas in a closed system the amount of matter is constant and only energy can enter or leave. Whenever we look at a system our first task is to decide whether the system is isolated, open, or closed. All biological systems are open (both energy and matter are being exchanged with the surroundings). In the absence of such an exchange, a biological system would eventually die.[13]
Let us consider a beaker of water without a lid as our open system. As the temperature rises, some of the water molecules have enough energy to escape from the body of the water. The liquid water evaporates (changes to a gas). Any gases that might be dissolved in the liquid water, such as oxygen ($\mathrm{O}_{2}$) or nitrogen ($\mathrm{N}_{2}$), also move from the liquid to the gaseous phase. At the boiling point, all the energy being supplied to the system is being used to overcome the intermolecular forces, as it was at the melting point. However, this time the molecules are completely separated from one another, although they still collide periodically. Thus energy is used to overcome attractive forces and the individual molecules fly off into the gas phase where the distances between them become so great that the attractive forces are insignificant.[14] As the liquid boils, its temperature does not rise until all of it has been transformed from liquid to vapor. As the gas molecules fly off, they carry with them some of the system’s energy.
Questions
Questions to Answer
• Begin with an ice cube in a beaker and end with water vapor. Draw a graph of the energy input versus the temperature of the system. Is your graph a straight line?
• What would happen to the mass of the beaker and water during this process?
• Can you reproduce the hexagonal symmetry of ice by using a model kit? What property of hydrogen bonds makes the structure so open?
• As the temperature rises in liquid water, what do you think happens to the density? Draw a plot of density versus temperature for a mass of water beginning at $-10 { }^{\circ}\mathrm{C}$, up to $50 { }^{\circ}\mathrm{C}$.
• What happens when the temperature has risen such that the molecules have enough energy to overcome all the attractions between the separate molecules? Focus not on the covalent bonds but the attractions between separate molecules.
Questions to Answer, continued
• During evaporation and boiling do water molecules ever return to the liquid?
• Estimate the temperature at which the bonds within a water molecule break. How does that temperature compare to the boiling point of water? Why aren’t they the same temperature?
• How would an open and a closed system differ if you heated them from $30$ to $110^{\circ}\mathrm{C}$?
Questions to Ponder
• Are boiling and evaporation fundamentally different processes?
• Under what conditions does evaporation not occur? What is happening at the molecular level?
• What is in the spaces in the middle of the hexagonal holes in Ice Ih?
• What would be the consequences for a closed or isolated biological system?
Questions for Later
• As you heat up a solution of water, predict whether water molecules or dissolved gas molecules will preferentially move from the liquid to the gaseous phase (or will they all move at the same rate?). What factors do you think are responsible for “holding” the gas molecules in the water?
• What do you think happens to the density of the gas (in a closed system) as you increase the temperature?
• What would happen if you captured the gas in a container?
• What would happen if you took that gas in the container and compressed it (made the volume of the container much smaller)?

5.5: Thermodynamics and Systems
The study of how energy in its various forms moves through a system is called thermodynamics. In chemistry specifically it is called thermochemistry. The first law of thermodynamics tells us that energy can be neither created nor destroyed but it can be transferred from a system to its surroundings and vice versa.[15] For any system, if we add up the kinetic and potential energies of all of the particles that make up the substance we get the total energy. This is called the system’s internal energy, abbreviated as $\mathrm{E}$ in chemistry.[16] It turns out that it is not really possible to measure the total internal energy of a system. But we can measure and calculate the change in internal energy represented as $\Delta \mathrm{E}$ (we use the Greek letter $\Delta$ to stand for change). There are two ways that the internal energy of a system can change: we can change the total amount of thermal energy in the system (denoted as $q$), or the system can do work or have work done to it (denoted as $w$). The change in internal energy is therefore: $\Delta \mathrm{E} = q + w$
At the molecular level, it should now be relatively easy to imagine the effects of adding or removing thermal energy from a system. However, work (usually defined as force multiplied by distance) done on or by the system is a macroscopic phenomenon. If the system comes to occupy a larger or smaller volume, work must be done on the surroundings or on the system, respectively. With the exception of gases, most systems we study in chemistry do not expand or contract significantly. In these situations, $\Delta \mathrm{E} = q$, the change in thermal energy (heat). In addition, most of the systems we study in chemistry and biology are under constant pressure (usually atmospheric pressure). The heat change at constant pressure corresponds to the change in a state function known as enthalpy ($\mathrm{H}$). A state function is a property of a system that does not depend upon the path taken to get to a particular state. Typically we use upper-case symbols (for example, $\mathrm{H}$, $\mathrm{T}$, $\mathrm{P}$, $\mathrm{E}$, $\mathrm{G}$, $\mathrm{S}$) to signify state functions, and lower-case symbols for properties that depend on the path by which the change is made (for example, $q$ and $w$). You may be wondering what the difference between a state and a path function is. Imagine you are climbing Mount Everest. If you were able to fly in a balloon from the base camp to the top, you would travel a certain distance and gain a certain height. If, in contrast, you traveled on foot via any one of the recorded paths, which wind around the mountain, you would travel very different distances, but the height change would be exactly the same. That is, the distance traveled is a path function, whereas the height gained is a state function. Similarly, both $q$ and $\Delta \mathrm{H}$ describe thermal energy changes, but $q$ depends on the path and $\Delta \mathrm{H}$ does not. In a system at constant pressure with no volume change, it is the change in enthalpy ($\Delta \mathrm{H}$) that we will be primarily interested in (together with the change in entropy ($\Delta \mathrm{S}$), which we examine shortly in greater detail).
Because we cannot measure energy changes directly we have to use some observable (and measurable) change in the system. Typically we measure the temperature change and then relate it to the energy change. For changes that occur at constant pressure (with no change in volume) this energy change is the enthalpy change, $\Delta \mathrm{H}$. If we know the temperature change ($\Delta \mathrm{T}$), the amount (mass) of material, and its specific heat, we can calculate the enthalpy change: $\Delta \mathrm{H} (\mathrm{J}) = \text{mass } (\mathrm{g}) \times \text{specific heat } (\mathrm{J/g} { }^{\circ}\mathrm{C}) \times \Delta \mathrm{T} ({ }^{\circ}\mathrm{C})$.[17]
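As an illustration, here is a minimal sketch (ours, not from the text) that rearranges this relationship to answer a question posed below: how far does a fixed dose of thermal energy raise the temperature of different liquids?

```python
# How much does 1000 J of thermal energy warm 50 g of each liquid?
# Rearranging dH = m x specific heat x dT  ->  dT = q / (m x c)
q, m = 1000.0, 50.0  # J, g
liquids = [("water", 4.18), ("methanol", 2.53), ("ethanol", 2.44), ("propanol", 2.39)]
for name, c in liquids:  # specific heats in J/(g C), from the table above
    dT = q / (m * c)
    print(f"{name:9s}: dT = {dT:.2f} C")
# water warms the least: its high specific heat soaks up the most energy per degree
```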
When considering the enthalpy change for a process, the direction of energy transfer is important. By convention, if thermal energy goes out of the system to the surroundings (that is, the surroundings increase in temperature), the sign of $\Delta \mathrm{H}$ is negative and we say the process is exothermic (literally, “heat out”). Combustion reactions, such as burning wood or gasoline in air, are probably the most common examples of exothermic processes. In contrast, if a process requires thermal energy from the surroundings to make it happen, the sign of $\Delta \mathrm{H}$ is positive and we say the process is endothermic (energy is transferred from the surroundings to the system).
Questions
Questions to Answer
• You have systems (at $10 { }^{\circ}\mathrm{C}$) composed of water, methanol, ethanol, or propanol. Predict the final temperature of each system if equal amounts of thermal energy ($q$) are added to equal amounts of a substance ($m$). What do you need to know to do this calculation?
• Draw a simple sketch of a system and surroundings. Indicate by the use of arrows what we mean by an endothermic process and an exothermic process. What is the sign of $\Delta \mathrm{H}$ for each process?
• Draw a similar diagram and show the direction and sign of work ($w$) when the system does work on the surroundings (expands), and when the surroundings do work on the system (contracts).
• Draw a diagram to show the molecular level mechanism by which thermal energy is transferred in or out of a system. For example how is thermal energy transferred as an ice cube melts in a glass of water?
Questions to Ponder
• What does the difference in behavior of water, methanol, ethanol, and propanol tell us about their molecular behavior/organization/structure?
The Second Law of Thermodynamics
Whereas the first law of thermodynamics states that you cannot get more energy out of a system than is already present in some form, the second law of thermodynamics tells us that we cannot even get back the energy that we use to bring about a change in a system. The idea in the second law is captured by the phrase “for any change in a system, the total entropy of the universe must increase.” As we will see, this means that some of the energy is changed into a form that is no longer useful (that is, it cannot do work).
There are lots of subtle and not so subtle implications captured by this statement and we will need to look at them carefully to identify them. You may already have some idea of what entropy means, but can you define it? As you might imagine it is not a simple idea. The word entropy is often used to designate randomness or disorder but this is not a very useful or accurate way to define entropy (although randomly disordered systems do have high entropy). A better way to think about entropy is in terms of probabilities: how to measure, calculate, and predict outcomes. Thermal energy transfers from hot to cold systems because the outcome is the most probable outcome. A drop of dye disperses in water because the resulting dispersed distribution of dye molecules is the most probable. Osmosis occurs when water passes through a membrane from a dilute to a more concentrated solution because the resulting system is more probable. In fact whenever a change occurs, the overall entropy of the universe always increases.[18] The second law has (as far as we know) never, ever been violated. In fact the direction of entropy change has been called “time’s arrow”; the forward direction of time is determined by the entropy change. At this point you should be shaking your head. All this cannot possibly be true! First of all, if entropy is always increasing, then was there a time in the past when entropy was 0?[19] Second, are there not situations where entropy decreases and things become more ordered, like when you clean up a room? Finally, given that common sense tells us that time flows in only one direction (to the future), how is it possible that at the atomic and molecular scale all events are reversible?
Probability and Entropy
Before we look at entropy in detail, let us look at a few systems and think about what we already know about probability. For example if you take a deck of cards and shuffle it, which is more probable: that the cards will fall into the order ace, king, queen, jack, 10, 9, etc. for each suit, or that they will end up in some random, jumbled order? Of course the answer is obvious—the random order is much more probable because there are many sequences of cards that count as “random order” but only one that counts as “ordered.” This greater probability is true even though any pre-specified random sequence is just as unlikely as the perfectly ordered one. It is because we care about a particular order that we lump all other possible orders of the cards together as “random” and do not distinguish between them.
We can calculate, mathematically, the probability of the result we care about. To determine the probability of an event (for example, a particular order of cards), we divide the number of outcomes we care about by the total number of possible outcomes. For 52 cards there are $52!$ (52 factorial, or $52 \times 51 \times 50 \times 49 \ldots$) ways that the cards can be arranged.[20] This number is $\sim 8.07 \times 10^{67}$, a number on the same order of magnitude as the number of atoms in our galaxy. So the probability of shuffling cards to produce any one particular order is $\frac{1}{52!}$ – a very small number indeed. But because the probability is greater than zero, this is an event that can happen. In fact, it must happen eventually, because the probability that the cards will end up in some arrangement is 1. That is a mind bender, but true nevertheless. Highly improbable events occur all the time![21]
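The arithmetic is easy to verify (a short Python check; the numbers, not the code, come from the text):

```python
import math

arrangements = math.factorial(52)                 # exact integer value of 52!
print(f"52! = {arrangements:.3e}")                # ~8.066e67 distinguishable orders
print(f"P(any one order) = {1 / arrangements:.3e}")  # ~1.24e-68
```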
This idea of entropy, in terms of probabilities, can help us understand why different substances or systems have different entropies. We can actually calculate entropies for many systems from the formula $\mathrm{S} = k \ln \mathrm{W}$, where $\mathrm{S}$ is the entropy, $k$ is the Boltzmann constant, and $\mathrm{W}$ is the number of distinguishable arrangements (or states) that the system has.[22] So the greater the value of $\mathrm{W}$ (the number of arrangements), the greater the entropy.
In some cases it is relatively easy to figure out which system has more possible arrangements. For example, in a solid substance such as ice the molecules are fixed in place and can only vibrate. In a liquid, the molecules are free to roam around; it is possible for each molecule to be anywhere within the liquid mass and not confined to one position. In a gas, the molecules are not confined at all and can be found anywhere, or at least anywhere in the container. In general, gases have more entropy than liquids and solids. This so-called positional entropy can be extended to mixtures. In most mixtures (but not all, as we will see in the case of emulsions and colloids) the number of distinguishable arrangements is larger for the mixed compared to the unmixed components. The entropy of a mixture is usually larger.
So let us return to the idea that the direction of change in a system is determined by probabilities. We will consider the transfer of thermal energy (heat) and see if we can make sense of it. First, remember that at the atomic-molecular level energy is quantized. So, for any substance at a particular temperature there will be a certain number of energy quanta. To make things simpler, we will consider a four-atom solid that contains two quanta of energy. These quanta can be distributed so that a particular atom can have 0, 1, or 2 quanta of energy. You can either calculate or determine by trial and error the number of different possible arrangements of these quanta (there are 10). Remember that $\mathrm{W}$ is the number of distinguishable arrangements, so for this system $\mathrm{W} = 10$ and $\mathrm{S} = k \ln 10$. Now, what happens if we consider two similar systems, one with 4 quanta and the other with 8 quanta? The system with 4 quanta will be at a lower temperature than the system with 8 quanta. We can calculate the value of $\mathrm{W}$ for the 4-quanta (4-atom) system by considering the number of possible ways to arrange the quanta over the 4 atoms. For the 4-atom, 4-quanta system, $\mathrm{W} = 35$. If we do the same calculation for the 8-quanta, 4-atom system, $\mathrm{W} = 165$. Taken together, the total number of arrangements of the two separate systems is $35 \times 165 = 5775$.[23]
But what about temperature? The 4-quanta system is at a lower temperature than the 8-quanta system because the 8-quanta system has more energy. What happens if we put the two systems in contact? Energy will transfer from the hotter (8-quanta) to the colder (4-quanta) system until the temperatures are equal. At this point, each will have 6 quanta (which corresponds to $\mathrm{W} = 84$). Because there are two systems (each with 6 quanta), the total $\mathrm{W}$ for the combined systems is $84 \times 84 = 7056$ states. You will note that $7056$ is greater than $5775$: there are more distinguishable arrangements of the quanta in the two systems after the energy transfer than before. The final state is more probable and therefore has a higher entropy.
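The counts quoted here ($10$, $35$, $165$, $84$) follow from a standard combinatorial identity (the “stars and bars” count for distributing identical quanta among distinguishable atoms). A short Python sketch (ours) reproducing the argument:

```python
from math import comb, log

def W(quanta: int, atoms: int = 4) -> int:
    """Distinguishable ways to distribute identical quanta among atoms
    ("stars and bars"): C(quanta + atoms - 1, atoms - 1)."""
    return comb(quanta + atoms - 1, atoms - 1)

print(W(2), W(4), W(8), W(6))   # 10, 35, 165, 84 -- the chapter's numbers
before = W(4) * W(8)            # hot and cold blocks kept separate
after = W(6) * W(6)             # after energy has flowed: 6 quanta each
print(before, after)            # 5775 < 7056
# Since S = k ln W, a positive ln(after/before) means entropy increased:
print(f"ln(W_after / W_before) = {log(after / before):.3f}")
```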
Now you might well object, given that we are working with systems of only a few atoms each. It is easy to imagine that random fluctuations could lead to the movement of quanta from cold to hot, and that is true. That is why the behavior at the nanoscale is reversible. But when we are talking about macroscopic systems, such a possibility quickly becomes increasingly improbable as the number of atoms/molecules increases. Remember that a very small drop of water weighing $0.05$ grams contains approximately $1.8 \times 10^{21}$ molecules (perhaps you can also calculate the volume of such a drop). Events that are reversible at the nanoscale are irreversible at the macroscopic scale – yet another wacky and counterintuitive fact. We are generally driven to seek a purpose for why things happen, so the overarching idea that change in the universe is driven simply by the move to more probable states can be difficult to accept, but it is true – even when we consider living systems in the context of their surroundings.[24] The presence of a living system (which is itself highly organized) increases the entropy of the universe as a whole.
Questions
Questions to Answer
• Which has more entropy (in each case, explain your choice):
• A new deck of cards or a shuffled deck?
• Separated dye and water or a mixed-up solution?
• $\mathrm{H}_{2}\mathrm{O}$($s$) or $\mathrm{H}_{2}\mathrm{O}$($l$)?
• $\mathrm{CaCO}_{3}$($s$) or $\mathrm{CaO}$($s$) + $\mathrm{CO}_{2}$($g$)?
• $\mathrm{H}_{2}\mathrm{O}$($l$) (at $25 { }^{\circ}\mathrm{C}$) or $\mathrm{H}_{2}\mathrm{O}$($l$) (at $50 { }^{\circ}\mathrm{C}$)?
• Do you think that the structure of a compound affects its entropy? Why?
• Predict the relative entropies of diamond and sodium chloride, carbon dioxide, oxygen, and $\mathrm{HF}$. What factors influenced your prediction? Look up the entropies. Were you correct?
Questions to Ponder
• Can you think of any changes that occur that seem to produce more order?
• Why don’t living systems (organisms) violate the second law of thermodynamics?
• Does the second law rule out evolution?
• Does it argue for a supernatural soul running the brain?
Let us now return to the situation with solids, liquids, and gases. How do we think about entropy in these systems? Doesn’t a substance become more ordered as we move it from gas to liquid to solid? Clearly the entropy of a solid is lower than that of a liquid, and the entropy of a liquid is lower than that of a gas. We can calculate (or simply look up) how entropies change for materials as they go from gas to liquid to solid. As we have predicted, they decrease. How can a change occur when the entropy of the system decreases (such as ice freezing)? Are we forced to conclude that things we know to happen are impossible according to the second law of thermodynamics? Of course not!
The second law of thermodynamics tells us that for every change that occurs, the entropy of the universe must increase. The problem with this is that we are all well aware of changes where the entropy apparently decreases. How can we resolve this seeming paradox? The answer lies in the fact that for any system the entropy may indeed decrease; water freezing is an example of this phenomenon. For the universe as a whole however (or more easily defined, the system and its surroundings) total entropy must increase. For example, when water freezes, the water molecules form stable interactions (hydrogen bonding interactions). As we have seen previously, the formation of stabilizing interactions means that the potential energy of the system has decreased. Because energy is conserved, this energy must be released to the surroundings as thermal (kinetic) energy. That is, the freezing of water is an exothermic process.
Now we can see the solution to our thermodynamic problem. The reason that the freezing of water does not violate the second law is that even though the system (ice) becomes more ordered and has lower entropy, the energy that is released to the surroundings makes those molecules move faster, which leads to an increase in the entropy of the surroundings. Below the freezing point of water, the increase in entropy of the surroundings is greater than the decrease in entropy of the ice! When we consider both system and surroundings, the total change in entropy ($\Delta \mathrm{S}$) is positive. The second law is preserved (yet again), but to understand why we must actively embrace systems thinking.
5.7: Gibbs (Free) Energy to the Rescue
We must consider changes in entropy for both the system and its surroundings when we predict which way a change will occur, or in which direction a process is thermodynamically favorable. Because it is almost always easier to look at the system than it is to look at the surroundings (after all, we define the system as that part of the universe we are studying), it would be much more convenient to use criteria for change that refer only to the system. Fortunately, there is a reasonably simple way to do this. Let us return to water freezing again and measure the enthalpy change for this process. The thermal energy change for the system, $\Delta \mathrm{H}$ freezing, is about $–6 \mathrm{~kJ/mol}$. That is, $6 \mathrm{~kJ}$ of thermal energy are released into the surroundings for every mole of water that freezes. We can relate this thermal energy release to the entropy change of the surroundings. Entropy is measured in units of $\mathrm{J/K}$ (energy/temperature). Because we know how much energy is added to the surroundings, we can calculate the entropy change that this released (enthalpic) energy produces.
Mathematically we can express this as $\Delta \mathrm{S}_{\text{surroundings}} = \frac{\Delta \mathrm{H}_{\text{surroundings}}}{\mathrm{T}}$. And because we know that $\Delta \mathrm{H}_{\text{system}} = -\Delta \mathrm{H}_{\text{surroundings}}$, that is, the energy lost by the system equals minus ($-$) the energy gained by the surroundings, we can express the entropy change of the surroundings in terms of measurable variables for the system. That is: $\Delta \mathrm{S}_{\text{surroundings}} = \frac{-\Delta \mathrm{H}_{\text{system}}}{\mathrm{T}}$
If you recall, we can express the total entropy change (the one relevant for the second law) as $\Delta \mathrm{S}_{\text{total}} = \Delta \mathrm{S}_{\text{system}} + \Delta \mathrm{S}_{\text{surroundings}}$. Substituting for the $\Delta \mathrm{S}_{\text{surroundings}}$ term, we get
$\Delta \mathrm{S}_{\text{total}} = \Delta \mathrm{S}_{\text{system}} - \frac{\Delta \mathrm{H}_{\text{system}}}{\mathrm{T}}$. Now we have an equation that involves only variables that relate to the system, which are much easier to measure and calculate. We can rearrange the equation by multiplying throughout by $-\mathrm{T}$, which gives us: $-\mathrm{T} \Delta \mathrm{S}_{\text{total}} = \Delta \mathrm{H}_{\text{system}} - \mathrm{T} \Delta \mathrm{S}_{\text{system}}$
The quantity $-\mathrm{T} \Delta \mathrm{S}_{\text{total}}$ has units of energy, and is commonly known as the Gibbs energy change, $\Delta \mathrm{G}$ (or sometimes as the free energy). The equation is normally written as: $\Delta \mathrm{G} = \Delta \mathrm{H} - \mathrm{T} \Delta \mathrm{S} .$
The Gibbs energy change of a reaction is probably the most important thermodynamic term that you will need to learn about. In most biological and biochemical systems, it is $\Delta \mathrm{G}$ that is commonly used to determine whether reactions are thermodynamically favorable. It is important to remember that $\Delta \mathrm{G}$ is a proxy for the entropy change of the universe: if it is negative, universal entropy is increasing (and the reaction occurs); if it is positive, universal entropy would decrease if the reaction occurred (and so it does not). It is possible, however, for reactions with a positive $\Delta \mathrm{G}$ to occur, but only if they are coupled with a reaction with an even greater negative $\Delta \mathrm{G}$ (see Chapters $8$ and $9$).
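As a sketch of how the sign of $\Delta \mathrm{G}$ switches with temperature, consider water freezing: $\Delta \mathrm{H}$ is about $-6 \mathrm{~kJ/mol}$ (as discussed above), and we assume a handbook-style $\Delta \mathrm{S}$ of about $-22 \mathrm{~J/mol K}$ (our value, chosen to be consistent with $\Delta \mathrm{G} = 0$ at the melting point, not a number from this text):

```python
# dG = dH - T dS for water freezing (liquid -> solid).
dH = -6000.0   # J/mol, from the text (~ -6 kJ/mol)
dS = -22.0     # J/(mol K), assumed handbook-style value
for T in (263.15, 273.15, 283.15):  # -10 C, 0 C, +10 C
    dG = dH - T * dS
    verdict = "favored" if dG < -50 else ("~equilibrium" if abs(dG) <= 50 else "not favored")
    print(f"T = {T:6.2f} K: dG = {dG:+7.1f} J/mol -> freezing {verdict}")
```

Below $273 \mathrm{~K}$ the enthalpy term wins and freezing is favored; above it, the $-\mathrm{T}\Delta \mathrm{S}$ penalty dominates and freezing is not, which matches everyday experience.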
There are numerous tables of thermodynamic data in most texts and on many websites. Because we often want to use thermodynamic data such as $\Delta \mathrm{H}$, $\Delta \mathrm{S}$, and $\Delta \mathrm{G}$, it is useful to have some reference state. This is known as the standard state and is defined as a temperature of $298 \mathrm{~K}$, a pressure of $1$ atmosphere, and concentrations of $1 \mathrm{~M}$. When thermodynamic data refer to the standard state they are given the superscript º (nought), so $\Delta \mathrm{H}^{\circ}$, $\Delta \mathrm{S}^{\circ}$, and $\Delta \mathrm{G}^{\circ}$ all refer to the standard state. However, we often apply these data at temperatures other than $298 \mathrm{~K}$ and although small errors might creep in, the results are usually accurate enough.
What Is “Free” About Gibbs Free Energy?
We use $\Delta \mathrm{G}$ or $\Delta \mathrm{G}^{\circ}$ to describe many systems (and especially biological ones) because both the magnitude and sign tell us a lot about how that system behaves. We use $\Delta \mathrm{G}$ (the Gibbs free energy change) rather than $\Delta \mathrm{H}$ (the enthalpy change) because $\Delta \mathrm{G}$ tells us how much energy is actually available to bring about further change (or to do work). In any change, some of the energy is dissipated to the surroundings as the entropy increases; this dissipated energy cannot be used to do any kind of work and is effectively lost. $\Delta \mathrm{G}$ distinguishes the energy produced by the change from the energy that is lost to the surroundings as increased entropy. As an example, when wood is burned, it is theoretically impossible to use all of the heat released to do work; some of the energy goes to increase the entropy of the surroundings. For any change in the system, some of the energy is always lost in this way to the surroundings. This is why it is impossible to build a machine that is 100% efficient in converting energy from one kind to another (although many have tried – Google “perpetual motion machines” if you don’t believe us). So the term “free energy” doesn’t mean that it is literally free, but rather that it is potentially available to use for further transformations.
When $\Delta \mathrm{G}$ is negative, we know that the reaction will be thermodynamically favored.[25] The best-case scenario is when $\Delta \mathrm{H}$ is negative (an exothermic change in which the system is losing energy to the surroundings and becoming more stable), and $\Delta \mathrm{S}$ is positive (the system is increasing in entropy). Because $\mathrm{T}$ is always greater than 0 (in Kelvins), $\mathrm{T} \Delta \mathrm{S}$ is also positive and when we subtract this value from $\Delta \mathrm{H}$, we get an even larger negative $\Delta \mathrm{G}$ value. A good example of such a process is the reaction (combustion) of sugar ($\mathrm{C}_{6}\mathrm{H}_{12}\mathrm{O}_{6}$) with molecular oxygen ($\mathrm{O}_{2}$): $\mathrm{C}_{6} \mathrm{H}_{12} \mathrm{O}_{6}(s)+6 \mathrm{O}_{2}(g) \rightarrow 6 \mathrm{CO}_{2}(g)+6 \mathrm{H}_{2} \mathrm{O}(g) .$
This is an exothermic process and, as you can see from the reaction equation, it results in the production of more molecules than we started with (often a sign that entropy has increased, particularly if the molecules are of a gas).
A process such as this ($- \Delta \mathrm{H}$ and $+ \Delta \mathrm{S}$) is thermodynamically favored at all temperatures. On the other hand, an endothermic process ($+ \Delta \mathrm{H}$) that also involves a decrease in entropy ($- \Delta \mathrm{S}$) will never occur as an isolated reaction (but in the real world few reactions are actually isolated from the rest of the universe). For example, a reaction that combined $\mathrm{CO}_{2}$ and $\mathrm{H}_{2}\mathrm{O}$ to form sugar (the reverse of the combustion reaction above) is never thermodynamically favored because $\Delta \mathrm{H}$ is positive and $\Delta \mathrm{S}$ is negative, making $\Delta \mathrm{G}$ positive at all temperatures. Now you may again find yourself shaking your head. Everyone knows that the formation of sugars from carbon dioxide and water goes on all over the world right now (in plants)! The key here is that plants use energy from the sun, so the reaction is actually: $\text { captured energy }+6 \mathrm{CO}_{2}(g)+6 \mathrm{H}_{2} \mathrm{O}(g) \leftrightarrows \mathrm{C}_{6} \mathrm{H}_{12} \mathrm{O}_{6}(s)+6 \mathrm{O}_{2}(g)+\text { excess energy. }$
Just because a process is thermodynamically unfavorable doesn’t mean that it can never occur. What it does mean is that that process cannot occur in isolation; it must be “coupled” to other reactions or processes.
Free Energy and Temperature
So we have two very clear-cut cases that allow us to predict whether a process will occur: those where the enthalpy and entropy changes predict the same outcome. But there are two possible situations where the enthalpy change and the entropy term ($\mathrm{T} \Delta \mathrm{S}$) “point” in different directions: when $\Delta \mathrm{H}$ is positive and $\Delta \mathrm{S}$ is positive, and when $\Delta \mathrm{H}$ is negative and $\Delta \mathrm{S}$ is negative. When this happens, we need to use the fact that the free energy change is temperature-dependent in order to predict the outcome. Recall that the expression $\Delta \mathrm{G} = \Delta \mathrm{H} - \mathrm{T} \Delta \mathrm{S}$ depends upon temperature. For a system where the entropy change is positive ($+ \Delta \mathrm{S}$), an increase in temperature will lead to an increasingly negative contribution to $\Delta \mathrm{G}$. In other words, as the temperature rises, a process that involves an increase in entropy becomes more favorable. Conversely, if the system change involves a decrease in entropy ($\Delta \mathrm{S}$ is negative), $\Delta \mathrm{G}$ becomes more positive (and less favorable) as the temperature increases.
$\Delta \mathrm{H}$ | $\Delta \mathrm{S}$ | $\Delta \mathrm{G}$
Negative (exothermic) | Positive (entropy increases) | Negative at all temperatures (always thermodynamically favored)
Positive (endothermic) | Negative (entropy decreases) | Positive at all temperatures (never thermodynamically favored)
Negative (exothermic) | Negative (entropy decreases) | Temperature-dependent: as the temperature increases, $\Delta \mathrm{G}$ becomes more positive and the reaction becomes less favored (goes backwards)
Positive (endothermic) | Positive (entropy increases) | Temperature-dependent: as the temperature increases, $\Delta \mathrm{G}$ becomes more negative and the reaction becomes favored (goes forwards)
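The logic of this table is simple enough to capture in a few lines of code. The sketch below is our own illustration (the function name and the temperature values are not from the text); it uses the $\Delta \mathrm{H}$ of freezing of about $-6 \mathrm{~kJ/mol}$ quoted above, and estimates the corresponding entropy change from the fact that $\Delta \mathrm{G} = 0$ at the melting point ($273 \mathrm{~K}$):

```python
def gibbs_energy_change(dH, dS, T):
    """Return dG = dH - T*dS in J/mol; dH in J/mol, dS in J/(mol K), T in K."""
    return dH - T * dS

# Water freezing (liquid -> solid): exothermic (dH < 0) and entropy-decreasing (dS < 0).
dH_freezing = -6000.0              # J/mol (about -6 kJ/mol)
dS_freezing = dH_freezing / 273.0  # J/(mol K): at the melting point dG = 0, so dS = dH/T

for T in (263.0, 273.0, 283.0):    # -10 C, 0 C, +10 C
    dG = gibbs_energy_change(dH_freezing, dS_freezing, T)
    if abs(dG) < 1e-6:
        status = "equilibrium"
    else:
        status = "favored" if dG < 0 else "not favored"
    print(f"T = {T:.0f} K: dG = {dG:+.0f} J/mol ({status})")
```

Running this shows freezing favored at $263 \mathrm{~K}$, at equilibrium at $273 \mathrm{~K}$, and not favored at $283 \mathrm{~K}$: exactly the temperature-dependent behavior described in the third row of the table.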
The idea that temperature affects the direction of some processes is perhaps a little disconcerting. It goes against common sense that if you heat something up, a reaction might actually stop and go backwards (rest assured, we will come back to this point later). But in fact there are a number of common processes where we can apply this kind of reasoning and find that they make perfect sense.
Up to this point, we have been considering physical changes to a system—populations of molecules going from solid to liquid or liquid to gaseous states (and back). Not really what one commonly thinks of as chemistry, but the fact is that these transformations involve the making and breaking of interactions between molecules. We can therefore consider phase transitions as analogous to chemical reactions, and because they are somewhat simpler, develop a logic that applies to both processes. So let us begin by considering the phase change/reaction system $\mathrm{H}_{2} \mathrm{O} \text { (liquid) } \rightleftharpoons \mathrm{H}_{2} \mathrm{O} \text { (gas). }$
We use a double arrow $\rightleftharpoons$ to indicate that, depending upon the conditions, the reaction could go either to the right (boiling) or to the left (condensing). So, let us assume for the moment that we do not already know that water boils (changes from liquid to gas) at $100 { }^{\circ}\mathrm{C}$. What factors would determine whether the reaction $\mathrm{H}_{2} \mathrm{O} \text { (liquid) } \rightleftharpoons \mathrm{H}_{2} \mathrm{O} \text { (gas) }$ favors the liquid or the gaseous state at a particular temperature? As we have seen, the criterion for whether a process will “go” at a particular temperature is $\Delta \mathrm{G}$. We also know that the free energy change for a reaction going in one direction is the negative of the $\Delta \mathrm{G}$ for the reaction going in the opposite direction. So the $\Delta \mathrm{G}$ for the reaction $\mathrm{H}_{2} \mathrm{O} \text { (liquid) } \rightleftharpoons \mathrm{H}_{2} \mathrm{O} \text { (gas) }$ is $- \Delta \mathrm{G}$ for the reaction $\mathrm{H}_{2} \mathrm{O} \text { (gas) } \rightleftharpoons \mathrm{H}_{2} \mathrm{O} \text { (liquid). }$
When water boils, all the intermolecular attractions between the water molecules must be overcome, allowing the water molecules to fly off into the gaseous phase. Therefore, the process of water boiling is endothermic ($\Delta \mathrm{H}_{\text{vaporization}} = +40.65 \mathrm{~kJ/mol}$); it requires an energy input from the surroundings (when you put a pot of water on the stove you have to turn on the burner for it to boil). When the water boils, the entropy change is quite large ($\Delta \mathrm{S}_{\text{vaporization}} = 109 \mathrm{~J/mol~K}$), as the molecules go from being relatively constrained in the liquid to gas molecules that can fly around. At temperatures lower than the boiling point, the enthalpy term predominates and $\Delta \mathrm{G}$ is positive. As you increase the temperature in your pan of water, eventually it reaches a point where the contributions to $\Delta \mathrm{G}$ of $\Delta \mathrm{H}$ and $\mathrm{T} \Delta \mathrm{S}$ are equal. That is, $\Delta \mathrm{G}$ goes from being positive to negative and the process becomes favorable. At the temperature where this crossover occurs, $\Delta \mathrm{G} = 0$ and $\Delta \mathrm{H} = \mathrm{T} \Delta \mathrm{S}$. At this temperature ($373 \mathrm{~K}, 100 { }^{\circ}\mathrm{C}$) water boils (at $1$ atmosphere). At temperatures above the boiling point, $\Delta \mathrm{G}$ is always negative and water exists predominantly in the gas phase. If we let the temperature drop below the boiling point, the enthalpy term becomes predominant again and $\Delta \mathrm{G}$ for boiling is positive. Water does not boil at temperatures below $100 { }^{\circ}\mathrm{C}$ at one atmosphere.[26]
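We can check this crossover temperature directly from the numbers just quoted: setting $\Delta \mathrm{G} = 0$ gives $\mathrm{T} = \frac{\Delta \mathrm{H}_{\text{vaporization}}}{\Delta \mathrm{S}_{\text{vaporization}}} = \frac{40,650 \mathrm{~J/mol}}{109 \mathrm{~J/mol~K}} \approx 373 \mathrm{~K}$, which is indeed $100 { }^{\circ}\mathrm{C}$.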
Let us now consider a different phase change, such as water freezing. When water freezes, the molecules in the liquid start to aggregate and form hydrogen-bonding interactions with each other, and energy is released to the surroundings (remember, it is this energy that is responsible for increasing the entropy of the surroundings). Therefore $\Delta \mathrm{H}$ is negative: freezing is an exothermic process ($\Delta \mathrm{H}_{\text{fusion}} = – 6 \mathrm{~kJ/mol}$).[27] Freezing is also a process that reduces the system’s entropy. When water molecules are constrained, as in ice, their positional entropy is reduced. So, water freezing is a process that is favored by the change in enthalpy and disfavored by the change in entropy. As the temperature falls, the entropy term contributes less to $\Delta \mathrm{G}$, and eventually (at the crossover point) $\Delta \mathrm{G}_{\text{fusion}}$ goes to zero and then becomes negative. The process becomes thermodynamically favored. Water freezes at temperatures below $0 { }^{\circ}\mathrm{C}$. At temperatures where phase changes take place (boiling point, melting point), $\Delta \mathrm{G} = 0$. Furthermore, if the temperature were kept constant, there would be no observable change in the system. We say that the system is at equilibrium; for any system at equilibrium, $\Delta \mathrm{G} = 0$.
Questions
Questions to Answer
• For each of these processes, give the change in entropy of the system, the direction of thermal energy transfer (the sign of $\Delta \mathrm{H}$), the change in entropy of the surroundings, and the change in entropy of the universe:
• Water freezing at $-10 { }^{\circ}\mathrm{C}$
• Water boiling at $110 { }^{\circ}\mathrm{C}$
• For each of these processes predict the sign of change in entropy ($\Delta \mathrm{S}$) of the system, the direction of thermal energy transfer (the sign of $\Delta \mathrm{H}$), and the sign of the Gibbs free energy change, $\Delta \mathrm{G}$. What does the sign of $\Delta \mathrm{G}$ tell you?
• Water freezing at $-10 { }^{\circ}\mathrm{C}$; Water boiling at $110 { }^{\circ}\mathrm{C}$
• Water boiling at $-10 { }^{\circ}\mathrm{C}$; Water freezing at $110 { }^{\circ}\mathrm{C}$
Questions to Ponder
• Why do we denote excess energy as a product of this equation? $\text { captured energy }+6 \mathrm{CO}_{2}(g)+6 \mathrm{H}_{2} \mathrm{O}(g) \leftrightarrows \mathrm{C}_{6} \mathrm{H}_{12} \mathrm{O}_{6}(s)+6 \mathrm{O}_{2}(g)+\text { excess energy. }$
• What other processes do you know that must be coupled to external energy sources to make them go?
1. Instead of talking about the temperature of an isolated atom or molecule, we talk about its kinetic energy.
2. We can ignore gravitational effects because at the molecular level they are many orders of magnitude weaker than the forces between atoms and molecules.
3. Actually, $\bar{v}$ is the root-mean-square velocity of the gas particles, a measure that is similar to the mean but makes the direction of the particles irrelevant.
4. We can also, for all practical purposes, ignore the fact that $\mathrm{E} = mc^{2}$; the conversions between energy and matter are insignificant for chemical processes.
5. Although this distribution of speeds of atoms was first derived mathematically, it is possible to observe experimentally that atoms in a gas are moving at different speeds.
6. Interestingly, this is like our approach to the decay of unstable atoms. We cannot predict when a particular atom will decay, but in a large enough population, we can very accurately predict the number of atoms per second that will decay.
7. We use average speed and velocity to describe the motion of the particles in a gas, but it is more accurate to use the root mean square (rms) velocity, that is, the square root of the average of the squared velocities. However, for our purposes average speed (or velocity) is good enough.
8. Translational energies are also quantized but the quanta are so small that in practice we do not need to worry about that.
9. There are a number of different energy units, including calories, but they are all measures of the same thing, so we will stick to joules here.
10. Alternatively, in microwave ovens, the water molecules gain energy by absorbing microwave radiation, which makes them rotate. When they collide with other molecules this energy can also be transformed into vibrations and translations, and the temperature of the water rises.
11. The situation on planets like Venus and Jupiter is rather more complex.
12. The boundary between a system and surroundings depends on how you define the system. It can be real (as in the beaker) or imaginary (as in some ecosystems). In biological systems, the boundary may be the cell wall, or the boundary between the organism and its surroundings (e.g., skin).
13. The only exception would be cryptobiotic systems, like the tardigrades mentioned earlier.
14. Remember that London dispersion forces fall off as $\frac{1}{r^{6}}$, where $r$ is the distance between the molecules.
15. In fact, we should say mass-energy here, but because most chemical and biological systems do not operate under the high-energy situations required for mass to be converted to energy we don’t need to worry about that (for now).
16. Or $\mathrm{U}$ if you are a physicist. This is an example of how different areas sometimes seem to conspire to make things difficult by using different symbols and sign conventions for the same thing. We will try to point out these instances when we can.
17. One important point to note is that this relationship only works when the thermal energy is used to increase the kinetic energy of the molecules—that is, to raise the temperature. At the boiling point or freezing point of a liquid the energy is used to break the attractions between particles and the temperature does not rise.
18. This is another example of the different ways that the same process is described. In chemistry we usually describe osmosis as movement from a solution of low concentration to high (where we are referring to the concentration of the solute). In biology osmosis is often described as movement from high concentration (of water) to low. These two statements mean exactly the same thing even though they appear to be saying the opposite of each other.
19. One of many speculations about the relationship between the big bang and entropy; http://chronicle.uchicago.edu/041118/entropy.shtml
20. http://www.schuhmacher.at/weblog/52cards.html
21. A realistic understanding of the probability of something happening is a great asset (but would put the gambling and lottery industries, and perhaps part of the investment community, out of business). Listen to: http://www.wnyc.org/shows/radiolab/e...des/2009/09/11
22. or $\Omega$ in some texts.
23. We multiply, rather than add, W when we combine systems.
24. A great lecture by Richard Feynman on this topic: http://research.microsoft.com/apps/t...f62e4eca%7C%7C
25. Many people use the term spontaneous, but this is misleading because it could make people think that the reaction happens right away. In fact, $\Delta \mathrm{G}$ tells us nothing about when the process will happen, only that it is thermodynamically favored. As we will see later, the rate at which a process occurs is governed by other factors.
26. So why, you might ask, does water evaporate at temperatures lower than $100 { }^{\circ}\mathrm{C}$? We will come to that soon.
27. This is another rather counterintuitive idea, but remember that to freeze something you have to take heat away (for example, in a refrigerator).
We have covered quite a number of topics up to this point: the structure of atoms, discrete molecules, complex network solids, and metals; how atoms and molecules interact, through London dispersion forces, dipole-dipole interactions, hydrogen bonds, and covalent and ionic bonds. We have discussed how changes in energy and entropy lead to solid, liquid, and gas state changes. So far, so good, but is this really chemistry? Where are the details about chemical reactions, acids and bases, gas laws, and so forth? Not to worry—we have approached the topics in this order so that you have a strong conceptual foundation before you proceed to the nuts and bolts of chemical reactions. Without this foundation, you would just memorize whatever equations we presented, without making the connections between seemingly disparate reactions. Many of these reactions are complex and overwhelming even for the most devoted student of chemistry. The topics we have covered so far will serve as a tool kit for understanding the behavior of increasingly complex chemical systems. We will continue to reinforce these basic ideas and their application as we move on to the types of reactions that are relevant to most chemical systems.
Thumbnail: Nile red solution. (CC BY-SA 3.0; Armin Kübelbeck).
06: Solutions
The first type of complex system that we will consider is a solution. You almost certainly already have some thoughts about what a solution is and you might want to take a moment to think about what these are. This will help you recognize your implicit assumptions if they “get in the way” of understanding what a solution is scientifically. The major difference between a solution and the systems we have previously discussed is that solutions have more than one chemical substance in them. This raises the question: what exactly is a solution and what does it mean to dissolve? You are probably thinking of examples like sugar or salt dissolved in water or soda. What about milk? Is it a solution? Do solutions have to be liquid or can they also include gases and solids? What is the difference between a solution and a mixture?
It turns out that we can make solutions from a wide range of materials. Although it is common to think of solutions in terms of a solid dissolved into a liquid, this is not the only type of solution. Other examples of solutions include: gas in liquid (where molecular oxygen, or $\mathrm{O}_{2}$, dissolves in water – important for fish); solid in solid (the alloy brass is a solution of copper and zinc); gas in solid (hydrogen can be dissolved in the metal palladium); and liquid in liquid (beer is a solution of ethanol and water and a few other things).
Let us take a closer look at what we mean by a solution, starting with a two-component system. Typically, one of the components is present in a smaller amount than the other. We call the major component the solvent and the minor component(s) the solute(s). The most familiar solutions are aqueous solutions, in which water is the solvent. For example, in a solution of the sugar glucose in water, glucose molecules are the solute and water molecules are the solvent. In beer, which is typically 2–4% ethanol, ethanol is the primary solute and water is the solvent. Once they are thoroughly mixed, solutions have the same composition throughout—they are homogeneous at the macroscopic scale, even though at the molecular level we still find different types of molecules or ions. This is an important point: Once mixed, they remain mixed! If you take a sample from the top of a solution, it has the same composition as a sample from elsewhere in the solution. Solutions, when viewed at the molecular level, have the solute particles evenly (and randomly) dispersed in the solvent. Also, because the solute and solvent are in contact with each other, there must be some kind of molecular interaction between the two types of molecules. This is not true for simple mixtures. For example, we tend to describe air as a mixture of gases ($\mathrm{N}_{2}$, $\mathrm{O}_{2}$, $\mathrm{H}_{2}\mathrm{O}$, etc.), rather than a solution, because the gas molecules do not interact aside from occasional collisions with each other.
Molecular Formation of Solutions
Let us consider a solution of ethanol and water. Many common solutions contain these two components (usually with minor amounts of other substances as well). Ethanol and water are soluble in each other (what is known as “miscible”) in all proportions. For example, beer is typically about $3 \%$ alcohol ($6$ proof),[1] wine about $6 \%$ alcohol ($12$ proof), and liquors such as whiskey or brandy are about $50 \%$ alcohol ($100$ proof). How do they dissolve into each other at the molecular level, and why?
For a process to be thermodynamically favorable, the Gibbs (free) energy change ($\Delta \mathrm{G}$) associated with that process must be negative. However, we have learned that the Gibbs energy change depends on both the enthalpy ($\Delta \mathrm{H}$) and entropy ($\Delta \mathrm{S}$) changes in the system. It is possible to envision a wide range of situations involving both positive and negative changes in $\mathrm{H}$ and $\mathrm{S}$, and we have to consider the magnitudes of the enthalpy change, the entropy change, and the temperature.
So what happens when we add a drop of ethanol to a volume of water? The ethanol molecules rapidly disperse and the solution becomes homogeneous. The entropy of the ethanol–water solution is higher than that of either substance on its own. In other words, there are more distinguishable arrangements of the molecules when they are mixed than when they are separate. Using simple entropic arguments we might, at least initially, extend the idea to include all solutions. Everything should be soluble in everything else, because this would lead to an entropy increase, right? Wrong. We know that this is not true. For example, oil is not soluble in water and neither are diamonds, although for very different reasons. So what are the factors influencing solution formation? We will see that some are entropic (involving $\Delta \mathrm{S}$) and some enthalpic (involving $\Delta \mathrm{H}$).
Questions
Questions to Answer
• Make a list of some common solutions you might encounter in everyday life. How do you know they are solutions and not mixtures?
• Consider a solution formed from $100 \mathrm{~g}$ of water and $5 \mathrm{~g}$ sodium chloride:
• What would you expect the mass of the solution to be? Why?
• What would you expect the volume of the solution to be? Why?
• How would you test your hypotheses? What experiments would you do?
• What evidence would you collect?
Let us say you have a $100-\mathrm{mL}$ graduated cylinder and you take $50 \mathrm{~mL}$ of ethanol and add it to $50 \mathrm{~mL}$ of water. You might be surprised to find that the volume of the resulting solution is less than $100 \mathrm{~mL}$. In fact, it is about $98 \mathrm{~mL}$, assuming good technique (no spilling). How can we explain this? Well, we can first reassure ourselves that matter has not been destroyed. If we weigh the solution, it weighs the same as $50 \mathrm{~mL}$ of water plus $50 \mathrm{~mL}$ of ethanol. This means that the density of the water–ethanol solution must be greater than the density of either the water or ethanol alone. At the molecular level, we can immediately deduce that the molecules are closer together in the ethanol and water mixture than they were when pure (before mixing) – try drawing a molecular-level picture of this to convince yourself that this is possible. Now, if you took $50 \mathrm{~mL}$ of oil and $50 \mathrm{~mL}$ of water, you would find that they do not mix—no matter how hard you tried. They will always separate from one another into two layers. What factors determine whether or not substances form solutions?
First, we need to be aware that solubility is not an all-or-nothing property. Even in the case of oil and water, a very small number of oil molecules are present in the water (the aqueous phase), and a small number of water molecules are present in the oil. There are a number of ways to describe solubility. The most common way is to give the number of moles of solute per liter of solution. This is called the solution’s molarity ($\mathrm{M}$, $\mathrm{mol/L}$). Another common way is to give the number of grams of solute per mass of solution. For example: $1 \mathrm{~mg}$ ($10^{-3} \mathrm{~g}$) of solute dissolved in $1 \mathrm{~kg}$ ($10^{3} \mathrm{~g}$) of solution is 1 part per million ($10^{6}$) solute, or $1 \mathrm{~ppm}$. As you might expect, given the temperature term in the free energy equation, solubility data are always reported at a particular temperature. If no more solute can dissolve at a given temperature, the solution is said to be saturated; if more solute can dissolve, it is unsaturated.
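As a concrete illustration (ours, not the text's), here is what these unit conversions look like in Python. We use sodium chloride, whose molar mass is about $58.4 \mathrm{~g/mol}$ and whose solubility in water of about $360 \mathrm{~g/L}$ is quoted later in this chapter:

```python
molar_mass_NaCl = 58.44          # g/mol (22.99 for Na + 35.45 for Cl)

# grams of solute per liter of solution -> moles per liter (molarity, M)
solubility_g_per_L = 360.0       # approximate solubility of NaCl in water
molarity = solubility_g_per_L / molar_mass_NaCl
print(f"{molarity:.1f} M")       # about 6.2 M

# parts per million: (mass of solute / mass of solution) * 10**6
mass_solute_g = 1e-3             # 1 mg, in grams
mass_solution_g = 1e3            # 1 kg, in grams
ppm = mass_solute_g / mass_solution_g * 1e6
print(f"{ppm:.0f} ppm")          # 1 ppm, matching the example above
```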
If we look at the structure of compounds that dissolve in water, we can begin to see some trends: hydrocarbons are not very soluble in water (remember from Chapter $4$ that these are compounds composed only of carbon and hydrogen), whereas alcohols (hydrocarbons with an $—\mathrm{O–H}$ group attached) with up to 3 carbons are completely soluble. As the number of carbon atoms increases, the solubility of the compound in water decreases. For example, hexanol ($\mathrm{CH}_{3}\mathrm{CH}_{2}\mathrm{CH}_{2}\mathrm{CH}_{2}\mathrm{CH}_{2}\mathrm{CH}_{2}\mathrm{OH}$) is only very slightly soluble in water ($0.4 \mathrm{~g/L}$). So perhaps the hydroxyl ($—\mathrm{O–H}$) group is responsible for the molecule’s solubility in water. Evidence supporting this hypothesis can be found in the fact that diols (compounds with two $—\mathrm{O–H}$ groups) are more soluble than similar alcohols. For example, compared to hexanol, 1,6-hexanediol ($\mathrm{HO}\mathrm{CH}_{2}\mathrm{CH}_{2}\mathrm{CH}_{2}\mathrm{CH}_{2}\mathrm{CH}_{2}\mathrm{CH}_{2}\mathrm{OH}$) is quite soluble in water. More familiar water-soluble compounds such as the sugars glucose, fructose, and sucrose (a dimer of glucose and fructose – shown in the figure) are, in fact, polyalcohols; almost every one of their carbons is attached to a hydroxyl group.
Compound | Molar Mass ($\mathrm{g/mol}$) | Structure | Solubility ($\mathrm{g/L}$) at $20 { }^{\circ}\mathrm{C}$
Propane | $44$ | $\mathrm{CH}_{3}\mathrm{CH}_{2}\mathrm{CH}_{3}$ | $0.07 \mathrm{~g/L}$
Ethanol | $46$ | $\mathrm{CH}_{3}\mathrm{CH}_{2}\mathrm{OH}$ | Completely miscible
Dimethyl ether | $46$ | $\mathrm{CH}_{3}\mathrm{OCH}_{3}$ | $328 \mathrm{~g/L}$
Pentane | $72$ | $\mathrm{CH}_{3}\mathrm{CH}_{2}\mathrm{CH}_{2}\mathrm{CH}_{2}\mathrm{CH}_{3}$ | $0.4 \mathrm{~g/L}$
Butanol | $74$ | $\mathrm{CH}_{3}\mathrm{CH}_{2}\mathrm{CH}_{2}\mathrm{CH}_{2}\mathrm{OH}$ | $80 \mathrm{~g/L}$
Diethyl ether | $74$ | $\mathrm{CH}_{3}\mathrm{CH}_{2}\mathrm{OCH}_{2}\mathrm{CH}_{3}$ | $69 \mathrm{~g/L}$
Hexanol | $102$ | $\mathrm{CH}_{3}\mathrm{CH}_{2}\mathrm{CH}_{2}\mathrm{CH}_{2}\mathrm{CH}_{2}\mathrm{CH}_{2}\mathrm{OH}$ | $0.4 \mathrm{~g/L}$
1,6-Hexanediol | $118$ | $\mathrm{HO}\mathrm{CH}_{2}\mathrm{CH}_{2}\mathrm{CH}_{2}\mathrm{CH}_{2}\mathrm{CH}_{2}\mathrm{CH}_{2}\mathrm{OH}$ | $500 \mathrm{~g/L}$
Glucose | $180$ | $\mathrm{C}_{6}\mathrm{H}_{12}\mathrm{O}_{6}$ | $910 \mathrm{~g/L}$
Questions
Questions to Answer
• Make a list of substances that you know dissolve in water.
• Which of these dissolve: metals, ionic compounds, molecular compounds (polar, non-polar), network solids (diamond, graphite)?
• Can you make any generalizations about which things dissolve and which don’t?
• What must happen in order for something to dissolve in water?
• How would you design an experiment to determine the solubility of a solute?
• How would you determine whether or not a solution was saturated?
• Draw a molecular level picture of a solution of ethanol and water showing why the solution is more dense than the separate liquids.
• Draw a molecular level picture of an oil and water mixture.
• Draw a molecular level picture of the process of solution
• When you try mixing oil and water, which layer ends up on top? Why?
Question to Ponder
• You have a saturated solution, with some solid solute present.
• Do you think the solute particles that are in solution are the same ones over time?
• How would you determine whether they were the same?
Questions for Later
• What would you predict for the sign of $\Delta \mathrm{S}$ upon the formation of any solution? Why?
• What would you predict for the sign of $\Delta \mathrm{H}$ upon the formation of any solution? Why?
• What would you predict for the sign of $\Delta \mathrm{G}$ upon the formation of any solution? Why?
How does adding hydroxyl groups increase the solubility of a hydrocarbon in water? To understand this, we must return to the two components of the free energy equation: enthalpy and entropy. For a solute to dissolve in a liquid, the solute molecules must be dispersed in that liquid. Solubility depends on how many solute molecules can be present within a volume of solution before they begin to associate preferentially with themselves rather than the solvent molecules. When the solute molecules are dispersed, whatever bonds or attractions holding the particles together in the solute are replaced by interactions between solvent and solute molecules. You might deduce that one reason diamonds are not soluble in water is that the $\mathrm{C—C}$ bonds holding a carbon atom within a diamond are much stronger (take more energy to break) than the possible interactions between carbon atoms and water molecules. For a diamond to dissolve in water, a chemical reaction must take place in which multiple covalent bonds are broken. Based on this idea, we can conclude that the stronger the interactions between the solute particles, the less favorable it is for the solute to dissolve in water. At the same time, the stronger the interactions between solute and solvent molecules, the greater the likelihood that solubility will increase.
So do intermolecular interactions explain everything about solubility? Do they explain the differences between the solubility of hexane, hexanol, and hexanediol in water? Hexanediol ($\mathrm{HO}(\mathrm{CH}_{2})_{6}\mathrm{OH}$) is readily soluble, and if we consider its structure we can see that interactions between hexanediol molecules include hydrogen bonding (involving the two hydroxyl groups) and van der Waals interactions (LDFs and dipole-dipole). We can also approach this from a more abstract perspective. If we indicate the non-hydroxyl ($—\mathrm{O–H}$) part of a molecule as $\mathrm{R}$, then an alcohol molecule can be represented as $\mathrm{R}—\mathrm{O–H}$, and a diol can be represented as $\mathrm{H–O}—\mathrm{R}—\mathrm{O–H}$. All alcohols have the ability to form hydrogen bonding interactions with each other as well as with water. So when an alcohol dissolves in water, the interactions between the alcohol molecules are replaced by interactions between alcohol and water molecules—an interaction similar to that between water molecules. Like water molecules, alcohols have a dipole (unequal charge distribution), with a small negative charge on the oxygen(s) and small positive charges on the hydrogen (bonded to the oxygen atoms). It makes sense that molecules with similar structures interact in similar ways. Thus, small molecular-weight alcohols can dissolve in water. But if you look again at the previous table, notice that hexanol (a 6-carbon chain with one $—\mathrm{O–H}$ group) is much less soluble than hexanediol (a 6-carbon chain with two $—\mathrm{O–H}$ groups—one at each end). As the non-polar carbon chain lengthens, the solubility typically decreases. However, if there are more $—\mathrm{O–H}$ groups present, there are more possible interactions with the water. This is also why common sugars, which are really polyalcohols with large numbers of $—\mathrm{O–H}$ groups (at least 4 or 5 per molecule), are very soluble in water. Their $—\mathrm{O–H}$ groups form hydrogen-bonds with water molecules to form stabilizing interactions. As the length of the hydrocarbon chain increases, the non-polar hydrocarbon part of the molecule starts to become more important and the solubility decreases. This phenomenon is responsible for the “like-dissolves-like” statements that are often found in introductory chemistry books (including this one, apparently). So, do intermolecular interactions explain everything about solubility? If only things were so simple!
Entropy and Solubility: Why Don’t Oil and Water Mix?[2]
The fact that oil and water do not mix is well known. It has even become a common metaphor for other things that do not mix (people, faiths, etc.) What is not quite so well known is, why? Oil is a generic name for a group of compounds, many of which are hydrocarbons or contain hydrocarbon-like regions. Oils are – well – oily, they are slippery and (at the risk of sounding tedious) unable to mix with water. The molecules in olive oil or corn oil typically have a long hydrocarbon chain of about 16–18 carbons. These molecules often have polar groups called esters (groups of atoms that contain $\mathrm{C—O}$ bonds) at one end.[3] Once you get more than six carbons in the chain, these groups do not greatly influence solubility in water, just as the single $\mathrm{O–H}$ groups in most alcohols do not greatly influence solubility. So, oily molecules are primarily non-polar and interact with one another as well as with other molecules (including water molecules), primarily through London dispersion forces (LDFs). When oil molecules are dispersed in water, their interactions with water molecules include both LDFs and interactions between the water dipole and an induced dipole on the oil molecules. Such dipole–induced dipole interactions are common and can be significant. If we were to estimate the enthalpy change associated with dispersing oily molecules in water, we would discover $\Delta \mathrm{H}$ is approximately zero for many systems. This means that the energy required to separate the molecules in the solvent and solute is about equal to the energy released when the new solvent–solute interactions are formed.
Remember that the entropy change associated with simply mixing molecules is positive. So, if the enthalpy change associated with mixing oils and water is approximately zero, and the entropy of mixing is usually positive, why then do oil and water not mix? It appears that the only possibility left is that the change in entropy associated with dissolving oil molecules in water must be negative (thus making $\Delta \mathrm{G}$ positive). Moreover, if we disperse oil molecules throughout an aqueous solution, the mixed system spontaneously separates (unmixes). This seems to be a process that involves work. What force drives this work?
Rest assured, there is a non-mystical explanation but it requires thinking at both the molecular and the systems level. When hydrocarbon molecules are dispersed in water, the water molecules rearrange to maximize the number of $\mathrm{H}$-bonds they make with one another. They form a cage-like structure around each hydrocarbon molecule. This cage of water molecules around each hydrocarbon molecule is a more ordered arrangement than that found in pure water, particularly when we count up and add together all of the individual cages! It is rather like the arrangement of water molecules in ice, although restricted to regions around the hydrocarbon molecule. This more ordered arrangement results in a decrease in entropy. The more oil molecules disperse in the water, the larger the decrease in entropy. On the other hand, when the oil molecules clump together, the area of “ordered water” is reduced; fewer water molecules are affected. Therefore, there is an increase in entropy associated with the clumping of oil molecules—a totally counterintuitive idea! This increase in entropy leads to a negative value for $-\mathrm{T} \Delta \mathrm{S}$, because of the negative sign. Therefore, in the absence of any other factor the system moves to minimize the interactions between oil and water molecules, which leads to the formation of separate oil and water phases. Depending on the relative densities of the substances, the oily phase can be either above or below the water phase. This entropy-driven separation of oil and water molecules is commonly referred to as the hydrophobic effect. Of course, oil molecules are not afraid (phobic) of water, and they do not repel water molecules. Recall that all molecules will attract each other via London dispersion forces (unless they have a permanent and similar electrical charge).
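In symbols, the argument of the last two paragraphs is compact: for dispersing oil in water, $\Delta \mathrm{H} \approx 0$ and $\Delta \mathrm{S} < 0$, so $\Delta \mathrm{G} = \Delta \mathrm{H} - \mathrm{T} \Delta \mathrm{S} \approx -\mathrm{T} \Delta \mathrm{S} > 0$ at any temperature above absolute zero; mixing is therefore never thermodynamically favored, and the clumped (separated) state wins.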
The insolubility of oil in water is controlled primarily by changes in entropy, so it is directly influenced by the temperature of the system. At low temperatures, it is possible to stabilize mixtures of water and hydrocarbons. In such mixtures, which are known as clathrates, the hydrocarbon molecules are surrounded by stable cages of water molecules (ice). Recall that ice has relatively large open spaces within its crystal structure. The hydrocarbon molecules fit within these holes, making it possible to predict the maximum size of the hydrocarbon molecules that can form clathrates. For example, some oceanic bacteria generate $\mathrm{CH}_{4}$ (methane), which is then dissolved in the cold water to form methane clathrates. Scientists estimate that between two and ten times the current amount of conventional natural gas resources are present as methane clathrates.[4]
Solubility of Ionic Compounds: Salts
Polar compounds tend to dissolve in water, and we can extend that generality to the most polar compounds of all—ionic compounds. Table salt, or sodium chloride ($\mathrm{NaCl}$), the most common ionic compound, is soluble in water ($360 \mathrm{~g/L}$). Recall that $\mathrm{NaCl}$ is a salt crystal composed not of discrete $\mathrm{NaCl}$ molecules, but rather of an extended array of $\mathrm{Na}^{+}$ and $\mathrm{Cl}^{-}$ ions bound together in three dimensions through electrostatic interactions. When $\mathrm{NaCl}$ dissolves in water, the electrostatic interactions within the crystal must be broken. By contrast, when molecular compounds dissolve in water, it is the intermolecular forces between separate molecules that are disrupted. One might imagine that the breaking of ionic interactions would require a very high-energy input (we have already seen that diamonds do not dissolve in water because actual covalent bonds have to be broken). That would be true if all we considered was the energy required to break the ionic interactions, as indicated by the fact that $\mathrm{NaCl}$ melts at $801 { }^{\circ}\mathrm{C}$ and boils at $1413 { }^{\circ}\mathrm{C}$. But we know that substances like $\mathrm{NaCl}$ dissolve readily in water, so clearly there is something else going on. The trick is to consider the whole system when $\mathrm{NaCl}$ dissolves, just like we did for molecular species. We need to consider the interactions that are broken and those that are formed. These changes in interactions are reflected in the $\Delta \mathrm{H}$ term (from $\Delta \mathrm{G} = \Delta \mathrm{H} – \mathrm{T} \Delta \mathrm{S}$). When a crystal of $\mathrm{NaCl}$ comes into contact with water, the water molecules interact with the $\mathrm{Na}^{+}$ and $\mathrm{Cl}^{-}$ ions on the crystal’s surface, as shown in the figure. The positive ends of water molecules (the hydrogens) interact with the chloride ions, while the negative end of the water molecules (the oxygen) interacts with the sodium ions. So the ion on the surface of the solid interacts with water molecules from the solution; these water molecules form a dynamic cluster around the ion. Thermal motion (which reflects the kinetic energy of the molecules, that is, the motion driven by collisions with other molecules in the system) then moves the ion and its water shell into solution.[5] The water shell is highly dynamic—molecules are entering and leaving it. The ion–dipole interaction between ions and water molecules can be very strongly stabilizing ($- \Delta \mathrm{H}$). The process by which solvent molecules interact with and stabilize solute molecules in solution is called solvation. When water is the solvent, the process is known as hydration.
Questions
Questions to Answer
• Draw a molecular-level picture of a solution of $\mathrm{NaCl}$. Show all the kinds of particles and interactions present in the solution.
• When we calculate and measure thermodynamic quantities (such as $\Delta \mathrm{H}$, $\Delta \mathrm{S}$ and $\Delta \mathrm{G}$), why is it important to specify the system and the surroundings?
• When a substance dissolves in water, what is the system and what are the surroundings? Why? What criteria would you use to specify the system and surroundings?
• For a solution made from $\mathrm{NaCl}$ and water, what interactions must be overcome as the $\mathrm{NaCl}$ goes into solution? What new interactions are formed in the solution?
• If the temperature goes up when the solution is formed, what can we conclude about the relative strengths of the interactions that are broken and those that are formed? What can we conclude if the temperature goes down?
• When you measure the temperature of a solution, are you measuring the system or the surroundings?
Questions to Ponder
• Why is the water shell around an ion not stable?
• What are the boundaries of a biological system?
Try adding $\mathrm{NaCl}$ to water; you can do this at the dinner table. You will see that the $\mathrm{NaCl}$ dissolves and the temperature of the solution goes down. Is this the case with all salts? No, it is not. If you dissolve calcium chloride ($\mathrm{CaCl}_{2}$) or magnesium chloride ($\mathrm{MgCl}_{2}$), the solution gets warmer, not colder. Dissolving $\mathrm{CaCl}_{2}$ or $\mathrm{MgCl}_{2}$ in water clearly involves some kind of energy release (recall that if the temperature increases, the average kinetic energy of the molecules in the solution also increases).
How do we explain why dissolving $\mathrm{NaCl}$ causes the temperature of the solution to decrease, whereas dissolving $\mathrm{CaCl}_{2}$ or $\mathrm{MgCl}_{2}$ makes the temperature increase? Because both processes (that is the dissolving of $\mathrm{NaCl}$ and $\mathrm{CaCl}_{2}/\mathrm{MgCl}_{2}$ into water) occur, they must be thermodynamically favorable. In fact, all of these compounds are highly soluble in water, the $\Delta \mathrm{G}$ for the formation of all three solutions is negative, but the process results in different temperature changes. Let us look at the example of calcium chloride: as a crystal of $\mathrm{CaCl}_{2}$ dissolves in water, interactions between ions are broken and new interactions between water molecules and ions are formed. The table below lists the types of interactions forming in the crystal and the solvent.
Within the crystal there are ion–ion interactions, while in the solvent there are $\mathrm{H}$-bonding, dipole–dipole, and LDF interactions. As the crystal dissolves, new ion–dipole interactions form between calcium ions and water molecules, as well as between chloride ions and water molecules. At the same time, the majority of the interactions between water molecules are preserved.
Interactions Present Before Solution | Interactions Present After Solution
ion–ion (interactions between $\mathrm{Ca}^{2+}$ and $\mathrm{Cl}^{-}$) | ion–dipole (interactions between $\mathrm{Ca}^{2+}$ and $\mathrm{H}_{2}\mathrm{O}$, and between $\mathrm{Cl}^{-}$ and $\mathrm{H}_{2}\mathrm{O}$)
interactions between water molecules ($\mathrm{H}$-bonding, dipole–dipole, and LDFs) | interactions between water molecules ($\mathrm{H}$-bonding, dipole–dipole, and LDFs)
In order to connect our observation that the temperature increases with thermodynamic data, we have to be explicit about what we mean by the system and what we mean by the surroundings. In calcium chloride, the system is $\mathrm{CaCl}_{2}$ and the water molecules it interacts with. The surroundings are the rest of the water molecules (the solution). So when we measure the temperature change, we are actually measuring the temperature change of the surroundings (not the system). If the temperature rises, that means thermal energy is transferred from the $\mathrm{CaCl}_{2}—\mathrm{H}_{2}\mathrm{O}$ system to the water. Therefore, the interactions after the solution is formed are stronger and more stable than those for the solid $\mathrm{CaCl}_{2}$ and water separately. If we look up the enthalpy change for the solution of calcium chloride, it is around $-80 \mathrm{~kJ/mol}$: dissolving is exothermic and heat is transferred from the system to the surroundings.
So what is going on with $\mathrm{NaCl}$? Solution temperatures decrease when $\mathrm{NaCl}$ is dissolved, so the solution (surroundings) loses energy to the ion–solvent interactions (system). Energy from the surroundings breaks up the $\mathrm{NaCl}$ lattice and allows ions to move into the solution. That would imply that ion–ion and $\mathrm{H}_{2}\mathrm{O}—\mathrm{H}_{2}\mathrm{O}$ interactions are stronger than the ion–water interactions for the $\mathrm{NaCl}—\mathrm{H}_{2}\mathrm{O}$ system. But why does $\mathrm{NaCl}$ dissolve at all? The answer is that enthalpy is not the critical factor determining whether solution happens. If we factor in the entropy change for the solution, which in this case is positive, then $\Delta \mathrm{G}$ is negative. The dissolving of salt is an entropy-driven process!
To recap: for a solution to form, the Gibbs energy change must be negative. When calcium chloride dissolves in water, $\Delta \mathrm{H}$ is negative and as it turns out $\Delta \mathrm{S}$ is slightly negative (although this cannot be determined from observations). This results in a large negative $\Delta \mathrm{G}$ and a very high solubility ($595 \mathrm{~g/L}$). By contrast, when sodium chloride dissolves, $\Delta \mathrm{H}$ is positive, but $\Delta \mathrm{S}$ is positive enough to overcome the effect of $\Delta \mathrm{H}$. This means that the Gibbs free energy change is also negative for this process. In fact, many solutes dissolve in water with a decrease in temperature. Ethanol—which is infinitely soluble in water—has an unfavorable enthalpy of solution. Thus, the entropy of mixing is the important factor.
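To put rough numbers on this comparison, the sketch below uses the $\Delta \mathrm{H}$ of about $-80 \mathrm{~kJ/mol}$ for calcium chloride quoted above, together with approximate literature values for sodium chloride ($\Delta \mathrm{H} \approx +3.9 \mathrm{~kJ/mol}$, $\Delta \mathrm{S} \approx +43 \mathrm{~J/mol~K}$) and an assumed, purely illustrative $\Delta \mathrm{S} \approx -45 \mathrm{~J/mol~K}$ for calcium chloride; none of these last three numbers appears in the text itself:

```python
def dG_solution(dH_kJ, dS_J_per_K, T=298.0):
    """Gibbs energy of solution (kJ/mol) from dH (kJ/mol) and dS (J/(mol K))."""
    return dH_kJ - T * dS_J_per_K / 1000.0

# NaCl: endothermic, but the positive entropy change drives dissolution.
print(f"NaCl:  dG = {dG_solution(+3.9, +43):+.1f} kJ/mol")   # about -8.9 kJ/mol

# CaCl2: a large negative enthalpy outweighs a small negative entropy change.
print(f"CaCl2: dG = {dG_solution(-80.0, -45):+.1f} kJ/mol")  # about -66.6 kJ/mol
```

Both $\Delta \mathrm{G}$ values come out negative, consistent with both salts being highly soluble, but for opposite reasons: one dissolution is entropy-driven, the other enthalpy-driven.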
Questions
Questions to Answer
• When ammonium chloride dissolves in water, the temperature of the solution drops. Predict the signs of $\Delta \mathrm{H}$, $\Delta \mathrm{S}$, and $\Delta \mathrm{G}$ and explain your reasoning by drawing molecular-level pictures.
• Calcium phosphate ($\mathrm{Ca}_{3}(\mathrm{PO}_{4})_{2}$) is insoluble in water. The $\Delta \mathrm{H}$ for solution is about zero. Predict the signs of $\Delta \mathrm{S}$ and $\Delta \mathrm{G}$ and explain your reasoning by drawing molecular-level pictures.
So far we have considered solutions made up of molecules that are either polar or non-polar, or of ionic species whose properties are relatively easy to predict. Many substances, however, have more complex structures that incorporate polar, ionic, and non-polar groups. For example, many biomolecules cannot be classified as exclusively polar or non-polar, but are large enough to have distinct regions of differing polarity. They are termed amphipathic. Even though the structures of proteins, $\mathrm{RNA}$, $\mathrm{DNA}$, and other biomolecules are complex, we can use the same principles involving the entropic and enthalpic effects of interacting with water to understand the interactions between biomolecules, as well as within a given biomolecule. Biomolecules are very large compared to the molecules considered in most chemistry courses, and often one part of the molecule interacts with another part of the same molecule. The intramolecular[6] interactions of biological macromolecules, together with their interactions with water, are key factors in predicting their shapes.[7]
Let us begin with a relatively simple biomolecular structure. In the previous section we looked at the solubility of oils in water. Oils or fats are also known as triglycerides. In the figure, $\mathrm{R}$ and $\mathrm{R}'$ indicate hydrocarbon chains, which have the generic structure $\mathrm{CH}_{3}(\mathrm{CH}_{2})_{n}$–. If you treat an oil or fat with sodium hydroxide ($\mathrm{NaOH}$), the resulting chemical reaction leads to the formation of what is known as a fatty acid (in this example, oxygen atoms are maroon). A typical fatty acid has a long, non-polar hydrocarbon chain and one end that often contains both a polar and an ionic group. The polar head of the molecule interacts with water with little or no increase in entropy, unlike a hydrocarbon, where the lack of $\mathrm{H}$-bonding interactions with water forces a more ordered shell of water molecules around the hydrocarbon molecule, leading to a decrease in entropy. On the other hand, in water the non-polar region of the molecule creates a decrease in entropy as water molecules are organized into a type of cage around it—an unfavorable outcome in terms of $\Delta \mathrm{S}$, and therefore $\Delta \mathrm{G}$ as well. So, which end of the molecule “wins”? That is, do such molecules dissolve in water or not? The answer is: Both! These amphipathic molecules become arranged in such a manner that their polar groups are in contact with the water, while their non-polar regions are not. (See whether you can draw out such an arrangement, remembering to include the water molecules in your drawing.)
In fact, there are several ways to produce such an arrangement, depending in part on the amount of water in the system. A standard micelle is a spherical structure with the polar heads on the outside and the non-polar tails on the inside. It is the simplest structure that can accommodate both hydrophilic and hydrophobic groups in the same molecule. If water is limiting, it is possible to get an inverted micelle arrangement, in which the polar head groups (and water) are inside and the non-polar tails point outward (as shown in the figure). Other highly organized structures can form spontaneously, depending on the structure of the head group and the tail. For example, many membrane lipids have two hydrocarbon tails, and membranes also contain carbon ring structures called sterols. Such molecules assemble into a lipid bilayer—a polar membrane made up of two lipid molecule layers that forms cellular and organellar boundaries in all organisms. It should be noted that these ordered structures are possible only because dispersing the lipid molecules individually in water would result in a substantial decrease in the entropy of the system. In fact, many ordered structures associated with living systems, such as the structures of $\mathrm{DNA}$ and proteins, are the result of entropy-driven processes, yet another counterintuitive idea. This is one of the many reasons why biological systems do not violate the laws of thermodynamics and why it is scientifically plausible that life arose solely due to natural processes![8]
Questions
Questions to Answer
• If you had a compound that you suspected might form micelles:
• What structural features would you look for?
• How might you design an experiment to determine whether the compound would form micelles in water?
• What would be the experimental evidence?
• Why do you think some amphipathic molecules form spherical clusters (micelles or liposomes) whereas others form sheets (bilayers)? (Hint: consider the shape of the individual molecule itself.)
• Amphipathic molecules are often called surfactants. For example, the compounds used to disperse oil spills are surfactants. How do you think they work?
Questions to Ponder
• If membrane formation and protein folding are entropy-driven processes, does that make the origins of life seem more or less “natural” to you?
Solutions, Colloids & Emulsions
So, do micelles dissolve in water? Well, micelles are not molecules but rather supramolecular assemblies composed of many distinct molecules. A glucose solution consists of isolated glucose molecules, but micelles in solution consist of larger molecular aggregates. Solutions of such macromolecular solutes are called colloids. The particles can be aggregates of molecules (like micelles), atoms (nanoparticles), or larger macromolecules (proteins, nucleic acids), among others. When these particles are on the order of the wavelength of visible light, they scatter the light; smaller objects do not. This is why a salt or sugar solution is transparent, whereas a colloidal dispersion of micelles or cells is cloudy.[9] This principle also explains why soap solutions are typically cloudy—they contain particles large enough to scatter the light. When the suspended particles are solid, the system is known as a colloid; the colloid is stable because thermal motion keeps these small, solid particles suspended. As the particles get larger, the colloid becomes unstable; the influence of gravity overcomes the effects of thermal motion and the particles settle out. Before they settle out, such unstable systems are known as suspensions.
But if the suspended particles are liquid, the system is known as an emulsion. For example, if we looked at a salad dressing made of oil and water under a microscope, we would see drops of oil suspended in water. Emulsions are often unstable, and over time the two liquid phases separate. This is why you have to shake up salad dressing just before using it. There are many colloids and emulsions in the world around us. Milk, for example, is an emulsion of fat globules and a colloid of protein (casein) micelles.
Can you also predict the effect of temperature on solubility? If you raise the temperature, does the solubility of a solute increase or decrease? It would be reasonable to assume that increasing temperature increases solubility. But remember that both $\Delta \mathrm{H}$ and $\Delta \mathrm{S}$ have a role, and an increase in temperature increases the effect of changes in entropy. Dissolving solute into solvent is likely to increase entropy (if $\Delta \mathrm{S}$ is positive), but this is not always the case. Consider what happens when you heat up water on the stove. Bubbles of gas are released from the liquid long before the water reaches its boiling point. At low temperatures, these bubbles contain air (primarily $\mathrm{N}_{2}$, $\mathrm{O}_{2}$) that was dissolved in the water.[10] Why? Because the solubility of most gases in water decreases as temperature rises. We can trace the reason for this back to the entropy of solution. Most gases have very small intermolecular attractions – this is the reason why they are gases after all. Gas molecules do not stick together and form solids and liquids. Therefore, they do not have very high solubility in water. As an example, the solubility of $\mathrm{O}_{2}$ in water is $8.3 \mathrm{~mg/L}$ (at $25 { }^{\circ}\mathrm{C}$ and 1 atmosphere).
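For a sense of scale, we can convert that quoted solubility into a molar concentration (a small calculation of our own; the molar mass of $\mathrm{O}_{2}$ is $32 \mathrm{~g/mol}$):

```python
o2_solubility = 8.3e-3      # g/L (8.3 mg/L at 25 C and 1 atm, as quoted above)
o2_molar_mass = 32.0        # g/mol
molarity = o2_solubility / o2_molar_mass
print(f"{molarity:.1e} M")  # about 2.6e-04 M: roughly one O2 per 200,000 water molecules
```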
Most gases have a slightly favorable (negative) enthalpy of solution and a slightly unfavorable (negative) entropy of solution. The effect on enthalpy can be traced to the dipole–induced dipole attractions formed when the gas dissolves in the solution. The decrease in entropy results from the fact that the gas molecules are no longer free to roam around: their positions are more constrained in the liquid phase than in the gas phase. When the temperature is increased, the gas molecules have more kinetic energy, and therefore more of them can escape from the solution, increasing their entropy as they return to the gas phase. Thus, the solubility of $\mathrm{O}_{2}$ and other gases decreases as temperature increases. This can produce environmental problems, because less oxygen is available for organisms that live in the water. A common source of thermal pollution is the warm water that power plants and manufacturing facilities expel into the environment.
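We can make this reasoning more concrete using the free energy relationship $\Delta \mathrm{G} = \Delta \mathrm{H} - \mathrm{T}\Delta \mathrm{S}$, which we will develop formally in Chapter $8$. For a dissolving gas, $\Delta \mathrm{H}$ is slightly negative and $\Delta \mathrm{S}$ is negative, so the $-\mathrm{T}\Delta \mathrm{S}$ term is positive and grows as the temperature rises. Using hypothetical values chosen only to show the trend, say $\Delta \mathrm{H} = -12 \mathrm{~kJ/mol}$ and $\Delta \mathrm{S} = -0.10 \mathrm{~kJ/(mol \cdot K)}$, we get $\Delta \mathrm{G} = -12 - (298)(-0.10) \approx +18 \mathrm{~kJ/mol}$ at $298 \mathrm{~K}$ but $\approx +23 \mathrm{~kJ/mol}$ at $350 \mathrm{~K}$: dissolving the gas becomes less favorable, and its solubility lower, as the temperature increases.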
Solutions of Solids in Solids: Alloys
Another type of solution occurs when two or more elements, typically metals, are melted and mixed together so that their atoms can intersperse, forming an alloy. Upon re-solidification, the atoms become fixed in space relative to each other and the resulting alloy has properties different from those of the separate metals. Bronze was one of the first known alloys. Its major component is copper ($\sim 90\%$) and its minor component is tin ($\sim 10\%$), although other elements such as arsenic or phosphorus may also be included.
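These are percentages by mass. A quick calculation using standard atomic masses ($\mathrm{Cu} \approx 63.5 \mathrm{~g/mol}$, $\mathrm{Sn} \approx 118.7 \mathrm{~g/mol}$) converts them into atom percentages: $\frac{90/63.5}{90/63.5 + 10/118.7} \approx 0.94$, so roughly $94\%$ of the atoms in the lattice are copper and only about $6\%$ (about 1 atom in 18) are tin.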
The Bronze Age was a significant leap forward in human history.[11] Before bronze, the only metals available were those that occurred naturally in their elemental form—typically silver, copper, and gold, which were not well suited to forming weapons and armor. Bronze is harder and more durable than copper because the tin atoms substitute for copper atoms in the solid lattice. Its structure has stronger metallic bonding interactions, making it harder and less deformable, with a higher melting point than copper itself. Artifacts (weapons, pots, statues, etc.) made from bronze are highly prized.
Steel is another example of a solid–solid solution: a solution of carbon (the solute) in iron (the solvent). The carbon atoms do not replace the iron atoms but fit into the spaces between them; such a mixture is often called an interstitial alloy. Because there are more atoms per unit volume, steel is denser than iron. Because the carbon atoms occupy interstitial sites rather than lattice positions, they disrupt the metallic bonding network and make it harder for the iron atoms to move relative to one another. As a result, steel is harder, more rigid, less malleable, and conducts electricity and heat less effectively than iron.
Is the Formation of a Solution a Reaction?
We have not yet considered what happens during a chemical reaction: a process where the atoms present in the starting material are rearranged to produce different chemical species. You may be thinking, “Isn’t the formation of a solution a chemical reaction?” If we dissolve ethanol in water, does the mixture contain chemically different species than the two components separately? The answer is no: there are still molecules of ethanol and molecules of water. What about when an ionic substance dissolves in water? For example, sodium chloride must separate into sodium and chloride ions in order to dissolve. Is that a reaction? Certainly interactions are broken (the interactions between $\mathrm{Na}^{+}$ and $\mathrm{Cl}^{-}$ ions) and new interactions are made (between $\mathrm{Na}^{+}$ ions and water and $\mathrm{Cl}^{-}$ ions and water), but the dissolution of a salt has not traditionally been classified as a reaction, even though it seems to fit the criteria.[12] Rather than quibble about what constitutes a reaction, let us move along the spectrum of possible changes and look at what happens when you dissolve a molecular species in water and it forms ions.
When you dissolve hydrogen chloride, $\mathrm{HCl}$ (a white, choking gas), in water you get an entirely new chemical substance: hydrochloric acid (or muriatic acid as it is known in hardware stores), one of the common strong acids. This reaction can be written: $\mathrm{HCl}(g)+\mathrm{H}_{2} \mathrm{O} \rightarrow \mathrm{HCl}(aq)$
This is a bit of shorthand because we actually begin with lots of water, but not much of it is used in the reaction. We indicate this fact by using the aq symbol for aqueous, which implies that the $\mathrm{HCl}$ molecules are dissolved in water (but as we will see they are now no longer molecules). It is important to recognize that hydrochloric acid, $\mathrm{HCl}(aq)$, has properties that are quite distinct from those of gaseous hydrogen chloride $\mathrm{HCl}(g)$. The processes that form hydrochloric acid are somewhat similar to those that form a solution of sodium chloride, except that in this case it is the covalent bond between $\mathrm{H}$ and $\mathrm{Cl}$ that is broken and a new covalent bond between $\mathrm{H}$ and $\mathrm{O}$ is formed at the same time. $\mathrm{HCl}(g)+\mathrm{H}_{2} \mathrm{O}(l) \rightarrow \mathrm{H}_{3} \mathrm{O}^{+}(aq)+\mathrm{Cl}^{-}(aq)$
We call this reaction an acid–base reaction. In the next chapter, we will consider this and other reactions in (much) greater detail.
Questions
Questions to Answer
• Can you convert the solubility of $\mathrm{O}_{2}$ in water into molarity (moles solute ($\mathrm{O}_{2}$) / liter solution)?
• If solubility of gases depends on dipole–induced dipole interactions, what do you think the trend in solubility is for the noble gases ($\mathrm{He}$, $\mathrm{Ne}$, $\mathrm{Ar}$, $\mathrm{Kr}$, $\mathrm{Xe}$)?
• What else might increase the solubility of a gas (besides lowering the temperature)? (Hint: How are carbonated drinks bottled?)
• Why do you think silver, copper, and gold often occur naturally as elements (rather than compounds)?
• Draw an atomic-level picture of what you imagine bronze looks like and compare it to a similar picture of steel.
• Use these pictures to explain the properties of bronze and steel, as compared to copper and iron.
Questions to Ponder
• Why do you think the Iron Age followed the Bronze Age? (Hint: Does iron normally occur in its elemental form? Why not?)
• How did the properties of bronze and steel influence human history?
6.7: In-Text References
1. Percent proofing of alcoholic beverages can be traced back to the 18th century, when British sailors were partially paid in rum. To prevent it from being watered down, the rum was “proofed” by seeing if it would support the combustion of gunpowder.
2. Silverstein, Todd P. J. Chem. Educ. 1998, 75, 116.
3. See additional materials for structures and names of functional groups.
4. http://en.Wikipedia.org/wiki/Methane_clathrate
5. ACS GenChem materials
6. Intramolecular means within the same molecule. Intermolecular means between or among separate molecules.
7. For example, see the online game Foldit, which uses intramolecular interactions to predict how proteins will fold into the lowest-energy shape.
8. Why do you use soap and shampoo? Why not use just water? The answer is, of course, that water doesn’t do a very good job of getting dirt and oil off your skin and hair because grime is just not soluble in water. Soaps and detergents are excellent examples of amphipathic molecules. They have a polar head and a long non-polar tail, which leads to the formation of micelles. Oily molecules can then be sequestered within these micelles and washed away.
9. It is often possible to track the passage of a beam of light through such a solution; this light scattering is known as the Tyndall effect.
10. At the boiling point, the bubbles contain only water molecules because all the air has been expelled long before this temperature is reached.
11. http://en.Wikipedia.org/wiki/Bronze_Age
12. It has been noted that one reason why chemistry is so difficult is that even experienced chemists cannot agree on terminology, and this is one such example.
At last we have arrived at the place where many chemistry courses begin: chemical reactions. In this chapter we will examine what a chemical reaction is, which processes are not chemical reactions, how chemical reactions occur, and how they are characterized. We will also look at how molecules come to be reorganized during a chemical reaction. (In Chapter 8, we will look at reaction behaviors in greater detail.)
There is a bewildering array of possible reactions, but the truth is that most chemical reactions fall into a rather limited number of basic types. This is a good thing for the student of chemistry. Recognizing types simplifies our task greatly and enables us to achieve a greater level of confidence in predicting and explaining the outcomes of chemical reactions. Although each particular reaction differs in its specific molecules and conditions (e.g., temperature, solvent, etc.), some common rules apply. Rather than bombard you with a lot of seemingly unrelated reactions, we will introduce you to the two most common reaction types: acid–base (which, as we will see, can also be classified as nucleophile–electrophile) and oxidation–reduction. Keep in mind that whatever the reaction type, reactions are systems composed of reactants, products, and the environment in which the reaction occurs. Reactants behave quite differently in the gas phase than in an aqueous or non-aqueous system. High or low temperatures also affect behavior. In the next chapter, we will consider how thermodynamics and kinetics come into play in particular reactions, under specific conditions. This will then lead us to consider equilibrium and non-equilibrium systems.
Thumbnail: Lead (II) iodide precipitates when potassium iodide is mixed with lead (II) nitrate. (CC BY-SA 3.0 Unported; PRHaney via Wikipedia)
07: A Field Guide to Chemical Reactions
First we will state the obvious: chemical reactions are linked to change, but not all change involves a chemical reaction. When liquid water boils or freezes, it undergoes a change of state (a phase change), but the water molecules are still discrete $\mathrm{H}_{2}\mathrm{O}$ molecules. In ice, they remain more or less anchored to one another through $\mathrm{H}$-bonding interactions, whereas in liquid water and in water vapor they are constantly moving with respect to one another and the interactions that occur between the molecules are transient. We can write out this transition in symbolic form as: $\mathrm{H}_{2} \mathrm{O} \text { (solid) } \rightleftarrows \mathrm{H}_{2} \mathrm{O} \text { (liquid) } \rightleftarrows \mathrm{H}_{2} \mathrm{O} \text { (vapor) }$
The double arrows mean that the changes are reversible. In this case, reversibility is a function of temperature, which controls whether the interactions between molecules are stable (as in ice), transient (as in liquid water), or basically non-existent (as in water vapor). What you notice immediately is that there are water molecules present in each phase. This helps shed light on the common misconception that bubbles found in boiling water are composed of oxygen and hydrogen. Boiling does not break the bonds in a water molecule, so the bubbles are actually composed of water vapor. That said, within liquid water there is actually a chemical reaction going on: the dissociation of water into ${}^{-}\mathrm{OH}$ and $\mathrm{H}^{+}$ (which we will discuss in more detail shortly). However, a naked proton (that is, $\mathrm{H}^{+}$ as a discrete entity) does not exist in water. Therefore, this reaction is more accurately written as: $\mathrm{H}_{2} \mathrm{O}+\mathrm{H}_{2} \mathrm{O} \rightleftarrows \mathrm{H}_{3} \mathrm{O}^{+}+{ }^{-} \mathrm{OH}$
Here we see the signature of a chemical reaction. The molecules on the two sides of the equation are different; covalent bonds are broken (an $\mathrm{O—H}$ bond in one water molecule) and formed (an $\mathrm{H—O}$ bond in the other). All chemical reactions can be recognized in this way. The water dissociation reaction also illustrates how reactions can vary in terms of the extent to which they occur. In liquid water, which has a concentration of about $55 \mathrm{~M}$, very few molecules undergo this reaction. In fact, in pure water the concentration of $\mathrm{H}_{3}\mathrm{O}^{+}$ is only $10^{-7} \mathrm{~M}$, nearly nine orders of magnitude less than the concentration of water molecules. Another interesting feature of this reaction is that it is going in both directions, as indicated by the double arrows $\rightleftarrows$.
Water reacts with itself to form $\mathrm{H}_{3}\mathrm{O}^{+} + {}^{-}\mathrm{OH}$, and at the same time $\mathrm{H}_{3}\mathrm{O}^{+} + {}^{-}\mathrm{OH}$ are reacting to generate water molecules. The reaction is at equilibrium, and in this case the position of the equilibrium indicates that the majority of the species in water are actually water molecules.
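It is worth seeing where these numbers come from. One liter of water has a mass of about $1000 \mathrm{~g}$, and the molar mass of $\mathrm{H}_{2}\mathrm{O}$ is about $18 \mathrm{~g/mol}$, so $[\mathrm{H}_{2}\mathrm{O}] \approx \frac{1000 \mathrm{~g/L}}{18 \mathrm{~g/mol}} \approx 55.5 \mathrm{~M}$. Because $[\mathrm{H}_{3}\mathrm{O}^{+}]$ in pure water is only $10^{-7} \mathrm{~M}$, the fraction of water molecules that are ionized at any instant is about $\frac{10^{-7}}{55.5} \approx 2 \times 10^{-9}$, that is, roughly two molecules out of every billion.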
In contrast, other reactions essentially go to completion (proceed until essentially all the reactants are used up and the reaction is at equilibrium).[1] For example, pure ethanol ($\mathrm{CH}_{3}\mathrm{CH}_{2}\mathrm{OH}$) is $\sim 17.1 \mathrm{~M}$, and it will burn in air (which contains $\mathrm{O}_{2}$). We can write the reaction going to completion as: $\mathrm{CH}_{3} \mathrm{CH}_{2} \mathrm{OH}+3 \mathrm{O}_{2} \rightleftarrows 2 \mathrm{CO}_{2}+3 \mathrm{H}_{2} \mathrm{O}$
There is very little ethanol left if this reaction occurs in the presence of sufficient $\mathrm{O}_{2}$.[2] In the real world, the reaction is irreversible because the system is open and both $\mathrm{CO}_{2}$ and $\mathrm{H}_{2}\mathrm{O}$ escape and are therefore not able to collide with each other – which would be a prerequisite for the reverse reaction to occur. Another interesting feature of the ethanol burning reaction is that pure ethanol can be quite stable in contact with the atmosphere, which typically contains $\sim 20 \% \mathrm{O}_{2}$. It takes a spark or a little heat to initiate the reaction. For example, vodka, which is about $50 \%$ ethanol, will not burst into flames without a little help! Most reactions need a spark of energy to get them started, but once started, many of them release enough energy to keep them going. As we saw in our discussion of solutions, some reactions release energy (are exothermic) and some require energy (are endothermic). It is important to note that this overall energy change is not related to the spark or energy that is required to get some reactions started. We will return to these ideas in chapter $8$.
Another feature of reactions is that some are faster than others. For example, if we add hydrogen chloride gas to water, a reaction occurs almost instantaneously: $\mathrm{HCl}(g)+\mathrm{H}_{2} \mathrm{O}(l) \rightleftarrows \mathrm{H}_{3} \mathrm{O}^{+}(aq)+\mathrm{Cl}^{-}(aq)$
Very little time elapses between dissolving the $\mathrm{HCl}$ and the reaction occurring. We say the rate of the reaction is fast or instantaneous (in Chapter $8$, we will look more closely at reaction rate and what affects it.) In contrast, when iron nails are left out in the weather, they form rust, a complex mixture of iron oxides and hydroxides. This reaction is slow and can take many years, although in hot climates the reaction goes faster. Similarly, when we cook food, the reactions that take place occur at a faster rate than they would at room temperature.
As we have seen previously, bonded atoms are typically more stable than unbonded atoms. For a reaction to occur, some bonds have to break and new ones have to form. What leads to a bond breaking? Why are new bonds formed? What are the factors that affect whether reactions occur, how much energy is released or absorbed, where they come to equilibrium, and how fast they occur? All these questions and more will be addressed in Chapter $8$.
But first things first: in order for a reaction to occur, the reacting molecules have to collide. They have to bump into each other to have a chance of reacting at all. An important point to remember is that molecules are not sitting still. They may be moving from one place to another (if they are in the liquid or gaseous phase) and/or they are vibrating and rotating. Remember that the temperature of a system of molecules is a function of the average kinetic energy of those molecules. Normally, it is enough to define the kinetic energy of a molecule as $\frac{1}{2} mv^{2}$, but if we are being completely rigorous this equation applies only to monatomic gases. Molecules are more complex because they can flex, bend, rotate around bonds, and vibrate. Many reactions occur in solution where molecules are constantly in contact with each other—bumping and transferring energy, which may appear as either kinetic or vibrational energy. Nevertheless, we can keep things simple for now as long as we remember what simplifications we are assuming. Recall that although temperature is proportional to the average kinetic energy of the molecules, this does not mean that all the molecules in the system are moving with the same velocity. There is typically a broad range of molecular velocities, even if all the molecules are of the same type. There is an even broader range in reaction mixtures, which have more than one type of molecule in them. Since the system has only a single temperature, all types of molecules must have the same average kinetic energy, which means that the more massive molecules are moving more slowly, on average, than the less massive molecules. At the same time, all the molecules are (of course) moving, so they inevitably collide with one another and, if the system has a rigid boundary, with the boundary. We have previously described the spread of molecular speeds in such a system in terms of the Boltzmann distribution, which gives the fraction (or absolute number) of molecules moving at each speed. At any particular temperature, there are molecules that move much faster (have higher kinetic energy) and others that move much slower (have lower kinetic energy) than the average. This means that when any two molecules collide with one another, the energetics of that interaction can vary dramatically. Some collisions involve relatively little energy, whereas others involve a lot!
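For the simple case of gases, these ideas can be made quantitative. The average translational kinetic energy per molecule is $\frac{3}{2} k_{B}\mathrm{T}$, which depends only on the temperature, so the typical (root-mean-square) speed is $v_{rms} = \sqrt{3R\mathrm{T}/M}$, where $M$ is the molar mass. At $298 \mathrm{~K}$ this works out to roughly $1900 \mathrm{~m/s}$ for $\mathrm{H}_{2}$ ($M = 2 \mathrm{~g/mol}$) but only about $480 \mathrm{~m/s}$ for $\mathrm{O}_{2}$ ($M = 32 \mathrm{~g/mol}$): the same average kinetic energy, but very different average speeds.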
These collisions may or may not lead to a chemical reaction, so let’s consider what happens during a chemical reaction. To focus our attention, we will consider the specific reaction of hydrogen and oxygen gases to form water: $2 \mathrm{H}_{2}+\mathrm{O}_{2} \rightleftarrows 2 \mathrm{H}_{2} \mathrm{O}$
This is, in fact, a very complex reaction, so let’s simplify it in a way that may seem cartoonish but which is, nevertheless, accurate. If we have a closed flask of pure oxygen and we add some hydrogen ($\mathrm{H}_{2}$) to the flask, the two types of gas molecules quickly mix, because – as you will recall – the mixed system is more probable (that is, the entropy of the mixed gases is higher than that of the unmixed gases). Some of the molecules collide with each other, but the overwhelming majority of these collisions are unproductive. Neither the hydrogen molecule ($\mathrm{H}_{2}$) nor the oxygen molecule ($\mathrm{O}_{2}$) is altered, although there are changes in their respective kinetic energies. However, when we add kinetic energy (say, from a burning match, which is itself a chemical reaction), the average kinetic energy of the molecules in the heated region increases, thus increasing the energy that can be transferred upon collision. This increases the probability that a particular collision will lead to a bond breaking, and therefore increases the probability of the $\mathrm{H}_{2} + \mathrm{O}_{2}$ reaction. In addition, because the stability of the bonds in $\mathrm{H}_{2}\mathrm{O}$ is greater than that of the bonds in $\mathrm{H}_{2}$ and $\mathrm{O}_{2}$, the reaction releases energy to the surroundings. This energy can take the form of kinetic energy (which leads to a further increase in the temperature) and electromagnetic energy (which results in the emission of photons of light). In this way, the system becomes self-sustaining. It no longer needs the burning match because the energy released as the reaction continues is enough to keep new molecules reacting. The reaction of $\mathrm{H}_{2}$ and $\mathrm{O}_{2}$ is explosive (it rapidly releases thermal energy and light), but only after that initial spark has been supplied.
We can plot out the behavior of the reaction, as a function of time, beginning with the addition of the burning match. It is worth keeping in mind that the reaction converts $\mathrm{H}_{2}$ and $\mathrm{O}_{2}$ into water. Therefore, the concentrations of $\mathrm{H}_{2}$ and $\mathrm{O}_{2}$ in the system decrease as the reaction proceeds while the concentration of $\mathrm{H}_{2}\mathrm{O}$ increases. As the reaction proceeds, the probability of productive collisions between $\mathrm{H}_{2}$ and $\mathrm{O}_{2}$ molecules decreases simply because there are fewer $\mathrm{H}_{2}$ and $\mathrm{O}_{2}$ molecules present. We can think of it this way: the rate at which the reaction occurs in the forward (to the right) direction is based on the probability of productive collisions between molecules of $\mathrm{H}_{2}$ and $\mathrm{O}_{2}$. This in turn depends upon their concentrations (this is why hydrogen will not burn in the absence of $\mathrm{O}_{2}$). As the concentrations of the two molecules decrease, the reaction rate slows down. Normally, the water molecules produced by burning disperse and the concentration (molecules per unit volume) of $\mathrm{H}_{2}\mathrm{O}$ never grows very large. But if the molecules are in a container, then their concentrations increase, and eventually the backward reaction begins to occur. The reaction will reach equilibrium, at which point the rates of the forward and backward reactions are equal. Because the forward reaction is so favorable, only a very small amount of $\mathrm{H}_{2}$ and $\mathrm{O}_{2}$ would remain at equilibrium. The point is to recognize that reactions are dynamic and that the exact nature of the equilibrium state will be determined by the concentrations, the temperature, and the nature of the reaction.
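We can sketch this idea symbolically. If, purely for illustration, we write the forward rate as proportional to the reactant concentrations, $\text{rate}_{forward} = k_{f}[\mathrm{H}_{2}]^{m}[\mathrm{O}_{2}]^{n}$ (the exponents must be determined experimentally, because this reaction does not occur in a single step), and the reverse rate as $\text{rate}_{reverse} = k_{r}[\mathrm{H}_{2}\mathrm{O}]^{p}$, then as the reaction proceeds the forward rate falls and the reverse rate rises until the two are equal: $\text{rate}_{forward} = \text{rate}_{reverse}$. This equality of opposing rates is exactly what we mean by a dynamic equilibrium.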
Questions
Questions to Answer
• In your own words, define the term chemical reaction. How can you tell when a chemical reaction has occurred?
• Give some examples of reactions that you already know about or have learned about in previous courses.
• What do we mean by rate of reaction? How might you determine a reaction rate?
• What conditions must exist in order for something to react?
• How does the concentration of reactants and products influence the rate of a reaction?
• Are chemical reactions possible in the solid phase?
• What factors are required for a reaction to reach a stable (albeit dynamic) equilibrium?
• Why is a burning building unlikely to reach equilibrium?
• Assuming you have encountered them before, define the terms acidic and basic in your own words.
Questions to Ponder
• What reactions are going on around you right now?
• What is required in order for a reaction to go backwards? | textbooks/chem/General_Chemistry/CLUE%3A_Chemistry_Life_the_Universe_and_Everything/07%3A_A_Field_Guide_to_Chemical_Reactions/7.1%3A_Collisions_and_Chemical_Reactions.txt |
Let us begin with the hydrogen chloride and water reaction from the last chapter, a classic acid–base reaction. To understand how these types of reactions are related, we need to learn how to identify their essential and common components. Our first hurdle is the fact that the terms acid and acidity, and to a lesser extent, bases and basicity, have entered the language of everyday life. Most people have some notion of acids and acidity. Examples of common usage include: acid rain, stomach acid, acid reflux, acid tongue, etc. You might hear someone talk about wine that tastes acidic, by which they probably mean sour, and most people would nod their heads in comprehension. Old wine tastes like vinegar because it contains acetic acid. You have also probably heard of or even learned about measurements of acidity that involve $\mathrm{pH}$, but what is $\mathrm{pH}$ exactly? What is an acid, and why would you want to neutralize it? Are acidic things bad? Do we need to avoid them at all costs and under all circumstances? Although the term base is less common, you may already be familiar with materials that are basic in the chemical sense. Bases are often called alkalis, as in alkaline batteries and alkali metals. They are slippery to the touch and bitter-tasting.
Not surprisingly, many definitions of acid–base reactions have been developed over the years. Each new definition has been consistent with the ones that came before (that is, it produces similar conclusions when applied to a particular system), but each new definition has also furthered the evolution of the idea of acids and bases. Later definitions encompass the original ideas about acids and bases, but also broaden them and make them more widely applicable, covering a large array of reactions with similar characteristics. We will start with the simplest model of acids and bases—the Arrhenius model.[3] This is the most common introduction to acid–base chemistry; perhaps you have already been taught this model. Although the Arrhenius model is of limited usefulness, we will examine its simple structure as the foundation for more sophisticated and useful models. Our model-by-model consideration should help you appreciate how acid–base chemistry has become increasingly general and powerful over time. As we progress, keep this simple rule in mind: All acid–base reactions begin and end with polarized molecules. As we go through the various models for acid–base reactions, see if you can identify the polar groups and how they interact with each other.
Arrhenius Acids and Bases
In the Arrhenius model, an acid is defined as a compound that dissociates when dissolved in water to produce a proton ($\mathrm{H}^{+}$) and a negatively-charged ion (an anion). In fact, naked protons ($\mathrm{H}^{+}$) do not roam around in solution. They always associate with at least one, and more likely multiple, water molecules.[4] Generally, chemists use a shorthand for this situation, either referring to the $\mathrm{H}^{+}$ in aqueous solution as a hydronium ion (denoted as $\mathrm{H}_{3}\mathrm{O}^{+}$) or even more simply as $\mathrm{H}^{+}$; but do not forget, this is a shorthand. An example of an Arrhenius acid reaction is: $\mathrm{HCl}(g)+\mathrm{H}_{2} \mathrm{O} \rightleftarrows \mathrm{H}_{3} \mathrm{O}^{+}(aq)+\mathrm{Cl}^{-}(aq)$
or, more simply (and truer to the original theory): $\mathrm{HCl}(g) \rightleftarrows \mathrm{H}^{+}(aq)+\mathrm{Cl}^{-}(aq) \text { or } \mathrm{HCl}(aq)$
But this is really quite a weird way to present the actual situation, because the $\mathrm{HCl}$ molecule does not interact with a single water molecule, but rather interacts with water as a solvent. When hydrogen chloride ($\mathrm{HCl}$) gas is dissolved in water, it dissociates into $\mathrm{H}^{+}(aq)$ and $\mathrm{Cl}^{-}(aq)$ almost completely. For all intents and purposes, there are no $\mathrm{HCl}$ molecules in the solution. An aqueous solution of $\mathrm{HCl}$ is known as hydrochloric acid, which distinguishes it from the gas, hydrogen chloride. This complete dissociation is a characteristic of strong acids, but not all acids are strong!
An Arrhenius base is defined as a compound that generates hydroxide (${}^{-}\mathrm{OH}$) ions when dissolved in water. The most common examples of Arrhenius bases are the Group I (alkali metal) hydroxides, such as sodium hydroxide: $\mathrm{NaOH}(s)+\mathrm{H}_{2} \mathrm{O} \rightleftarrows \mathrm{Na}^{+}(aq)+{ }^{-} \mathrm{OH}(aq) \text { or } \mathrm{NaOH}(aq)$
Again, this is a reaction system that involves both $\mathrm{NaOH}$ and liquid water. The process of forming a solution of sodium hydroxide is just like the one involved in the interaction between sodium chloride ($\mathrm{NaCl}$) and water: the ions ($\mathrm{Na}^{+}$ and ${}^{-}\mathrm{OH}$) separate and are solvated (surrounded) by the water molecules.
As we will see shortly, some acids (and bases) do not ionize completely; some of the acid molecules remain intact when they dissolve in water. When this occurs we use double arrows $\rightleftarrows$ to indicate that the reaction is reversible, and both reactants and products are present in the same reaction mixture. We will have much more to say about the extent and direction of a reaction in the next chapter. For now, it is enough to understand that acid–base reactions (in fact, all reactions) are reversible at the molecular level. In the case of simple Arrhenius acids and bases, however, we can assume that the reaction proceeds almost exclusively to the right.
An Arrhenius acid–base reaction occurs when a dissolved (aqueous) acid and a dissolved (aqueous) base are mixed together. The product of such a reaction is usually said to be a salt plus water and the reaction is often called a neutralization reaction: the acid neutralizes the base, and vice versa. The equation can be written like this: $\mathrm{HCl}(aq)+\mathrm{NaOH}(aq) \rightleftarrows \mathrm{H}_{2} \mathrm{O}(l)+\mathrm{NaCl}(aq)$
When the reaction is written in this molecular form it is quite difficult to see what is actually happening. If we rewrite the equation to show all of the species involved, and assume that the numbers of $\mathrm{HCl}$ and $\mathrm{NaOH}$ molecules are equal, we get: $\mathrm{H}^{+}(aq)+\mathrm{Cl}^{-}(aq)+\mathrm{Na}^{+}(aq)+{ }^{-} \mathrm{OH}(aq) \rightleftarrows \mathrm{H}_{2} \mathrm{O}(l)+\mathrm{Na}^{+}(aq)+\mathrm{Cl}^{-}(aq)$
$\mathrm{Na}^{+}(aq)$ and $\mathrm{Cl}^{-}(aq)$ appear on both sides of the equation; they are unchanged and do not react (they are often called spectator ions because they do not participate in the reaction). The only actual reaction that occurs is the formation of water: $\mathrm{H}^{+}(aq)+{ }^{-} \mathrm{OH}(aq) \rightleftarrows \mathrm{H}_{2} \mathrm{O}(l)$
The formation of water (not the formation of a salt) is the signature of an Arrhenius acid–base reaction. A number of common strong acids, including hydrochloric acid ($\mathrm{HCl}$), sulfuric acid ($\mathrm{H}_{2}\mathrm{SO}_{4}$), and nitric acid ($\mathrm{HNO}_{3}$), react with a strong base such as $\mathrm{NaOH}$ or $\mathrm{KOH}$ (which, like strong acids, dissociate completely in water) to produce water.
Such acid–base reactions are always exothermic, and we can measure the temperature change and calculate the corresponding enthalpy change ($\Delta \mathrm{H}$) for the reaction. Regardless of which strong acid or strong base you choose, the enthalpy change is always the same (about $58 \mathrm{~kJ}$ released per mole of $\mathrm{H}_{2}\mathrm{O}$ produced). This is because the only consistent net reaction that takes place in a solution of a strong acid and a strong base is: $\mathrm{H}^{+}(aq)+{ }^{-} \mathrm{OH}(aq) \rightleftarrows \mathrm{H}_{2} \mathrm{O}(l)$
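This value lets us estimate temperature changes. As a hypothetical example, suppose we mix $50 \mathrm{~mL}$ of $1.0 \mathrm{~M}$ $\mathrm{HCl}$ with $50 \mathrm{~mL}$ of $1.0 \mathrm{~M}$ $\mathrm{NaOH}$. This produces $0.050 \mathrm{~mol}$ of water and releases about $0.050 \mathrm{~mol} \times 58 \mathrm{~kJ/mol} \approx 2.9 \mathrm{~kJ}$. If we assume the $\sim 100 \mathrm{~g}$ of solution has the specific heat of water ($4.18 \mathrm{~J/(g \cdot {}^{\circ}\mathrm{C})}$), the expected temperature rise is $\Delta \mathrm{T} = \frac{2900 \mathrm{~J}}{(100 \mathrm{~g})(4.18 \mathrm{~J/(g \cdot {}^{\circ}\mathrm{C})})} \approx 7 {}^{\circ}\mathrm{C}$, which is easily measurable with a simple thermometer.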
One other factor to note is that the overall reaction involves a new bond being formed between the proton ($\mathrm{H}^{+}$) and the oxygen of the hydroxide (${}^{-}\mathrm{OH}$). It makes sense that something with a positive charge would be attracted to (and bond with) a negatively-charged species (although you should recall why $\mathrm{Na}^{+}$ and $\mathrm{Cl}^{-}$ do not combine to form solid sodium chloride in aqueous solution). Whether or not bonds form depends on the exact nature of the system, and on the enthalpy and entropy changes that are associated with the change. We will return to this idea later in chapter $8$.
Questions
Questions to Answer
• What would be the reaction if equal amounts of equimolar $\mathrm{HNO}_{3}$ and $\mathrm{KOH}$ were mixed?
• How about equal amounts of equimolar $\mathrm{H}_{2}\mathrm{SO}_{4}$ and $\mathrm{KOH}$? What would the products be?
• How about equal amounts of equimolar $\mathrm{H}_{3}\mathrm{PO}_{4}$ and $\mathrm{KOH}$?
• How many moles of $\mathrm{NaOH}$ would be needed to react fully with one mole of $\mathrm{H}_{3}\mathrm{PO}_{4}$?
• Draw a molecular-level picture of an Arrhenius acid–base reaction.
Brønsted–Lowry[5] Acids and Bases
The Arrhenius acid–base model is fairly easy to understand but its application is limited to certain kinds of reactions. Rather than continue down this road, chemists found that they needed to expand their model of acids and bases and how they react. The first of these expansions was the Brønsted–Lowry model. In the Brønsted–Lowry model, an acid is characterized as a proton ($\mathrm{H}^{+}$) donor and a base as a proton acceptor. If we revisit the reactions we looked at earlier in the context of the Brønsted–Lowry acid-base model, we see that $\mathrm{HCl}$ is the proton donor; it gives away $\mathrm{H}^{+}$ and water is the proton acceptor. In this scheme, $\mathrm{HCl}$ is the acid and water is the base: $\mathrm{HCl}(g)+ \mathrm{H}_{2} \mathrm{O}(l) \rightleftarrows \mathrm{H}_{3} \mathrm{O}^{+}(aq)+\mathrm{Cl}^{-}(aq)$
$\mathrm{HCl} =$ acid $\mathrm{H}_{2} \mathrm{O} =$ base $\mathrm{H}_{3} \mathrm{O}^{+} =$ conjugate acid $\mathrm{Cl}^{-} =$ conjugate base
The resulting species are called the conjugate acid (so $\mathrm{H}_{3} \mathrm{O}^{+}$ is the conjugate acid of $\mathrm{H}_{2} \mathrm{O}$) and the conjugate base ($\mathrm{Cl}^{-}$ is the conjugate base of $\mathrm{HCl}$). This is because $\mathrm{H}_{3} \mathrm{O}^{+}$ can and generally does donate its $\mathrm{H}^{+}$ to another molecule (most often another water molecule) and $\mathrm{Cl}^{-}$ can accept an $\mathrm{H}^{+}$.
A major (and important) difference between the Brønsted–Lowry and Arrhenius acid–base models is that a Brønsted–Lowry acid must always have an accompanying base to react with—the two are inseparable. A proton donor must have something to donate the protons to (a base)—in this case, water. Remember that bond breaking requires energy, whereas bond formation releases energy. Some energy input is always required for a reaction in which the only thing that happens is the breaking of a bond (for example, the $\mathrm{Cl—H}$ bond in $\mathrm{HCl}$). Acid–base reactions are typically exothermic; they release energy to the surroundings, and the released energy is associated with the interaction between the $\mathrm{H}^{+}$ and the base. In other words, the proton does not drop off the acid and then bond with the base. Instead, the acid–$\mathrm{H}$ bond starts to break as the base–$\mathrm{H}$ bond starts to form. One way that we can visualize this process is to draw out the Lewis structures of the molecules involved and see how the proton is transferred.
As shown in the figure, we use a dotted line to show the growing attraction between the partial positive charge on the $\mathrm{H}$ of the $\mathrm{H—Cl}$ molecule and the partial negative charge on the oxygen. This interaction results in the destabilization of the $\mathrm{H—Cl}$ bond. Because the $\mathrm{Cl}$ is more electronegative than the $\mathrm{H}$, the electrons of the original $\mathrm{H—Cl}$ bond remain with the $\mathrm{Cl}$ (which becomes $\mathrm{Cl}^{-}$) and the $\mathrm{H}^{+}$ forms a new bond with a water molecule. Essentially, a Brønsted–Lowry acid–base reaction involves the transfer of a proton from an acid to a base, leaving behind the original bonding electrons.
Another example of an acid–base reaction is the reaction of ammonia with water: $\mathrm{NH}_{3}(aq)+\mathrm{H}_{2} \mathrm{O}(l) \rightleftarrows \mathrm{NH}_{4}^{+}(aq)+{}^{-} \mathrm{OH}(aq)$
$\mathrm{NH}_{3} =$ base $\mathrm{H}_{2} \mathrm{O} =$ acid $\mathrm{NH}_{4}^{+} =$ conjugate acid ${}^{-} \mathrm{OH} =$ conjugate base
In this case, oxygen is more electronegative than nitrogen. The proton is transferred from the oxygen to the nitrogen. Again, the dotted line in the figure represents the developing bond between the hydrogen and the nitrogen. As the $\mathrm{H—O}$ bond breaks, a new $\mathrm{H—N}$ bond forms, making the resulting $\mathrm{NH}_{4}^{+}$ ion positively-charged. The electrons associated with the original $\mathrm{H—O}$ bond are retained by the $\mathrm{O}$, making it negatively-charged. So, water is the acid and ammonia is the base! An important difference between this and the preceding $\mathrm{HCl–H}_{2}\mathrm{O}$ reaction is that $\mathrm{H}_{2}\mathrm{O}$ is a much weaker acid than is $\mathrm{HCl}$. In aqueous solution, not all of the $\mathrm{NH}_{3}$ reacts with $\mathrm{H}_{2}\mathrm{O}$ to form $\mathrm{NH}_{4}^{+}$. Moreover, the reaction between $\mathrm{NH}_{3}$ and water is reversible, as indicated by the $\rightleftarrows$ symbol. The next chapter will consider the extent to which a reaction proceeds to completion. You may be wondering why the water does not act as a base in the reaction with $\mathrm{NH}_{3}$, as it does with $\mathrm{HCl}$. If you draw out the products resulting from a proton transfer from nitrogen to oxygen, you will see that this process results in a mixture of products in which the more electronegative atom ($\mathrm{O}$) now has a positive charge, and the less electronegative atom ($\mathrm{N}$) has a negative charge. It does not make sense that the more electronegative atom would end up with a positive charge, and indeed this process does not happen (to any measurable extent).
We will soon return to a discussion of what makes a compound acidic and/or basic. At the moment, we have two acid–base reactions: one in which water is the acid and the other in which water is the base. How can this be? How can one molecule of water be both an acid and a base, apparently at the same time? It is possible because of the water molecule’s unique structure. In fact, water reacts with itself, with one molecule acting as an acid and one as a base: $\mathrm{H}_{2} \mathrm{O}(l)+\mathrm{H}_{2} \mathrm{O}(l) \rightleftarrows \mathrm{H}_{3} \mathrm{O}^{+}(aq)+{ }^{-} \mathrm{OH}(aq)$
$\mathrm{H}_{2} \mathrm{O} =$ acid $\mathrm{H}_{2} \mathrm{O} =$ base $\mathrm{H}_{3} \mathrm{O}^{+} =$ conjugate acid ${}^{-} \mathrm{OH} =$ conjugate base
As shown in the figure, we can again visualize this process by drawing out the Lewis structures of the water molecules to see how the proton is able to move from one water molecule to another, so that it is never “alone” and always interacting with the lone pairs on the oxygens.
Questions
Questions to Ponder
• Between the Arrhenius model and the Brønsted–Lowry model of acids and base, which is more useful? Why?
Questions to Answer
• Which do you think is more likely to happen? The reaction $\mathrm{H}_{2} \mathrm{O} + \mathrm{H}_{2} \mathrm{O} \rightarrow \mathrm{H}_{3} \mathrm{O}^{+} + { }^{-} \mathrm{OH}$? Or the reverse process $\mathrm{H}_{3} \mathrm{O}^{+} + { }^{-} \mathrm{OH} \rightarrow \mathrm{H}_{2} \mathrm{O} + \mathrm{H}_{2} \mathrm{O}$? Could they both happen at once?
• What do you think the relative amounts of $\mathrm{H}_{2} \mathrm{O}$, $\mathrm{H}_{3} \mathrm{O}^{+}$, and ${ }^{-} \mathrm{OH}$ might be in a pure sample of liquid water? How would you measure the relative amounts?
• Now that you know $\mathrm{HCl}$ is an acid and ammonia is a base, can you predict the reaction that occurs between them?
• Is water a necessary component of a Brønsted–Lowry acid–base reaction? How about for an Arrhenius acid–base reaction?
How to Spot an Acid
Moving on from water, can we predict whether a compound will be an acid, a base, or neither? We have learned that we can predict many properties of materials by considering their molecular structure. When acids are written in their simplified form (for example $\mathrm{HNO}_{3}$ or $\mathrm{H}_{2}\mathrm{SO}_{4}$) it can be very difficult to see any similarities, but if we draw out the Lewis structures some commonalities emerge. Let us take a look at the Lewis structures for several strong acids, such as hydrochloric acid $\mathrm{HCl}(aq)$, nitric acid $\mathrm{HNO}_{3}(aq)$, and sulfuric acid $\mathrm{H}_{2}\mathrm{SO}_{4}(aq)$.[6] What structural feature do these substances have in common? Well, from their formulae it is clear that they all contain hydrogen, but there are many compounds that contain hydrogen that are not acidic. For example, methane ($\mathrm{CH}_{4}$) and other hydrocarbons are not acidic; they do not donate protons to other molecules.
One common feature of acids is that the proton that gets donated (or picked off) is bonded to a highly electronegative atom. This atom is often either an oxygen or a halogen such as chlorine ($\mathrm{Cl}$), bromine ($\mathrm{Br}$), or iodine ($\mathrm{I}$). Once you know what to look for, it is quite easy to spot the potentially acidic sites in a molecule. For example, in the previous figure, you could circle the “vulnerable” hydrogens. The ability to spot donatable hydrogens is a useful skill that allows you to predict properties of more complex molecules. But why is a hydrogen that is covalently bonded to an electronegative element potentially acidic and donatable?
First, let us consider the $\mathrm{O—H}$ bond. Based on our discussion of water molecules, we can predict that it is polarized, with a partial positive charge on the $\mathrm{H}$ and a partial negative charge on the $\mathrm{O}$. In water, the $\mathrm{H}$ is (on average) also part of a hydrogen-bonding interaction with the oxygen of another water molecule. It turns out that it does not take much energy to break the original $\mathrm{O—H}$ bond. Remember that $\mathrm{H}^{+}$ does not just “drop off” the acid, but at the same time forms a bond with the base molecule. In fact, strong acid–base reactions are typically exothermic, meaning that the new bond formed between the proton ($\mathrm{H}^{+}$) and the base is stronger than the bond that was broken to release the $\mathrm{H}^{+}$. The released energy raises the temperature of the surroundings. In an aqueous solution of a strong acid, hydrogen ions are moving rapidly and randomly from one oxygen to another. The energy for all this bond-breaking comes from the thermal motion of water molecules.
We must also consider what happens to the oxygen that gets left behind. When the acidic hydrogen is transferred, it leaves behind the electrons that were in the bond, giving that atom more electrons than it started with. The species left behind must be stable even with those extra electrons (the negative charge). In the example below, chloride ion $\mathrm{Cl}^{-}(aq)$ is left behind when the proton gets transferred away. We know chloride is stable and common. It is not surprising that it is one of the products of the reaction. $\mathrm{HCl}(g)+\mathrm{H}_{2} \mathrm{O}(l) \rightleftarrows \mathrm{H}_{3} \mathrm{O}^{+}(aq) +\mathrm{Cl}^{-}(aq)$
$\mathrm{HCl} =$ acid $\mathrm{H}_{2} \mathrm{O} =$ base $\mathrm{H}_{3} \mathrm{O}^{+} =$ conjugate acid $\mathrm{Cl}^{-} =$ conjugate base
If you recall, electronegativity is a measure of the ability to attract (and retain) electrons.[7] Therefore, it makes sense that a negatively-charged, electronegative atom (like chlorine or oxygen) will be more stable than a negatively-charged, less electronegative atom (like carbon).
Questions
Questions to Answer
• What other atoms besides chlorine or oxygen are electronegative enough to stabilize those extra electrons?
• Draw the reactions of each of the strong acids with water: hydrochloric acid ($\mathrm{HCl}(aq)$), nitric acid ($\mathrm{HNO}_{3}(aq)$), sulfuric acid ($\mathrm{H}_{2}\mathrm{SO}_{4}(aq)$), hydrogen bromide ($\mathrm{HBr}(aq)$), and hydrogen iodide ($\mathrm{HI}(aq)$). What are the commonalities? What are the differences?
• Draw the structures of methanol ($\mathrm{CH}_{3}\mathrm{OH}$), acetic acid ($\mathrm{CH}_{3}\mathrm{COOH}$), and methane ($\mathrm{CH}_{4}$) and write a potential reaction with water. Label the conjugate acid–base pairs.
• Which reactions do you think are likely to occur? Why?
Questions for Later
• What other methods (besides having a strongly electronegative atom) might be available to stabilize the electrons (recall that one model of bonding allows for molecular orbitals that extend over more than two atoms)? We will return to this idea later.
Strong, Weak, Concentrated, and Dilute Acids and Bases
It can be very confusing when words have a different meaning in the scientific context than they do in everyday life. The words we use to describe solutions of acids and bases fall into this category of easily mixed-up definitions. We use the term strong to refer to acids that ionize completely in water, and weak for those acids that are only partially ionized (see Chapter $8$ for more information on why). Strong and weak describe an intrinsic property of the acid or base. The terms dilute and concentrated are used to describe the concentration of the acid in water. We could have a dilute solution (say $0.1 \mathrm{~M}$) of the strong acid hydrochloric acid, or a concentrated solution (say $10 \mathrm{~M}$) of the weak acid acetic acid. By contrast, when we refer to strong versus weak drinks in the everyday sense, we are referring to the concentration of the solution. For example, if you say, “This tea is very weak” or “I like my coffee strong”, what you are really saying is that you like a lot of tea or coffee dissolved in the solution you are drinking. It is important to remember this difference and understand that the scientific context can change the meaning of familiar words.
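To anticipate the quantitative treatment in Chapter $8$: in a $0.1 \mathrm{~M}$ solution of the strong acid $\mathrm{HCl}$, essentially every molecule is ionized, so $[\mathrm{H}_{3}\mathrm{O}^{+}] \approx 0.1 \mathrm{~M}$. In a $0.1 \mathrm{~M}$ solution of the weak acid acetic acid, by contrast, only a small fraction ionizes; using the commonly tabulated ionization constant ($K_{a} \approx 1.8 \times 10^{-5}$), $[\mathrm{H}_{3}\mathrm{O}^{+}] \approx \sqrt{K_{a}(0.1)} \approx 1.3 \times 10^{-3} \mathrm{~M}$, meaning only about $1\%$ of the acetic acid molecules are ionized even though the two solutions contain the same amount of dissolved acid.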
Questions
Questions to Answer
• Draw out molecular-level pictures of a dilute solution of a strong acid and a weak acid.
• Draw out molecular-level pictures of a concentrated solution of a strong acid and a weak acid.
• What are the similarities and differences between all the representations you have drawn?
• Consider what you have learned about the energy changes associated with the reaction of a strong acid with water. From a safety point of view, which of the following actions makes more sense when diluting a concentrated solution of a strong acid with water? Why?
• A. Add water slowly (dropwise) to the concentrated strong acid or
• B. Add the concentrated strong acid dropwise to water
Factors That Affect Acid Strength
In Chapter $8$, we will discuss the quantification of acid and base strength. First, let us take a look at the factors that might affect the strength of an acid. As we have already seen, the ability of the conjugate base to hold on to (stabilize) the electron pair is crucial. There are several ways to accomplish this. The simplest is that the acidic $\mathrm{H}$ is attached to an electronegative atom such as $\mathrm{O}$, $\mathrm{N}$, or a halogen. There is a wide range of acidities for oxyacids. The differences in acidity are determined by the number of places available for the extra electron density to be stabilized. The figure illustrates a fairly simple example of this in the difference between ethanol ($\mathrm{CH}_{3}\mathrm{CH}_{2}\mathrm{OH}$) and acetic acid ($\mathrm{CH}_{3}\mathrm{COOH}$). Acetic acid is about 10 billion times more acidic than ethanol, because the conjugate base (acetate) is able to stabilize the negative charge on two oxygens instead of just one. This “spreading out” of the charge diminishes the electron–electron repulsions, and stabilizes the structure more than if the negative charge were localized on just one oxygen. If you draw out the Lewis structures of the common strong inorganic oxyacids (e.g., $\mathrm{HNO}_{3}$ or $\mathrm{H}_{2}\mathrm{SO}_{4}$), you will see that it is possible to delocalize the negative charge of the corresponding anion over more than one oxygen.
How to Spot a Base
There is an equally simple method for figuring out which compounds are potential bases. Let us take a look at some common bases. The first bases that most people encounter are the metal hydroxides such as $\mathrm{NaOH}$, $\mathrm{KOH}$, and $\mathrm{Mg}(\mathrm{OH})_{2}$. The metal ions are generated when these compounds dissolve in water, but they typically do not play any role in acid–base reactions.[8] The base in these compounds is the hydroxide (${}^{-}\mathrm{OH}$). Another common class of bases is molecules that contain nitrogen, like $\mathrm{NH}_{3}$. There are many kinds of nitrogenous bases, some of which play a critical role in biological systems. For example, the bases in nucleic acids ($\mathrm{DNA}$ and $\mathrm{RNA}$) are basic because they contain nitrogen. Let us not forget that water is also basic and can accept a proton.
So what is the common structural feature in bases? Well, if an acid is the species with a proton to donate, then the base must be able to accept a proton. This means that the base must have somewhere for the proton to attach—it must contain a non-bonded (lone) pair of electrons for the proton to interact and bond with. If we look at our examples so far, we find that all the bases have the necessary non-bonded pair of electrons. Most common bases have either an oxygen or a nitrogen (with lone pairs of electrons) acting as the basic center. Once you learn how to spot the basic center, you can predict the outcome of a vast range of reactions rather than just memorizing them. It is often the case that if you can identify the acidic and basic sites in the starting materials, you can predict the product and ignore the rest of the molecule.
In general, nitrogen is a better proton acceptor than oxygen, because it is more basic. Ammonia ($\mathrm{NH}_{3}$) is more basic than water ($\mathrm{H}_{2}\mathrm{O}$), and organic compounds with nitrogen in them are typically more basic than the corresponding compounds containing structurally-analogous oxygens ($\rightarrow$). If we compare the trend in basicity for a range of simple compounds across the periodic table, we see that basicity decreases in the order $\mathrm{NH}_{3} > \mathrm{~H}_{2}\mathrm{O} > \mathrm{~HF}$. This effect parallels the increase in electronegativity across the row. The ability of an electron pair to bond with and accept a proton depends on how tightly that electron pair is held by the donor atom. In fluorine, the most electronegative atom, the electrons are held so tightly and so close to the atom’s nucleus that they are not available to bond with a proton. Oxygen holds onto its electron pairs a little less tightly, and so is more likely than fluorine to donate a lone pair to a proton. Nitrogen, however, is even less electronegative and therefore has a more available lone pair, making most nitrogen compounds basic.[9]
Questions
Questions to Answer
• Why did we not include $\mathrm{CH}_{4}$ or neon in this analysis?
• Do you think compounds with ammonium ($\mathrm{NH}_{4} {}^{+}$) are basic? Why or why not?
• Can you draw the structure of a basic compound that has not yet been mentioned in the text?
• Draw out the reactions of $\mathrm{CH}_{3}\mathrm{NH}_{2}$ and $\mathrm{CH}_{3}\mathrm{OH}$ with water. Label the conjugate acid and base pairs. Which reaction is most likely to occur? Why?
• How would you design an experiment to figure out whether a compound is an acid or a base (or both)? What experimental evidence would you accept to determine if you had an acid or a base or both?
Although chemists use the Brønsted–Lowry model for any reaction in which a proton is transferred from one atom to another, there is an even broader model. The Lewis model incorporates reactions where there is no proton transfer. Instead of seeing the reaction as a proton transfer, we can look at it from the vantage point of the electron pair that eventually becomes part of the new bond. That is, we can consider an acid–base reaction as the donation of an electron pair (from a base) to form a bond between the donor atom and the proton (or the acid). So, instead of saying water transfers a proton to ammonia, the Lewis model would view the process as ammonia donating a lone electron pair to form a new bond with a proton from a water molecule. This process results in the transfer of a hydrogen from the water to the ammonia molecule (a bond formation event, as shown in the figure). The electrons that were originally bonded to the hydrogen do not disappear. Rather, they are left behind on the oxygen, leading to the generation of a hydroxide (${}^{-}\mathrm{OH}$) ion. The Lewis acid–base model allows us to consider reactions in which there is no transferred hydrogen, but where there is a lone pair of electrons that can form a new bond.
This figure shows an example of the Lewis acid–base model in the reaction between boron trifluoride ($\mathrm{BF}_{3}$) and ammonia ($\mathrm{NH}_{3}$). In this case, the base is the electron pair donor and the acid is the electron pair acceptor. The lone electron pair from $\mathrm{NH}_{3}$ is donated to boron, which has an empty bonding orbital that accepts the pair of electrons, forming a bond between the $\mathrm{N}$ and the $\mathrm{B}$. Even though we use the term “donate”, the electron pair does not leave the $\mathrm{NH}_{3}$ molecule; it changes from a non-bonding pair to a bonding pair of electrons. $\mathrm{BF}_{3}$ is a Lewis acid, but note that it has no $\mathrm{H}$ to donate. It represents a new class of acids: Lewis acids. These include substances such as $\mathrm{BF}_{3}$ or $\mathrm{AlCl}_{3}$, compounds of periodic table Group III atoms, which have only six electrons in their bonding orbitals. This electron deficiency leaves empty, energetically-accessible orbitals open to accept an electron pair from the Lewis base, the electron pair donor. Other examples of Lewis acids are metal ions, like $\mathrm{Fe}^{2+}$, $\mathrm{Fe}^{3+}$, $\mathrm{Mg}^{2+}$, and $\mathrm{Zn}^{2+}$. All of these elements play a critical role in biological systems via their behavior as Lewis acids. An important example is the heme group of hemoglobin. In the center of this group is a positively-charged iron ($\mathrm{Fe}$) atom. Such positively-charged ions (cations) have empty orbitals that can interact with the lone pair electrons from Lewis bases and form Lewis acid–base complexes. In the case of hemoglobin, the Lewis bases ($\mathrm{O}_{2}$, $\mathrm{CO}_{2}$, and $\mathrm{CO}$) interact with Fe to move oxygen into the body from the lungs and move $\mathrm{CO}_{2}$ from the body to the lungs. It takes a little practice to gain confidence in recognizing Lewis acid–base reactions, but this skill can help us understand many biological and chemical systems.
If we look back over the theories about acids and bases, we see that they become increasingly complex as each subsequent theory subsumes the previous one and extends the range of reactions that can be explained. Neither the Arrhenius nor the Brønsted–Lowry theory explains why the iron in heme complexes binds oxygen to form the oxygen transport system in our bodies. The Lewis acid–base model, on the other hand, can help explain this as well as the simple reaction between $\mathrm{HCl}$ and $\mathrm{NaOH}$ (where ${}^{-}\mathrm{OH}$ is the Lewis base and $\mathrm{H}^{+}$ is the Lewis acid).
Questions
Questions to Answer
• For the reaction: $\mathrm{HCl}(g)+\mathrm{H}_{2} \mathrm{O}(l) \rightarrow \mathrm{H}_{3} \mathrm{O}^{+}(aq)+\mathrm{Cl}^{-}(aq)$, write out (in words and molecular level pictures) what is going on during the reaction in terms of:
• Arrhenius acid–base theory
• Brønsted–Lowry acid–base theory
• Lewis acid–base theory
• Now do the same activity for the reaction of $\mathrm{NH}_{3}$ and $\mathrm{HCl}$.
• Now do the same activity for the reaction of $\mathrm{R}_{2}\mathrm{NH}$ and $\mathrm{AlCl}_{3}$.
• Why do you think we use different models of acid–base reactions?
• Can you describe what would dictate the use of a particular model?
The Lewis acid–base model is more inclusive than the Brønsted–Lowry model, but we often use the Brønsted–Lowry model because it is easier to follow the proton transfer from one molecule (the acid) to another (the base). In aqueous solutions, the Brønsted–Lowry theory also allows us to use the concept of $\mathrm{pH}$ to quantify acidity (as we will see shortly). Both the Lewis and Brønsted–Lowry models capture the overarching principle that most chemical reactions are initiated by an electrostatic interaction between a positively-charged portion of a molecule and a negatively-charged portion of the same, or another, molecule.[10] As we will see in the next chapter, molecules must collide with one another in order for reactions to occur between them—they do not react at a distance. When the reacting particles collide, there has to be some continuous pathway through which bonds rearrange and produce products. The first step in this pathway often involves Coulombic (electrostatic) interactions between specific regions of the molecules involved. Of course, whether or not such Coulombic interactions are stable depends upon the kinetic energies of the colliding molecules and exactly how they collide with one another. Catalysts often speed reactions by controlling how molecules collide with or interact with one another. This figure ($\rightarrow$) shows the reaction of $\mathrm{H}_{2}\mathrm{O}$ and $\mathrm{NH}_{3}$, in which the positive end of one molecule interacts with the negative end of the other. If we consider this as a Lewis acid–base reaction, the same principle holds true. It turns out that we can profitably consider a wide range of reactions using the principle of Coulombic attraction. For example, ammonia (and other nitrogen compounds) can react with carbon-containing molecules if the appropriate conditions are met.
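The strength of these Coulombic interactions is captured by Coulomb’s law: the potential energy of two charges $q_{1}$ and $q_{2}$ separated by a distance $r$ is $V(r) = \frac{1}{4\pi \varepsilon_{0}} \frac{q_{1}q_{2}}{r}$. When the charges have opposite signs, for example a partial positive on one molecule and a partial negative on another, $V(r)$ is negative, so bringing the charges closer together lowers the potential energy and stabilizes the encounter; this is precisely the situation at the start of an acid–base reaction.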
In the figure ($\rightarrow$) the nitrogen is behaving as a Lewis base, donating its lone pair of electrons to the carbon. However, it is a little more difficult to see the analogy with a Lewis acid at the carbon site. What we can see is that there is an electronegative, polarizing group (in this case a bromine atom) bonded to the carbon. The presence of a bromine atom polarizes the $\mathrm{C—Br}$ bond, giving the carbon a slight positive charge. This makes the carbon susceptible to attack by the lone pair of the nitrogen. Since carbon does not have an empty orbital to accept the lone pair into, and carbon can never form more than four bonds, something has to give. What gives is the $\mathrm{C—Br}$ bond, which breaks, and the bromine carries away the electrons from the bond with it, producing a bromide ion, $\mathrm{Br}^{-}$.
This type of reaction, while essentially a Lewis acid–base reaction, is usually described using yet another set of terms, probably because these reactions usually belong in the realm of organic chemistry, which was once considered a distinct chemical discipline. For organic chemists, the species with the lone pair (in this case the $\mathrm{NH}_{3}$) is called the nucleophile (literally, “nucleus-loving”) and is attracted to a positive center of charge. The species that accepts the lone pair of electrons, in this case the $\mathrm{CH}_{3}\mathrm{Br}$ molecule, is called the electrophile (literally, “electron-loving”). The species that is released from its bond with the carbon is called the leaving group. Leaving groups must be relatively electronegative (as in the case of $\mathrm{Br}$) or stable when associated with an extra pair of electrons. So, good leaving groups are weak bases. Conjugate bases of strong acids are excellent leaving groups because they are stable.
If we analyze the reaction in the figure further, we see the nitrogen nucleophile approaching the carbon electrophile: as the bond forms between the $\mathrm{C}$ and $\mathrm{N}$, the bond breaks between the $\mathrm{C}$ and the $\mathrm{Br}$. The bond-breaking and bond-making occur simultaneously. Given what we know about water and aqueous solutions, we might even be so brave as to predict that the product (${}^{+}\mathrm{NH}_{3}\mathrm{CH}_{3} \mathrm{~Br}^{-}$) will rapidly lose a proton in aqueous solution to produce $\mathrm{CH}_{3}—\mathrm{NH}_{2}$ and $\mathrm{H}_{3}\mathrm{O}^{+}$. This kind of reaction is often referred to as a methylation (a $-\mathrm{CH}_{3}$ group is a methyl group). The product is an $\mathrm{N}$-methylated derivative of ammonia.
As we have already seen, nitrogen compounds are common in biological systems. We now see how these compounds can also act as nucleophiles, and how methylation of nitrogen is a fairly common occurrence with a range of effects. For example, methylation and demethylation of the nitrogenous bases adenine and cytosine in $\mathrm{DNA}$ are used to influence gene expression and to distinguish newly synthesized $\mathrm{DNA}$ strands from older, preexisting $\mathrm{DNA}$ strands. At the same time, various methylated sequences (such as CpG) are much less stable than the unmethylated form, and so are more likely to mutate.[11] Methylation reactions are quite common in other biological reactions as well. For example, epinephrine (also known as adrenaline, the fight-or-flight hormone) is synthesized in the body by methylation of the related molecule norepinephrine.
Considering Acid–Base Reactions: $\mathrm{pH}$
It is almost certain that you have heard the term $\mathrm{pH}$; it is another of those scientific terms that have made it into everyday life, yet its scientific meaning is not entirely obvious. For example: why does an increase in $\mathrm{pH}$ correspond to a decrease in “acidity”, and why does $\mathrm{pH}$ change with temperature?[12] How do we make sense of $\mathrm{pH}$ and use that information to better understand chemical systems?
The key idea underlying $\mathrm{pH}$ is that water undergoes an acid–base reaction with itself. Recall that this reaction involves the transfer of a proton from one water molecule to another. The proton is never free or “alone”; it is always bonded to an oxygen within another water molecule. Another important point about $\mathrm{pH}$ is that the reaction is readily reversible. Under normal conditions (room temperature), the reaction proceeds in both directions. If we look at the reaction, it makes intuitive sense that the species on the right ($\mathrm{H}_{3}\mathrm{O}^{+}$ and ${}^{-}\mathrm{OH}$) can react together to give two $\mathrm{H}_{2}\mathrm{O}$ molecules simply because of the interaction of the positive and negative charges, and we have already seen that the forward reaction does occur. This is one of the first examples we have seen of a reaction that goes both forward and backward in the same system. As we will see, all reactions are reversible at the nanoscale (we will consider the implications of this fact in detail in the next chapter). In any sample of pure water, there are three different molecular species: water molecules ($\mathrm{H}_{2}\mathrm{O}$), hydronium ions ($\mathrm{H}_{3}\mathrm{O}^{+}$), and hydroxide ions (${}^{-}\mathrm{OH}$), as shown in the figure ($\rightarrow$). These three species are constantly interacting with each other through the formation of relatively weak $\mathrm{H}$-bonding interactions, which are constantly forming and breaking. Remember, in liquid water, the water molecules are constantly in motion and colliding with one another. Some of these collisions have enough energy to break the covalent $\mathrm{H—O}$ bond in water or in the hydronium ion. The result is the transfer of $\mathrm{H}^{+}$ and the formation of a new bond with either another water molecule (to form hydronium ion) or with a hydroxide ion (to form a water molecule). To get a feeling for how dynamic this process is, it is estimated that the average lifetime of an individual hydronium ion is on the order of 1 to 2 picoseconds ($1 \mathrm{~ps} = 1 \times 10^{-12} \mathrm{~s}$), an unimaginably short period of time. In pure water, at $25 { }^{\circ}\mathrm{C}$, the average concentration of hydronium ions is $1 \times 10^{-7} \mathrm{~mol/L}$. We use square brackets to indicate concentration, so we write this as: $\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=1 \times 10^{-7} \mathrm{~M}$
Note that this is a very, very, very small fraction of the total water molecules, given that the concentration of water molecules $\left[\mathrm{H}_{2}\mathrm{O}\right]$ in pure water is $\sim 55.4 \mathrm{~M}$.
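The molarity of water itself is easy to estimate from its density. The short Python sketch below is ours, not part of the original text; it uses the standard handbook density of water at $25 { }^{\circ}\mathrm{C}$ ($\sim 0.997 \mathrm{~g/mL}$) and the molar mass of $\mathrm{H}_{2}\mathrm{O}$:

```python
# Estimate the molarity of pure water from its density and molar mass.
density_g_per_mL = 0.997        # water at 25 C (handbook value)
molar_mass = 18.015             # g/mol for H2O

grams_per_liter = density_g_per_mL * 1000.0
molarity = grams_per_liter / molar_mass   # mol/L

print(f"[H2O] in pure water at 25 C = {molarity:.1f} M")  # about 55.3 M
```

Depending on the density value used (density decreases slightly as water warms), you will see values from $\sim 55.3$ to $\sim 55.5 \mathrm{~M}$ quoted for pure water.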
In pure water, every time a hydronium ion is produced, a hydroxide ion must also be formed. Therefore, in pure water at $25 { }^{\circ}\mathrm{C}$, the following equation must be true: $\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\left[{}^{-}\mathrm{OH}\right]=1 \times 10^{-7} \mathrm{~M}$
It must also be true that the product of the hydronium and hydroxide ion concentrations, $\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[{}^{-}\mathrm{OH}\right]$, is a constant at a particular temperature. This constant is a property of water; at $25 { }^{\circ}\mathrm{C}$ it is $1 \times 10^{-14}$ and is given the symbol $\mathrm{K}_{\mathrm{w}}$. So why do we care? Because when we add an acid or a base to a solution of water at $25 { }^{\circ}\mathrm{C}$, the product $\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[{}^{-}\mathrm{OH}\right]$ remains the same: $1 \times 10^{-14}$. We can use this fact to better understand the behavior of acids, bases, and aqueous solutions.
For many people, dealing with negative exponents does not come naturally. Their implications and manipulations can be difficult. Believe it or not, the $\mathrm{pH}$ scale[13] was designed to make dealing with exponents easier, but it does require that you understand how to work with logarithms (perhaps an equally difficult task). $\mathrm{pH}$ is defined as: $\mathrm{pH} = -\log \left[\mathrm{H}_{3} \mathrm{O}^{+}\right]$.[14]
In pure water (at $25 { }^{\circ}\mathrm{C}$), where the $\left[\mathrm{H}_{3}\mathrm{O}^{+}\right] = 1 \times 10^{-7} \mathrm{~M}$, $\mathrm{pH} = 7$ ($\mathrm{pH}$ has no units). A solution with a higher concentration of hydronium ions than pure water is acidic, and a solution with a higher concentration of hydroxide ions is basic. This leads to the counter-intuitive fact that as acidity, $\left[\mathrm{H}_{3}\mathrm{O}^{+}\right]$, goes up, $\mathrm{pH}$ goes down. See for yourself: calculate the $\mathrm{pH}$ of a solution with a $\left[\mathrm{H}_{3}\mathrm{O}^{+}\right]$ of $1 \times 10^{-2} \mathrm{~M}$ ($\mathrm{pH} = 2$), and of $1 \times 10^{-9} \mathrm{~M}$ ($\mathrm{pH} = 9$). Moreover, because the scale is logarithmic, a one-unit change in $\mathrm{pH}$ corresponds to a change in $\left[\mathrm{H}_{3}\mathrm{O}^{+}\right]$ by a factor of $10$.
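Because the logarithm can trip people up, it is worth checking these conversions numerically. Here is a minimal Python sketch (ours, for illustration; the function names are invented) that moves between $\left[\mathrm{H}_{3}\mathrm{O}^{+}\right]$ and $\mathrm{pH}$ in both directions:

```python
import math

def pH_from_conc(h3o_molar):
    """pH = -log10([H3O+])"""
    return -math.log10(h3o_molar)

def conc_from_pH(pH):
    """[H3O+] = 10**(-pH)"""
    return 10.0 ** (-pH)

print(pH_from_conc(1e-7))   # pure water at 25 C -> 7.0
print(pH_from_conc(1e-2))   # more acidic -> pH 2.0 (lower pH)
print(pH_from_conc(1e-9))   # more basic  -> pH 9.0 (higher pH)
print(conc_from_pH(7.0))    # back again  -> 1e-07 M
```

Note how multiplying the concentration by a factor of $10$ changes the $\mathrm{pH}$ by exactly one unit.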
The $\mathrm{pH}$ scale is commonly thought of as spanning units $1–14$, but in fact many of the strongest acid solutions have $\mathrm{pH} < 1$. Representations of the $\mathrm{pH}$ scale often use colors to indicate the change in $\mathrm{pH}$. This convention is used because there are many compounds that change color depending on the $\left[\mathrm{H}_{3}\mathrm{O}^{+}\right]$ of the solution in which they are dissolved. For example, litmus[15] is red when dissolved in an acidic ($\mathrm{pH} < 7$) solution, and blue when dissolved in a basic ($\mathrm{pH} > 7$) solution. Perhaps you have noticed that when you add lemon juice (acidic) to tea, the color changes. Do not get confused: solutions of acids and bases do not intrinsically differ in terms of color. The color change depends on the nature of molecules dissolved in the solution. Think about how changes in $\mathrm{pH}$ might affect molecular structure and, by extension, the interactions between molecules and light (a topic that is more extensively treated in the spectroscopy supplement).
It is important to note that at $37 { }^{\circ}\mathrm{C}$ the value of $\mathrm{K}_{\mathrm{w}}$ is different: $\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[{}^{-}\mathrm{OH}\right] = 2.5 \times 10^{-14}$, and therefore the $\mathrm{pH}$ of pure water at this temperature is $6.8$. This does not mean that the solution is acidic, since $\left[\mathrm{H}_{3} \mathrm{O}^{+}\right] = \left[{}^{-}\mathrm{OH}\right]$. The effect is small, but it is significant; it means that a $\mathrm{pH}$ of $7$ does not always mean that a solution is neutral (it depends on the temperature). This is particularly important when the concept of $\mathrm{pH}$ is applied to physiological systems, since the body is usually not at room temperature.
Now let us consider what happens when we add a Brønsted–Lowry acid to water.
For example, if we prepare a solution of $0.10 \mathrm{~M}$ $\mathrm{HCl}$ (where we dissolve $0.10 \mathrm{~mol}$ of $\mathrm{HCl}(g)$ in enough water to make 1 liter of solution), the reaction that results (see figure) produces a solution containing far more hydronium ion ($\mathrm{H}_{3}\mathrm{O}^{+}$). Now if we measure[16] the $\mathrm{pH}$ of this $0.10-\mathrm{M}$ $\mathrm{HCl}$ solution, we find that it is $1.0$. If we convert back to concentration units from $\mathrm{pH}$ (if $\mathrm{pH} = -\log \left[\mathrm{H}_{3} \mathrm{O}^{+}\right]$, then $\left[\mathrm{H}_{3} \mathrm{O}^{+}\right] = 10^{-\mathrm{pH}}$), we find that the concentration of $\mathrm{H}_{3}\mathrm{O}^{+}$ in $0.10 \mathrm{~M}$ $\mathrm{HCl}$ is $0.10 \mathrm{~M}$. This makes sense in light of our previous discussion about how $\mathrm{HCl}$ completely dissociates into $\mathrm{Cl}^{-}$ and $\mathrm{H}^{+}$ (associated with water molecules).
| | $[\mathrm{HCl}]$ (M) | $\left[\mathrm{H}_{2}\mathrm{O}\right]$ (M) | $\left[\mathrm{H}_{3}\mathrm{O}^{+}\right]$ (M) | $\left[{}^{-}\mathrm{OH}\right]$ (M) | $\left[\mathrm{Cl}^{-}\right]$ (M) |
|---|---|---|---|---|---|
| Before reaction | $0.10$ | $\sim 55.5$ | $1.0 \times 10^{-7}$ | $1.0 \times 10^{-7}$ | $0$ |
| After reaction | $\sim 0$ | $\sim 55.4$ | $\sim 1.0 \times 10^{-1}$ | $1.0 \times 10^{-13}$ | $1.0 \times 10^{-1}$ |
This table gives the concentrations of all the species present both before and after the reaction. There are several things to notice about this table. Because the measured $\mathrm{pH} = 1$ and we added $0.1 \mathrm{~M}$ (or $10^{-1} \mathrm{~M}$) $\mathrm{HCl}$, it is reasonable to assume that all the $\mathrm{HCl}$ dissociated and that the vast majority of the $\mathrm{H}_{3}\mathrm{O}^{+}$ came from the $\mathrm{HCl}$. We can ignore the $\mathrm{H}_{3}\mathrm{O}^{+}$ present initially in the water. Why? Because it was six orders of magnitude smaller ($10^{-7} \mathrm{~M}$) than the $\mathrm{H}_{3}\mathrm{O}^{+}$ derived from the $\mathrm{HCl}$ ($10^{-1} \mathrm{~M}$). It is rare to see $\mathrm{pH}$ measurements with more than three significant figures, so the $\mathrm{H}_{3}\mathrm{O}^{+}$ originally present in the water does not have a significant effect on the measured $\mathrm{pH}$ value. Although we are not generally concerned about the amount of hydroxide, it is worth noting that $\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[{}^{-}\mathrm{OH}\right]$ remains a constant ($\mathrm{K}_{\mathrm{w}}$), and therefore when $\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]$ increases, $\left[{}^{-}\mathrm{OH}\right]$ decreases.
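The constancy of $\mathrm{K}_{\mathrm{w}}$ is what lets us fill in the hydroxide column of the table. Here is a short sketch (ours; it simply applies the $\mathrm{K}_{\mathrm{w}}$ values quoted in this section):

```python
import math

def hydroxide_from_hydronium(h3o, Kw):
    """Since [H3O+][-OH] = Kw, we have [-OH] = Kw / [H3O+]."""
    return Kw / h3o

# 0.10 M HCl at 25 C (Kw = 1e-14): [-OH] drops to 1e-13 M, as in the table.
print(hydroxide_from_hydronium(0.10, 1.0e-14))

# Neutral water at 37 C (Kw = 2.5e-14): [H3O+] = [-OH] = sqrt(Kw),
# so the neutral pH is below 7 at body temperature.
h3o_neutral = math.sqrt(2.5e-14)
print(-math.log10(h3o_neutral))   # about 6.8
```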
Although a number of substances dissolve in water, not all ionize, and not all substances that ionize alter the $\mathrm{pH}$. For example, $\mathrm{NaCl}$ ionizes completely when dissolved in water, yet the $\mathrm{pH}$ of this solution is still $7$. The $\mathrm{Na}^{+}$ and $\mathrm{Cl}^{-}$ ions do not affect the $\mathrm{pH}$ at all. However, if we make a $1 \mathrm{~M}$ solution of ammonium chloride ($\mathrm{NH}_{4}\mathrm{Cl}$), we find that its $\mathrm{pH}$ is around $5$. Although it might not be completely obvious why the $\mathrm{pH}$ of this solution is $5$ while the $\mathrm{pH}$ of a $1 \mathrm{~M}$ $\mathrm{NaCl}$ solution is $7$, once you know that it is (and given what you know about $\mathrm{pH}$), you can determine the concentrations of $\mathrm{H}_{3}\mathrm{O}^{+}$, $\mathrm{NH}_{4}{}^{+}$, $\mathrm{NH}_{3}$, ${}^{-}\mathrm{OH}$, and $\mathrm{Cl}^{-}$ present (see Chapter $8$). The question is: Why are $\mathrm{NH}_{4}\mathrm{Cl}$ and $\mathrm{HCl}$ so different? (We consider this point in Chapter $9$.)
Making Sense of Vinegar and Other Acids
Now let us consider another common acid: acetic acid. If wine is left open to the air, it will often begin to taste sour because the ethanol in wine reacts with oxygen in the air and forms acetic acid. Acetic acid belongs to a family of organic compounds known as carboxylic acids. It has one acidic proton attached to the oxygen.
If we measure the $\mathrm{pH}$ of a $0.10-\mathrm{M}$ solution of acetic acid, we find that it is about $2.8$. The obvious question is: why is the $\mathrm{pH}$ of a $0.10-\mathrm{M}$ solution of acetic acid different from the $\mathrm{pH}$ of a $0.10-\mathrm{M}$ solution of hydrochloric acid? The explanation lies in the fact that acetic acid ($\mathrm{CH}_{3}\mathrm{COOH}$) does not dissociate completely into $\mathrm{CH}_{3}\mathrm{CO}_{2}{}^{-}$ and $\mathrm{H}_{3}\mathrm{O}^{+}$ when it is dissolved in water. A $\mathrm{pH}$ of $2.8$ indicates that $\left[\mathrm{H}_{3}\mathrm{O}^{+}\right] = 10^{-2.8}$, which can be converted into $1.6 \times 10^{-3} \mathrm{~M}$. About $1.6 \%$ of the added acetic acid is therefore ionized (a form known as the acetate ion, $\mathrm{CH}_{3}\mathrm{COO}^{-}$). The rest is in the protonated form (acetic acid, $\mathrm{CH}_{3}\mathrm{COOH}$). The specific molecules that are ionized change all the time; protons are constantly transferring from one oxygen to another. You can think of this process in another way: it is the system that has a $\mathrm{pH}$, not individual molecules. If we look at a single molecule of acetic acid in the solution, we find that it is ionized $1.6 \%$ of the time. This may seem a weird way to think about the system, but remember, many biological systems (such as bacteria) are quite small, with a volume of only a few cubic microns or micrometers (a cubic micron is a cube $10^{-6} \mathrm{~m}$ on a side) and may contain a rather small number of any one type of molecule. Thus, rather than thinking about the bulk behavior of these molecules, which are relatively few, it can be more useful to think of the behavior of individual molecules averaged over time. Again, in an aqueous solution of acetic acid molecules, most of the molecules ($\sim 98.4 \%$) are in the un-ionized form, so any particular molecule is un-ionized $\sim 98.4 \%$ of the time.
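The acetic acid numbers above can be reproduced directly from the measured $\mathrm{pH}$. The following sketch is ours (the variable names are invented); it back-calculates the hydronium ion concentration and the fraction of the acid that is ionized:

```python
measured_pH = 2.8
total_acid = 0.10                    # mol/L of acetic acid dissolved

h3o = 10.0 ** (-measured_pH)         # [H3O+] from the pH
fraction_ionized = h3o / total_acid  # each ionization yields one H3O+

print(f"[H3O+]             = {h3o:.1e} M")                           # ~1.6e-03
print(f"percent ionized    = {100 * fraction_ionized:.1f} %")        # ~1.6 %
print(f"percent un-ionized = {100 * (1 - fraction_ionized):.1f} %")  # ~98.4 %
```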
We can measure the $\mathrm{pH}$ of solutions of many acids of known concentration, and from these measurements make estimates of the strength of each acid. Strong acids, such as nitric, sulfuric, and hydrochloric acids, are all totally ionized in solution. Weaker acids, such as organic acids, ionize to a much lesser extent. However, given the low naturally-occurring concentrations of hydronium and hydroxide ions in pure water, even weak acids can significantly alter the $\mathrm{pH}$ of an aqueous solution. The same behavior applies to weak bases.
Conversely, if weak acids or bases are dissolved in solutions of different $\mathrm{pH}$, the extent of ionization of the group may be significantly changed. For example, as we will see in Chapters $8$ and $9$, if we added a weak acid to a solution that was basic (for example, at $\mathrm{pH } 9$), we would find that much more of the acid would ionize. Many biological molecules contain parts (called functional groups) that behave as weak acids or weak bases. Therefore, the $\mathrm{pH}$ of the solution in which these molecules find themselves influences the extent to which these functional groups are ionized. Whether a part of a large molecule is ionized or not can dramatically influence a biomolecule’s behavior, structure, and interactions with other molecules. Thus, changes in $\mathrm{pH}$ can have dramatic effects on a biological system. For example, if the $\mathrm{pH}$ of your blood changes by $\pm 0.3 \mathrm{~pH}$ units, you are likely to die. Biological systems spend much of the energy they use maintaining a constant $\mathrm{pH}$ (typically around $7.35 - 7.45$).[17] In addition, the $\mathrm{pH}$ within your cells is tightly regulated and can influence cellular behavior.[18]
Questions
Questions to Answer
• How would you calculate the molarity of pure water?
• What percentage of water molecules are ionized at $25 { }^{\circ}\mathrm{C}$?
• If the $\mathrm{pH}$ of a solution (at $25 { }^{\circ}\mathrm{C}$) is $2.0$, what is the $\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]$? What is the $\left[{}^{-}\mathrm{OH}\right]$?
• If the $\mathrm{pH}$ of a solution (at $37 { }^{\circ}\mathrm{C}$) is $2.0$, what is the $\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]$? What is the $\left[{}^{-}\mathrm{OH}\right]$?
• What would be the $\mathrm{pH}$ of a $0.01-\mathrm{M}$ solution of $\mathrm{HCl}$ at $25 { }^{\circ}\mathrm{C}$?
• If the $\mathrm{pH}$ of a $0.1-\mathrm{M}$ solution of $\mathrm{NH}_{4}\mathrm{Cl}$ is $5.1$, what is the $\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]$?
• Draw out a molecular level picture of what you imagine solutions of $\mathrm{NaCl}$ and $\mathrm{NH}_{4}\mathrm{Cl}$ look like.
• Why does acetic acid only have one acidic proton (after all, it does have a total of four protons)?
• Why is acetic acid more acidic than ethanol? What is it about the structure of acetic acid that makes it more acidic?
Questions for Later
• Why do you think we keep specifying the temperature in our discussions of reactions?
Questions to Ponder
• Carboxylic acid groups, $–\mathrm{COOH}$, are common in large biomolecules. What would be the effect of raising or lowering the $\mathrm{pH}$ on carboxylate side chains?
• What effect do you think that might have on the properties of the biomolecule (solubility, interactions with other molecules, etc.)?
• Amino groups are also common. What would be the effect of raising or lowering the $\mathrm{pH}$ on an amino group?
In contrast to acid–base reactions, oxidation–reduction (or redox) reactions obey a different pattern. In the simplest kinds of redox reactions, polar products are generated from non-polar reactants. You may have run into such reactions already (even if you did not know what they were called!) When iron is left in contact with oxygen (in air) and water, it rusts. The iron is transformed from a hard, non-polar metallic substance, $\mathrm{Fe}$ (solid), into a powdery substance, $\mathrm{Fe}_{2}\mathrm{O}_{3} \cdot n\mathrm{H}_{2}\mathrm{O}(s)$. Rusting is mechanistically similar to the reactions that occur when copper turns green, when silver tarnishes and turns black, or (in perhaps the favorite reaction of chemists everywhere[19]) when sodium metal explodes in water.[20]
All of these reactions start with a metal in its elemental form. Pure metals have no charge or permanent unequal distribution of charge (which makes them different from salts like $\mathrm{NaCl}$). In fact we can use the synthesis of sodium chloride ($\mathrm{NaCl}$) from its elements sodium ($\mathrm{Na}$) and chlorine ($\mathrm{Cl}_{2}$) to analyze what happens during a redox reaction. The reaction can be written as: $2 \mathrm{Na}(s)+\mathrm{Cl}_{2}(g) \rightleftarrows 2 \mathrm{NaCl}(s)$
We have already looked at the structure of ionic compounds in Chapter $4$ and know that the best way to think about them is to consider $\mathrm{NaCl}$ as a three-dimensional lattice of alternating positive ($\mathrm{Na}^{+}$) and negative ($\mathrm{Cl}^{-}$) ions. That is, as the reaction proceeds, the metal atoms become cations and the chlorine molecules become anions. We could write this as two separate reactions: the $\mathrm{Na}$ loses an electron – a process that we define as oxidation. $\mathrm{Na} \rightleftarrows \mathrm{Na}^{+}+\mathrm{e}^{-} \text {(an oxidation reaction) }$
The electrons must go somewhere (they cannot just disappear) and since chlorine is an electronegative element, it makes sense that the electrons should be attracted to the chlorine. We define the gain of electrons as a reduction. $\mathrm{Cl}+\mathrm{e}^{-} \rightleftarrows \mathrm{Cl}^{-} \text {(a reduction reaction) }$
It turns out that all reactions in which elements react with each other to form compounds are redox reactions. For example, the reaction of molecular hydrogen and molecular oxygen is also a redox reaction: $2 \mathrm{H}_{2}(g)+\mathrm{O}_{2}(g) \rightleftarrows 2 \mathrm{H}_{2} \mathrm{O}(l)$
The problem here is that there is no obvious transfer of electrons. Neither is there an obvious reason why these two elements should react in the first place, as neither of them has any charge polarity that might lead to an initial interaction. That being said, there is no doubt that $\mathrm{H}_{2}$ and $\mathrm{O}_{2}$ react. In fact, like sodium and water, they react explosively.[21] When we look a little more closely at the reaction, we can see that there is a shift in electron density on individual atoms as they move from being reactants to being products. The reactants contain only pure covalent ($\mathrm{H—H}$ and $\mathrm{O—O}$) bonds, but in the product ($\mathrm{H}_{2}\mathrm{O}$) the bonds are polarized: $\mathrm{H} \delta +$ and $\mathrm{O} \delta -$ (recall that oxygen is a highly electronegative atom because of its high effective nuclear charge). There is a shift in overall electron density towards the oxygen. This is a bit subtler than the $\mathrm{NaCl}$ case. The oxygen has gained some extra electron density, and so been reduced, but only partially – it does not gain a whole negative charge. The hydrogen has also been oxidized by losing some electron density. We are really talking about where the electron spends most of its time. In order to keep this straight, chemists have developed a system of oxidation numbers to keep track of the losses and gains in electron density.
Oxidation States and Numbers
Now, we may seem to be deploying more arcane terms designed to confuse the non-chemist, but in fact, oxidation numbers (or oxidation states) can be relatively easy to grasp as long as you remember a few basic principles:[22]
• For an ion, the charge is the oxidation number. The oxidation number of $\mathrm{Na}^{+}$ is +1; the oxidation number of the oxide ion ($\mathrm{O}^{2-}$) is –2.
• For elements that are covalently bonded to a different element, we imagine that all the electrons in the bond are moved to the most electronegative atom to make it charged. As an example, the oxygen in water is the more electronegative atom. Therefore, we imagine that the bonding electrons are on oxygen and that the hydrogen atoms have no electrons (rather, they have a +1 charge). The oxidation number of $\mathrm{H}$ (in water) is thus +1, whereas that of oxygen is –2, reflecting the two imagined extra electrons that came to it from the bonds.
• Elements always have an oxidation number of zero (because all of the atoms in a pure element are the same, so none of the bonds are polar).
Remember this is just a way to keep track of the electrons. Oxidation numbers are not real; they are simply a helpful device. It is also important to remember that the oxidation number (or state) of an atom is dependent upon its molecular context. The trick to spotting a redox reaction is to see if the oxidation number of an atom changes from reactants to products. In the reaction: $2 \mathrm{H}_{2}(g)+\mathrm{O}_{2}(g) \rightleftarrows 2 \mathrm{H}_{2} \mathrm{O}(l)$
$\mathrm{H}$ changes from zero in the reactants ($\mathrm{H}_{2}$) to +1 in the products ($\mathrm{H}_{2}\mathrm{O}$), and the oxygen goes from zero ($\mathrm{O}_{2}$) to –2 ($\mathrm{H}_{2}\mathrm{O}$). When oxidation numbers change during a reaction, the reaction is a redox reaction.
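Because spotting a redox reaction reduces to bookkeeping, the check can even be written as a few lines of code. The sketch below is purely illustrative (ours): the oxidation numbers must still be assigned by hand using the rules above; the function only reports which elements changed:

```python
def redox_changes(before, after):
    """Given hand-assigned oxidation numbers (element -> number)
    for reactants and products, return the elements that changed."""
    return {el: (before[el], after[el])
            for el in before if before[el] != after[el]}

# 2 H2 + O2 <=> 2 H2O: both elements start at 0
print(redox_changes({"H": 0, "O": 0}, {"H": +1, "O": -2}))
# {'H': (0, 1), 'O': (0, -2)} -> oxidation numbers changed: a redox reaction

# H+ + -OH <=> H2O: nothing changes (H stays +1, O stays -2)
print(redox_changes({"H": +1, "O": -2}, {"H": +1, "O": -2}))
# {} -> not a redox reaction
```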
Now let us look at the reaction of sodium and water, which is a bit more complicated, to see if we can spot what is oxidized and what is reduced. $2 \mathrm{Na}(s)+2 \mathrm{H}_{2} \mathrm{O}(l) \rightleftarrows 2 \mathrm{Na}^{+}(aq)+2 {}^{-}\mathrm{OH}(aq)+\mathrm{H}_{2}(g)$
It is relatively easy to see that the sodium gets oxidized, because it loses an electron, going from $\mathrm{Na}$ to $\mathrm{Na}^{+}$. But which species gets reduced? Is it the oxygen or the hydrogen? Or could it be both? If we check for changes in oxidation state, the oxygen in water starts at –2 and in hydroxide (${}^{-}\mathrm{OH}$) it is still –2 (it has not been reduced or oxidized). If we check the hydrogens, we see two distinct fates. One of the hydrogen atoms stays bonded to the oxygen atom (in hydroxide); it starts at +1 and stays there. However, the other type ends up bonded to another hydrogen atom; it starts at +1 and ends at zero. It is these latter two hydrogen atoms that have been reduced!
Historically, the term oxidation has denoted a reaction with oxygen. For example, in simple combustion reactions: $\mathrm{CH}_{4}(g)+2 \mathrm{O}_{2}(g) \rightleftarrows \mathrm{CO}_{2}(g)+2 \mathrm{H}_{2} \mathrm{O}(g)$
Oxidation reactions like this provide major sources of energy, in the burning of fuel (natural gas, gasoline, coal, etc.) and also in biological systems. In the latter, carbon-containing molecules such as sugars and lipids react with molecular oxygen to form compounds with very stable bonds ($\mathrm{CO}_{2}$ and $\mathrm{H}_{2}\mathrm{O}$), releasing energy that can be used to break bonds and rearrange molecules. In a similar vein, the original meaning of reduction was reaction with hydrogen; for example, acetic acid can be reduced to ethanol by reaction with hydrogen: $\mathrm{CH}_{3} \mathrm{CO}_{2} \mathrm{H}+2 \mathrm{H}_{2}(g) \rightleftarrows \mathrm{CH}_{3} \mathrm{CH}_{2} \mathrm{OH}+\mathrm{H}_{2} \mathrm{O}$
What is important to note is that there cannot be an oxidation without a reduction – and vice versa, just as there can be no acid without a base.
Questions
Questions to Answer
• For the reaction $\mathrm{CH}_{4}(g)+2 \mathrm{O}_{2}(g) \rightleftarrows \mathrm{CO}_{2}(g)+2 \mathrm{H}_{2} \mathrm{O}(g)$, which atoms are oxidized and which are reduced?
• For the reaction $\mathrm{CH}_{3} \mathrm{CO}_{2} \mathrm{H}+2 \mathrm{H}_{2}(g) \rightleftarrows \mathrm{CH}_{3} \mathrm{CH}_{2} \mathrm{OH}+\mathrm{H}_{2} \mathrm{O}$, which atoms are oxidized and which are reduced?
• Write an explanation to a friend who has no chemistry background to explain the difference between these two reactions that give the same product: $2 \mathrm{H}_{2}(g)+\mathrm{O}_{2}(g) \rightleftarrows 2 \mathrm{H}_{2} \mathrm{O}(l)$ and $\mathrm{H}^{+}(aq)+{}^{-}\mathrm{OH}(aq) \rightleftarrows \mathrm{H}_{2} \mathrm{O}(l)$
Questions for Later
• Is it possible to separate out the oxidation reaction (where electrons are lost) and the reduction reaction (where electrons are gained)? What would happen?
• What if you separate the two reactions but join them by an electrical connection? What do you think would happen?
All chemical reactions are accompanied by energy changes. Under most circumstances, particularly when the pressure and volume are kept constant, these changes can be ascribed to changes in enthalpy $\Delta \mathrm{H}$. For example, combustion reactions (redox reactions involving oxygen) are a major source of energy for most organisms. In warm-blooded organisms, the energy released through such reactions is used to maintain a set body temperature. Within organisms, combustion reactions occur in highly-controlled stages (which is why you do not burst into flames), through the process known as respiration (different from breathing, although breathing is necessary to bring molecular oxygen to your cells).
Not all biological forms of respiration use molecular oxygen.[23] There are other molecules that serve to accept electrons; this process is known as anaerobic (air-free) respiration. All known organisms use the molecule adenosine triphosphate ($\mathrm{ATP}$) as a convenient place to store energy. $\mathrm{ATP}$ is synthesized from adenosine diphosphate ($\mathrm{ADP}$) and inorganic phosphate. As two separate species, $\mathrm{ADP}$ and inorganic phosphate are more stable than $\mathrm{ATP}$, and the energy captured from the environment and used to drive the synthesis of $\mathrm{ATP}$ can be released again via the re-formation of $\mathrm{ADP}$ and inorganic phosphate: $\mathrm{ADP}+\mathrm{P}_{i}+\text { energy } \rightleftarrows \mathrm{ATP}+\mathrm{H}_{2} \mathrm{O}$
If we looked closely at the molecular level mechanism of $\mathrm{ATP}$ synthesis, we would see that it is another example of an electrophile–nucleophile interaction. But regardless of the type of reaction, we can ask the same question: Where (ultimately) does the energy released in an exothermic reaction come from? When an exothermic reaction occurs and energy is transferred from the system to the surroundings, the result is a temperature increase in the surroundings and a negative enthalpy change ($-\Delta \mathrm{H}$). What is the source of that energy? Of course, you already know the answer—it has to be the energy released when a bond is formed!
The defining trait of a chemical reaction is a change in the chemical identity of the reactants: new types of molecules are produced. In order for this to occur, at least some of the bonds in the starting material must be broken and new bonds must be formed in the products; otherwise no reaction occurs. So to analyze energy changes in chemical reactions, we look at which bonds are broken and which are formed, and then compare their energies. As we will discuss later, the process is not quite so simple, given that the pathway for the reaction may include higher-energy intermediates. As we will see, it is the pathway of a reaction that determines its rate (how fast it occurs), whereas the difference in energy between products and reactants determines the extent to which the reaction will occur. The following analysis will lead to some reasonable approximations for estimating energy changes during a reaction.
As we have already seen, bond formation releases energy and bond breaking requires energy. Tables of bond dissociation energies are found in most chemistry books and can be easily retrieved from the Internet.[24] One caveat: these measurements are typically taken in the gas phase and refer to a process where the bond is broken homolytically (each atom in the original bond ends up with one electron and the species formed are known as radicals).[25] The bond dissociation energy for hydrogen is the energy required to drive the process: $\mathrm{H}-\mathrm{H}(g) \rightleftarrows 2 \mathrm{H}\cdot$
where the dot represents an unpaired electron. The enthalpy change for this process is $\Delta \mathrm{H} = +436 \mathrm{~kJ/mol}$. Note that tables of bond energies record the energy required to break the bond. As we noted earlier, enthalpy is a state function – its value does not depend on the path taken for the change to occur, so we also know what the enthalpy change is for the reverse process. That is, when a hydrogen molecule forms from two hydrogen atoms the process is exothermic: $2 \mathrm{H} \cdot \rightleftarrows \mathrm{H}-\mathrm{H}(g) \quad \Delta \mathrm{H}=-436 \mathrm{~kJ} / \mathrm{mol}$
We have tables of bond energy values for most common bond types, so one way to figure out energy changes (or at least the enthalpy changes) for a particular reaction is to analyze the reaction in terms of which bonds are broken and which bonds are formed. The broken bonds contribute a positive term to the total reaction energy change whereas bond formation contributes a negative term. For example, let us take a closer look at the combustion of methane:[26] $\mathrm{CH}_{4}(g)+2 \mathrm{O}_{2}(g) \rightleftarrows \mathrm{CO}_{2}(g)+2 \mathrm{H}_{2} \mathrm{O}(g)$
In the course of this reaction, four $\mathrm{C—H}$ bonds ($4 \times 436 \mathrm{~kJ/mol}$) and two $\mathrm{O=O}$ bonds ($2 \times 498 \mathrm{~kJ/mol}$) are broken. The new bonds formed are $2 \times \mathrm{C=O}$ ($803 \mathrm{~kJ/mol}$) and $4 \times \mathrm{O—H}$ ($460 \mathrm{~kJ/mol}$). If you do the math, you will find that the sum of the bond energies broken is $2740 \mathrm{~kJ}$, whereas the sum of the bond energies formed is $-3446 \mathrm{~kJ}$. In other words, the bonds in the products are $706 \mathrm{~kJ}$ more stable than the bonds in the reactants. This is easier to see if we plot enthalpy versus the progress of the reaction; it becomes more obvious that the products are lower in energy (more stable).
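This bond-energy bookkeeping is easy to automate. The sketch below (ours) uses the average bond energies quoted in this paragraph; remember that these are gas-phase averages, so the result is only an estimate of $\Delta \mathrm{H}$:

```python
# Average bond energies (kJ/mol), as quoted in the text.
bond_energy = {"C-H": 436, "O=O": 498, "C=O": 803, "O-H": 460}

# CH4 + 2 O2 <=> CO2 + 2 H2O
bonds_broken = {"C-H": 4, "O=O": 2}   # energy must be put in (+)
bonds_formed = {"C=O": 2, "O-H": 4}   # energy is released (-)

energy_in = sum(n * bond_energy[b] for b, n in bonds_broken.items())
energy_out = sum(n * bond_energy[b] for b, n in bonds_formed.items())

print(f"bonds broken: +{energy_in} kJ")                  # +2740 kJ
print(f"bonds formed: -{energy_out} kJ")                 # -3446 kJ
print(f"estimated dH: {energy_in - energy_out} kJ/mol")  # -706 kJ/mol
```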
There are several important aspects to note about this analysis:
1. This is only an estimation of the enthalpy change, because (as noted above) bond energies are averages and are measured in the gas phase. In the real world, most reactions do not occur in the gas phase. In solutions, there are all kinds of other interactions (intermolecular forces) that can affect the enthalpy change, but for an initial approximation this method often gives surprisingly good results.
2. Remember, every reaction must be considered as a part of the system. Both the reactants and products have to be included in any analysis, as well as the direction of energy transfer between the reaction system and the surroundings.
3. An exothermic reaction occurs when the bonds formed are stronger than the bonds that are broken. If we look closely at this calculation, we can see that combustion reactions are so exothermic because they produce carbon dioxide. The bond energy of the carbon—oxygen double bond is very high (although not two times the $\mathrm{C}—\mathrm{O}$ single bond—can you think why?) The production of $\mathrm{CO}_{2}$ is very favorable from an energy standpoint: it sits in a deep energy well because it has such strong bonds. This point has important ramifications for the world we live in. Carbon dioxide is quite stable; although it can be made to react, such reactions require the input of energy. Large numbers of us expel $\mathrm{CO}_{2}$ into the atmosphere from burning fossil fuels and breathing, at a higher rate than is currently being removed through various types of sequestration processes, including chemical reactions and photosynthesis. You have certainly heard of the greenhouse effect, caused by the build-up of $\mathrm{CO}_{2}$. $\mathrm{CO}_{2}$ is difficult to get rid of because strong bonds give it stability. (Given the notoriety of $\mathrm{CO}_{2}$ in terms of climate change, we will come back to this topic later.)
Questions
Questions to Answer
• Many biology texts refer to energy being released when high-energy bonds in $\mathrm{ATP}$ are broken. In light of what you know, is this a reasonable statement? What do these texts really mean?
• Why do you think the enthalpy change for most Brønsted–Lowry acid–base reactions is independent of the nature of the acid or base? (Hint: What is the reaction that is actually occurring?)
• Using tables of bond dissociation energies, calculate the energy change for the reaction of $\mathrm{CH}_{2}\mathrm{=CH}_{2}+\mathrm{HCl} \rightleftarrows \mathrm{CH}_{3} \mathrm{CH}_{2} \mathrm{Cl}$. What steps do you have to take to complete this calculation? Make a list.
• If you look up the enthalpy change for this reaction ($\Delta \mathrm{H}^{\circ}$) you will find it is not exactly what you calculated. Why do you think that is? (Hint: This reaction typically takes place in a solvent. What role might the solvent play in the reaction?)
1. That said, one might argue that $10^{-7} \mathrm{~M}$ is complete
2. Although slight traces of ethanol are still detectable; forensic scientists can detect the presence of substances such as hydrocarbons at the scene of a fire, even though the amounts are extremely small.
3. Arrhenius proposed these ideas in 1888 and won a Nobel Prize for his discovery of ionization reactions in solution in 1903.
4. http://www.nature.com/nature/journal.../397601a0.html
5. This theory was postulated simultaneously by both Brønsted and Lowry in 1923.
6. In strong acids, the proton is completely donated to water in aqueous solution (i.e., there is no detectable amount of un-ionized acid in the water).
7. Recall also that electronegativity stems directly from the effective nuclear charge on a particular atom. If you don’t remember why, go back to chapter $2$ and review this important idea.
8. Although some highly-charged metal ions react with water, we will not consider these reactions at the moment. Group I and II cations are stable in water.
9. There are some nitrogenous compounds that are not basic because the lone pair is already being used for some other purpose. If you continue to study organic chemistry, you will learn about these ideas in more detail.
10. Note reactions between molecules are intermolecular reactions; those that involve a single molecule are intramolecular.
11. http://www.springerlink.com/content/n274g10812m30107/
12. In fact $\mathrm{K}_{\mathrm{w}}$ increases with temperature due to Le Chatelier’s principle, about which we will have more to say shortly.
13. The $\mathrm{pH}$ scale was first developed in 1909 by Danish biochemist Soren Sorensen.
14. In fact, $\mathrm{pH}$ is better defined as $\mathrm{pH} = -\log \left\{\mathrm{H}_{3}\mathrm{O}^{+}\right\}$, where the { } refer to the activity of the species rather than the concentration. This is a topic better left to subsequent courses, although it is important to remember that any resulting calculations on $\mathrm{pH}$ using concentrations provide only approximations.
15. Litmus is a water-soluble mixture of different dyes extracted from lichens, especially Roccella tinctoria— Wikipedia!
16. $\mathrm{pH}$ is typically measured by using a $\mathrm{pH}$ meter that measures the differences between the electrical potential of the solution relative to some reference. As the concentration of hydronium ion increases, the voltage (potential between the solution and the reference) changes and can be calibrated and reported as $\mathrm{pH}$.
17. http://www.bbc.co.uk/dna/h2g2/A8819652
18. http://www.ncbi.nlm.nih.gov/pmc/arti...00237-0011.pdf
19. This is based on the personal memories of one (and only one) of the authors.
20. Visit http://www.youtube.com/watch?v=eCk0lYB_8c0 for an entertaining video of what happens when sodium and other alkali metals are added to water (yes, they probably faked the cesium).
21. Hydrogen and oxygen can be used as rocket fuel, and the so-called “hydrogen economy” is based on the energy released when hydrogen reacts with the oxygen from the air.
22. http://www.youtube.com/watch?v=oXHtOjXxvRo
23. When $\mathrm{O}_{2}$ is used, the process is known as aerobic respiration.
24. Although bond dissociation energy and bond energy are often used interchangeably, they are slightly different. Bond dissociation energy is the energy required to break a particular bond in a molecule; bond energy is the average energy required to break a bond of that type. For our purposes, the difference is not important. Tables of bond energies usually refer to average bond energies.
25. Species with unpaired electrons
26. To begin this calculation, you must be able to figure out what bonds are present in the molecule; you must be able to draw the Lewis structure.
After our overview of common chemical reactions in Chapter $7$, the next questions on your mind may well be what determines whether or not a reaction will happen, how fast it will go, how far it will go, and whether it will go in the forward or reverse direction. What causes gasoline to suddenly combust in a violent explosion whereas an iron nail slowly rusts over many years? Are these mysteries of the universe, or can we untangle them in some coherent way?
Once again, it turns out that the universe behaves in an orderly way, and by paying attention to various experimental observations, chemists over the last few centuries have come to understand the factors that control the rate, extent, and direction of reactions. The subject of rate and extent will lead us back to thermodynamics and Gibbs free energy, as we work out the molecular reorganizations that occur during the forward and reverse reactions. In this chapter, we introduce concepts that will allow us to consider how fast a reaction occurs and predict how far it will go.
Thumbnail: Molecular collisions frequency. (Public Domain; Sadi Carnot via Wikipedia)
08: How Far How Fast
The key to understanding the behavior of chemical reactions is to remember that:
1. chemical reactions are systems in which reactants and products interact with their environment and
2. at the molecular level, all reactions are reversible, even though some reactions may seem irreversible.
For example, once a log starts burning, we cannot easily reassemble it from carbon dioxide ($\mathrm{CO}_{2}$), water ($\mathrm{H}_{2}\mathrm{O}$), and energy. But in fact, we can reassemble the log in a fashion by allowing a tree to grow, and by using $\mathrm{CO}_{2}$ from the air, $\mathrm{H}_{2}\mathrm{O}$ from the ground, and energy from the sun (photosynthesis). However, this type of reverse (or backward) reaction is far more complex and involved than the simple forward reaction of burning.
There are, however, a number of factors that we can use to predict how fast and how far a particular reaction will go, including the concentration of the reactants, the temperature, the type of reaction, and the presence of a catalyst. The concentrations of molecules and the temperature of the system are important because all reactions involve collisions between molecules (except for reactions driven by the absorption of light—and you could view those as collisions of a sort). The concentration of reactants determines how often various types of collisions take place (i.e., the more molecules per unit volume, the more frequently collisions occur), whereas the temperature determines the energetics of the collisions: recall that there is a distribution of kinetic energies of molecules at a particular temperature, so not all collisions will lead to a reaction. Molecular structure also matters because it determines whether or not collisions are productive. The only collisions that work are those in which molecules hit each other in particular orientations and with particular energies.
As a reaction proceeds, and reactants are converted into products, the probability of reactant molecules colliding decreases (since there are fewer of them) while the probability of product molecules colliding increases. That is, the rate of the forward reaction slows down and the rate of the reverse reaction speeds up. This will continue until the rates of the forward reaction and the backward reaction are equal, and the system reaches equilibrium: the point at which no more macroscopic changes occur and the concentrations of reactants and products remain constant at the macroscopic scale.[1] However, as we will discuss further, the forward and back reactions have not stopped, and if we could see the molecules we would see both forward and back reactions still occurring, although there is no overall change in concentration.
As an example, Brønsted–Lowry acid–base reactions are very fast because the probability that the reaction occurs per unit of time is high. When an acid and a base are mixed together, they react immediately with no waiting and without the addition of heat. For example, if we dissolve enough hydrogen chloride gas ($\mathrm{HCl}$) in water to make a $0.1 \mathrm{~M}$ solution of hydrochloric acid, the $\mathrm{pH}$ immediately drops from $7$ (the $\mathrm{pH}$ of water) to $1$.[2] This measurement tells us that all the $\mathrm{HCl}$ has ionized, to give: $\left[\mathrm{H}^{+}\right]=0.1 \mathrm{~M}$ and $\left[\mathrm{Cl}^{-}\right]=0.1 \mathrm{~M}$.
Now let us take the case of acetic acid ($\mathrm{CH}_{3}\mathrm{COOH}$). If we dissolve enough acetic acid in water to make a $0.1-\mathrm{M}$ solution, the $\mathrm{pH}$ of the solution immediately changes from $\mathrm{pH } 7$ (pure water) to $2.9$ (not $1$). Even if you wait (as long as you want) the $\mathrm{pH}$ stays constant, around $3$. You might well ask, “What is going on here?” The acid–base reaction of acetic acid and water is fast, but the $\mathrm{pH}$ is not as low as you might have predicted. We can calculate the $\left[\mathrm{H}^{+}\right]$ from the $\mathrm{pH}$, again using the relationships $\mathrm{pH}=-\log \left[\mathrm{H}^{+}\right]$ and $\left[\mathrm{H}^{+}\right] = 10^{-\mathrm{pH}}$, giving us a value of $\left[\mathrm{H}^{+}\right] = 1.3 \times 10^{-3}\mathrm{~M}$. Thus, the concentration of $\mathrm{H}^{+}$ is nearly two orders of magnitude less than you might have expected! If you think about this, you will probably conclude that the amount of acetic acid ($\mathrm{AcOH}$)[3] that actually reacted with the water must have been very small indeed. In fact, we can calculate how much acetic acid reacted using the relationships from the equation: $\mathrm{AcOH}+\mathrm{H}_{2} \mathrm{O} \rightleftarrows \mathrm{H}_{3} \mathrm{O}^{+}+\mathrm{AcO}^{-}$
If the concentration of acetic acid started at $0.10 \mathrm{~M}$, and after the ionization reaction $1.3 \times 10^{-3}\mathrm{~M}$ of $\mathrm{H}^{+}$ is present, then the final concentration of acetic acid must be $(0.10 - 1.3 \times 10^{-3}) \mathrm{~M}$. If we use the appropriate number of significant figures, this means that the concentration of acetic acid is still $0.10 \mathrm{~M}$ (actually $0.0987 \mathrm{~M}$).
There are two important conclusions here: first, the reaction of acetic acid is fast, and second, most of the acetic acid has not, in fact, reacted with the water. But wait—there is more! Even if the reaction appears to have stopped because the $\mathrm{pH}$ is not changing any further, at the molecular level things are still happening. That is, the reaction of acetic acid with water continues on, but the reverse reaction occurs at the same rate. So the bulk concentrations of all the species remain constant, even though individual molecules present in each population are constantly changing.[4] The questions of how far a reaction proceeds (towards products) and how fast it gets there are intertwined. We will demonstrate the many factors that affect these two reaction properties.
Questions
Questions to Answer
• Draw out a general Brønsted–Lowry acid–base reaction that might occur in water.
• Why do you think the reaction occurs so fast (as soon as the molecules bump into each other)?
• Do you think the water plays a role in the reaction? Draw out a molecular-level picture of your acid–base reaction, showing the solvent interactions.
Question to Ponder
• How do you think the reaction would be affected if it took place in the gas phase instead of an aqueous solution?
In science, when we talk about a rate we mean the change in a quantity over time. A few non-chemical examples include: certain investments with an interest rate, which is the increase in the principal over time (if the rate is negative, then it means that the amount of principal is decreasing over time—not a good investment!); your speed, which is the rate at which you travel down the road, given in miles per hour (or kilometers per hour); a child’s growth rate, which might be an inch or two per year (while the elderly might shrink at a different rate); and the growth rate of some plants, like kudzu, which can grow at a rate of 12 inches per day. The units of rate are an amount divided by a period of time. This might seem too obvious to dwell on, but it is worth noting that most real processes do not have a constant rate of change; rates themselves can and do change. This is one reason why calculus is useful in chemistry: it provides the mathematical tools needed to deal with changing rates, like those associated with planetary motions, falling bodies, and (it turns out) chemical reactions.
If we apply the idea of an amount divided by a period of time to the speed of a chemical reaction, what can we measure to determine a reaction’s rate? What units tell us the amount present, in the same way that miles and meters measure distance? We can’t use mass, because reactions occur between particles (atoms, molecules, ions), which have different masses. We must use the unit that tells us how many particles of a particular type there are—moles. Furthermore, because most reactions (particularly the ones involved in biological and environmental systems) occur in aqueous solutions or in the atmosphere, we usually use units of concentration—molarity ($\mathrm{M}, \mathrm{~mol/L}$)—to describe the amount of a substance taking part in or produced by a reaction. Typically, the concentration of substance $\mathrm{A}_{2}$ is written $\left[\mathrm{A}_{2}\right]$, and the rate of a reaction can be described as the change in concentration of a reactant or product over a unit of time. So, the rate is $\Delta\left[\mathrm{A}_{2}\right] / \Delta t$, or $\left(\left[\mathrm{A}_{2}\right]_{2}-\left[\mathrm{A}_{2}\right]_{1}\right) /\left(t_{2}-t_{1}\right)$, where $\left[\mathrm{A}_{2}\right]_{2}$ is the concentration at time $t_{2}$ and $\left[\mathrm{A}_{2}\right]_{1}$ is the concentration at time $t_{1}$ (assuming that $t_{2}$ occurs later in time than $t_{1}$).
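As a concrete check on units and sign conventions, here is a minimal sketch (ours; the two concentration measurements are invented for illustration) that computes an average rate:

```python
# Two hypothetical (time, [A2]) measurements.
t1, conc1 = 0.0, 0.100    # s, mol/L
t2, conc2 = 50.0, 0.075   # s, mol/L

# Average rate of loss of A2 over the interval, in M/s.
# The minus sign makes the rate positive for a disappearing reactant.
avg_rate = -(conc2 - conc1) / (t2 - t1)
print(f"average rate = {avg_rate:.1e} M/s")   # 5.0e-04 M/s
```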
Reaction Rates and Probabilities
Let us now step back and think about what must happen in order for a reaction to occur. First, the reactants must be mixed together. The best way to make a homogeneous mixture is to form solutions, and it is true that many reactions take place in solution. When reactions do involve a solid, like the rusting of iron, the reactants interact with one another at a surface. To increase the probability of such a reaction, it is common to use a solid that is very finely divided, so that it has a large surface area and thus more places for the reactants to collide.[5]
We will begin with a more in-depth look at reaction rates with a simple hypothetical reaction that occurs slowly, but with a reasonable rate in solution. Our hypothetical reaction will be $\mathrm{A}_{2}+\mathrm{~B}_{2} \rightleftarrows 2 \mathrm{AB}$. Because the reaction is slow, the loss of reactants ($\mathrm{A}_{2}+\mathrm{~B}_{2}$) and the production of product ($\mathrm{AB}$) will also be slow, but measurable. Over a reasonable period of time, the concentrations of $\mathrm{A}_{2}$, $\mathrm{B}_{2}$, and $\mathrm{AB}$ change significantly. If we were to watch the rate of the forward reaction ($\mathrm{A}_{2}+\mathrm{~B}_{2} \rightleftarrows 2 \mathrm{AB}$), we would find that it begins to slow down. One way to visualize this is to plot the concentration of a reactant versus time (as shown in the graph). We can see that the relationship between them is not linear, but falls off gradually as time increases. We can measure rates at any given time by taking the slope of the tangent to the line at that instant.[6] As you can see from the figure, these slopes decrease as time goes by; the tangent at time = 0 is much steeper than the tangent at a later time. On the other hand, immediately after mixing $\mathrm{A}_{2}+\mathrm{~B}_{2}$, we find that the rate of the backward reaction (that is: $2 \mathrm{AB} \rightleftarrows \mathrm{~A}_{2}+\mathrm{~B}_{2}$) is zero, because there is no $\mathrm{AB}$ around to react, at least initially. As the forward reaction proceeds, however, the concentration of $\mathrm{AB}$ increases, and the backward reaction rate increases. As you can see from the figure, as the reaction proceeds, the concentrations of both the reactants and products reach a point where they do not change any further, and the slope of each concentration time curve is now 0 (it does not change and is “flat”).
Let us now consider what is going on in molecular terms. For a reaction to occur, some of the bonds holding the reactant molecules together must break, and new bonds must form to create the products. We can also think of forward and backward reactions in terms of probabilities. The forward reaction rate is determined by the probability that a collision between an $\mathrm{A}_{2}$ and a $\mathrm{B}_{2}$ molecule will provide enough energy to break the $\mathrm{A—A}$ and $\mathrm{B—B}$ bonds, together with the probability of an $\mathrm{AB}$ molecule forming. The backward reaction rate is determined by the probability that collisions (with surrounding molecules) will provide sufficient energy to break the $\mathrm{A—B}$ bond, together with the probability that $\mathrm{A—A}$ and $\mathrm{B–B}$ bonds form. Remember, collisions are critical; there are no reactions at a distance. The exact steps in the forward and backward reactions are not specified, but we can make a prediction: if these steps are unlikely to occur (low probability), the reactions will be slow.
As the reaction proceeds, the forward reaction rate decreases because the concentrations of $\mathrm{A}_{2}$ and $\mathrm{B}_{2}$ decrease, while the backward reaction rate increases as the concentration of $\mathrm{AB}$ increases. At some point, the two reaction rates will be equal and opposite. This is the point of equilibrium. This point could occur at a high concentration of $\mathrm{AB}$ or a low one, depending upon the reaction. At the macroscopic level, we recognize the equilibrium state by the fact that there are no further changes in the concentrations of reactants and products. It is important to understand that at the molecular level, the reactions have not stopped. For this reason, we call the chemical equilibrium state a dynamic equilibrium. We should also point out that the word equilibrium is misleading because in common usage it often refers to a state of rest. In chemical systems, nothing could be further from the truth. Even though there are no macroscopic changes observable, molecules are still reacting.[7]
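The approach to dynamic equilibrium can be made concrete with a toy numerical simulation. Everything in the sketch below is assumed for illustration: we take the forward rate to be proportional to $\left[\mathrm{A}_{2}\right]\left[\mathrm{B}_{2}\right]$ and the backward rate to $\left[\mathrm{AB}\right]^{2}$, with made-up rate constants. Stepping the system forward in small time increments shows the concentrations flattening out while the two rates converge on one another:

```python
# Toy simulation of A2 + B2 <=> 2 AB approaching equilibrium.
# Rate laws and constants are assumed purely for illustration.
kf, kr = 1.0, 0.25               # forward / backward rate constants
a2, b2, ab = 0.10, 0.10, 0.00    # initial concentrations (M)
dt = 0.01                        # time step (s)

for step in range(20001):
    forward = kf * a2 * b2       # forward rate
    backward = kr * ab * ab      # backward rate
    net = (forward - backward) * dt
    a2 -= net
    b2 -= net
    ab += 2 * net                # two AB formed per A2 consumed
    if step % 5000 == 0:
        print(f"t={step * dt:6.1f} s  [A2]={a2:.4f}  [AB]={ab:.4f}  "
              f"forward={forward:.2e}  backward={backward:.2e}")

# At long times forward == backward: a dynamic equilibrium in which
# both reactions continue, but the concentrations no longer change.
```

Note that at equilibrium neither rate is zero; they are merely equal, which is exactly the molecular-level picture described above.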
Questions
Questions to Answer
• What does linear mean (exactly) when referring to a graph?
• Imagine you are driving at a constant speed of 60 miles per hour. Draw a graph of distance versus time, over a time period of four hours.
• How would you determine your speed from the graph (assuming you did not already know the answer)?
• Now imagine you take your foot off the accelerator and the car coasts to a stop over the course of one hour. What is the average speed over the last hour? How would you figure that out?
• What is the speed exactly 30 minutes after you take your foot off the accelerator? How would you figure that out?
• Consider the reaction $\mathrm{A}_{2}+\mathrm{~B}_{2} \rightleftarrows 2 \mathrm{AB}$. If the rate of the forward reaction $=-\Delta\left[\mathrm{A}_{2}\right] / \Delta \mathrm{t}$ (at a given time), how would you write the rate in terms of $\left[\mathrm{B}_{2}\right]$ or in terms of $\left[\mathrm{AB}\right]$?
• How does the rate of the forward reaction change over time? Does it increase, decrease or stay the same? Why?
• What does a probability of “0” mean?
• How do we know that, at equilibrium, the forward and reverse reactions are still occurring?
• Design an experiment that would allow you to investigate whether a reaction had stopped: at the macroscopic level and at the molecular level.
Questions to Ponder
• Why can a macroscopic reaction be irreversible, even though at the molecular level the reaction is reversible?
• Under what conditions (if any) would a reaction stop completely?
• Why are molecular level and macroscopic behaviors different?
Questions for Later
• Why do you think the amounts of products and reactants do not change after a certain time?
• What is the observable rate of reaction after the time when the concentrations of products and reactants no longer change?
The study of reaction rates, called chemical kinetics, encompasses a wide range of activities, measurements, and calculations. You might wonder why anyone would bother with this, but it turns out that we can use kinetic data to get more information about a reaction than just how fast it goes; we can find out about the pathway that the reaction takes from reactants to products, known as the mechanism of the reaction. If you think about a reaction in molecular terms, it seems clear that there must be a continuous pathway between reactants and products. The reactants do not suddenly disappear and then reappear as products, and in most reactions only one or two bonds are broken and formed as the reaction proceeds. This pathway, or mechanism, denotes the order in which bonds are broken and formed, and the intermediate species involved. However, because we cannot see directly what happens at the molecular level during a reaction, we have to rely on indirect methods to determine what is going on. Even using modern spectroscopic techniques, discussed in more detail in the spectroscopy section, some species in reaction pathways may only be present for femto ($10^{-15}$) or atto ($10^{-18}$) seconds. Events on these time scales are difficult to study, and in fact much of the current cutting edge research in chemistry and physics is directed at detecting and characterizing such ephemeral molecular-level events. As we will see, information about how the reaction rate varies with concentration and temperature can give us fascinating chemical insights into reaction pathways.
Concentrations and Reaction Rates
As we have seen, as the probability of collisions between reactant molecules increases, the rate of reaction increases. In order to get information about the reaction mechanism we need to know the exact relationship between concentrations and rates. This can be done using a number of different techniques and experimental set-ups. But before we do that, we need to go over a few more terms. Recall that the rate of the reaction is the change in concentration of reactant per unit time. If the time interval is measurable and real, the rate we get is called the average rate (over that time interval), as shown in the earlier graph. If we imagine that the time interval drops to 0, we get the instantaneous rate, which is the slope of the tangent to the concentration versus time curve at a given time (more calculus). The rate at the beginning of the reaction can be obtained by taking the tangent at the start of the reaction ($t = 0$). This initial rate is useful in many situations because as the reactants form products, these products can interfere with or inhibit the forward reaction. This is particularly true in biological systems, where a product may influence its own formation. For example, it can bind to a site on the enzyme that catalyzes the reaction. This type of interaction is common, and often inhibits the enzyme’s activity (a form of feedback regulation).
We can measure the initial rate for a reaction using different initial concentrations of reactants. Using an appropriate experimental design, we can figure out how the rate of the reaction varies with each reactant. For many common reactions, the relationship between the rate and the concentration is fairly straightforward. For example, in the reaction: $\left(\mathrm{CH}_{3}\right)_{3} \mathrm{CBr}+{ }^{-} \mathrm{OH}+\mathrm{Na}^{+} \rightleftarrows\left(\mathrm{CH}_{3}\right)_{3} \mathrm{COH}+\mathrm{Br}^{-}+\mathrm{Na}^{+}$
the rate is dependent only on the concentration of t-butyl bromide $\left[\left(\mathrm{CH}_{3}\right)_{3}\mathrm{CBr}\right]$, not on the concentration of the sodium ion $\left[\mathrm{Na}^{+}\right]$ or the hydroxide ion $\left[{}^{-}\mathrm{OH}\right]$. “But why only the t-butyl bromide?” you might well ask. We will get to that point shortly, because it gives us some very interesting and important insights into the reaction mechanism. First, let us delve into a bit more background.
Because the rate is directly proportional to the $\left[(\mathrm{CH}_{3})_{3}\mathrm{CBr}\right]$, we can write the relationship between rate and concentration as: $\text {rate } \propto \left[\left(\mathrm{CH}_{3}\right)_{3} \mathrm{CBr}\right]$, or we can put in a constant ($k$) to make the equation: $\text { rate }=k\left[\left(\mathrm{CH}_{3}\right)_{3} \mathrm{CBr}\right]$
We could also write $-\Delta\left[\left(\mathrm{CH}_{3}\right)_{3} \mathrm{CBr}\right] / \Delta \mathrm{t}=k\left[\left(\mathrm{CH}_{3}\right)_{3} \mathrm{CBr}\right] ,$
or if we let the time interval drop to zero, $-d\left[\left(\mathrm{CH}_{3}\right)_{3} \mathrm{CBr}\right] / dt=k\left[\left(\mathrm{CH}_{3}\right)_{3} \mathrm{CBr}\right] .$
In all these forms, the equation is known as the rate equation for the reaction. The rate equation must be experimentally determined. It is worth noting that you cannot write down the rate equation just by considering the reaction equation. (Obviously, in this case, ${}^{-}\mathrm{OH}$ and $\mathrm{Na}^{+}$ do not appear in the rate equation.) The constant ($k$) is known as the rate constant and is completely different from the equilibrium constant ($\mathrm{K}_{eq}$). The fact that they are designated by the same letter (one lower case, one upper case) is just one of those things we have to note and be careful not to confuse. A rate equation that contains only one concentration is called a first-order rate equation, and the units of the rate constant are 1/time.
Now, in contrast to the first-order reaction of t-butyl bromide and hydroxide, let us consider the reaction of methyl bromide with hydroxide:[8] $\mathrm{CH}_{3} \mathrm{Br}+{ }^{-} \mathrm{OH}+\mathrm{Na}^{+} \rightleftarrows \mathrm{CH}_{3} \mathrm{OH}+\mathrm{Br}^{-}+\mathrm{Na}^{+} .$
For all intents and purposes, this reaction appears to be exactly the same as the one discussed above. That is, the bromine that was bonded to a carbon has been replaced by the oxygen of hydroxide.[9] However, if we run the experiments, we find that the reaction rate depends on both the methyl bromide concentration $\left[\mathrm{CH}_{3}\mathrm{Br}\right]$ and on the hydroxide concentration $\left[{}^{-}\mathrm{OH}\right]$; the rate equation is $\text{rate} = k\left[\mathrm{CH}_{3} \mathrm{Br}\right]\left[^{-}\mathrm{OH}\right]$. How can this be? Why the difference? Well, the first thing it tells us is that something different is going on at the molecular level; the mechanisms of these reactions are different.
Reactions that depend on the concentrations of two different reactants are called second-order reactions, and the units of $k$ are different (you can figure out what they are by dimensional analysis). In general: $\begin{aligned} \text{rate} &= k[\mathrm{A}] && \text{first order} \\ \text{rate} &= k[\mathrm{A}][\mathrm{B}] && \text{second order (first order in } \mathrm{A} \text{ and first order in } \mathrm{B}\text{)} \\ \text{rate} &= k[\mathrm{A}]^{2} && \text{second order (in } \mathrm{A}\text{)} \\ \text{rate} &= k[\mathrm{A}]^{2}[\mathrm{B}] && \text{third order (second order in } \mathrm{A} \text{ and first order in } \mathrm{B}\text{)} \end{aligned}$
There are a number of methods for determining the rate equation for a reaction. Here we will consider just two. One method is known as the method of initial rates. The initial rate of the reaction is determined for various different starting concentrations of reactants. Clearly, the experimental design is of paramount importance here. Let us say you are investigating our reaction $\mathrm{A}_{2} + \mathrm{~B}_{2} \rightleftarrows 2\mathrm{AB}$. The rate may depend on $[\mathrm{A}_{2}]$ and/or $[\mathrm{B}_{2}]$. Therefore, the initial concentrations of $\mathrm{A}_{2}$ and $\mathrm{B}_{2}$ must be carefully controlled. If $[\mathrm{A}_{2}]$ is changed in a reaction trial, then $[\mathrm{B}_{2}]$ must be held constant, and vice versa (you cannot change both concentrations at the same time because you would not know how each one affects the rate); a sketch of the resulting arithmetic appears below.
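Here is a minimal sketch of that arithmetic, using made-up trial data for a generic reaction. The numbers are invented to be consistent with a rate law of the form rate $= k[\mathrm{A}][\mathrm{B}]^{2}$, just to show how the orders fall out of the comparison.

```python
import math

# Hypothetical initial-rate trials for A + B -> products; the numbers
# are invented to be consistent with rate = k[A][B]^2.
trials = [
    # ([A]0 in M, [B]0 in M, initial rate in M/s)
    (0.10, 0.10, 2.0e-4),
    (0.20, 0.10, 4.0e-4),   # [A] doubled, [B] held constant
    (0.10, 0.20, 8.0e-4),   # [B] doubled, [A] held constant
]

# Order in A: compare trials 1 and 2, where only [A] changed.
order_A = math.log(trials[1][2] / trials[0][2]) / math.log(trials[1][0] / trials[0][0])
# Order in B: compare trials 1 and 3, where only [B] changed.
order_B = math.log(trials[2][2] / trials[0][2]) / math.log(trials[2][1] / trials[0][1])
print(f"order in A ~ {order_A:.1f}, order in B ~ {order_B:.1f}")

# With the orders in hand, any one trial gives the rate constant.
a0, b0, rate0 = trials[0]
k = rate0 / (a0 ** round(order_A) * b0 ** round(order_B))
print(f"k ~ {k:.2g} (units depend on the overall order)")
```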
The method of initial rates requires running the experiment multiple times using different starting concentrations. By contrast, the graphical method involves determining the rate equation from only one run of the reaction. This method requires the collection of a set of concentration versus time data (the same data that you would collect to determine the rates). Ideally we would like to manipulate the data so that we can obtain a linear equation ($y = mx + b$). For example, if we have a set of $[\mathrm{A}]$ versus time data for a reaction, and we assume the reaction is first order in $\mathrm{A}$, then we can write the rate equation as: $-d[\mathrm{A}] / dt=k[\mathrm{A}]$.
Now, we can separate the variables $[\mathrm{A}]$ and $t$ to get $-d[\mathrm{A}] / [\mathrm{A}] = k\,dt$. We can then integrate the equation over the time period $t = 0$ to $t = t$ to arrive at: $\ln [\mathrm{A}]_{t}=-kt+\ln [\mathrm{A}]_{0} .$
You will notice that this equation has the form of a straight line; if we plot our data ($\ln [\mathrm{A}]$ versus $t$) and if the reaction is first order in $[\mathrm{A}]$, then we should get a straight line, where the slope of the line is $–k$. We can do a similar analysis for a reaction that might be second order in $[\mathrm{A}]$: $\text{rate } = k[\mathrm{A}]^{2} .$
In this case, we can manipulate the rate equation and integrate to give the equation: $1 /[\mathrm{A}]_{t}=kt+1 /[\mathrm{A}]_{0}$
Therefore, plotting $1/[\mathrm{A}]$ versus $t$ would give a straight line, with a slope of $k$, the rate constant. This method of analysis quickly becomes too complex for reactions with more than one reactant (in other words, reactions with rates that depend on both $[\mathrm{A}]$ and $[\mathrm{B}]$), but you can look forward to that in your later studies!
| Order | Rate Law | Integrated Rate Law | Graph for Straight Line | Slope of Line |
| --- | --- | --- | --- | --- |
| 0 | $\text{rate } = k$ | $[\mathrm{A}]_{t} =-kt + [\mathrm{A}]_{0}$ | $[\mathrm{A}]$ vs. $t$ | $–k$ |
| 1 | $\text{rate } = k[\mathrm{A}]$ | $\ln [\mathrm{A}]_{t}=-kt+\ln [\mathrm{A}]_{0}$ | $\ln [\mathrm{A}]$ vs. $t$ | $–k$ |
| 2 | $\text{rate } = k[\mathrm{A}]^{2}$ | $1 /[\mathrm{A}]_{t}=kt+1 /[\mathrm{A}]_{0}$ | $1/[\mathrm{A}]$ vs. $t$ | $k$ |
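Here is a minimal sketch of the graphical method for a reaction assumed to be first order in $\mathrm{A}$. The data are synthetic (generated from an invented $k$), but the fitting step is exactly what you would do with real concentration-versus-time measurements.

```python
import math

# Synthetic concentration-versus-time data for a reaction assumed to be
# first order in A, generated here from an invented k = 0.25 /s.
k_true = 0.25
times = [0, 1, 2, 4, 6, 8, 10]                        # s
conc = [1.0 * math.exp(-k_true * t) for t in times]   # [A] in M

# If the reaction really is first order, ln[A] vs t is a straight line
# with slope -k. A simple least-squares fit of y = ln[A] against x = t:
xs, ys = times, [math.log(c) for c in conc]
n = len(xs)
x_mean, y_mean = sum(xs) / n, sum(ys) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
         / sum((x - x_mean) ** 2 for x in xs))
print(f"slope = {slope:.3f} /s, so k = {-slope:.3f} /s")
```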
The two approaches (multiple runs with different initial conditions and the graphical method finding the best line to fit the data) provide us with the rate law. The question is, what does the rate law tell us about the mechanism? We will return to this question at the end of this chapter.
Questions
Questions to Answer
• It turns out that most simple reactions are first or second order. Can you think why?
• Design an experiment to determine the rate equation for a reaction $2\mathrm{A} + \mathrm{B} \rightleftarrows \mathrm{C}$. Using the method of initial rates and a first experimental run using $0.1-\mathrm{M}$ concentrations of all the reactants, outline the other sets of conditions you would use to figure out what that rate equation is.
• What is the minimum number of runs of the reaction that you would have to do?
• How would you determine the rate for each of your sets of conditions?
• Now imagine you have determined that this reaction $2\mathrm{A} + \mathrm{B} \rightleftarrows \mathrm{C}$ does not depend on $[\mathrm{B}]$. Outline a graphical method you could use to determine the rate equation. What data would you have to collect? What would you do with it?
Questions for Later
• Why do you think it is that we cannot just write the rate equation from the reaction equation?
• Why do you think that the most common rate equations are second order?
Temperature and Reaction Rates
Temperature is another important factor when we consider reaction rates. This makes sense if you remember that the vast majority of reactions involve collisions and that the effects of collisions are influenced by how fast the colliding objects are moving. We know intuitively that heating things up tends to make things happen faster. For example, if you want something to cook faster you heat it to a higher temperature (and cooking, as we know, is just a series of chemical reactions). Why is this so? Consider the reaction of hydrogen and oxygen, discussed in Chapter $7$: it is a highly exothermic reaction—explosive, in fact. Yet a mixture of hydrogen and oxygen is quite stable unless energy is supplied, either by heating or a spark of electricity. The same is true of wood and molecular oxygen. The question is: What is the initial spark of energy being used for?
The answer lies within one of the principles that we have returned to over and over again: When atoms form bonds, the result is a more stable system, compared to the energy of non-bonded atoms. But not all bonds are equally stable; some are more stable than others. Nevertheless, energy is always required to disrupt a bond—any bond. If a reaction is to take place, then at least one of the bonds present in the reactants must be broken, and this requires energy.
Imagine two reactants approaching each other. As the reaction starts to occur, the first thing that happens is that at least one bond in a reactant molecule must start to break. It is the initial, partial-bond-breaking step that requires an input of energy from the molecule’s surroundings, and the amount of energy required and available will determine if the reaction occurs. If the amount of energy in the environment is not enough to begin the breaking of bonds in the reactants (for example, in the burning of wood, large amounts of energy are required for the initial bond breaking), then the reaction will not occur without an energy “push”. Wood does not just burst into flames (at least at standard temperatures)—and neither do humans.[11] The burning wood reaction, $\text{wood } + \mathrm{O}_{2} \rightleftarrows \mathrm{H}_{2}\mathrm{O} + \mathrm{CO}_{2}$, does not occur under normal conditions, but if the temperature increases enough, the reaction starts. Once the reaction starts, however, the energy released from the formation of new bonds is sufficient to raise the local temperature and lead to the breaking of more bonds, the formation of new ones, and the release of more energy. As long as there is wood and oxygen available, the system behaves as a positive and self-sustaining feedback loop. The reaction will stop if one of the reactants becomes used up or the temperature is lowered.
It is the activation energy associated with reactions that is responsible for the stability of our world. For example, we live in an atmosphere of $\sim 20 \%$ oxygen ($\mathrm{O}_{2}$). There are many molecules in our bodies and in our environment that can react with $\mathrm{O}_{2}$. If there were no energy barriers to combustion (i.e., reaction with $\mathrm{O}_{2}$), we would burst into flames. Sadly, as Salem witches and others would have attested (if they could have), raise the temperature and we do burn. And once we start burning, it is hard to stop the reaction. As we have said before, combustion reactions are exothermic. Once they have produced enough thermal energy, the reaction doesn’t need that spark any more. But initiating the reaction does require an input of energy, such as a spark, a flame, or (for explosions) a detonator.
If we plot energy versus the progress of the reaction, we can get a picture of the energy changes that go on during the reaction. Remember that the reaction coordinate on the x-axis is not time; we have seen that reactions go backwards and forwards all the time. For a simple one-step reaction as shown in the figure, the highest point on the energy profile is called the transition state. It is not a stable entity and only exists on the timescale of molecular vibrations (femtoseconds). The energy change between the reactants and the transition state is called the activation energy. This is the energy that must be supplied to the reactants before the reaction can occur. This activation energy barrier is why, for example, we can mix hydrogen and oxygen and they will not explode until we supply a spark, and why we can pump gasoline in an atmosphere that contains oxygen, even though we know that gasoline and oxygen can also explode. The amount of energy that must be supplied to bring about a reaction is a function of the type of reaction: some reactions (such as acid–base reactions) have low activation energies and correspondingly high rates, and some (such as rusting) have high activation energies and low rates.
Now it should be easier to understand how increasing temperature increases the reaction rate—by increasing the average kinetic energy of the molecules in the environment. Recall that even though individual molecules have different kinetic energies, all of the different populations of molecules in a system have the same average kinetic energy. If we consider the effect of temperature on the Maxwell–Boltzmann distribution of kinetic energies, we see right away that at higher temperatures there are relatively more molecules with higher kinetic energy. Collisions between these high-energy molecules provide the energy needed to overcome the activation energy barrier, that is, the minimum energy required to start a chemical reaction. As the temperature rises, the probability of productive collisions between particles per unit time increases, thus increasing the reaction rate. At the same time, it is possible that raising the temperature will allow other reactions to occur (perhaps reactions we have not been considering). This is particularly likely if we are dealing with complex mixtures of different types of molecules.
The rate equation does not appear to contain a term for temperature, and typically we have to specify the temperature at which the rate is measured. However, because the rate changes with temperature, it must be the rate constant that changes. Sure enough, it has been determined experimentally that the rate constant $k$ can be described by the equation $k=\mathrm{A}e^{-\mathrm{E}_{a} / \mathrm{RT}} ,$
where $k$ is the rate constant, $\mathrm{E}_{a}$ is the activation energy, $\mathrm{T}$ is the temperature, and $\mathrm{R}$ and $\mathrm{A}$ are constants.[12] This is known as the Arrhenius equation. As you can see, $k$ increases as the temperature increases, and decreases as the activation energy $\mathrm{E}_{a}$ increases. The constant $\mathrm{A}$ is sometimes called the frequency factor and has to do with the collision rate. $\mathrm{A}$ changes depending on the specific type of reaction (unlike $\mathrm{R}$, the gas constant, which does not change from reaction to reaction). One way of thinking about the rate constant is to consider it as a representation of the probability that a collision will lead to products: the larger the rate constant, the more frequently productive collisions occur and the faster the reaction.
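Because the frequency factor $\mathrm{A}$ cancels when we take a ratio, the Arrhenius equation alone can estimate how much a temperature change speeds up a reaction. A short sketch, assuming a fairly typical activation energy of $50 \mathrm{~kJ/mol}$ (our choice, purely for illustration):

```python
import math

R = 8.314       # gas constant, J/(mol K)
Ea = 50_000     # an assumed, fairly typical activation energy, J/mol

def k_ratio(T1, T2):
    """Ratio k(T2)/k(T1) from the Arrhenius equation; A cancels out."""
    return math.exp(-Ea / (R * T2)) / math.exp(-Ea / (R * T1))

# Warming from 25 C (298 K) to 35 C (308 K):
print(f"k(308 K)/k(298 K) ~ {k_ratio(298, 308):.2f}")  # ~1.9: nearly doubled
```

Notice that this simple estimate reproduces the familiar rule of thumb that a $10^{\circ}\mathrm{C}$ rise roughly doubles the rate of many reactions.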
The activation energy for a reaction also depends upon the type of reaction that is occurring. For example, a Brønsted–Lowry acid–base reaction has a very low activation energy barrier. In these reactions the only thing that is happening is that a proton is being transferred from one electronegative element to another: $\mathrm{H-Cl}+\mathrm{H-O-H} \rightleftarrows \mathrm{Cl}^{-}+\mathrm{H}_{3} \mathrm{O}^{+}$
(draw this out to better see what is happening).
The reaction is rapid because the $\mathrm{Cl—H}$ bond is highly polarized and weak. In a sense, it is already partially broken. Also, these reactions usually take place in water, which interacts with and stabilizes the growing charges. Low-energy collisions with water molecules are sufficient to finish breaking the $\mathrm{Cl—H}$ bond. We say that acid–base reactions like this are kinetically controlled because they occur upon mixing and do not require heating up or extra energy to proceed. Essentially all collisions involving the $\mathrm{HCl}$ molecule provide sufficient energy to break the $\mathrm{H—Cl}$ bond. This is also true for almost all proton-transfer reactions. However, for most other types of reactions, simply mixing the reactants is not enough. Energy must be supplied to the system to overcome this energy barrier, or we have to wait a long time for the reaction to occur. In fact, most organic reactions (those in which carbon is involved) are quite slow. Why the difference? The answer should be reasonably obvious. There is simply not enough energy in the vast majority of the collisions between molecules to break a $\mathrm{C—H}$, $\mathrm{C—C}$, $\mathrm{C—N}$, or $\mathrm{C—O}$ bond. If you take organic chemistry lab, you will discover that large portions of time are spent waiting as solutions are heated to make reactions happen faster. As we mentioned before, this is quite fortunate, since we are (basically) organized, by chance and natural selection, from collections of organic reactions. If these reactions occurred spontaneously and rapidly, we would fall apart and approach equilibrium (and equilibrium for living things means death!). You may already see the potential problem in all of this: it is generally not advisable to heat up a biological system, but we certainly need biological systems to undergo reactions. Biological systems need different reactions to proceed in different places and at different rates, without being heated up. For this, biological systems (and many other types of systems) use a wide range of catalysts, the topic of our next section.
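A rough way to see why proton transfers are fast while carbon chemistry is slow is to compare Boltzmann factors, $e^{-\mathrm{E}_{a}/\mathrm{RT}}$, which estimate the fraction of collisions carrying at least the activation energy. The barrier heights below are illustrative guesses, and the factor ignores orientation and collision-frequency effects, but the contrast is striking:

```python
import math

R, T = 8.314, 298  # gas constant J/(mol K); room temperature, K

# Illustrative (assumed) barrier heights, in kJ/mol:
barriers = [("proton transfer (low barrier)", 5),
            ("breaking a C-C or C-H bond (high barrier)", 100)]

for name, Ea_kJ in barriers:
    fraction = math.exp(-Ea_kJ * 1000 / (R * T))
    print(f"{name}: Ea = {Ea_kJ} kJ/mol, fraction ~ {fraction:.1e}")
```

With these assumed barriers, roughly one collision in ten is productive for the proton transfer, but only about one in $10^{18}$ for the carbon bond, which is why the latter effectively never happens at room temperature.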
Questions
Questions to Answer:
• When a reaction releases energy, where does the energy come from?
• There is a rule of thumb that increasing the temperature by $10^{\circ}\mathrm{C}$ will double the rate for many reactions. Why might this be?
• What factor in the Arrhenius equation changes as the temperature changes?
• Explain why the reaction rate increases when the temperature increases.
A catalyst provides an alternate pathway for a reaction to occur. More importantly, this pathway usually involves a lower activation energy than the uncatalyzed pathway, as shown in the graph. This means that the rate of the reaction can increase. It can do so because at a given temperature, collisions with enough energy to overcome the new lower activation energy barrier occur more frequently. But because the catalyst is neither a reactant nor a product of the reaction, it does not influence the reaction’s overall energy change. In biological systems, there are protein and RNA-based catalysts (enzymes and ribozymes); in non-living systems, minerals and metals often act as catalysts. Even simple species such as protons can be considered catalysts. Anything that is unchanged at the start and at the end of the reaction can be considered a catalyst. There are many different mechanisms through which catalysts can act. Biological catalysts are generally very selective in terms of the reactions they catalyze and very effective in speeding reactions up. It is not uncommon for the rate of a catalyzed reaction to be millions of times faster than the uncatalyzed reaction. In a complex reaction system, speeding up one reaction at the expense of others can have profound effects. However, there are also many examples where enzymes catalyze “off-target” reactions of the same or different types (although these reactions are generally accelerated to a much lesser extent). This ability to catalyze a range of reactions occurs because the surfaces of enzyme molecules are complex and often accommodate and bind a range of molecules. In other words, they are promiscuous.[13] A common analogy pictures the enzyme as a lock and the reactant molecule as its unique key, but this is far too simplistic. In reality, there are many molecules that can bind to a specific active site in an enzyme with greatly varying affinities. Although the mode of action of enzymes varies, in many cases the active site holds the two reactive molecules in close juxtaposition, which can speed their reaction. Can you imagine why?[14]
An organic chemical reaction that requires a catalyst is the addition of hydrogens across a $\mathrm{C=C}$ bond. Without the catalyst, this reaction would not occur on a human timescale. It is an important reaction in many pharmaceutical syntheses and in the production of fats (solids) from oils (liquids). For example, margarine is produced by adding hydrogen to the $\mathrm{C=C}$ bonds of oils extracted from plants, as shown in the figure. The removal of the $\mathrm{C=C}$ bond makes the molecules pack better together. This is because London dispersion forces can now act upon the whole length of the molecule, increasing the strength of the van der Waals interactions between the molecules. Thus, the hydrogenated oil is a solid at room temperature. The catalyst is usually a transition metal, palladium ($\mathrm{Pd}$) or platinum ($\mathrm{Pt}$), finely divided and adsorbed onto the surface of an inert substance like charcoal (carbon), as shown in the figure. The transition metal has empty $\mathrm{d}$ orbitals that interact with the $\mathrm{C=C}$ bond’s pi orbital, destabilizing the pi bond and making it more susceptible to reaction. $\mathrm{H}_{2}$ molecules also adsorb onto (interact with) the surface of the transition metal, and the hydrogen atoms then add to the carbons of the destabilized double bond, forming a fully-hydrogenated fat. Unfortunately, in many cases the hydrogen does not add across the double bond. Instead, the bond isomerizes from cis to trans, forming the unnatural trans isomer, which has been implicated in the development of heart disease.[15]
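If we assume a catalyst works purely by lowering the activation energy (leaving the frequency factor alone), the Arrhenius equation gives a rough estimate of the resulting speed-up. Both barrier values below are invented for illustration:

```python
import math

R, T = 8.314, 298   # gas constant J/(mol K); temperature, K

Ea_uncat = 120_000  # assumed uncatalyzed barrier, J/mol
Ea_cat = 80_000     # assumed catalyzed barrier, J/mol

# Ratio of catalyzed to uncatalyzed rate constants (A cancels out):
enhancement = math.exp((Ea_uncat - Ea_cat) / (R * T))
print(f"rate(catalyzed)/rate(uncatalyzed) ~ {enhancement:.1e}")  # ~1e7
```

On these assumptions, shaving $40 \mathrm{~kJ/mol}$ off the barrier buys roughly a ten-million-fold rate increase, which is why catalysts can make an otherwise imperceptibly slow reaction practical.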
Questions
Questions to Answer
• Draw a representation of an enzyme active site. What kinds of interactions do you think hold the substrate molecule in the active site?
• Why do you think binding two reactants in close proximity will increase the reaction rate?
Now that we have a good idea about the factors that affect how fast a reaction goes, let us return to a discussion of what factors affect how far a reaction goes. As previously discussed, a reaction reaches equilibrium when the rate of the forward reaction equals the rate of the reverse reaction, so the concentrations of reactants and products do not change over time. The equilibrium state of a particular reaction is characterized by what is known as the equilibrium constant, $\mathrm{K}_{eq}$.
We can generalize this relationship for a general reaction: $n\mathrm{A}+m\mathrm{B} \rightleftarrows o\mathrm{C}+p\mathrm{D} ,$ for which the equilibrium constant is written $\mathrm{K}_{eq}=[\mathrm{C}]^{o}[\mathrm{D}]^{p} / \left([\mathrm{A}]^{n}[\mathrm{B}]^{m}\right) .$
Note that each concentration is raised to the power of its coefficient in the balanced reaction. By convention, the constant is always written with the products in the numerator and the reactants in the denominator. So large values of $\mathrm{K}_{eq}$ indicate that, at equilibrium, the reaction mixture has more products than reactants. Conversely, a small value of $\mathrm{K}_{eq}$ (typically <1, depending on the form of $\mathrm{K}_{eq}$) indicates that there are fewer products than reactants in the mixture at equilibrium. The expression for $\mathrm{K}_{eq}$ depends on how you write the direction of the reaction. You can work out for yourself that $\mathrm{K}_{eq} (\text{forward})= 1/\mathrm{K}_{eq}(\text{reverse})$. One other thing to note is that if a pure liquid or solid participates in the reaction, it is omitted from the equilibrium expression for $\mathrm{K}_{eq}$. This makes sense because the concentration of a pure solid or liquid is constant (at constant temperature). The equilibrium constant for any reaction at a particular temperature is a constant. This means that you can add reactants or products and the constant does not change.[16] You cannot, however, change the temperature, because that will change the equilibrium constant, as we will see shortly. The implications of this are quite profound. For example, if you add or take away products or reactants from a reaction, the amounts of reactants or products will change so that the reaction reaches equilibrium again—with the same value of $\mathrm{K}_{eq}$. And because we know (or can look up and calculate) what the equilibrium constant is, we are able to figure out exactly what the system will do to reassert the equilibrium condition.
Let us return to the reaction of acetic acid and water: $\mathrm{AcOH}+\mathrm{H}_{2} \mathrm{O} \rightleftarrows \mathrm{H}_{3} \mathrm{O}^{+}+\mathrm{AcO}^{-} ,$
we can figure out that the equilibrium constant would be written as: $\mathrm{K}_{eq}=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{AcO}^{-}\right] /[\mathrm{AcOH}] .$
The $\mathrm{H}_{2}\mathrm{O}$ term in the reactants can be omitted even though it participates in the reaction, because it is a pure liquid and its concentration does not change appreciably during the reaction. (Can you calculate the concentration of pure water?) We already know that a $0.10-\mathrm{M}$ solution of $\mathrm{AcOH}$ has a $\mathrm{pH}$ of $2.9$, so we can use this experimentally-determined data to calculate the equilibrium constant for a solution of acetic acid. A helpful way to think about this is to set up a table in which you note the concentrations of all species before and after equilibrium.
|  | $[\mathrm{AcOH}] \mathrm{~(M)}$ | $\left[\mathrm{H}_{3} \mathrm{O}^{+}\right] \mathrm{~(M)}$ | $\left[\mathrm{AcO}^{-}\right] \mathrm{~(M)}$ |
| --- | --- | --- | --- |
| Initial concentration | $0.10$ | $1 \times 10^{-7}$ (from water) | $0$ |
| Change in concentration (equal to the amount of $\mathrm{AcOH}$ that ionized, which can be calculated from the $\mathrm{pH}$) | $-1.3 \times 10^{-3}$ (the $\mathrm{AcOH}$ must decrease by the same amount that the $\mathrm{H}_{3}\mathrm{O}^{+}$ increases) | $10^{-\mathrm{pH}} = +1.3 \times 10^{-3}$ | $+1.3 \times 10^{-3}$ (the same amount of acetate must be produced as $\mathrm{H}_{3}\mathrm{O}^{+}$) |
| Final (equilibrium) concentration | $0.10 - 1.3 \times 10^{-3} \sim 0.10$ | $(1.3 \times 10^{-3}) + (1 \times 10^{-7}) \sim 1.3 \times 10^{-3}$ | $1.3 \times 10^{-3}$ |
You can also include the change in concentration as the system moves to the equilibrium state: $\mathrm{AcOH}+\mathrm{H}_{2} \mathrm{O} \rightleftarrows \mathrm{H}_{3} \mathrm{O}^{+}+\mathrm{AcO}^{-}$. Using the data from this type of analysis, we can calculate the equilibrium constant: $\mathrm{K}_{e q}=\left(1.3 \times 10^{-3}\right)^{2} / 0.1$, which indicates that $\mathrm{K}_{eq}$ for this reaction equals $1.8 \times 10^{-5}$. Note that we do not use a large number of significant figures to calculate $\mathrm{K}_{eq}$, because the approximations we are making do not justify greater precision. In addition, note that $\mathrm{K}_{eq}$ itself does not have units associated with it.
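The whole calculation above fits in a few lines. A minimal sketch (the small discrepancy from $1.8 \times 10^{-5}$ comes from using the rounded $\mathrm{pH}$ of $2.9$):

```python
# Keq for AcOH + H2O <=> H3O+ + AcO-, from a pH measurement of a
# 0.10 M acetic acid solution (mirroring the table above).
c0 = 0.10            # initial [AcOH], M
pH = 2.9             # measured

h3o = 10 ** (-pH)    # [H3O+] at equilibrium
aco = h3o            # one acetate is formed per hydronium ion
acoh = c0 - h3o      # acetic acid remaining

Keq = (h3o * aco) / acoh
print(f"[H3O+] = {h3o:.2e} M, Keq ~ {Keq:.1e}")   # ~1.6e-5 with pH = 2.9
```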
Free Energies and Equilibrium Constants
Now we can calculate the equilibrium constant $\mathrm{K}_{eq}$, assuming that we can measure or calculate the concentrations of reactants and products at equilibrium. All well and good, but is this simply an empirical measurement? It was certainly discovered empirically and has proven to be applicable to huge numbers of reactant systems. It just does not seem very satisfying to say this is the way things are without an explanation for why the equilibrium constant is constant. How does it relate to molecular structure? What determines the equilibrium constant? What is the driving force that moves a reaction towards equilibrium and then inhibits any further progress towards products?
You will remember (we hope) that it is the second law of thermodynamics that tells us about the probability of a process occurring. The criterion for a reaction proceeding is that the total entropy of the universe must increase. We also learned that we can substitute the Gibbs free energy change ($\Delta \mathrm{G}$) for the entropy change of the universe, and that $\Delta \mathrm{G}$ is much easier to relate to and calculate because it only pertains to the system. So it should not be a surprise to you that there is a relationship between the drive towards equilibrium and the Gibbs free energy change in a reaction. We have already seen that a large, negative Gibbs free energy change (from reactants to products) indicates that a process will occur (or be spontaneous, in thermodynamic terms[17]), just as a large, positive equilibrium constant means that the reaction mixture will contain mostly products at equilibrium.
Think about it this way: the position of equilibrium is where the maximum entropy change of the universe is found. On either side of this position, the entropy change is negative and therefore the reaction is unlikely. If we plot the extent of the reaction versus the dispersion of energy (in the universe) or the free energy, as shown in the graph, we can better see what is meant by this. At equilibrium, the system sits at the bottom of an energy well (or at least a local energy minimum) where a move in either direction will lead to an increase in Gibbs energy (and a corresponding decrease in entropy). Remember that even though at the macroscopic level the system seems to be at rest, at the molecular level reactions are still occurring. At equilibrium, the difference in Gibbs free energy, $\Delta \mathrm{G}$, between the reactants and products is zero. It bears repeating: the criterion for chemical equilibrium is that $\Delta \mathrm{G} = 0$ for the reactants $\rightleftarrows$ products reaction. This is also true for any phase change. For example, at $100 { }^{\circ}\mathrm{C}$ and 1 atmosphere pressure, the difference in free energy for $\mathrm{H}_{2}\mathrm{O}(g)$ and $\mathrm{H}_{2}\mathrm{O}(l)$ is zero. Because any system will naturally tend to this equilibrium condition, a system away from equilibrium can be harnessed to do work to drive some other non-favorable reaction or system away from equilibrium. On the other hand, a system at equilibrium cannot do work, as we will examine in greater detail.
The relationship between the standard free energy change and the equilibrium constant is given by the equation $\Delta \mathrm{G}^{\circ} = - \mathrm{RT} \ln \mathrm{K}_{eq} ,$ which can be rearranged as $\ln \mathrm{K}_{eq}=-\Delta \mathrm{G}^{\circ} / \mathrm{RT} \text { or } \mathrm{K}_{eq}=e^{-\Delta \mathrm{G}^{\circ} / \mathrm{RT}} .$
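A minimal sketch of this conversion in both directions, assuming $298 \mathrm{~K}$ and using the acetic acid $\mathrm{K}_{eq}$ calculated earlier:

```python
import math

R, T = 8.314, 298  # gas constant J/(mol K); temperature, K

def dG0_from_Keq(Keq):
    """dG0 = -RT ln(Keq), in J/mol."""
    return -R * T * math.log(Keq)

def Keq_from_dG0(dG0):
    """Keq = exp(-dG0/RT), with dG0 in J/mol."""
    return math.exp(-dG0 / (R * T))

dG0 = dG0_from_Keq(1.8e-5)  # acetic acid ionization
print(f"dG0 ~ {dG0 / 1000:.1f} kJ/mol (positive: reactants favored)")
print(f"round trip: Keq ~ {Keq_from_dG0(dG0):.1e}")
```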
As we saw earlier, the superscript ${}^{\circ}$ refers to thermodynamic quantities that are measured and calculated at standard states. In this case, $\Delta \mathrm{G}^{\circ}$ refers to 1 atmosphere pressure, $298 \mathrm{~K}$, and (critical for our present discussion) $1 \mathrm{~M}$ concentrations for both reactants and products. That is, $\Delta \mathrm{G}^{\circ}$ tells you about the free energy change if all the substances in the reacting system were mixed with initial concentrations of $1.0 \mathrm{~M}$. It allows us to calculate equilibrium constants from tables of free energy values (see Chapter $9$). Of course, this is a rather artificial situation and you might be tempted to think that $\Delta \mathrm{G}^{\circ}$ is not very useful in the real world, where initial concentrations of both reactants and products are rarely $1.0 \mathrm{~M}$. But no, $\Delta \mathrm{G}^{\circ}$ does tell us something useful: it tells us which way a reaction will proceed under these starting conditions. If we have a specific set of conditions, we can use $\Delta \mathrm{G}^{\circ}$ to calculate the actual free energy change $\Delta \mathrm{G}$, where $\Delta \mathrm{G} = \Delta \mathrm{G}^{\circ} + \mathrm{RT} \ln \mathrm{Q} .$ In this equation, the variable $\mathrm{Q}$ is called the reaction quotient. It has the same form as $\mathrm{K}_{eq}$ ($[\text{products}]/[\text{reactants}]$), except that the concentrations are not $1.0 \mathrm{~M}$. Rather, they are the actual concentrations at the point in the reaction that we are interested in. The sign and magnitude of $\Delta \mathrm{G}$ then will tell us which way the reaction will proceed and how far in that direction it will go. The differences between $\mathrm{Q}$ and $\mathrm{K}_{eq}$, and between $\Delta \mathrm{G}$ and $\Delta \mathrm{G}^{\circ}$, are important to keep in mind. It is easy to get mixed up and apply them incorrectly. $\mathrm{Q}$ and $\Delta \mathrm{G}$ relate to non-equilibrium systems, whereas $\mathrm{K}_{eq}$ and $\Delta \mathrm{G}^{\circ}$ tell us about the equilibrium state itself. At equilibrium, $\mathrm{Q} = \mathrm{K}_{eq}$ and $\Delta \mathrm{G} = 0$, so that the equation $\Delta \mathrm{G} = \Delta \mathrm{G}^{\circ} +\mathrm{RT} \ln \mathrm{Q}$ becomes $\Delta \mathrm{G}^{\circ} = – \mathrm{RT} \ln \mathrm{K}_{eq}$. Note that $\mathrm{K}_{eq}$ and $\Delta \mathrm{G}^{\circ}$ are constant for a given reaction at a given temperature, but $\mathrm{Q}$ and $\Delta \mathrm{G}$ are not; their values vary according to the reaction conditions. In fact, by using $\mathrm{Q}$ and/or $\Delta \mathrm{G}$, we can predict how a system will behave under a specific condition as it moves towards the highest entropy state (where $\Delta \mathrm{G} = 0$).

Equilibrium and Non-Equilibrium States

Let us look at a chemical system macroscopically. If we consider a reaction system that begins to change when the reactants are mixed (that is, it occurs spontaneously), we will eventually see that the change slows down and then stops. It would not be unreasonable to think that the system is static and assume that the molecules in the system are stable and no longer reacting. However, as we discussed earlier, at the molecular level we see that the system is still changing and the molecules of reactants and products are still reacting in both the forward and reverse reactions.
In the case of our acetic acid example, there are still molecules of acetic acid ($\mathrm{AcOH}$), acetate ($\mathrm{AcO}^{-}$), and hydronium ion ($\mathrm{H}_{3}\mathrm{O}^{+}$) colliding with solvent water molecules and each other. Some of these collisions will have enough energy to be productive; molecules of acetic acid will transfer protons to water, and the reverse reaction will also occur. What has changed is that the rate of acetate ($\mathrm{AcO}^{-}$) and hydronium ion ($\mathrm{H}_{3}\mathrm{O}^{+}$) formation is equal and opposite to the rate of acetic acid deprotonation (transfer of the proton to water). Although there is no net change at the macroscopic level, things are happening at the molecular level. Bonds are breaking and forming. This is the dynamic equilibrium we discussed earlier.

Now what happens when we disturb the system? At equilibrium, the acetic acid–water system contains acetic acid ($\mathrm{AcOH}$), protons ($\mathrm{H}_{3}\mathrm{O}^{+}$), and acetate ion ($\mathrm{AcO}^{-}$). We know that a $0.10-\mathrm{M}$ solution of acetic acid has concentrations of $\left[\mathrm{H}_{3}\mathrm{O}^{+}\right] = \left[\mathrm{AcO}^{-}\right] = 1.3 \times 10^{-3} \mathrm{~M}$. What happens if we add enough acetate[18] to make the acetate concentration $0.10 \mathrm{~M}$? One way to think about this new situation is to consider the probabilities of the forward and backward reactions. If we add more product (acetate), the rate of the backward reaction must increase (because there are more acetate ions around to collide with). Note that to do this, the acetate must react with the hydronium ion, so we predict that the $\left[\mathrm{H}_{3}\mathrm{O}^{+}\right]$ will decrease and the acetic acid will increase. But as we saw previously, as soon as more acetic acid is formed, the probability of the forward reaction increases, and a new equilibrium position is established where the rate of the forward reaction equals the rate of the backward reaction. Using this argument, we might expect that at the new equilibrium position there will be more acetic acid, more acetate, and less hydronium ion than there was originally. We predict that the position of equilibrium will shift backwards, towards acetic acid. This probability argument gives us an idea of what will happen when a reaction at equilibrium is disturbed, but it doesn’t tell us exactly where the system will restabilize. For that we have to look at $\mathrm{Q}$ and $\mathrm{K}_{eq}$. If we take the new initial reaction conditions ($0.10 \mathrm{~M AcOH}$, $0.10 \mathrm{~M AcO}^{-}$, and $1.3 \times 10^{-3}\mathrm{~M H}_{3}\mathrm{O}^{+}$), we can calculate $\mathrm{Q}$ and compare it to $\mathrm{K}_{eq}$: $\mathrm{Q}=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{AcO}^{-}\right] /[\mathrm{AcOH}]=\left(1.3 \times 10^{-3}\right)(0.1) /(0.1)$
This generates a value for $\mathrm{Q}$ as $1.3 \times 10^{-3}$. Now, if we compare $\mathrm{Q}$ and $\mathrm{K}_{eq}$, we see that $\mathrm{Q}$ is larger than $\mathrm{K}_{eq}$ ($1.3 \times 10^{-3} > 1.8 \times 10^{-5}$). To re-establish equilibrium, the system will have to shift so that $\mathrm{Q}$ becomes smaller or equal to $\mathrm{K}_{eq}$ (at which point $\Delta \mathrm{G} = 0$). To do this, the numerator (products) must decrease, while the denominator (reactants) must increase.[19] In other words, the reaction must go backwards in order to reestablish an equilibrium state. This approach leads us to the same conclusion as our earlier probability argument.
If we recalculate the $\left[\mathrm{H}_{3}\mathrm{O}^{+}\right]$ under the new equilibrium conditions (that is, $0.10 \mathrm{~M AcOH}$ and $0.10 \mathrm{~M}$ acetate), we find that it has decreased considerably from its initial value of $1.3 \times 10^{-3} \mathrm{~M}$, down to the new value of $1.8 \times 10^{-5} \mathrm{~M}$.[20] Using this to calculate the $\mathrm{pH}$, we discover that the addition of sodium acetate causes the $\mathrm{pH}$ to rise from $2.9$ to about $4.7$. This may not seem like much, but remember that each $\mathrm{pH}$ unit is a factor of 10, so this rise in $\mathrm{pH}$ actually indicates a drop in hydronium ion concentration of a bit less than a hundredfold. In order to regain the most stable situation, the system shifts to the left, thereby reducing the amount of product: $\mathrm{AcOH}+\mathrm{H}_{2} \mathrm{O} \rightleftarrows \mathrm{H}_{3} \mathrm{O}^{+}+\mathrm{AcO}^{-}$
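The same bookkeeping, as a short sketch: compute $\mathrm{Q}$ for the disturbed mixture, compare it to $\mathrm{K}_{eq}$, and then estimate the new $\left[\mathrm{H}_{3}\mathrm{O}^{+}\right]$ (using the approximation that the acid and acetate concentrations stay close to $0.10 \mathrm{~M}$):

```python
import math

Keq = 1.8e-5   # AcOH + H2O <=> H3O+ + AcO-

# Conditions immediately after adding acetate to the equilibrium mixture:
acoh, aco, h3o = 0.10, 0.10, 1.3e-3   # M

Q = h3o * aco / acoh
print(f"Q = {Q:.1e} > Keq = {Keq:.1e}: the reaction must run backward")

# At the new equilibrium the shift is tiny compared to 0.10 M, so
# [AcOH] ~ [AcO-] ~ 0.10 M and [H3O+] ~ Keq * [AcOH]/[AcO-]:
h3o_new = Keq * acoh / aco
print(f"new [H3O+] ~ {h3o_new:.1e} M, pH ~ {-math.log10(h3o_new):.1f}")
```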
There are a number of exercises that will allow you to better understand the calculations involved in defining the effects of perturbations (changes in conditions, concentrations, and temperature) on the equilibrium state of a system. (Many chemistry books are full of such buffer and pH problems.) What is really important to note is that a system will return to equilibrium upon perturbation. This is where the system is most stable. And once the system is at equilibrium, further perturbations will lead to a new equilibrium state.
Le Chatelier’s Principle
You may recognize the preceding discussion as a rather well-known idea articulated by Henry Louis Le Chatelier: “If a chemical system at equilibrium experiences a change in concentration, temperature, volume, or partial pressure, then the equilibrium shifts to counteract the imposed change and a new equilibrium is established.” Le Chatelier’s principle is one of the best-known and most widely applicable heuristics (a rule of thumb that helps you predict an outcome) in science. However, it is important to understand why this principle works. Le Chatelier’s principle is yet another reminder that the second law of thermodynamics is always in force.
Le Chatelier’s principle specifically mentions different kinds of changes that can affect the position of equilibrium, yet we have only discussed changes in concentrations. What about temperature, volume, and partial pressure? How do they affect equilibrium? We have also not specifically addressed equilibrium reactions that take place in the gas phase. As an example, important atmospheric reactions such as the formation and depletion of ozone take place in the gas phase. There is nothing particularly special or different about calculating the equilibrium constant for gas phase reactions. We can use either partial pressures of each gas or concentrations ($\mathrm{mol/L}$), although the value of $\mathrm{K}_{eq}$ differs depending on which units you choose. Also, you can’t mix and match; you must use either all pressures or all concentrations. The effect of increasing the volume is the same as decreasing the concentration, and increasing the pressure has the same effect as increasing the concentration. Note, however, that adding a gas that is not a participant in the reaction has no effect even though the total pressure is increased.
Temperature, Equilibrium, and Reaction Rates
The effect of changing the temperature on the position of equilibrium is a little more complex. At first guess, you might predict that increasing the temperature will affect the rates of both the forward and backward reactions equally. However, if we look more closely, we see that this is not true. Cast your mind back to the discussions of temperature and thermal energy. If the temperature of the system is raised, it means that thermal energy has been added to the system from the surroundings. We can treat the addition of energy to the system as a perturbation and according to Le Chatelier’s principle, if something in the system is changed (concentration, volume, pressure, temperature), then the system shifts to a new equilibrium state. In order to predict the effect of adding energy to the system, we need to have more information about the energy changes associated with that system. As we saw earlier, the enthalpy change ($\Delta \mathrm{H}$) tells us about the thermal energy change for systems under constant pressure (most of the systems we are interested in). We can measure or calculate enthalpy changes for many reactions and therefore use them to predict the effect of increasing the temperature (adding thermal energy). For example, take the reaction of nitrogen and hydrogen to form ammonia.[21] This reaction is: $\mathrm{N}_{2}(g)+3 \mathrm{H}_{2}(g) \rightleftarrows 2 \mathrm{NH}_{3}(g)(\Delta \mathrm{H}=-92.4 \mathrm{~kJ} / \mathrm{mol})$
The reaction is exothermic: for each mole of reaction as written (which produces two moles, or $34 \mathrm{~g}$, of ammonia), $92.4 \mathrm{~kJ}$ of thermal energy is produced and transferred to the surroundings (as indicated by the negative sign of the enthalpy change). Now, if we heat this reaction up, what will happen to the position of equilibrium? Let us rewrite the equation to show that thermal energy is produced: $\mathrm{N}_{2}(g)+3 \mathrm{H}_{2}(g) \rightleftarrows 2 \mathrm{NH}_{3}(g)+92.4 \mathrm{~kJ}$
If thermal energy is a product of the reaction, Le Chatelier’s principle tells us that if we add more product, the reaction should shift towards the reactants. Sure enough, if we heat this reaction up, the position of equilibrium shifts towards nitrogen and hydrogen—it starts to go backward! This is actually quite a problem, as this reaction requires a fairly high temperature to make it go in the first place. The production of ammonia is difficult if heating up the reaction makes it go in the opposite direction to the one you want.
It is important to remember that Le Chatelier’s principle is only a heuristic; it doesn’t tell us why the system shifts to the left. To answer this question, let us consider the energy profile for an exothermic reaction. We can see from the graph that the activation energy for the reverse (or back) reaction ($\Delta \mathrm{G}^{\ddagger}_{\text{reverse}}$) is larger than that for the forward reaction ($\Delta \mathrm{G}^{\ddagger}_{\text{forward}}$). Stated in another way: more energy is required for molecules to react in the reverse (back) direction than in the forward direction. Therefore, it makes sense that if you supply more energy, the reverse reaction is affected more than the forward reaction.[22]
There is an important difference between disturbing a reaction at equilibrium by changing concentrations of reactants or products, and disturbing it by changing the temperature. When we change the concentrations, the concentrations of all the reactants and products change as the reaction moves towards equilibrium again, but the equilibrium constant itself does not change. However, if we change the temperature, the equilibrium constant changes in value, in a direction that can be predicted by Le Chatelier’s principle.
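Combining $\Delta \mathrm{G}^{\circ} = -\mathrm{RT} \ln \mathrm{K}_{eq}$ with $\Delta \mathrm{G}^{\circ} = \Delta \mathrm{H}^{\circ} - \mathrm{T}\Delta \mathrm{S}^{\circ}$ gives the van ’t Hoff equation, which lets us estimate how $\mathrm{K}_{eq}$ shifts with temperature, assuming $\Delta \mathrm{H}^{\circ}$ is roughly constant over the range. A sketch for the ammonia reaction:

```python
import math

R = 8.314        # gas constant, J/(mol K)
dH = -92_400     # J/mol for N2 + 3H2 <=> 2NH3 (exothermic)

def K_ratio(T1, T2):
    """van 't Hoff: ln(K2/K1) = -(dH/R)(1/T2 - 1/T1),
    assuming dH is roughly temperature-independent."""
    return math.exp(-(dH / R) * (1 / T2 - 1 / T1))

# Heating from 298 K to 500 K:
print(f"K(500 K)/K(298 K) ~ {K_ratio(298, 500):.1e}")  # ~3e-7: a huge shift back
```

The equilibrium constant drops by nearly seven orders of magnitude on heating, quantifying Le Chatelier’s qualitative prediction that the exothermic reaction runs backward at higher temperature.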
Equilibrium and Steady State
Now here is an interesting point: imagine a situation in which reactants and products are continually being added to and removed from a system. Such systems are described as open systems, meaning that matter and energy are able to enter or leave them. Open systems are never at equilibrium. Assuming that the changes to the system occur on a time scale that is faster than the rate at which the system returns to equilibrium following a perturbation, the system could well be stable. Such stable, non-equilibrium systems are referred to as steady state systems. Think about a cup with a hole in it being filled from a tap. If the rate at which water flows into the cup is equal to the rate at which it flows out, the level of water in the cup would stay the same, even though water would constantly be added to and leave the system (the cup). Living organisms are examples of steady state systems; they are open systems, with energy and matter entering and leaving. However, most equilibrium systems studied in chemistry (at least those discussed in introductory texts) are closed, which means that matter cannot enter or leave the system (though energy often can, for example as heat).
In addition, biological systems are characterized by the fact that there are multiple reactions occurring simultaneously and that a number of these reactions share components—the products of one reaction are the reactants in other reactions. We call this a system of coupled reactions. Such systems can produce quite complex behaviors (as we’ll explore further in Chapter $9$). An interesting coupled-reaction system (aside from life itself) is the Belousov–Zhabotinsky (BZ) reaction, in which cerium catalyzes the oxidation and bromination of malonic acid.[23] If the system is not stirred, this reaction can produce quite complex and dynamic spatial patterns, as shown in the figure. The typical BZ reaction involves a closed system, so it will eventually reach a boring (macroscopically-static) equilibrium state. The open nature of biological systems means that complex behaviors do not have to stop; they continue over very long periods of time. The cell theory of life (the theory that all cells are derived from preexisting cells and that all organisms are built from cells or their products), along with the fossil record, indicates that the non-equilibrium system of coupled chemical reactions that has given rise to all organisms has persisted, uninterrupted, for at least $\sim 3.5$ billion years (a very complex foundation for something as fragile as life).
The steady state systems found in organisms display two extremely important properties: they are adaptive and homeostatic. This means that they can change in response to various stimuli (adaptation) and that they tend to return to their original state following a perturbation (homeostasis). Both are distinct from Le Chatelier’s principle in that they are not passive; they are active processes requiring energy. Adaptation and homeostasis may seem contradictory, but in fact they work together to keep organisms alive and able to adapt to changing conditions.[24] Even the simplest organisms are characterized by great complexity because of the interconnected and evolved nature of their adaptive and homeostatic systems.
Questions
Questions to Answer
• What does it mean when we say a reaction has reached equilibrium?
• What does the magnitude of the equilibrium constant imply about the extent to which acetic acid ionizes in water?
• Write out the equilibrium constant for the reaction $\mathrm{H}_{3} \mathrm{O}^{+}+\mathrm{AcO}^{-} \rightleftarrows \mathrm{AcOH}+\mathrm{H}_{2} \mathrm{O}$.
• What would be the value of this equilibrium constant? Does it make sense in terms of what you know about acid-base reactions?
• If the $\mathrm{pH}$ of a $0.15-\mathrm{M}$ solution of an acid is $3.6$, what is the equilibrium constant $\mathrm{K}_{a}$ for this acid? Is the acid a weak or strong acid? How do you know?
• Calcium carbonate ($\mathrm{CaCO}_{3}$) is not (very) soluble in water. Write out the equation for the dissolution of $\mathrm{CaCO}_{3}$. What would be the expression for its $\mathrm{K}_{eq}$? (Hint: recall pure solids and liquids do not appear in the expression.) If $\mathrm{K}_{eq}$ for this process is $6.0 \times 10^{-9}$, what is the solubility of $\mathrm{CaCO}_{3}$ in $\mathrm{mol/L}$?
• What factors determine the equilibrium concentrations for a reaction?
• For the reaction $\mathrm{N}_{2}(g)+3 \mathrm{H}_{2}(g) \rightleftarrows 2 \mathrm{NH}_{3}(g)(\Delta \mathrm{H}=-92.4 \mathrm{~kJ} / \mathrm{mol})$, predict the effect on the position of equilibrium, and on the concentrations of all the species in the system, if you:
• add nitrogen
• remove hydrogen
• add ammonia
• heat the reaction up
• cool it down
• Draw a reaction energy diagram in which the reverse reaction is much faster than the forward reaction (and vice versa).
• As a system moves towards equilibrium, what is the sign of $\Delta \mathrm{G}$? As it moves away from equilibrium, what is the sign of $\Delta \mathrm{G}$?
• Explain in your own words the difference between $\Delta \mathrm{G}^{\circ}$ and $\Delta \mathrm{G}$.
• Imagine you have a reaction system $\mathrm{A} \rightleftarrows \mathrm{B}$ for which $\mathrm{K}_{eq} = 1$. Draw a graph of how $\Delta \mathrm{G}$ changes as the relative amounts of $[\mathrm{A}]$ and $[\mathrm{B}]$ change.
• What would this graph look like if $\mathrm{K}_{eq} = 0.1$? or $\mathrm{K}_{eq} = 2$?
• If $\Delta \mathrm{G}^{\circ}$ is large and positive, what does this mean for the value of $\mathrm{K}_{eq}$?
• What if $\Delta \mathrm{G}^{\circ}$ is large and negative? How does this influence $\mathrm{K}_{eq}$?
Questions for Later
• Why is $\mathrm{K}_{eq}$ temperature-dependent?
• Explain mechanistically why random deviations from equilibrium are reversed.
• If the value of $\mathrm{Q}$ is $> \mathrm{K}_{eq}$, what does that tell you about the system? What if $\mathrm{Q}$ is $< \mathrm{K}_{eq}$?
Questions to Ponder
• The acid dissociation constant for ethanol ($\mathrm{CH}_{3}\mathrm{CH}_{2}\mathrm{OH}$) is $\sim 10^{-15}$. Why do you think acetic acid is 10 billion times more acidic than ethanol? (Hint: draw out the structures and think about the stability of the conjugate base.)
• If $\Delta \mathrm{G}$ for a system is 0, what does that mean?
Recall that one of the most important reasons for studying reaction kinetics is to get information about the reaction pathway, or mechanism. Now that we have all the concepts we need to understand these ideas, let us go back and see how to put it all together. The rate equation, along with the equilibrium constant, is the key to unraveling what happens during a reaction.
We have seen that, at a given temperature, the reaction rate depends on the magnitude of the rate constant and the concentrations of one or more of the reactants. However, for the two seemingly similar substitution reactions we discussed earlier, the rate equations are different. What is going on here? The answer lies in the fact that most reactions do not occur in one step. In many cases, there is not a smooth transition from reactants to products with a single transition state and activation energy, as we have simplistically portrayed it. Rather, there are a series of steps, each with their own transition state and activation energy. Here, we will only consider one-step and two-step reactions, but in reality there could be many distinct steps from reactant to product. Each step represents a kind of sub-reaction, each with its own activation energy and equilibrium state. The kinetics of a reaction is generally determined by the slowest of these sub-reactions, whereby a kind of bottleneck or rate-limiting step is formed. The rate equation gives us information about what reactants are present in the rate-determining step of the reaction. The reaction can only go as fast as the slowest step (the step with the highest activation energy barrier). As an analogy, imagine you are traveling at $70 \mathrm{~mph}$ on a five-lane highway. If the lanes suddenly narrow to allow only one lane of traffic, all the cars slow down. Although they are capable of traveling faster, no one can get past the slowest cars.
The reaction we discussed earlier between methyl bromide ($\mathrm{CH}_{3}\mathrm{Br}$) and hydroxide (${}^{-}\mathrm{OH}$): $\mathrm{CH}_{3} \mathrm{Br}+{ }^{-} \mathrm{OH}+\mathrm{Na}^{+} \rightleftarrows \mathrm{CH}_{3} \mathrm{OH}+\mathrm{Br}^{-}+\mathrm{Na}^{+}$
has been shown experimentally to have the rate equation: $\text{ rate } = k \left[\mathrm{CH}_{3}\mathrm{Br}\right] \left[{}^{-}\mathrm{OH}\right]$
What this tells us is that the rate of this reaction depends on both reactants. This means that whatever the mechanism of the reaction, both reactants must be present in the transition state (the species at the highest energy on the energy profile) that determines the rate of reaction. From this information we might begin to think about what the pathway for the reaction might be. It turns out that the simplest possibility is actually the correct one: the reaction takes place in a single step, as shown in the figure. The hydroxide (the nucleophile) is attracted to the carbon and, at the same time, the carbon–bromine bond is broken. That is, the reaction takes place in one step that involves both the hydroxide and the methyl bromide.
We can imagine what the structure of the transition state might look like (although we cannot detect it by any traditional methods because transition states only exist for one molecular vibration and are very difficult to detect). The nucleophile (${}^{-}\mathrm{OH}$) is attracted to the $\delta +$ on the methyl carbon. At the same time, the bromide ion starts to leave, so that at the “top” of the transition state (the most unstable point, requiring the most energy to form), we have a carbon that is coordinated to five other atoms by partial or full bonds. Given that carbon normally makes four bonds, it is no wonder that this pentavalent species sits at the reaction’s highest energy point.
However, if we analyze what appears to be a very similar reaction: $\left(\mathrm{CH}_{3}\right)_{3} \mathrm{CBr}+{}^{-}\mathrm{OH} \rightleftarrows \left(\mathrm{CH}_{3}\right)_{3} \mathrm{COH}+\mathrm{Br}^{-},$ we must come to the conclusion that it has a different mechanism. Why? Because the rate equation for this reaction is first order: $\text{ rate } = k \left[\left(\mathrm{CH}_{3}\right)_{3} \mathrm{CBr}\right]$. This tells us that only $\left(\mathrm{CH}_{3}\right)_{3} \mathrm{CBr}$ is involved in the step that determines the rate. In other words, the transition state with the largest activation energy involves only the t-butyl bromide molecule. There is no nucleophile (the hydroxide) present during the step that determines how fast the reaction goes.
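To make the contrast concrete, consider what each rate equation predicts if we were to double the hydroxide concentration while holding everything else constant (a hypothetical experiment, sketched here for illustration): \begin{aligned} \text {second order: } \frac{\text {rate}_{\text {new}}}{\text {rate}_{\text {old}}} &=\frac{k\left[\mathrm{CH}_{3} \mathrm{Br}\right]\left(2\left[{ }^{-} \mathrm{OH}\right]\right)}{k\left[\mathrm{CH}_{3} \mathrm{Br}\right]\left[{ }^{-} \mathrm{OH}\right]}=2 \\ \text {first order: } \frac{\text {rate}_{\text {new}}}{\text {rate}_{\text {old}}} &=\frac{k\left[\left(\mathrm{CH}_{3}\right)_{3} \mathrm{CBr}\right]}{k\left[\left(\mathrm{CH}_{3}\right)_{3} \mathrm{CBr}\right]}=1 \end{aligned} Doubling $\left[{}^{-}\mathrm{OH}\right]$ doubles the rate of the methyl bromide reaction but leaves the t-butyl bromide reaction rate unchanged, which is exactly the kind of experimental observation that lets us distinguish the two mechanisms.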
While there are a number of possible mechanisms that we could postulate for this reaction, the actual mechanism involves two discrete steps, as shown in the figure.
The first is the ionization of the t-butyl bromide, which involves breaking the $\mathrm{C—Br}$ bond. This results in a positively-charged carbon (the bromine takes both bonding electrons and leaves as bromide ion)—a very unstable and distinct species known as a carbocation. The resulting carbocation is an intermediate: it sits in an energy well between two less stable states. This distinguishes it from the transition state, which sits precariously at the highest local energy point (surrounded by lower energy states). Intermediates lie in energy “valleys”, while transition states sit at the summit of an energy “hill”, as shown in the figure. The carbocation can react with the hydroxide to form the t-butyl alcohol, or it can react with the bromide to re-form the original reactant (or a variety of other side reactions can occur). The important point here is that we can deduce how a reaction proceeds from its rate equation.
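A sketch of how the observed rate law follows from this two-step mechanism (the rate constants $k_{1}$ and $k_{2}$ are labels we introduce here for the individual steps): \begin{aligned} &\text {step 1 (slow): } \left(\mathrm{CH}_{3}\right)_{3} \mathrm{CBr} \rightleftarrows \left(\mathrm{CH}_{3}\right)_{3} \mathrm{C}^{+}+\mathrm{Br}^{-} \quad \text {rate}_{1}=k_{1}\left[\left(\mathrm{CH}_{3}\right)_{3} \mathrm{CBr}\right] \\ &\text {step 2 (fast): } \left(\mathrm{CH}_{3}\right)_{3} \mathrm{C}^{+}+{ }^{-} \mathrm{OH} \rightarrow \left(\mathrm{CH}_{3}\right)_{3} \mathrm{COH} \quad \text {rate}_{2}=k_{2}\left[\left(\mathrm{CH}_{3}\right)_{3} \mathrm{C}^{+}\right]\left[{ }^{-} \mathrm{OH}\right] \end{aligned} Because step 1 is the bottleneck, the overall rate is approximately $k_{1}\left[\left(\mathrm{CH}_{3}\right)_{3}\mathrm{CBr}\right]$, which matches the experimentally observed first-order rate equation.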
Questions
Questions to Answer
• Draw a reaction energy diagram for a two-step reaction in which the second (or first) step is rate determining.
• What is the rate equation for a reaction that occurs in the following steps?
• $A + B \rightarrow C$ (fast)
• $A + C \rightarrow D$ (slow)
• Explain why it is not possible to write a rate equation directly from the overall balanced reaction equation.
In this chapter we have explored how the fate of reactions is determined by a variety of factors, including the concentrations of reactants and products, the temperature, and the Gibbs energy change. We have learned that we can make a reaction go backward, forward, faster, or slower by examining the nature of the reaction and the conditions under which it is performed. You can now extend these principles to imagine how we might control reactions to do what we want, rather than let nature (or entropy) take its course. In the next chapter, we will take this one step further to see what happens when reactions are removed from isolation and allowed to interact with each other.

8.7: In-Text References
1. The rate of reaction is discussed in the next section.
2. Recall that the $\mathrm{pH} = – \log \left[\mathrm{H}^{+}\right]$, so $\left[\mathrm{H}^{+}\right] = 10^{-\mathrm{pH}}$; if the $\mathrm{pH}$ is $1$, then the concentration of $\mathrm{H}^{+}$ is $10^{-1}$, or $0.1 \mathrm{~M}$.
3. We write acetic acid in this condensed formula for clarity, remembering that the actual structure of acetic acid is $\mathrm{CH}_{3}\mathrm{C}(=\mathrm{O})\mathrm{OH}$, and it is the $\mathrm{H}$ on the terminal $\mathrm{O}$ that is donated to a base (water).
4. This is true provided that we are talking about reasonably large numbers of molecules - the smaller the number of molecules, the “noisier” the process. You can think about the molecular movements of a single molecule compared to the movement of many molecules, as an example.
5. One very unfortunate consequence of this is that flour stored in grain silos can explode without warning, if exposed to a spark or other energy source. http://en.Wikipedia.org/wiki/Grain_e...tor_explosions
6. The slope of the tangent is the change in concentration/change in time or the rate of the reaction. The slope of the tangent is the derivative of the curve at that point (calculus!).
7. You might ask yourself: How do we know the molecules are still reacting if we can only observe the macroscopic level? There are a number of ways of tracking what happens at the molecular level. For example, there are spectroscopic techniques such as $\mathrm{NMR}$ that can be used, but they are beyond the scope of this book.
8. In fact, this reaction has a number of different products. For now we will concentrate on this one.
9. We call these kinds of reactions substitution reactions because one group has been substituted for another. In fact, they are also nucleophilic substitution reactions, because the hydroxide is acting as a nucleophile here.
10. It is not necessary to be able to follow this mathematical reasoning; it is included to show where the equation comes from.
11. http://en.Wikipedia.org/wiki/Spontan...man_combustion
12. $\mathrm{R}$ is known as the gas constant; it turns up in many different equations. For example, the ideal gas law $\mathrm{PV} = n\mathrm{RT}$ (the units depend on the equation where it is used). $\mathrm{R}$ is also related to the Boltzmann constant $k_{\mathrm{B}}$, (or $k$, yet another confusing use of symbols, since the Boltzmann constant is in no way related to the rate constant ($k$), or the equilibrium constant ($K$)).
13. http://www.ncbi.nlm.nih.gov/pubmed/21332126
14. Here is an example: http://www.febsletters.org/article/S...971-4/abstract
15. http://www.webmd.com/diet/features/t...ence-and-risks
16. Strictly speaking, it is not concentrations that appear in the expression for $K$. Rather, it is another property called the activity (a)—often called the effective concentration. The activity takes into account the interactions between molecules and ions and solvents, but for our purposes it is acceptable to use concentrations in the expressions for $K_{\mathrm{eq}}$. One outcome of this is that activity is a dimensionless quantity, so equilibrium constants are one of the few places where we don’t have to worry about getting the right units!
17. Once more it is important to note that in thermodynamic terms, reactions referred to as spontaneous (inappropriately, in our view) do not indicate the rate at which a reaction will happen, but rather whether it will ever happen. In fact, some “spontaneous” reactions either do not occur at all (wood in an atmosphere containing oxygen does not burn spontaneously) or occur quite slowly (iron rusting).
18. Of course, there is no such thing as acetate ($\mathrm{CH}_{3}\mathrm{COO}^{-}$) alone. There must also be a counter-ion present. Typically, we use ions such as $\mathrm{Na}^{+}$ or $\mathrm{K}^{+}$, stable monovalent cations that will not participate in any further reaction. So when we say we add acetate to the solution, we really mean we add sodium acetate—the sodium salt of acetic acid (just like sodium chloride is the sodium salt of hydrochloric acid).
19. If you think about it for a moment you will see that if the concentration of any species changes in a closed system, then the concentrations of all the other species must also change.
20. You might be wondering if there is some trick here. There is—we are ignoring several side reactions that in fact tend to cancel each other out. If you are interested, there are a number of helpful sites that can assist you with the more complex calculations required.
21. The production of ammonia is a commercially-important process because nitrogen is an important element necessary for plant growth (it is commonly added to fertilizers). However, the major source of nitrogen is “locked up” in the air as molecular nitrogen, a substance that is quite unreactive and inaccessible to most plants.
22. By analogy, consider the NCAA basketball tournament: if the field is widened to allow more participants, it helps the weaker teams because the stronger teams would have made it into the tournament anyway.
23. http://www.youtube.com/watch?v=IBa4kgXI4Cg
24. This type of adaptation is physiological and occurs within individual organisms; it is distinct from, but based on, evolutionary processes that act on populations of organisms.

09: Reaction Systems
In the real world, simple chemical reaction systems are rare. Chemistry lab experiments typically involve mixing pure chemicals together in well-defined amounts under tightly-controlled conditions. In the wild, things are messier. There are usually a number of chemical species present, and this leads to competing reactions. Laboratory systems are effectively closed systems, and the results are analyzed only after the reaction has reached equilibrium. Real systems, on the other hand, are usually open and rarely reach equilibrium. This is particularly true for living systems, which tend to die if they reach equilibrium or become closed systems. In fact, most real systems are subject to frequent short- and long-term perturbations. We learned in the last chapter that perturbations (adding or taking away a product or a reactant) lead to compensatory changes as the system responds, as described by Le Chatelier’s principle. In the context of a more complex system, this simple behavior can produce quite dramatic results. Life is an example of such a system: it has survived in its various forms, uninterrupted, for over $3.5\times 10^{9}$ years.
In this chapter, we examine a range of complex systems and consider how living systems keep the concentration of important chemical species at a reasonable level (for example, by buffering the $\mathrm{pH}$); how they use differences in concentrations of chemical species to drive cellular processes (like thought); and how reactions that release energy (by forming more stable compounds with stronger bonds) can be coupled to reactions that require energy in order to occur.
9.1: Systems Composed of One Reaction
We begin with a few important reactions that can either move backward or forward depending on conditions.[1] Molecular oxygen ($\mathrm{O}_{2}$) is a vital component in a number of reactions in our bodies, such as aerobic respiration, the evolutionarily ancient process by which we capture energy from food.[2] $\mathrm{O}_{2}$ must be transported to every cell so that it can participate in cellular reactions. $\mathrm{O}_{2}$ diffuses into the bloodstream in the lungs, but it is not very soluble in water (the main component of blood). If we relied on the solubility of oxygen in water to transport it around the body, we would be in trouble. Instead $\mathrm{O}_{2}$ reacts with (we usually say “binds to”, but this is definitely a chemical reaction) a protein called hemoglobin. The structure of hemoglobin is complex: it is composed of four polypeptide subunits, and each polypeptide is associated with a heme group.[3] The heme group contains an iron ion ($\mathrm{Fe}^{2+}$) complexed to four nitrogenous bases linked into a ring (called a porphyrin) to form a more or less planar arrangement, as shown in the figure. Heme is also the central active portion of one of the major components of our immune system, myeloperoxidase.[4] When you blow your nose, that familiar green color is actually caused by the light-absorbing properties of the heme group in this enzyme, rather than by the bacterial infection. Because the heme group is in a different molecular environment, its color appears green rather than red. Chlorophyll, a similar molecule, differs most dramatically from heme in that the iron ion is replaced by a magnesium ion (as shown in the figure). Its function is not to bind $\mathrm{O}_{2}$ (or $\mathrm{CO}_{2}$), but rather to absorb visible light and release an energetic electron as part of the photosynthetic process.
Iron is a transition metal. Recall that these elements have d orbitals, some of which are empty and available for bonding. Iron(II) ($\mathrm{Fe}^{2+}$) has plenty of energetically-available orbitals and therefore can form Lewis acid–base complexes with compounds that have available electrons (such as nitrogenous bases). Within the porphyrin ring, four nitrogens interact with the $\mathrm{Fe}^{2+}$ ion. Typically, transition metals form complexes that are geometrically octahedral. In the case of the heme group, four of these interactions involve nitrogens from the four rings; a fifth involves a nitrogen of a histidine residue of one of the protein’s polypeptides that approaches from below the ring plane. This leaves one site open for the binding of an $\mathrm{O}_{2}$ molecule, which has available lone electron pairs.[5] When an $\mathrm{O}_{2}$ binds to one of these heme groups, the reaction can be written: $\text {hemoglobin } + \mathrm{O}_{2} \rightleftarrows \text { hemoglobin-}\mathrm{O}_{2} .$
Note that this way of depicting the reaction is an oversimplification. As we said initially, each hemoglobin molecule contains four polypeptides, each of which is associated with a heme group (green in the figure), so there are four heme groups in a single hemoglobin molecule. Each heme group can bind one $\mathrm{O}_{2}$ molecule. When an $\mathrm{O}_{2}$ molecule binds to the heme iron, there are structural and electronic changes that take place within the protein as a whole. This leads to a process known as cooperativity, wherein the four heme groups do not act independently. Binding $\mathrm{O}_{2}$ to one of the four heme groups in hemoglobin causes structural changes to the protein, which increases the affinity for $\mathrm{O}_{2}$ in each of the remaining three heme groups. When a second $\mathrm{O}_{2}$ binds, affinity for $\mathrm{O}_{2}$ is once again increased in the remaining two heme groups.
As you might suspect, this process is reversible. Imagine a hemoglobin protein with four bound oxygen molecules. When an $\mathrm{O}_{2}$ is released from the hemoglobin molecule, the affinity between the remaining $\mathrm{O}_{2}$’s and the heme groups is reduced, making it more likely that more of the bound $\mathrm{O}_{2}$’s will be released. This is an equilibrium reaction, and we can apply Le Chatelier’s principle to it. Where $\mathrm{O}_{2}$ is in abundance (in the lungs), the reaction shifts to the right (binding and increasing affinity for $\mathrm{O}_{2}$). Where $\mathrm{O}_{2}$ is present at low levels, the reaction shifts to the left (releasing and reducing affinity for $\mathrm{O}_{2}$). The resulting hemoglobin molecule has a high capacity for binding $\mathrm{O}_{2}$ where $\mathrm{O}_{2}$ is present at high concentrations and readily releases $\mathrm{O}_{2}$ where $\mathrm{O}_{2}$ is present at low concentrations. In the blood, [hemoglobin] ranges between $135–170 \mathrm{~g/L}$, approximately $2$ millimoles per liter ($\mathrm{mM}$), and because there are four $\mathrm{O}_{2}$ binding sites per hemoglobin, this results in approximately $250 \mathrm{~mg/L}$, or an $\sim 8-\mathrm{mM}$, concentration of $\mathrm{O}_{2}$.
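As a rough check of this arithmetic (assuming, as a commonly quoted value, a molar mass of about $64,500 \mathrm{~g/mol}$ for hemoglobin, and $32 \mathrm{~g/mol}$ for $\mathrm{O}_{2}$): \begin{aligned} [\text {hemoglobin}] &\approx \frac{150 \mathrm{~g/L}}{64,500 \mathrm{~g/mol}} \approx 2.3 \times 10^{-3} \mathrm{~M} \\ \left[\mathrm{O}_{2}\right]_{\text {bound}} &\approx 4 \times 2 \times 10^{-3} \mathrm{~M}=8 \times 10^{-3} \mathrm{~M} \\ 8 \times 10^{-3} \mathrm{~mol/L} \times 32 \mathrm{~g/mol} &\approx 0.26 \mathrm{~g/L} \approx 250 \mathrm{~mg/L} \end{aligned}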
By comparison, $\mathrm{O}_{2}$‘s solubility in water is $\sim 8 \mathrm{~mg/L}$ at $37 { }^{\circ}\mathrm{C}$, or $250$ micromoles per liter ($\mu \mathrm{M}$). The reaction can be written like this:
$\mathrm{O}_{2} \text{ (in the air) } \rightleftarrows \mathrm{O}_{2} \text{ (in the blood) } + \text{ hemoglobin } \rightleftarrows \text{ hemoglobin-}\mathrm{O}_{2} + \mathrm{O}_{2} \text{ (in the blood) } \rightleftarrows \text{ hemoglobin-}2\mathrm{O}_{2} + \mathrm{O}_{2} \text{ (in the blood) } \rightleftarrows \text{ hemoglobin-}3\mathrm{O}_{2} + \mathrm{O}_{2} \text{ (in the blood) } \rightleftarrows \text{ hemoglobin-}4\mathrm{O}_{2}$
When the hemoglobin reaches areas of the body where $\left[\mathrm{O}_{2}\right]$ is low, the oxygen dissociates from the hemoglobin into the blood. The dissolved $\mathrm{O}_{2}$ is then removed from the blood by aerobic (oxygen-utilizing) respiration: $\mathrm{C}_{6}\mathrm{H}_{12}\mathrm{O}_{6} + 6\mathrm{O}_{2} \rightleftarrows 6\mathrm{CO}_{2} + 6\mathrm{H}_{2}\mathrm{O} .$
The combination of Le Chatelier’s principle and the cooperativity of the $\mathrm{O}_{2}$ + hemoglobin reaction now leads to the complete dissociation of the $\text{hemoglobin-}4\mathrm{O}_{2}$ complex, releasing $\mathrm{O}_{2}$. The products of aerobic respiration (essentially a combustion reaction) are carbon dioxide and water. Clearly the water can be carried away in cellular fluid, but the carbon dioxide must be removed in a variety of ways: a small part is removed by reacting with the hemoglobin (but not at the $\mathrm{Fe}$ center), some is dissolved in the blood, some takes part in the buffering system present in the blood, and most is released in the lungs, into the air that you breathe out.
Questions
Questions to Answer
• What complicates reaction systems in the real world (outside the lab)?
• Why is $\mathrm{O}_{2}$ not very soluble in water?
• By what factor does binding with hemoglobin increase solubility of $\mathrm{O}_{2}$ in water?
• Draw Lewis structures for $\mathrm{O}_{2}$ and $\mathrm{CO}$. Why do you think they bind in similar ways to hemoglobin?
• Why does $\mathrm{CO}_{2}$ react differently with hemoglobin from the way $\mathrm{O}_{2}$ interacts with hemoglobin?
Questions to Ponder
• Why does it make physiological sense that $\mathrm{O}_{2}$ binds to oxygen-free hemoglobin (deoxyhemoglobin) relatively weakly and cooperatively?

9.2: Buffered Systems
When you think of the word buffer, you probably think of it as a safeguard or a barrier—something that provides a cushion or shield between you and something harmful. But in chemistry and biology, a buffer is a solution that resists changes in $\mathrm{pH}$. As we will soon learn, this ability is critical to all living systems. Many reactions are affected by changes in $\mathrm{pH}$. For example, strong acid and base solutions are harmful to living tissue because they cause rapid hydrolysis of the bonds that hold living organisms together. That is, acidic or basic solutions can speed up the reactions in which the bonds are broken (dead bodies are often disposed of in murder mysteries—and sometimes in real life—by dissolving them in strong acid or base). Proteins have many weak acid and base groups, and so even relatively small fluctuations in $\mathrm{pH}$ can cause changes in the charges of these groups. This can affect protein structure and function dramatically, in a way that is physiologically damaging to living systems.
Aqueous solution chemistry is terrifically complicated in living systems. However, we can begin to understand it by looking at simple chemical buffer systems. Let us first consider what happens if we take $0.10$ moles of hydrogen chloride gas and dissolve it in enough water to make one liter of solution. The resulting $0.10-\mathrm{M}$ solution of hydrochloric acid $\mathrm{HCl}$ ($aq$) has a $\mathrm{pH}$ of $1$ ($\mathrm{pH} = – \log (0.10) = – \log \left(1.0 \times 10^{-1}\right) = 1$). So the $\mathrm{pH}$ of the solution changes from $7$ (for pure water) to $1$, a change of six orders of magnitude in $\left[\mathrm{H}^{+}\right]$. Now, if we do the same experiment adding $0.10 \mathrm{~mol HCl}(g)$ to an appropriately buffered solution, we find that the $\mathrm{pH}$ of the resulting solution does not change very much at all.
To understand how this happens, we have to review some acid–base chemistry. Specifically, we must reexamine what happens when acids and bases react, what the products are, and how those products behave. We just calculated that the $\mathrm{pH}$ of a $0.10-\mathrm{M}$ strong acid is $1.0$. It does not matter which strong acid we choose, as long as it only has one proton to donate. So the $\mathrm{pH}$’s of solutions of $\mathrm{HCl}$, $\mathrm{HBr}$, and $\mathrm{HClO}_{4}$ are all the same, because these acids are all almost completely ionized in aqueous solution. However, what happens if we use a weak acid like acetic acid ($\mathrm{CH}_{3}\mathrm{COOH}$), hydrogen fluoride ($\mathrm{HF}$), phosphoric acid ($\mathrm{H}_{3}\mathrm{PO}_{4}$), or carbonic acid ($\mathrm{H}_{2}\mathrm{CO}_{3}$)? The $\mathrm{pH}$ of each differs, and none of them is as low as that of the strong acids, because weak acids do not ionize completely in solution. For example, the $\mathrm{pH}$ of $0.10-\mathrm{M}$ acetic acid is $\sim 2.9$ because the concentration of $\mathrm{H}^{+}$ is lower than in $0.10-\mathrm{M HCl}$. Although that may not seem very different from a $\mathrm{pH}$ of $1$, remember that $\left[\mathrm{H}^{+}\right]$ is $10^{-1}$ (or $0.1 \mathrm{~M}$) for a $\mathrm{pH}$ of $1$ and $10^{-2.9}$ (or $0.0012 \mathrm{~M}$) for a $\mathrm{pH}$ of $2.9$.
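Where does that value of $\sim 2.9$ come from? A sketch of the calculation, using the acid dissociation constant of acetic acid ($\mathrm{K}_{a} = 1.8 \times 10^{-5}$) and assuming the amount that ionizes is small compared to $0.10 \mathrm{~M}$: \begin{aligned} \left[\mathrm{H}^{+}\right] &\approx \sqrt{\mathrm{K}_{a} \times C}=\sqrt{\left(1.8 \times 10^{-5}\right)(0.10)} \approx 1.3 \times 10^{-3} \mathrm{~M} \\ \mathrm{pH} &=-\log \left(1.3 \times 10^{-3}\right) \approx 2.9 \end{aligned}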
Now if we look at the conjugate bases of weak and strong acids, we will see an analogous difference in their behavior when they produce solutions with different $\mathrm{pH}$’s. The conjugate base of $\mathrm{HCl}$ is $\mathrm{Cl}^{–}$ (the chloride ion). However, since we can’t just get a bottle of chloride (we need a counter-ion for charge balance), we will use sodium chloride, $\mathrm{NaCl}$; we know that sodium ions are not reactive and are usually “spectator” ions. If we measure the $\mathrm{pH}$ of a solution of $\mathrm{NaCl}$, we will find that it is $7$, just like water. Neither the sodium ion nor the chloride ion affects the $\mathrm{pH}$. However, if we take the corresponding conjugate base of acetic acid, for example sodium acetate ($\mathrm{CH}_{3}\mathrm{COONa}$), we find that a $0.1 \mathrm{~M}$ solution has a $\mathrm{pH}$ of about $9$. This is quite surprising at first glance. Sodium acetate belongs to the class of compounds that we label generically as salts. In everyday life, salt refers to sodium chloride, but in chemistry the term salt refers to a compound that contains the conjugate base of an acid and a cation. Although it is tempting to think of all salts as innocuous and unreactive (like sodium chloride), it turns out that both components of the salt (the conjugate base anion and the cation) affect its properties, even in a process as simple as dissolving in water. In fact, a solution of the conjugate base of a weak acid tends to be basic.
Let us investigate a bit further. The previous observation implies that the acetate ion $\left(\mathrm{CH}_{3}\mathrm{COO}^{-}\right)$ must be reacting with water to produce hydroxide (since we already know the $\mathrm{Na}^{+}$ does not react with water). This reaction is called a hydrolysis reaction. The name is derived from the Greek words for water (hydro-) and to break or separate (-lysis); it refers to reactions in which water is one of the reactants. We can write this hydrolysis reaction as: $\mathrm{CH}_{3} \mathrm{COO}^{-}(aq)+\mathrm{H}_{2} \mathrm{O}(l) \rightleftarrows \mathrm{CH}_{3} \mathrm{COOH}(aq)+{}^{-}\mathrm{OH}(aq)$
The production of hydroxide increases $\left[{}^{–}\mathrm{OH}\right]$, which in turn affects $\left[\mathrm{H}^{+}\right]$ because the two are related by the equilibrium expression $\left[\mathrm{H}^{+}\right][{}^{-}\mathrm{OH}]=1 \times 10^{-14}=\mathrm{K}_{\mathrm{w}}$. In other words, when the salt of a weak acid (that is, its conjugate base) is dissolved in water, a weak base is produced, and that weak base has all the properties of any base: it can react with an acid.
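For the $0.1 \mathrm{~M}$ sodium acetate solution mentioned above, a sketch of the calculation (using $\mathrm{K}_{a} = 1.8 \times 10^{-5}$ for acetic acid; the equilibrium constant for the hydrolysis reaction is $\mathrm{K}_{b}=\mathrm{K}_{\mathrm{w}} / \mathrm{K}_{a}$) runs as follows: \begin{aligned} \mathrm{K}_{b} &=\frac{1 \times 10^{-14}}{1.8 \times 10^{-5}} \approx 5.6 \times 10^{-10} \\ \left[{ }^{-} \mathrm{OH}\right] &\approx \sqrt{\left(5.6 \times 10^{-10}\right)(0.10)} \approx 7.5 \times 10^{-6} \mathrm{~M} \\ \mathrm{pH} &=14-\left(-\log \left(7.5 \times 10^{-6}\right)\right) \approx 14-5.1 \approx 8.9 \end{aligned} consistent with the $\mathrm{pH}$ of about $9$ noted above.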
As this sketch illustrates, it is possible to calculate the $\mathrm{pH}$ of weak base solutions, just as it is possible to calculate the $\mathrm{pH}$ of weak acids, if you know the acid equilibrium constant.[6] However, what is more interesting is what happens when a solution contains significant amounts of both a weak acid and its conjugate base. For example, if we take a solution that is $0.10 \mathrm{~M}$ in both acetic acid and sodium acetate, we can calculate the $\mathrm{pH}$ by setting up the equilibrium table:
Reaction: $\mathrm{AcOH} + \mathrm{H}_{2}\mathrm{O} \rightleftarrows \mathrm{H}_{3}\mathrm{O}^{+} + {}^{-}\mathrm{OAc} + \mathrm{Na}^{+}$
Initial concentrations: $[\mathrm{AcOH}] = 0.10 \mathrm{~M}$; $\left[\mathrm{H}_{3}\mathrm{O}^{+}\right] = 1 \times 10^{-7} \mathrm{~M}$; $\left[{}^{-}\mathrm{OAc}\right] = 0.10 \mathrm{~M}$; $\left[\mathrm{Na}^{+}\right] = 0.10 \mathrm{~M}$
Note that even though acetate is present in the initial mixture, we have put it on the product side. This is because when both acetic acid and acetate are present in the same solution, their concentrations are “linked”: they become part of an equilibrium system that can be described by the equilibrium constant for acetic acid.[7] If the concentration of one species is changed, the other must respond. Recall in Chapter $8$ that we looked at what happens to the $\mathrm{pH}$ of a solution of acetic acid when an acetate ion is added: the presence of acetate affects the position of equilibrium for the acetic acid dissociation, and instead of a $\mathrm{pH}$ of $2.9$ (for $0.10 \mathrm{~M}$ acetic acid), the $\mathrm{pH}$ of a solution that is $0.10 \mathrm{~M}$ in both acetic acid and sodium acetate is $4.7$. The presence of the common ion acetate has suppressed the ionization of acetic acid. We can calculate the $\mathrm{pH}$ of any similar solution by adapting the expression for the acid dissociation equilibrium: $\mathrm{K}_{a}=\left[\mathrm{H}^{+}\right]\left[\mathrm{AcO}^{-}\right] /[\mathrm{AcOH}] .$
As in our previous work with weak acids, we are going to ignore any reaction with water of both the acetic acid and the acetate ion, because these reactions do not affect the $\mathrm{pH}$ significantly; both species are relatively weak as acids or bases. Even if we take these reactions into account, they do not change the answer we get. Substituting into the equation for $\mathrm{K}_{a}$, we get: $\mathrm{K}_{a}=1.8 \times 10^{-5} = \left[\mathrm{H}^{+}\right] (0.10) / (0.10)$, which gives $\left[\mathrm{H}^{+}\right] = 1.8 \times 10^{-5} \mathrm{~M}$, or $\mathrm{pH} = 4.74$.
Alternatively, we can use $\mathrm{pK}_{a}$, the negative log of $\mathrm{K}_{a}$, giving us: \begin{aligned} \mathrm{pK}_{a} &=\mathrm{pH}-\log \left[\mathrm{AcO}^{-}\right] /[\mathrm{AcOH}] \\ \text { or } \mathrm{pH} &=\mathrm{pK}_{a}+\log \left[\mathrm{AcO}^{-}\right] /[\mathrm{AcOH}] . \end{aligned}
This equation is known as the Henderson–Hasselbalch equation. It is a convenient way to calculate the $\mathrm{pH}$ of solutions that contain weak acids and their conjugate bases (or weak bases and their conjugate acids).
Recall that a buffer can resist changes in $\mathrm{pH}$. So the question is: how exactly does this happen? Let us take a closer look. Imagine we have a buffer solution that is $1.0 \mathrm{~M}$ in both acetic acid and acetate. The $\mathrm{pH}$ of this system is $– \log 1.8 \times 10^{-5} = 4.74$ (because $\left[\mathrm{AcO}^{-}\right] = \left[\mathrm{AcOH}\right]$). Now let us add some acid to this buffer. To make calculations easy, we can add $0.01 \mathrm{~mol HCl}$ to $1.0 \mathrm{~L}$ of buffer solution.[8] What happens? The major species in the buffer solution are acetic acid, acetate, and water (hydronium ion and hydroxide ion are minor components). Which one will react with $\mathrm{HCl}(aq)$? Just as in any acid–base reaction, it is the base that reacts with the acid; that is, the acetate part of the buffer will react with the $\mathrm{H}^{+}$. The resulting reaction is: $\mathrm{H}^{+}+{ }^{-}\mathrm{OAc} \rightleftarrows \mathrm{AcOH}+\mathrm{H}_{2} \mathrm{O} .$
In this reaction, the acetate concentration decreases and the acetic acid increases. We can now calculate the initial (pre-reaction) and final (post-reaction) concentrations:
Reaction: $\mathrm{AcOH} + \mathrm{H}_{2}\mathrm{O} \rightleftarrows \mathrm{H}_{3}\mathrm{O}^{+} + {}^{-}\mathrm{OAc}$
Initial $[\mathrm{M}]$: $\mathrm{AcOH} = 1.00$; $\mathrm{H}_{3}\mathrm{O}^{+}$ negligible; ${}^{-}\mathrm{OAc} = 1.00$
Add $0.01 \mathrm{~M} \mathrm{~H}^{+}$: $\mathrm{AcOH}: + 0.01$; ${}^{-}\mathrm{OAc}: – 0.01$
Equilibrium $[\mathrm{M}]$: $\mathrm{AcOH} = 1.01$; $\mathrm{H}_{3}\mathrm{O}^{+} = x$; ${}^{-}\mathrm{OAc} = 0.99$
The $\mathrm{pH}$ of this system can be calculated from the Henderson–Hasselbalch equation: $\mathrm{pH}=\mathrm{pK}_{a}+\log (0.99 / 1.01)=4.73 .$
The $\mathrm{pH}$ has hardly budged! (Recall that the $\mathrm{pH}$ of $0.01 \mathrm{~M HCl}$ is $2.0$.) Even if we add more acid (say, $0.1 \mathrm{~mol HCl}$) to our liter of buffer, the resulting $\mathrm{pH}$ does not change much (it is $\mathrm{pH} = 4.74 + \log (0.90 / 1.10) = 4.65$). Note that the addition of acid has moved the $\mathrm{pH}$ in the direction we would expect—slightly lower and more acidic but nowhere near what it would be if we had added the $\mathrm{HCl}$ directly to $1 \mathrm{~L}$ of water.
We can also look at what happens when we add a strong base to the buffer solution. If we add $0.01 \mathrm{~mol}$ sodium hydroxide to our liter of buffer, the “active” component of the buffer is now the acid, and the reaction is written: $\mathrm{AcOH} + { }^{-}\mathrm{OH} \rightleftarrows \mathrm{AcO}^{-} + \mathrm{H}_{2}\mathrm{O}$.
The strong base reacts with the weak acid. The acid concentration falls and its conjugate base concentration rises, so:
Reaction: $\mathrm{AcOH} + \mathrm{H}_{2}\mathrm{O} \rightleftarrows \mathrm{H}_{3}\mathrm{O}^{+} + {}^{-}\mathrm{OAc}$
Initial $[\mathrm{M}]$: $\mathrm{AcOH} = 1.00$; $\mathrm{H}_{3}\mathrm{O}^{+}$ negligible; ${}^{-}\mathrm{OAc} = 1.00$
Add $0.01 \mathrm{~M} { }^{-}\mathrm{OH}$: $\mathrm{AcOH}: – 0.01$; ${}^{-}\mathrm{OAc}: + 0.01$
Equilibrium $[\mathrm{M}]$: $\mathrm{AcOH} = 0.99$; $\mathrm{H}_{3}\mathrm{O}^{+} = x$; ${}^{-}\mathrm{OAc} = 1.01$
The new $\mathrm{pH}$ of the solution is $\mathrm{pH} = 4.74 + \log (1.01 / 0.99) = 4.75$—a slight increase but hardly detectable. (Note that the $\mathrm{pH}$ of a $0.01 \mathrm{~M}$ solution of $\mathrm{NaOH}$ is $12$.)
So, buffers can keep the $\mathrm{pH}$ of a solution remarkably constant, which, as we will see, is very important for biological systems. But this raises another question: just how much acid could we add to the system before the $\mathrm{pH}$ changed appreciably, or rather, enough to influence the behavior of the system? In biological systems, the tolerance for $\mathrm{pH}$ change is fairly low. As we discussed previously, changes in $\mathrm{pH}$ can cause a cascade of reactions that may prove catastrophic for the organism.
The amount of acid or base that a buffer solution can absorb is called its buffering capacity. This capacity depends on the original concentrations of conjugate acid and base in the buffer and their ratio after reaction, or [conjugate acid]/[conjugate base]. If you start with a buffer that has equal amounts of acid and base, the ratio is equal to $1.0$. As the ratio moves further away from $1.0$, the $\mathrm{pH}$ is affected more and more, until it changes out of the desired range.
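For example (a sketch, using the $1.0-\mathrm{M}$ acetic acid/acetate buffer from above): suppose we ask how much strong acid it takes to shift the $\mathrm{pH}$ down by just $0.3$ units, from $4.74$ to $4.44$. That requires the ratio $\left[\mathrm{AcO}^{-}\right] /[\mathrm{AcOH}]$ to fall to $10^{-0.3} \approx 0.5$, so if $n$ is the number of moles of strong acid added per liter: \begin{aligned} \frac{1.0-n}{1.0+n} &=0.5 \\ n &\approx 0.33 \mathrm{~mol} \end{aligned} That is, it takes about a third of a mole of strong acid per liter to move this concentrated buffer by even a third of a $\mathrm{pH}$ unit; a buffer that was only $0.1 \mathrm{~M}$ in each component would be exhausted by one tenth as much acid.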
Another important property of buffers is the range of $\mathrm{pH}$ over which they can act. As we have seen from the Henderson–Hasselbalch equation, when the concentration of acid is equal to the concentration of base, the $\mathrm{pH}$ of the solution is equal to the $\mathrm{pK}_{a}$ of the acid. Thus, the acetic acid/acetate buffer has a $\mathrm{pH} = 4.74$. Generally, the effective buffering range is about $\pm 1 \mathrm{~pH}$ unit around the $\mathrm{pK}_{a}$. So the acetic acid/acetate pair acts as an effective buffer in the range of $\mathrm{pH} 3.7–5.7$, well within the acidic $\mathrm{pH}$ region. There are biological compartments (the stomach, lysosomes, and endosomes) that are acidic, but the major biological fluids (cytoplasm and blood plasma) have $\mathrm{pH}$’s around $7.2–7.4$. In these systems, the buffers are phosphate or carbonate systems. For example, the phosphate buffer system is composed mainly of $\mathrm{H}_{2}\mathrm{PO}_{4}{}^{-}$ (the proton donor or acid) and $\mathrm{HPO}_{4}{}^{2-}$ (the proton acceptor or base): $\mathrm{H}_{2}\mathrm{PO}_{4}{}^{-} + \mathrm{H}_{2}\mathrm{O} \rightleftarrows \mathrm{HPO}_{4}{}^{2-} + \mathrm{H}_{3}\mathrm{O}^{+}$
What counts as an acid or a base depends entirely on the reaction system you are studying. This is an important point. Both $\mathrm{H}_{2}\mathrm{PO}_{4}{}^{-}$ and $\mathrm{HPO}_{4}{}^{2-}$ can act as either an acid or a base depending on the $\mathrm{pH}$. (Try writing out the reactions.) But at physiological $\mathrm{pH} (7.2-7.4)$, the predominant forms are $\mathrm{H}_{2}\mathrm{PO}_{4}{}^{-}$ and $\mathrm{HPO}_{4}{}^{2-}$. The $\mathrm{pK}_{a}$ of the conjugate acid is $6.86$ so it makes sense that this buffer system is active in cellular fluids.
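We can check this with the Henderson–Hasselbalch equation; at a blood $\mathrm{pH}$ of $7.4$, with $\mathrm{pK}_{a} = 6.86$: \begin{aligned} \frac{\left[\mathrm{HPO}_{4}{}^{2-}\right]}{\left[\mathrm{H}_{2} \mathrm{PO}_{4}{}^{-}\right]}=10^{\mathrm{pH}-\mathrm{pK}_{a}}=10^{7.4-6.86} \approx 3.5 \end{aligned} so both forms are present in comparable amounts, which is exactly what a working buffer requires.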
Questions
Questions to Answer
• How much acid would you have to add to change the $\mathrm{pH}$ of a buffer that is $1.0 \mathrm{~M}$ in acid and conjugate base by one full $\mathrm{pH}$ unit?
• If the buffer is $0.1 \mathrm{~M}$ in acid and conjugate base, would you have to add the same amount of acid? Why or why not?
• What buffer systems would you use to buffer a $\mathrm{pH}$ of $4$, $6$, $8$, and $10$? What factors would you take into account?
• Carbonic acid ($\mathrm{H}_{2}\mathrm{CO}_{3}$) has two acidic protons. Draw out the structure of carbonic acid, and show how each proton can take part in an acid–base reaction with a strong base such as sodium hydroxide.
• What is the $\mathrm{pH}$ of a buffer system if the concentration of the acid component is equal to the concentration of its conjugate base?
• Can any buffer system buffer any $\mathrm{pH}$? For example, could an acetic acid/acetate system effectively buffer a $\mathrm{pH}$ of $9$?
• What criteria would you use to pick a buffer system for a particular $\mathrm{pH}$?
Questions to Ponder
• What factors might make reactions sensitive to $\mathrm{pH}$?
• Why are protein structure and activity sensitive to changes in $\mathrm{pH}$?
• Which parts of proteins are affected by changes in $\mathrm{pH}$? What kinds of chemical properties must they have? What groups of atoms do these bits of proteins contain?
• Would you expect nucleic acids to be more or less sensitive to $\mathrm{pH}$ changes than proteins?

9.3: Amino Acids, Proteins, and pH
In addition to examining how adding a strong acid or base affects a buffer solution, we can also look at the effect of $\mathrm{pH}$ on a particular acid or base. This is particularly important in biological systems, where there are many weak acid or base groups that can be affected by the $\mathrm{pH}$. For example, proteins contain both weakly acidic $–\mathrm{COOH}$ and weakly basic $–\mathrm{NH}_{2}$ groups. A $1.0-\mathrm{M}$ solution of a simple carboxylic acid like acetic acid has a $\mathrm{pH}$ of $\sim 2.8$; it turns out that most carboxylic acids behave in a similar way. If we manipulate the $\mathrm{pH}$, for example, by adding a strong base, the acetic acid reacts with the base to form an acetate ion. Based on the Henderson–Hasselbalch equation, when [acetate] = [acetic acid], the $\mathrm{pH}$ equals the acid’s $\mathrm{pK}_{a}$, which is $4.74$. As the $\mathrm{pH}$ increases, the concentration of acetate must also increase; by $\mathrm{pH} \sim 7$ (approximately normal physiological $\mathrm{pH}$), the concentration of acetic acid is very small indeed. The ratio of base to acid is about $200/1$. That is, at physiological $\mathrm{pH}$, groups such as carboxylic acids are deprotonated and exist in the carboxylate (negatively charged) form.
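The $200/1$ figure follows directly from the Henderson–Hasselbalch equation: \begin{aligned} \frac{\left[\mathrm{AcO}^{-}\right]}{[\mathrm{AcOH}]}=10^{\mathrm{pH}-\mathrm{pK}_{a}}=10^{7.0-4.74}=10^{2.26} \approx 180 \end{aligned} roughly $200$ acetate ions for every un-ionized acetic acid molecule.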
Conversely, if we look at the amino group ($–\mathrm{NH}_{2}$) of a protein, it is actually the base part of a conjugate acid–base pair in which the acid is the protonated form $–\mathrm{NH}_{3}{}^{+}$. The $\mathrm{pK}_{a}$ of an $–\mathrm{NH}_{3}{}^{+}$ group is typically $\sim 9$. At a $\mathrm{pH}$ of $9$ there are equal amounts of the protonated ($–\mathrm{NH}_{3}{}^{+}$) and unprotonated ($–\mathrm{NH}_{2}$) forms. So if we change the $\mathrm{pH}$ by adding an acid, the concentration of the $–\mathrm{NH}_{3}{}^{+}$ form increases as the base form $–\mathrm{NH}_{2}$ is protonated. At $\mathrm{pH} \sim 7$ there is little of the $–\mathrm{NH}_{2}$ form remaining. Interestingly, this means that an amino acid (shown in the figure) never exists in a state where both the neutral amino ($–\mathrm{NH}_{2}$) group and the neutral carboxylic acid ($–\mathrm{CO}_{2}\mathrm{H}$) group are present at the same time. The “neutral” species is in fact the one in which $–\mathrm{NH}_{3}{}^{+}$ and $–\mathrm{CO}_{2}{}^{-}$ are both present. This zwitterion (that is, a neutral molecule with a positive and a negative electrical charge at different locations, from the German zwitter, meaning “between”) is the predominant form at physiological $\mathrm{pH}$.
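The same kind of calculation applies to the amino group at physiological $\mathrm{pH}$: \begin{aligned} \frac{\left[-\mathrm{NH}_{3}{ }^{+}\right]}{\left[-\mathrm{NH}_{2}\right]}=10^{\mathrm{pK}_{a}-\mathrm{pH}}=10^{9-7}=100 \end{aligned} so about $99 \%$ of the amino groups are protonated, which is why the zwitterion dominates.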
A protein is composed mainly (sometimes solely) of polymers of amino acids, known as polypeptides. In a polypeptide, the amino ($–\mathrm{NH}_{2}$) and carboxylic acid ($–\mathrm{CO}_{2}\mathrm{H}$) groups of adjacent amino acids are bonded together to form a peptide bond (see figure). The resulting amide group (peptide bond) is neither acidic nor basic under physiological conditions.[9] That being said, many of the amino acids found in proteins have acidic (aspartic acid or glutamic acid) or basic (lysine, arginine, or histidine) side chains. The $\mathrm{pH}$ of the environment influences the conformation of the protein molecule and the interactions between these charged side chains (the process by which the molecule spontaneously adopts its native conformation is called protein folding). Changes from the “normal” environment can lead to changes in protein structure, and this in turn can change biological activity. In some cases, protein activity is regulated by environmental $\mathrm{pH}$. In other cases, changes in $\mathrm{pH}$ can lead to protein misfolding (or denaturation), which in living organisms can cause disruption of cell activity or death. If acidic or basic groups are protonated or deprotonated, the electronic environment in that region of the protein can change drastically, which may mean that the protein will change not only how it interacts with other species but also its shape, so as to minimize repulsive interactions or produce new attractive interactions. Small changes in protein shape can have profound effects on how the protein interacts with other molecules and, if it is a catalyst, on its efficiency and specificity.
Questions
Questions to Answer
• What would be the ratio of $–\mathrm{NH}_{3}{}^{+} / –\mathrm{NH}_{2}$ in a solution of a protein at $\mathrm{pH} 5$, $\mathrm{pH} 7$, and $\mathrm{pH} 9$?
• What kinds of interactions would each form participate in?
• What is the predominant form of a carboxylic acid group at $\mathrm{pH} 5$, $\mathrm{pH} 7$, and $\mathrm{pH} 9$?
• What kinds of interactions would each form participate in?
9.4: Coupled, Non-Equilibrium Reaction Systems
Another important buffer system is the carbonic acid ($\mathrm{H}_{2}\mathrm{CO}_{3}$) bicarbonate ($\mathrm{HCO}_{3}{}^{-}$) buffer, which is a major buffering component of blood plasma. This system is more complex than the phosphate buffer, because carbonic acid is formed by the reversible reaction of carbon dioxide in water: $\mathrm{H}_{2} \mathrm{O}+\mathrm{CO}_{2} \rightleftarrows \mathrm{H}_{2} \mathrm{CO}_{3} \quad \text { and } \quad \mathrm{H}_{2} \mathrm{CO}_{3}+\mathrm{H}_{2} \mathrm{O} \rightleftarrows \mathrm{HCO}_{3}^{-}+\mathrm{H}_{3} \mathrm{O}^{+}$
These are two reactions linked (or coupled) by a common intermediate. By examining these reactions more closely, we see how some systems exist under non-equilibrium conditions and how some reactions occur despite the fact that they have a positive free energy change and appear to contravene the second law of thermodynamics.
As we have seen previously, simple chemical reactions are characterized by how fast they occur (their rate) and how far they proceed toward equilibrium. While you will learn much more about reactions if you continue on in chemistry, that is not something we will pursue here; rather, we will consider the behavior of systems of reactions, particularly when they have not reached equilibrium. This is a situation common in open systems, systems in which energy and matter are flowing in and out. In Chapter $8$, we considered single reactions and what happens when we perturb them, either by adding or taking away matter (reactants or products) or energy (heating or cooling the reaction). Now it is time to look at what happens when reactions are coupled: when the products of one reaction are the starting materials for other reactions occurring in the same system.
Take, for example, the coupled system introduced above: the pair of reactions that are linked by the formation and reaction of carbonic acid. $\mathrm{H}_{2} \mathrm{O}+\mathrm{CO}_{2} \rightleftarrows \mathrm{H}_{2} \mathrm{CO}_{3} \text { and } \mathrm{H}_{2} \mathrm{CO}_{3}+\mathrm{H}_{2} \mathrm{O} \rightleftarrows \mathrm{HCO}_{3}^{-}+\mathrm{H}_{3} \mathrm{O}^{+}$
These coupled reactions are important for a number of reasons: they are responsible for the transport of excess carbon dioxide to the lungs and for buffering the $\mathrm{pH}$ of blood. Carbon dioxide enters the blood stream by dissolving in the plasma. However, it can also react with water in a reaction where the water acts as a nucleophile and the carbon dioxide acts as an electrophile.
The formation of carbonic acid is thermodynamically unfavorable. The equilibrium constant for hydration of carbon dioxide is $1.7 \times 10^{-3}$ and the standard free energy change $\Delta \mathrm{G}^{\circ}$ for the reaction[10] is $+16.4 \mathrm{~kJ/mol}$. This means that the amount of carbonic acid in blood plasma is quite low; most carbon dioxide is simply dissolved in the plasma (rather than reacted with the water). However, as soon as carbonic acid is formed, it can react with water: $\mathrm{H}_{2} \mathrm{CO}_{3}+\mathrm{H}_{2} \mathrm{O} \rightleftarrows \mathrm{HCO}_{3}{ }^{-}+\mathrm{H}_{3} \mathrm{O}^{+}$
to produce bicarbonate ($\mathrm{HCO}_{3}{}^{-}$). Note that we now have the components of a buffer system (a weak acid, carbonic acid, and its conjugate base bicarbonate). The rate of this reaction is increased by the enzymatic catalyst carbonic anhydrase. In this buffer system the carbonic acid can react with any base that enters the bloodstream, and the bicarbonate with any acid. This buffering system is more complex than the isolated ones we considered earlier, because one of the components (carbonic acid) is also part of another equilibrium reaction. In essence, this means that the pH of the blood is dependent on the amount of carbon dioxide in the bloodstream: $\mathrm{H}_{2} \mathrm{O}+\mathrm{CO}_{2} \rightleftarrows \mathrm{H}_{2} \mathrm{CO}_{3}+\mathrm{H}_{2} \mathrm{O} \rightleftarrows \mathrm{HCO}_{3}^{-}+\mathrm{H}_{3} \mathrm{O}^{+}$
If we remove water from the equations (for the sake of clarity) we can see the connection better: $\mathrm{CO}_{2} \rightleftarrows \mathrm{H}_{2} \mathrm{CO}_{3} \rightleftarrows \mathrm{HCO}_{3}^{-}+\mathrm{H}_{3} \mathrm{O}^{+}$
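As a check on the numbers given above for the hydration step, the standard free energy change and the equilibrium constant are related by $\Delta \mathrm{G}^{\circ}=-\mathrm{RT} \ln \mathrm{K}$; assuming body temperature ($310 \mathrm{~K}$): \begin{aligned} \Delta \mathrm{G}^{\circ}=-\left(8.314 \mathrm{~J/(mol \cdot K)}\right)(310 \mathrm{~K}) \ln \left(1.7 \times 10^{-3}\right) \approx+16.4 \mathrm{~kJ/mol} \end{aligned}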
Figure $9.4.1$ Lactic Acid
The $\mathrm{pK}_{a}$ of carbonic acid is $6.37$ and the $\mathrm{pH}$ of blood is typically $7.2–7.4$, which falls just within the buffering range. Under normal circumstances, this buffer system can handle most changes. However, for larger changes, other systems are called into play to help regulate the $\mathrm{pH}$. For example, if you exert yourself, one of the products generated is lactic acid (which we denote as $\mathrm{LacOH}$).[11] When lactic acid finds its way into the bloodstream, it lowers the $\mathrm{pH}$ (increasing the amount of $\mathrm{H}_{3}\mathrm{O}^{+}$) through the reaction: $\mathrm{LacOH}+\mathrm{H}_{2} \mathrm{O} \rightleftarrows \mathrm{H}_{3} \mathrm{O}^{+}+\mathrm{LacO}^{-}$
If we use Le Chatelier’s principle, we can see that increasing the $\mathrm{H}_{3}\mathrm{O}^{+}$ concentration shifts the equilibrium toward the production of carbon dioxide in the buffer system. As the concentration of $\mathrm{CO}_{2}$ increases, a process known as chemoreception activates nervous systems, which in turn increase heart and respiratory rates, leading to an increase in the rate of $\mathrm{CO}_{2}$ and oxygen exchange in the lungs.[12] As you breathe in $\mathrm{O}_{2}$, you breathe out $\mathrm{CO}_{2}$ (removing it from your blood). In essence, Le Chatelier’s principle explains why we pant when we exercise![13] Conversely, when some people get excited, they breathe too fast (hyperventilate); too much $\mathrm{CO}_{2}$ is removed from the blood, which reduces the $\mathrm{H}_{3}\mathrm{O}^{+}$ concentration and increases the $\mathrm{pH}$. This can lead to fainting (which slows down the breathing), a rather drastic way to return your blood to its correct $\mathrm{pH}$. An alternative, non-fainting approach is to breathe into a closed container. By breathing expelled $\mathrm{CO}_{2}$ (and a lower level of $\mathrm{O}_{2}$), you raise the $\mathrm{CO}_{2}$ level in your blood and bring its $\mathrm{pH}$ back down toward normal.
While we can use Le Chatelier’s principle to explain the effect of rapid or slow breathing, this response is one based on what are known as adaptive and homeostatic systems. Biological systems are characterized by many such interconnected regulatory mechanisms. They maintain a stable, internal chemical environment essential for life. Coupled regulatory systems lie at the heart of immune and nervous system function. Understanding the behavior of coupled regulatory systems is at the forefront of many research areas, such as: measuring the physiological response to levels of various chemicals (chemoreception); recognizing and responding to foreign molecules in the immune system; and measuring the response to both external stimuli (light, sound, smell, touch) and internal factors (such as the nervous system). Downstream of the sensory systems examined by such efforts are networks of genes, proteins, and other molecules whose interactions are determined by the thermodynamics of the chemical system. Although they were formed by evolutionary processes, and are often baroque in their details, they are understandable in terms of molecular interactions, chemical reactions, and their accompanying energy changes.
Questions
Questions to Answer
• If the $\mathrm{pK}_{a}$ of carbonic acid is $6.37$ and the $\mathrm{pH}$ of blood is over $7$, what do you think the relative amounts of carbonic acid and bicarbonate are? Why?
• Draw out the series of reactions that occur when lactic acid is introduced into the blood stream and explain why this affects the concentration of carbon dioxide in the blood stream.
• If the amount of carbon dioxide in the atmosphere increases, what effect does it have on oceans and lakes?
• If carbon dioxide dissolves in water to give carbonic acid, what do you think nitrogen dioxide ($\mathrm{NO}_{2}$) gives when dissolved in water? How about sulfur dioxide? What effect does this have on the $\mathrm{pH}$ of the water it dissolves in?
We have seen that for systems of coupled reactions, changing the concentration of one of the components in the system affects all the other components, even if they are not directly reacting with the one that is changed. We can use the same principles to explain why it is possible to carry out reactions that are thermodynamically unfavorable. We will consider a fairly simple example and then move on to see how this works in biological systems.
Many metals are not found in their elemental form. For example, copper—an important metal used for a wide range of applications from wires to roofs—is often found as chalcocite, an ore containing copper as copper sulfide. We can imagine a simple chemical reaction to separate the copper from the sulfide:[14] $\mathrm{Cu}_{2} \mathrm{S}(s) \rightleftarrows 2 \mathrm{Cu}(s)+\mathrm{S}(s) \quad \Delta \mathrm{G}^{\circ}=+86.2 \mathrm{~kJ/mol}$
Note that this reaction is a redox reaction in which the $\mathrm{Cu}^{+}$ ion is reduced to $\mathrm{Cu}$ by the addition of an electron (from the sulfide $\mathrm{S}^{2-}$, which is oxidized to sulfur with an oxidation state of $0$). Unfortunately, because the free energy change for this reaction is positive, the system at equilibrium is composed mostly of $\mathrm{Cu}_{2} \mathrm{S}(s)$. How can we get copper out of copper sulfide? One possibility is to exploit the reaction between sulfur and oxygen: $\mathrm{S}(s)+\mathrm{O}_{2}(g) \rightleftarrows \mathrm{SO}_{2}(g), \text { for which } \Delta \mathrm{G}^{\circ}=-300.1 \mathrm{kJ} / \mathrm{mol}$
This reaction is highly favorable and “goes” toward the production of $\mathrm{SO}_{2}$. It is basically the burning of sulfur (analogous to the burning of carbon) and is another redox reaction, in which the sulfur is oxidized (from an oxidation state of $0$ to $+4$). Note that one reason why this reaction is so favorable is the formation of the strong $\mathrm{S-O}$ bonds, a highly exothermic process.
If we take $\mathrm{Cu}_{2}\mathrm{S}(s)$ together with $\mathrm{O}_{2}(g)$, we have a system composed of two reactions: \begin{aligned} &\mathrm{Cu}_{2} \mathrm{~S}(s) \rightleftarrows 2 \mathrm{Cu}(s)+\mathrm{S}(s) \text { [reaction 1] } \\ &\mathrm{S}(s)+\mathrm{O}_{2}(g) \rightleftarrows \mathrm{SO}_{2}(g) \text { [reaction 2] } \end{aligned}
These two reactions share a common component ($\mathrm{S}(s)$); therefore, they are coupled. Imagine what happens when reaction 1 proceeds, even a little. The $\mathrm{S}(s)$ produced can then react with the $\mathrm{O}_{2}(g)$ present. As this reaction proceeds toward completion, $\mathrm{S}(s)$ is removed, leaving $\mathrm{Cu}(s)$ and $\mathrm{SO}_{2} (g)$. Based on Le Chatelier’s principle, reaction 1 is now out of equilibrium, and thus generates more $\mathrm{S}(s)$ and $\mathrm{Cu}(s)$. Reaction 1 in isolation produces relatively little copper or sulfur, but it is dragged toward the products by reaction 2, a favorable reaction that removes sulfur from the system. If we assume that there are no other reactions occurring within the system, we can calculate the $\Delta \mathrm{G}^{\circ}$ for the coupled reactions 1 and 2. Under standard conditions, we simply add the reactions together:
$\mathrm{Cu}_{2}\mathrm{S}(s) \rightleftarrows 2\mathrm{Cu}(s) + \mathrm{S}(s)$ $86.2 \mathrm{~kJ/mol}$
$\mathrm{S}(s) + \mathrm{O}_{2}(g) \rightleftarrows \mathrm{SO}_{2} (g)$ $-300.1 \mathrm{~kJ/mol}$
$\mathrm{Cu}_{2}\mathrm{S}(s) + \mathrm{O}_{2} (g) \rightleftarrows 2\mathrm{Cu}(s) + \mathrm{SO}_{2} (g)$ $-213.9 \mathrm{~kJ/mol}$
So, the $\Delta \mathrm{G}^{\circ}$ for the coupled reaction is $-213.9 \mathrm{~kJ/mol}$. This same basic logic applies to any coupled reaction system. Note that the common intermediate linking these two reactions is sulfur ($\mathrm{S}$). However, it is not always so simple to identify the common intermediate. In this system, we are tacitly assuming that $\mathrm{O}_{2}$ and $\mathrm{SO}_{2}$ do not react with either $\mathrm{Cu}_{2}\mathrm{S}$ or $\mathrm{Cu}$. If they did, those reactions would also need to be considered in our analysis. In fact, we need to consider all of the reactions that are possible within a system. This is normally not a big issue with simple chemical systems that contain relatively small numbers of different types of molecules (sometimes called species), but it is a significant concern when we consider biological or ecological systems that contain thousands of different types of molecules, which can interact and react in a number of ways.
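It is worth translating these free energy changes into equilibrium constants ($\mathrm{K}=e^{-\Delta \mathrm{G}^{\circ} / \mathrm{RT}}$) to see just how dramatic the effect of coupling is; a sketch at $298 \mathrm{~K}$: \begin{aligned} \text {reaction 1 alone: } \mathrm{K} &=e^{-86,200 /(8.314 \times 298)} \approx 10^{-15} \\ \text {coupled reaction: } \mathrm{K} &=e^{+213,900 /(8.314 \times 298)} \approx 3 \times 10^{37} \end{aligned} Coupling to the highly favorable combustion of sulfur converts a reaction that essentially does not proceed at all into one that goes essentially to completion.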
For example, you may have learned in biology that the molecule adenosine triphosphate ($\mathrm{ATP}$) is used to store and provide energy for cellular processes. What exactly does this mean? First, let us look at the structure of $\mathrm{ATP}$: it is composed of a base called adenine, a sugar ribose, and three phosphate units. For our purposes, the adenine base and sugar (called adenosine when attached to each other) are irrelevant. They do not change during most of the reactions in which $\mathrm{ATP}$ takes part. They are organic “building blocks” with functional groups that allow them to interact with other components in the cell for other functions (for example, in $\mathrm{RNA}$ and $\mathrm{DNA}$). To examine energy transfer, we can just use “A” (adenosine) to stand in for their structure. The important bit for our purposes is the phosphates hooked together by the $\mathrm{P—O—P}$ (phosphoanhydride) linkages. At physiological $\mathrm{pH}$, most (if not all) of the oxygens of the phosphate esters are deprotonated. This means that there is a fairly high concentration of charge in this triphosphate side chain, which acts to destabilize it. The bonds holding it together are relatively weak, and the molecule reacts with any available entity to relieve some of this strain and form even more stable bonds. For example, $\mathrm{ATP}$ is unstable in water and reacts (hydrolyzes) to form adenosine diphosphate ($\mathrm{ADP}$) and inorganic phosphate ($\mathrm{HPO}_{4}{}^{2-}$), which is often written as $\mathrm{P}_{i}$. This reaction is written as $\mathrm{ATP} + \mathrm{H}_{2}\mathrm{O} \rightleftarrows \mathrm{ADP} + \mathrm{P}_{i}$.
The standard free energy change for this reaction $\Delta \mathrm{G}^{\circ} = – 29 \mathrm{~kJ/mol}$ (at $\mathrm{pH } 7$). This is a highly exergonic (heat or energy releasing) reaction; both the enthalpy and entropy changes for this reaction are favorable. $\Delta \mathrm{H}$ is negative and $\Delta \mathrm{S}$ is positive. It makes sense that the entropy change is positive. After all, we are producing two molecules from one. The enthalpy change also makes sense. We have already mentioned that $\mathrm{ATP}$ is unstable, and the loss of one of the phosphate groups relieves some of the strain caused by the charge repulsion between the three negatively charged phosphate groups in $\mathrm{ATP}$. The bond energies in the product are stronger than the bond energies in the reactants and thus the reaction is exothermic. Like everything in living systems, this is all somewhat complicated by the presence of other substances in the cellular fluids, such as the metal ions $\mathrm{Ca}^{2+}$ and $\mathrm{Mg}^{2+}$, and changes in pH. However, the explanation is still valid. Make sure that you do not fall prey to the commonly held misconception that it is the breaking of the $\mathrm{P—O}$ bond that releases energy. On the contrary—it is the formation of more stable (stronger) bonds that releases energy.
If we go one step further and look at the actual free energy change $\Delta \mathrm{G}$ (as opposed to the standard change), using typical cellular concentrations of $\mathrm{ATP}$, $\mathrm{ADP}$, and $\mathrm{P}_{i}$, and $\Delta \mathrm{G} = \Delta \mathrm{G}^{\circ} + \mathrm{RT} \ln \mathrm{Q}$ (where $\mathrm{Q}=[\mathrm{ADP}]\left[\mathrm{P}_{i}\right] /[\mathrm{ATP}]$), we can calculate: $\Delta \mathrm{G} = – 52 \mathrm{~kJ/mol}$, assuming that the concentration of $\mathrm{ATP}$ is typically about ten times that of $\mathrm{ADP}$, and that $\left[\mathrm{P}_{i}\right]$ is about $0.001 \mathrm{~M}$. So under real conditions in the cell, the Gibbs free energy change is much larger in magnitude (more negative) than the standard Gibbs free energy change. This energy is not wasted; it is used to drive other reactions that would not otherwise occur. However, this energy cannot be used to drive just any random reaction. The reactions have to be coupled by common intermediates (just like the carbon dioxide–carbonate system).
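A sketch of that calculation, assuming body temperature ($310 \mathrm{~K}$) and treating the concentrations as (dimensionless) activities: \begin{aligned} \mathrm{Q} &=\frac{[\mathrm{ADP}]\left[\mathrm{P}_{i}\right]}{[\mathrm{ATP}]}=(0.1)(0.001)=1 \times 10^{-4} \\ \Delta \mathrm{G} &=-29 \mathrm{~kJ/mol}+\left(0.00831 \mathrm{~kJ/(mol \cdot K)}\right)(310 \mathrm{~K}) \ln \left(10^{-4}\right) \approx-29 \mathrm{~kJ/mol}-23.7 \mathrm{~kJ/mol} \approx-52 \mathrm{~kJ/mol} \end{aligned}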
A typical reaction scenario is the transfer of the terminal phosphate group to another biomolecule as shown in the diagram. This transfer occurs with lipids and proteins, but typically the reacting group is an alcohol ($\mathrm{ROH}$) or sometimes a carboxylic acid ($\mathrm{RCOOH}$). The reaction that takes place is almost the same as the hydrolysis reaction except that the incoming nucleophile has much more “stuff” attached to the oxygen.
The formation of these phosphate esters makes the original functional group more reactive. For example, the formation of an amide bond (the major bond that holds proteins together) is normally endergonic ($\Delta \mathrm{G}$ of about $+2$ to $+4 \mathrm{~kJ/mol}$). That is, the formation of amide bonds is not spontaneous (you might want to think about what this means for the amide bonds in the proteins that make up a good portion of you). Therefore, protein synthesis is coupled with $\mathrm{ATP}$ hydrolysis, as is the production of many biomolecules: sugars, lipids, $\mathrm{RNA}$, and $\mathrm{DNA}$. The reactions are complex, but each of them is driven by a series of individual reactions linked by common intermediates, as sketched below.
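Schematically, coupling works because the free energy changes of sequential reactions that share a common intermediate are additive. Using round numbers from the discussion above (and setting aside, for simplicity, the enzymatic machinery that actually carries out the transfer):
$\text{amide bond formation:} \quad \Delta \mathrm{G}^{\circ} \approx +4 \mathrm{~kJ/mol}$
$\mathrm{ATP} + \mathrm{H}_{2}\mathrm{O} \rightleftarrows \mathrm{ADP} + \mathrm{P}_{i}: \quad \Delta \mathrm{G}^{\circ} \approx -29 \mathrm{~kJ/mol}$
$\text{coupled process:} \quad \Delta \mathrm{G}^{\circ} \approx (+4) + (-29) = -25 \mathrm{~kJ/mol}$
Because the overall $\Delta \mathrm{G}^{\circ}$ is negative, the coupled process is thermodynamically favorable even though amide bond formation alone is not.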
Now you might be asking: if $\mathrm{ATP}$ is so unstable, how does it get formed in the first place, and how can it be found at such high concentrations? The short answer involves two ideas that we have encountered before. First, while $\mathrm{ATP}$ is thermodynamically unstable (like wood in the presence of $\mathrm{O}_{2}$), its hydrolysis involves overcoming an activation energy, so under physiological conditions an enzyme (an $\mathrm{ATP}$ase) is needed to catalyze the hydrolysis of $\mathrm{ATP}$ and couple it to other reactions. Second, $\mathrm{ATP}$ is formed through coupled reactions that link its synthesis to the oxidation of molecules such as glucose, or to the direct absorption of energy in the form of light (photosynthesis). When glucose reacts with oxygen it forms carbon dioxide and water: $\mathrm{C}_{6} \mathrm{H}_{12} \mathrm{O}_{6}+6 \mathrm{O}_{2} \rightleftarrows 6 \mathrm{CO}_{2}+6 \mathrm{H}_{2} \mathrm{O}$
with an overall standard free energy change $\Delta \mathrm{G}^{\circ} = –2870 \mathrm{~kJ/mol}$. The reasons for this large negative free energy change are that $\Delta \mathrm{S}^{\circ}$ is positive (why do you think this is?) and that there is a large negative $\Delta \mathrm{H}^{\circ}$. Remember that $\Delta \mathrm{H}^{\circ}$ can be approximated by looking at the changes in bond energy from reactants to products. A major reason for the large enthalpy change is that the bonds in carbon dioxide and water are very strong (a $\mathrm{C=O}$ bond takes $805 \mathrm{~kJ/mol}$ to break, and an $\mathrm{O-H}$ bond $463 \mathrm{~kJ/mol}$), and therefore when $\mathrm{C=O}$ and $\mathrm{O-H}$ bonds are formed a large amount of energy is released. When one mole of glucose is completely oxidized to $\mathrm{CO}_{2}$ and $\mathrm{H}_{2}\mathrm{O}$, the energy produced is harnessed to ultimately produce $\sim 36$ moles of $\mathrm{ATP}$ (from $\mathrm{ADP}$ and $\mathrm{P}_{i}$).
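As a rough back-of-the-envelope check on what this coupling accomplishes (using only the standard free energy values quoted above, and ignoring corrections for actual cellular concentrations):
$36 \mathrm{~mol~ATP} \times 29 \mathrm{~kJ/mol} \approx 1040 \mathrm{~kJ} \text{ captured, out of } 2870 \mathrm{~kJ} \text{ released} \approx 36\%$
The rest of the energy is transferred to the surroundings as thermal energy, which is one reason organisms generate heat.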
The mechanism(s) involved in this process are complex (involving ion gradients and rotating enzymes), but the basic principle remains: the reactions are coupled by common, and often complex, intermediate processes. This reaction coupling leads to networks of reactions. The synthesis and reaction of $\mathrm{ATP}$ (and $\mathrm{ADP}$) is governed by the same principles that govern much simpler reactions. Whether $\mathrm{ATP}$ or $\mathrm{ADP}$ is the dominant species in any cellular compartment depends upon the conditions and upon which substrates are present to react.
Questions
Questions to Answer
• Can you draw the protonated form of $\mathrm{ATP}$?
• Can you draw the unprotonated form of $\mathrm{ATP}$, showing how the negative charge is stabilized by the surrounding cellular fluids? (Hint: the fluid is mainly water.)
• The $\mathrm{pK}_{a}$'s of phosphoric acid ($\mathrm{H}_{3}\mathrm{PO}_{4}$) are $2.15$, $7.2$, and $12.35$. Is $\mathrm{ATP}$ protonated or deprotonated in the cellular environment?
• Write out a hypothetical sequence of two reactions that result in the production of a thermodynamically unfavorable product.
• How can you tell whether two reactions are coupled?
• Why do biological systems rely on coupled reactions?
• If $\mathrm{ATP}$ is unstable, how is it possible that $\mathrm{ATP}$ can exist at high concentrations within the cell?
Questions to Ponder
• If you are trying to determine if two reactions are coupled, what do you look for?
• Coupling allows unfavorable reactions to occur. Why doesn’t this violate the laws of thermodynamics?
• Assume that you have a set of five coupled reactions. What factors could complicate the behavior of the system?
• How could you ensure that an unfavorable reaction continued to occur at a significant (useful) rate?
9.6: In-Text References
1. Of course this designation is entirely arbitrary, as backward and forward depend on how the initial reaction is written.
2. We recognize such evolutionarily conserved processes because they use essentially (but not quite) the same reaction components and strategies. For example, aerobic respiration (whether in bacteria, potatoes, or humans) uses a structurally similar membrane system to transfer electrons from molecule to molecule (redox reactions). This generates an $\mathrm{H}^{+}$ gradient that is then used by a rotary protein “generator” to synthesize $\mathrm{ATP}$.
3. Amino acid chains are referred to as polypeptides; a protein is a functional unit, which can be composed of multiple polypeptides and non-polypeptide components such as heme groups.
4. http://benbleasdaleblogs.wordpress.c...ide-your-nose/
5. This binding site can also be occupied by other types of molecules, in particular carbon monoxide ($\mathrm{CO}$). Because the binding of $\mathrm{O}_{2}$ to hemoglobin is much weaker and less stable than the $\mathrm{CO}$–hemoglobin interaction, exposure to $\mathrm{CO}$ blocks $\mathrm{O}_{2}$ transport through the body, leading to suffocation.
6. An explanation of how to do this is provided in the supplementary materials.
7. It is always important to keep in mind that even though we write reaction equations with “sides” – product and reactant – in fact all these species are present in the same reaction vessel.
8. By adding a small amount of solute (rather than a volume of solution) we will not significantly affect the volume of the solution - which will make determining the concentration easier.
9. You might wonder why the amide nitrogen is not basic, even though it appears to have a lone pair of electrons. These electrons are not available for donation because they are conjugated with (interacting with) the $\mathrm{C=O}$ group. You will have to wait until organic chemistry to hear more on this fascinating topic.
10. Calculated from $\Delta \mathrm{G}^{\circ} = –\mathrm{RT} \ln \mathrm{K}$ at physiological temperature ($37 { }^{\circ}\mathrm{C}$).
11. This occurs primarily because $\mathrm{O}_{2}$ is in short supply and the aerobic respiration reaction cannot proceed to completion.
12. Guyenet et al. (2010). Central $\mathrm{CO}_{2}$ chemoreception and the integrated neural mechanisms of cardiovascular and respiratory control. J. Appl. Physiol. 108, 995. http://www.ncbi.nlm.nih.gov/pubmed/20075262.
13. Although it does not explain why we would want to exercise in the first place.
14. Adapted from Physical Chemistry for the Chemical and Biological Sciences by Raymond Chang [complete citation]
How do we know what we know about the structure of matter? If you think back to our discussions of atomic structure, one of the most important pieces of evidence for the nature of atoms – particularly the arrangement of their electrons – was the way atoms interact with electromagnetic radiation, that is, light. For example, the idea that both the energy of electromagnetic radiation and that of electrons is quantized came from Einstein’s analysis of the photoelectric effect: electrons are ejected from metals only if they interact with photons carrying at least a certain discrete amount of energy. More evidence for quantized electron energy states was provided by the study of atomic absorption and emission spectra, since photons with energies corresponding to the gaps between electron energy levels are either absorbed or emitted, while photons with the “wrong” amount of energy are not absorbed. Now that we have studied different assemblies of atoms (molecules, ions, networked structures), we can also look at how these larger entities interact with energy (in the form of electromagnetic radiation).
Interactions of electromagnetic radiation and electrons in molecules: As we have seen, just as electrons occupy atomic orbitals in atoms, the electrons in molecules occupy molecular orbitals. As with atomic orbitals, electrons in molecular orbitals can absorb or release photons of a specific energy as they move from one molecular orbital to another. However, there is a significant difference between the absorption/emission process in isolated atoms (or ions) and that of molecules. When an electron is promoted to a higher energy level in an atom, the product is an atom in an excited state – generally the excited atom (or ion) will decay back to the ground state by emitting a photon. $\mathrm{A} + hν \rightarrow \mathrm{A}^{*} (\text{excited state}) \rightarrow \mathrm{A} (\text{ground state}) + hν$
However, when an electron within a molecule is excited it moves (or is “promoted”) from its original molecular orbital to another. Now there are a number of different consequences that may occur. For example, if the electron absorbs a photon and is promoted from a bonding molecular orbital to an anti-bonding orbital, the result will be that the bond will break, since there is now no overall stabilizing interaction. Consider $\mathrm{H–H}$, which is the simplest possible molecule. The set of molecular orbitals for hydrogen includes a σ bonding and a $\sigma^{*}$ anti-bonding orbital. In the ground (or lowest energy) state, molecular hydrogen has a σ bonding orbital containing both of the molecule’s electrons. If one of the bonding electrons absorbs a photon that has just the right amount of energy (the energy difference between the bonding and anti-bonding orbital) it will be promoted and move into the destabilized anti-bonding orbital – causing the bond between the atoms to break. As you might imagine, if chemical bonds were susceptible to breaking merely by being exposed to low energy electromagnetic radiation, such as that of visible light, the world would be a different (and rather boring) place. For example, life would not be possible, since it depends upon the stability of molecules.
The energy of the photons required to bring about bond breaking is quite large. For example, the energy required to break an $\mathrm{H–H}$ bond (the bond energy) is $436 \mathrm{~kJ/mol}$. If you calculate the wavelength of a photon that could deliver this amount of energy to a single bond, you will find that it lies in the ultraviolet region of the electromagnetic spectrum ($\sim 280 \mathrm{~nm}$). Typical strong covalent sigma (single) bonds require quite high-energy photons to break them.
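We can check this with a short calculation: convert the molar bond energy to the energy per bond, then apply the Planck relation $E = hc/\lambda$.
$E = \frac{436 \times 10^{3} \mathrm{~J/mol}}{6.022 \times 10^{23} \mathrm{~mol}^{-1}} = 7.24 \times 10^{-19} \mathrm{~J}$ per bond
$\lambda = \frac{hc}{E} = \frac{(6.626 \times 10^{-34} \mathrm{~J\cdot s})(2.998 \times 10^{8} \mathrm{~m/s})}{7.24 \times 10^{-19} \mathrm{~J}} \approx 2.74 \times 10^{-7} \mathrm{~m} \approx 274 \mathrm{~nm}$
consistent with the $\sim 280 \mathrm{~nm}$ figure quoted above.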
So the question is: if the Earth’s atmosphere blocks out most ($>98 \%$) of the high-energy (ultraviolet) photons, and most biologically important molecules cannot absorb visible light, why is there a need for sunscreen, which filters out UV-A ($400-315 \mathrm{~nm}$) and UV-B ($315-280 \mathrm{~nm}$) photons? The answer is that most biological molecules contain more than simple σ bonds. For example, most complex biological molecules also contain π bonds and non-bonding electrons in addition to $\sigma$ bonds; transitions between these orbitals require less energy and so can be brought about by lower-energy photons. As you can see in the figure, the energy gaps between these orbitals are quite different from, and smaller than, the $\sigma - \sigma^{*}$ difference. Photons with enough energy to cause these electron transitions are present in sunlight. For example, a double bond has both a $\sigma$ and a $\pi$ bond. Absorption of a photon that promotes an electron from a π bonding orbital to a $\pi^{*}$ anti-bonding orbital has the effect of breaking the original $\pi$ bond; one way to represent this is shown here $\rightarrow$. One of the electrons that was in the $\pi$ bond is now in the high-energy $\pi^{*}$ antibonding orbital and is far more reactive.
Another way to think about it is that the electrons are now unpaired and are much more likely to react to form a more stable entity.[1] An obvious way to regain stability is for the electron in the $\pi^{*}$ antibonding orbital to drop back down to the bonding energy level and emit a photon of the same energy, and in most cases this is what happens – ultimately causing no damage. One caveat here is that since double bonds are rotationally constrained, it is possible for rotation to occur around the remaining single ($\sigma$) bond before the π bond reforms; this leads to an isomer of the original alkene. On the other hand, if there is a nearby potentially reactive species, reactions between molecules (or, in the case of biological macromolecules, between distinct regions of those molecules) can occur. For example, most of us are aware that exposure to the sun causes skin damage that can lead to skin cancer. A major mechanism for these effects involves $\mathrm{DNA}$. Where two thymidine bases are adjacent to one another, a UV photon can be absorbed by a π bond in one thymine base. This broken π bond (and the resulting unpaired electron) is very reactive. It can react with a π bond in the adjacent thymine moiety to form a new bond, a reaction that produces a four-membered carbon ring known as a thymine dimer. The $\mathrm{DNA}$ replication machinery cannot accurately replicate a sequence containing a thymine dimer, resulting in a change in $\mathrm{DNA}$ sequence – a mutation. Mutations of this type are a common early step in the generation of cancerous skin cells.[2]
A more benign example of photon absorption in biological systems underlies the mechanism by which we (and other organisms) detect light – that is, how we see things! While it was originally thought (at least by some) that vision involved rays emitted from the eyes,[3] we now understand that to see we need to detect photons that are reflected or emitted by the objects around us. The process begins when photons of light fall on cells known as photoreceptors. In our eyes, these cells are located within the retina, a sheet of cells that lines the interior surface of the eye. Within a subset of retinal cells are a number of different types of π-bond-containing molecules: proteins known generically as opsins. An opsin is composed of a polypeptide (or apoprotein) that is covalently bound to another molecule, 11-cis-retinal.[4] This molecule is derived from vitamin A (all-trans-retinol). The complex of apoprotein and retinal is the functional opsin protein. Differences among the various opsin apoproteins influence the wavelength of the photons absorbed by the functional opsin protein. When a photon is absorbed, it promotes an electron from one of retinal’s π bonds (between $C_{11}$ and $C_{12}$) to an antibonding orbital. Instead of reacting with another molecule, as thymine does, there is a rotation around the remaining single ($\sigma$) bond, followed by the re-formation of the $\pi$ bond, which leads to the isomerization of the original 11-cis form into the trans isomer. This change in the shape of the retinal moiety in turn influences the shape of the opsin protein, which initiates a cascade of electrochemical events that carry signals to the rest of the brain (the retina is considered an extension of the brain) that are eventually recognized as visual input.
UV-Vis spectroscopy and chromophores – or why are carrots orange? One common recommendation from doctors is that we eat plenty of highly colored fruits and vegetables. The compounds that give these foods their strong colors have a number of structural features in common. For example, the compound that gives carrots and sweet potatoes their distinctive orange color is $\beta$-carotene; you may well notice its similarity to retinal. The compound that contributes to the red color of tomatoes is lycopene. Molecules of this type are known generically as pigments.
The wavelengths at which a compound absorbs light depend on the energy gap between the orbitals that are involved in the transition, and this energy gap is determined by the structure of the molecule. A molecule with only single bonds absorbs light at shorter wavelengths (in the high-energy UV), while more extended bonding patterns are associated with the absorption of visible light. In particular, the presence of multiple π bonds and their interactions within the molecule can affect the energy gap between the molecular orbitals. Recall our discussion of graphite. Rather than thinking of graphite as sheets of fused six-membered rings with alternating single and double bonds, we can think of each bond as a localized σ bond plus a delocalized π system; there are huge numbers of $\pi$ molecular orbitals spread over the whole sheet of carbon atoms. The more $\pi$ MOs there are, the smaller the energy gaps between them become – that is, the less energy (the longer the wavelength of light) is needed to move an electron from a $\pi$ to a $\pi^{*}$ orbital. In network substances like graphite and metals, the energy gap between the orbitals becomes negligible, and we think of the bonding model as a band of molecular orbitals. In these cases, many wavelengths of light can be absorbed and then re-emitted, which gives graphite and metals their characteristic shininess. In substances like lycopene or $\beta$-carotene we also find this pattern of alternating single and double bonds. We say that compounds with this pattern (e.g. $\mathrm{–C=C–C=C–}$) are conjugated, and we can model their bonding in the same way as graphite’s: there are $\pi$ MOs that extend over a region of the molecule, and the more of them there are, the closer together in energy they get.
For an isolated $\mathrm{C=C}$ double bond, the energy required to promote an electron from the $\pi$ to the $\pi^{*}$ orbital corresponds to light in the UV region (around $170 \mathrm{~nm}$), but as the number of conjugated double bonds (those separated by single bonds) increases, the energy gap between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) decreases. Eventually, the wavelength of light needed to promote an electron from the HOMO to the LUMO moves into the visible region, and the substance becomes colored. (Note that it does not take on the color of the light that is absorbed, but rather that of the remaining light that is transmitted or reflected.) These conjugated regions of molecules are called chromophores.[5] The longer the conjugated section of the molecule, the longer the wavelength that is absorbed. You will notice that both lycopene and $\beta$-carotene contain large chromophore regions.
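The inverse relationship between gap and wavelength can be made concrete by expressing the Planck relation per mole, $\Delta E = hcN_{A}/\lambda$. For the wavelengths mentioned above (taking $\sim 450 \mathrm{~nm}$ as a representative visible wavelength):
$\Delta E (170 \mathrm{~nm}) \approx 700 \mathrm{~kJ/mol} \qquad \Delta E (450 \mathrm{~nm}) \approx 266 \mathrm{~kJ/mol}$
In other words, extending the conjugation must shrink the HOMO–LUMO gap to well under half its isolated-double-bond value before a compound can absorb visible light and appear colored.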
Samples of UV-VIS absorption spectra are shown here. Note that in contrast to the atomic absorption spectra we saw earlier, which consisted of sharp lines corresponding to the wavelengths of light absorbed by atoms, these spectra are broad and ill-defined. In addition, you can see that the longer (larger) the chromophore, the longer the wavelength absorbed, and each of these compounds appears a different color.
The fact that the peaks in these spectra are not sharp means that UV-VIS spectroscopy is typically not used for the identification of compounds (see below for IR and NMR spectroscopy, which can be used for this purpose). However, the amount of light absorbed is proportional to the concentration of the absorbing substance, and therefore UV-VIS spectroscopy can be used to determine the concentration of samples. There are other optical behaviors associated with complex molecules in organisms, including molecular systems that emit light, a process known as bioluminescence, that we will not discuss here.[6]
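The quantitative form of this proportionality is the Beer–Lambert law:
$A = \varepsilon \ell c$
where $A$ is the (dimensionless) absorbance measured by the spectrophotometer, $\varepsilon$ is the molar absorptivity of the compound at the chosen wavelength, $\ell$ is the path length of the light through the sample (typically $1 \mathrm{~cm}$), and $c$ is the molar concentration. Once $\varepsilon$ has been determined from standards of known concentration, an unknown concentration follows directly as $c = A/(\varepsilon \ell)$.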
Infrared spectroscopy – looking at molecular vibrations: Up to now we have concentrated on the absorption (and emission) of energy caused by transitions of electrons between quantized energy levels. However, as we discussed earlier, electron energies are not the only quantized energies at the atomic-molecular level. In molecules, the energies of molecular vibrations and rotations are also quantized, but the energies involved are much lower than those needed to break bonds. When two atoms are bonded, the atoms can move back and forth relative to each other, and as they move, the potential energy of the two-atom system changes (why is that?). There are also motions associated with rotations around bonds. But (weirdly, and quantum mechanically) rather than being able to assume any value, the energies of these vibrations (and rotations) are quantized. The energy gaps between vibrational energy levels tend to be in the range of infrared radiation; when we look at the light absorbed or emitted by vibrational energy changes we are doing what is known as infrared spectroscopy. Transitions from one rotational energy level to another can be promoted by microwave radiation, leading to microwave spectroscopy. The table below provides examples of different types of spectroscopy, the wavelengths of electromagnetic radiation typically involved, and the uses of each kind.
Table I

| Type of Spectroscopy | Radiation wavelength | Interaction with matter | Purpose |
|---|---|---|---|
| UV-VIS | $350-700 \mathrm{~nm}$ | Electronic transitions in molecules | Often used to determine concentrations |
| IR | $2,500-16,000 \mathrm{~nm}$ | Molecular vibrations | To determine the presence of particular groups of atoms (functional groups) |
| NMR | $10-100 \mathrm{~m}$ | Nuclear (spin) transitions | To identify the types of C and H in molecules |
| MRI | $10-100 \mathrm{~m}$ | Nuclear (spin) transitions | Imaging (typically of human body parts) |
Why, you might ask, are we interested in the vibrations and rotations of molecules? It turns out that many molecules and fragments of molecules have very distinctive IR absorption patterns that can be used to identify them. The figure shows an IR spectrum of a carboxylic acid and how the various peaks can be ascribed to vibrations of different bonds or groups within the molecule. Infrared spectroscopy allows us to identify substances from these patterns, both in the lab and for example in interstellar dust clouds. The presence of quite complex molecules in space (hundreds of millions of light years away from earth) has been detected by the use of IR spectroscopy.
Nuclear Magnetic Resonance Spectroscopy (NMR): NMR is a form of spectroscopy that exploits the fact that certain nuclei can, depending on their structure, behave like tiny spinning magnets. Two of the most common nuclei used for NMR spectroscopy are ${}^{1}\mathrm{H}$ and ${}^{13}\mathrm{C}$. When materials containing carbon or hydrogen atoms are placed in a magnetic field, there are two possible orientations these nuclei can adopt with respect to the field: a low-energy orientation in which the nuclear magnet is aligned with the field, and a high-energy orientation in which the nuclear magnet is aligned against the field. The effect is to split the energy levels of the nuclei. This makes it possible to cause a transition between the two energy levels by the absorption of electromagnetic radiation of the appropriate energy – which, in this instance, is in the radio-wave range.
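For reference (this goes a little beyond what we need here), the size of this splitting, and hence the frequency absorbed, grows in direct proportion to the strength of the applied magnetic field $B_{0}$:
$\Delta E = h \nu, \quad \nu = \frac{\gamma B_{0}}{2 \pi}$
where $\gamma$ (the magnetogyric ratio) is a constant characteristic of each kind of nucleus. This is why NMR instruments are often described by their ${}^{1}\mathrm{H}$ operating frequency (a “400 MHz” spectrometer, for example); the frequency is simply a proxy for the field strength.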
The exact energies of these transitions depend upon the environment of the nuclei. If the $\mathrm{C}$ or $\mathrm{H}$ is in an electron-poor environment, it will appear at a different frequency in the spectrum than a $\mathrm{C}$ or $\mathrm{H}$ that is surrounded by more electron density (such a nucleus is said to be shielded by the electrons). From a study of the different energies absorbed as the nuclei flip from one spin state to another, it is possible to derive information about the structure of the compound.
The information that can be gained from simple NMR spectra has to do with the number and types of nuclei in a given compound. The simplest type of NMR spectrum is based on ${}^{13}\mathrm{C}$, a minor isotope of carbon (about 1% natural abundance) that is present in all naturally occurring samples of carbon compounds. In a ${}^{13}\mathrm{C}$ spectrum, each chemically distinct carbon environment in the molecule gives rise to a signal (peak). For example, ethanol ($\mathrm{CH}_{3}\mathrm{CH}_{2}\mathrm{OH}$) has two peaks in its ${}^{13}\mathrm{C}$ spectrum because there are two, and only two, distinct chemical environments that a carbon atom can “inhabit.” In contrast, cyclohexenone produces a spectrum with six distinct peaks, because each of the six carbon atoms in the molecule inhabits a distinctly different environment. Benzene ($\mathrm{C}_{6}\mathrm{H}_{6}$), on the other hand, has only one signal in its ${}^{13}\mathrm{C}$ NMR spectrum, since there is only one type of carbon in this molecule: all the positions in the ring are equivalent. (Draw out the Lewis structure to convince yourself that this is true.)
Proton (${}^{1}\mathrm{H}$) NMR spectra appear more complicated because each hydrogen atom tends to give a signal that is split into several peaks. This is because each $\mathrm{H}$ nucleus can be affected by the neighboring nuclei, which produces multiple energy levels for each $\mathrm{H}$ and hence more complex spectra. ${}^{13}\mathrm{C}$ spectra appear simpler because there is typically only one (or zero) ${}^{13}\mathrm{C}$ nucleus in any given molecule, so there are no interactions with nearby carbons (${}^{12}\mathrm{C}$ does not have different nuclear energy levels in a magnetic field). Note that the ${}^{1}\mathrm{H}$ NMR spectrum shown is much more complex than the ${}^{13}\mathrm{C}$ NMR spectrum; however, there are five distinct clusters of signals, and there are five kinds of protons in the compound that gives rise to this spectrum.
A variant of NMR is Magnetic Resonance Imaging (MRI), which is based on the same underlying nuclear behavior but uses a somewhat different approach. In MRI the material (usually a person) from which you want to record the spectrum is placed in a large magnet, which separates the nuclear spin states as described above. The target is irradiated with a pulse of radio waves that promotes the nuclei up to their higher-energy spin state. As the nuclei decay back to the lower spin state they emit photons. Instead of recording the energies of these photons, the system records the times it takes for the photons to be emitted as the nuclei drop back to their lowest energy states. These times depend on the environment of the nuclei, making it possible, through data manipulation, to develop internal visualizations of the body in the scanner.
These are just a few examples of the many types of spectroscopy you may encounter, but all of these methods depend on recording how matter and energy interact, and using that data to determine the arrangement of the atoms in the matter under investigation.
Thumbnail: White light is dispersed by a prism into the colors of the visible spectrum. (CC BY-SA 3.0; D-Kuru).
10: Appendix- Spectroscopy
1. Species with unpaired electrons are called radicals or free radicals - they are typically highly reactive and are thought to be implicated in many processes involving cellular damage and aging.
2. Fortunately, there are cellular mechanisms that can detect and repair these kinds of radiation-induced mutations.
3. http://nivea.psycho.univ-paris5.fr/F...entVisions.htm
4. Other related molecules are found throughout the biological world, for more examples see http://www.ncbi.nlm.nih.gov/pubmed/3416013
5. http://phototroph.blogspot.com/2006/...n-spectra.html
6. http://en.Wikipedia.org/wiki/Bioluminescence and http://www.microscopyu.com/articles/...g/fpintro.html
• 1.1: Introduction to Chemistry
Chemistry is too universal and dynamically changing a subject to be confined to a fixed definition; it might be better to think of chemistry more as a point of view that places its major focus on the structure and properties of substances— particular kinds of matter— and especially on the changes they undergo.
• 1.2: Pseudoscience
A pseudoscience is a belief or process which masquerades as science in an attempt to claim a legitimacy which it would not otherwise be able to achieve on its own terms; it is often known as fringe- or alternative science. The most important of its defects is usually the lack of the carefully controlled and thoughtfully interpreted experiments which provide the foundation of the natural sciences and which contribute to their advancement.
01: Fundamentals of Science and Chemistry
Learning Objectives
• Distinguish between chemistry and physics;
• Suggest ways in which the fields of engineering, economics, and geology relate to Chemistry;
• Define the following terms, and classify them as primarily microscopic or macroscopic concepts: element, atom, compound, molecule, formula, structure.
• The two underlying concepts that govern chemical change are energetics and dynamics. What aspects of chemical change does each of these areas describe?
Chemistry is too universal and dynamically changing a subject to be confined to a fixed definition; it might be better to think of chemistry more as a point of view that places its major focus on the structure and properties of substances— particular kinds of matter— and especially on the changes they undergo.
The real importance of Chemistry is that it serves as the interface to practically all of the other sciences, as well as to many other areas of human endeavor. For this reason, Chemistry is often said (at least by chemists!) to be the "central science". Chemistry can be "central" in a much more personal way: with a solid background in Chemistry, you will find it far easier to migrate into other fields as your interests develop.
Research or teaching not for you? Chemistry is so deeply ingrained into so many areas of business, government, and environmental management that some background in the subject can be useful (and able to give you a career edge as a team member having special skills) in fields as varied as product development, marketing, management, computer science, technical writing, and even law.
So just what is chemistry?
Do you remember the story about the group of blind men who encountered an elephant? Each one moved his hands over a different part of the elephant's body— the trunk, an ear, or a leg— and came up with an entirely different description of the beast. Chemistry can similarly be approached in different ways, each yielding a different, valid, and yet hopelessly incomplete view of the subject. Thus we can view chemistry from multiple standpoints ranging from the theoretical to the eminently practical:
| Mainly theoretical | Mainly practical |
|---|---|
| Why do particular combinations of atoms hold together, but not others? | What are the properties of a certain compound? |
| How can I predict the shape of a molecule? | How can I prepare a certain compound? |
| Why are some reactions slow, while others occur rapidly? | Does a certain reaction proceed to completion? |
| Is a certain reaction possible? | How can I determine the composition of an unknown substance? |
Boiling it down to the basics
At the most fundamental level, chemistry can be organized along the lines shown here.
• Dynamics refers to the details of the rearrangements of atoms that occur during chemical change, and that strongly affect the rate at which change occurs.
• Energetics refers to the thermodynamics of chemical change, related to the uptake or release of heat. This aspect of chemistry controls the direction in which change occurs, and the mixture of substances that are produced as a result.
• Composition and structure define the substances that are produced because of a chemical change. Structure specifically refers to the relative arrangements of the atoms in space. The extent to which a given structure can persist is determined by energetics and dynamics.
• Synthesis refers to formation of new (and usually more complex) substances from simpler ones, but in the present context we use it in the more general sense to denote the operations required to bring about chemical change and to isolate the desired products.
This view of Chemistry is rather a stringent one that is probably more appreciated by people who already know the subject than by those who are about to learn it, so we will use a somewhat expanded scheme to organize the fundamental concepts of chemical science. But if you need a single-sentence "definition" of Chemistry, this one wraps it up pretty well:
Chemistry is the study of substances; their properties, structure, and the changes they undergo.
Micro-macro: the forest or the trees
Chemistry, like all the natural sciences, begins with the direct observation of nature — in this case, of matter. But when we look at matter in bulk, we see only the "forest", not the "trees" — the atoms and molecules of which matter is composed — whose properties ultimately determine the nature and behavior of the matter we are looking at. This dichotomy between what we can and cannot directly see constitutes two contrasting views which run through all of chemistry, which we call macroscopic and microscopic.
In the context of Chemistry, "microscopic" implies the atomic or subatomic levels which cannot be seen directly (even with a microscope!) whereas "macroscopic" implies things that we can know by direct observations of physical properties such as mass, volume, etc. The following table provides a conceptual overview of Chemical science according to the macroscopic/microscopic dichotomy we have discussed above. It is of course only one of the many ways of looking at the subject, but you may find it helpful in organizing the many facts and ideas that you will encounter in your study of Chemistry. We will organize the discussion in this lesson along similar lines.
| realm | macroscopic view | microscopic view |
|---|---|---|
| composition | formulas, mixtures | structures of solids, molecules, and atoms |
| properties | intensive properties of bulk matter | particle sizes, masses, and interactions |
| change (energetics) | energetics and equilibrium | statistics of energy distribution |
| change (dynamics) | kinetics (rates of reactions) | mechanisms |
Chemical composition
Mixture or "pure substance"?
In science it is necessary to know what we are talking about, so before we can begin to consider matter from a chemical point of view, we need to know its composition: is it a single substance, or a mixture? (We will get into the details of the definitions later, but for the moment you probably have a fair understanding of the distinction; think of a sample of salt (sodium chloride) as opposed to a solution of salt in water — a mixture of salt and water.)
Elements and compounds
It has been known for at least a thousand years that some substances can be broken down by heating or chemical treatment into "simpler" ones, but there is always a limit; we eventually get substances known as elements that cannot be reduced to any simpler forms by ordinary chemical or physical means. What is our criterion for "simpler"? The most observable (and therefore macroscopic) property is the weight.
The idea of a minimal unit of chemical identity that we call an element developed from experimental observations of the relative weights of substances involved in chemical reactions. For example, the compound mercuric oxide can be broken down by heating into two other substances:
$\ce{2 HgO \rightarrow 2 Hg + O_2}$
... but the two products, metallic mercury and dioxygen, cannot be decomposed into simpler substances, so they must be elements.
Elements and atoms
The definition of an element given above is an operational one; a certain result (or in this case, a non-result!) of a procedure that might lead to the decomposition of a substance into lighter units will tentatively place that substance into one of the categories: element or compound. Because this operation is carried out on bulk matter, the concept of the element is also a macroscopic one. The atom, by contrast, is a microscopic concept which in modern chemistry relates the unique character of every chemical element to an actual physical particle.
The idea of the atom as the smallest particle of matter had its origins in Greek philosophy around 400 BCE but was controversial from the start (both Plato and Aristotle maintained that matter was infinitely divisible.) It was not until 1803 that John Dalton proposed a rational atomic theory to explain the facts of chemical combination as they were then known, thus being the first to employ macroscopic evidence to illuminate the microscopic world. It wasn't until the 1900s that the atomic theory became universally accepted. In the 1920's it became possible to measure the sizes and masses of atoms, and in the 1970's techniques were developed that produced images of individual atoms.
Formula and structure
The formula of a substance expresses the relative number of atoms of each element it contains. Because the formula can be determined by experiments on bulk matter, it is a macroscopic concept even though it is expressed in terms of atoms.
What the ordinary chemical formula does not tell us is the order in which the component atoms are connected, whether they are grouped into discrete units (molecules) or are two- or three dimensional extended structures, as is the case with solids such as ordinary salt. The microscopic aspect of composition is the structure, which gives detailed relative locations (in two or three dimensional space) of each atom within the minimum collection needed to define the structure of the substance.
Substances are defined at the macroscopic level by their formulas or compositions, and at the microscopic level by their structures. The elements hydrogen and oxygen combine to form a compound whose composition is expressed by the formula H2O; the molecule of water has the structure shown here. Chemical substances that cannot be broken down into simpler ones are known as elements; the actual physical particles of which elements are composed are atoms or molecules. Sulfur, for example, is an element whose orthorhombic crystalline form is composed of an ordered array of molecules, each an octagonal ring of sulfur atoms. (The molecules do not actually move around within the crystal, although they are in a constant state of vibrational motion.)
Compounds and molecules
As we indicated above, a compound is a substance containing more than one element. Since the concept of an element is macroscopic, and the distinction between elements and compounds was recognized long before the existence of physical atoms was accepted, the concept of a compound must also be a macroscopic one that makes no assumptions about the nature of its ultimate particles. Thus when carbon burns in the presence of oxygen, the product carbon dioxide can be shown by (macroscopic) weight measurements to contain both of the original elements:
$\ce{C + O2 -> CO2}$
10.0 g + 26.7 g = 36.7 g
One of the important characteristics of a compound is that the proportions by weight of each element in a given compound are constant. For example, no matter what weight of carbon dioxide we have, the percentage of carbon it contains is (10.0 / 36.7) = 0.27, or 27%.
Molecules
A molecule is an assembly of atoms having a fixed composition, structure, and distinctive, measurable properties.
Computer-model of Nicotine molecule C10H14N2 by Ronald Perry
"Molecule" refers to a kind of particle, and is therefore a microscopic concept. Even at the end of the 19th century, when compounds and their formulas had long been in use, some prominent chemists doubted that molecules (or atoms) were any more than a convenient model.
Molecules suddenly became real in 1905, when Albert Einstein showed that Brownian motion, the irregular microscopic movements of tiny pollen grains floating in water, could be directly attributed to collisions with molecule-sized particles.
Finally, we get to see one!
In 2009, IBM scientists in Switzerland succeeded in imaging a real molecule, using a technique known as atomic force microscopy in which an atoms-thin metallic probe is drawn ever-so-slightly above the surface of an immobilized pentacene molecule cooled to nearly absolute zero. In order to improve the image quality, a molecule of carbon monoxide was placed on the end of the probe.
The image produced by the AFM probe is shown at the very bottom. What is actually being imaged is the surface of the electron clouds of the molecule, which consists of six hexagonal rings of carbon atoms with hydrogen on its periphery. The tiny bumps that correspond to these hydrogen atoms attest to the remarkable resolution of this experiment.
The atomic composition of a molecule is given by its formula. Thus the formulas CO, CH4, and O2 represent the molecules carbon monoxide, methane, and dioxygen. However, the fact that we can write a formula for a compound does not imply the existence of molecules having that composition. Gases and most liquids consist of molecules, but many solids exist as extended lattices of atoms or ions (electrically charged atoms or molecules.) For example, there is no such thing as a "molecule" of ordinary salt, NaCl (see below.)
Confused about the distinction between molecules and compounds?
Maybe the following will help:
A molecule but not a compound - Ozone, O3, is not a compound because it contains only a single element. A molecule and a compound - a molecular substance such as water is also a compound because it contains more than one element. A compound but not a molecule - ordinary solid salt is a compound but not a molecule; it is built from interpenetrating lattices of sodium and chloride ions that extend indefinitely.
Structure and properties
Composition and structure lie at the core of Chemistry, but they encompass only a very small part of it. It is largely the properties of chemical substances that interest us; it is through these that we experience and find uses for substances, and much of chemistry as a science is devoted to understanding the relation between structure and properties. For some purposes it is convenient to distinguish between chemical properties and physical properties, but as with most human-constructed dichotomies, the distinction becomes more fuzzy as one looks more closely.
Chemical change
Chemical change is defined macroscopically as a process in which new substances are formed. On a microscopic basis it can be thought of as a re-arrangement of atoms. A given chemical change is commonly referred to as a chemical reaction and is described by a chemical equation that has the form
reactants → products
In elementary courses it is customary to distinguish between "chemical" and "physical" change, the latter usually relating to changes in physical state such as melting and vaporization. As with most human-created dichotomies, this begins to break down when examined closely. This is largely because of some ambiguity in what we regard as a distinct "substance".
Example 1: Chlorine
Elemental chlorine exists as the diatomic molecule $\ce{Cl2}$ in the gas, liquid, and solid states; the major difference between them lies in the degree of organization. In the gas the molecules move about randomly, whereas in the solid they are constrained to locations in a 3-dimensional lattice. In the liquid, this tight organization is relaxed, allowing the molecules to slip and slide around each other.
Since the basic molecular units remain the same in all three states, the processes of melting, freezing, condensation and vaporization are usually regarded as physical rather than chemical changes.
Example 2: Sodium Chloride
Solid salt consists of an indefinitely extended 3-dimensional array of Na+ and Cl– ions (electrically charged atoms.)
When heated above 801°C, the solid melts to form a liquid consisting of these same ions. This liquid boils at 1430°C to form a vapor made up of discrete molecules having the formula $\ce{Na2Cl2}$. Because the ions in the solid, the hydrated ions in solution, and the molecule $\ce{Na2Cl2}$ are really different chemical species, the distinction between physical and chemical change becomes a bit fuzzy.
Energetics and Equilibrium
You have probably seen chemical reaction equations such as the "generic" one shown below:
$\ce{A + B → C + D}$
An equation of this kind does not imply that the reactants A and B will change entirely into the products C and D, although in many cases this will be what appears to happen. Most chemical reactions proceed to some intermediate point that yields a mixture of reactants and products.
Example 3
For example, if the two gases phosphorus trichloride and chlorine are mixed together at room temperature, they will combine until about half of them have changed into phosphorus pentachloride:
$\ce{PCl_3 + Cl_2 <=> PCl_5}$
At other temperatures the extent of reaction will be smaller or greater. The result, in any case, will be an equilibrium mixture of reactants and products.
The most important question we can ask about any reaction is "what is the equilibrium composition"?
• If the answer is "all products and negligible quantities of reactants", then we say the reaction can takes place and that it "goes to completion ".
• If the answer is "negligible quantities of products", then we say the reaction cannot take place in the forward direction, but that the reverse reaction can occur.
• If the answer is "significant quantities of all components" (both reactants and products) are present in the equilibrium mixture, then we say the reaction is "reversible" or "incomplete".
The aspect of "change" we are looking at here is a property of a chemical reaction, rather than of any one substance. But if you stop to think of the huge number of possible reactions between the more than 15 million known substances, you can see that it would be an impossible task to measure and record the equilibrium compositions of every possible combination.
One or two directly measurable properties of the individual reactants and products can be combined to give a number from which the equilibrium composition at any temperature can be easily calculated. There is no need to do an experiment!
This is very much a macroscopic view because the properties we need to directly concern ourselves with are those of the reactants and products. Similarly, the equilibrium composition — the measure of the extent to which a reaction takes place — is expressed in terms of the quantities of these substances.
Chemical Energetics
Virtually all chemical changes involve the uptake or release of energy, usually in the form of heat. It turns out that these energy changes, which are the province of chemical thermodynamics, serve as a powerful means of predicting whether or not a given reaction can proceed, and to what extent. Moreover, all we need in order to make this prediction is information about the energetic properties of the reactants and products; there is no need to study the reaction itself. Because these are bulk properties of matter, chemical thermodynamics is entirely macroscopic in its outlook.
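Two relationships, which we will meet again later, capture how this works: the free energy change combines the heat (enthalpy) and entropy effects, and the standard free energy change fixes the equilibrium constant K:

$\Delta G = \Delta H - T \Delta S \qquad \Delta G^{\circ} = -RT \ln K$

A negative ΔG means the reaction can proceed in the forward direction, and the more negative ΔG° is, the larger K and the more nearly complete the reaction. Because ΔH and ΔS can be obtained from tabulated properties of the individual reactants and products, no experiment on the reaction itself is needed.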
Dynamics: Kinetics and Mechanism
The energetics of chemical change that we discussed immediately above relate to the end result of chemical change: the composition of the final reaction mixture, and the quantity of heat liberated or absorbed. The dynamics of chemical change are concerned with how the reaction takes place:
• What has to happen to get the reaction started (which molecule gets bumped first, how hard, and from what direction?)
• Does the reaction take place in a single step, or are multiple steps and intermediate structures involved?
These details constitute what chemists call the mechanism of the reaction. For example, the reaction between nitric oxide and hydrogen (identified as the net reaction at the bottom left) is believed to take place in the two steps shown here. Notice that the nitrous oxide, N2O, is formed in the first step and consumed in the second, so it does not appear in the net reaction equation. The N2O is said to act as an intermediate in this reaction. Related transient species, often distorted or incomplete molecular configurations that have no independent existence, are known as transition states.
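Written out, a commonly cited form of this two-step mechanism (consistent with the description above, in which N2O appears as the intermediate) is:

$\ce{2 NO + H2 -> N2O + H2O}$

$\ce{N2O + H2 -> N2 + H2O}$

Adding the two steps and canceling the N2O that appears on both sides gives the net reaction $\ce{2 NO + 2 H2 -> N2 + 2 H2O}$.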
The microscopic side of dynamics looks at the mechanisms of chemical reactions. This refers to a "blow-by-blow" description of what happens when the atoms in the reacting species re-arrange themselves into the configurations they have in the products.
Mechanism represents the microscopic aspect of chemical change. Mechanisms, unlike energetics, cannot be predicted from information about the reactants and products; chemical theory has not yet advanced to the point where we can do much more than make educated guesses. To make matters even more complicated (or, to chemists, interesting!), the same reaction can often proceed via different mechanisms under different conditions.
Kinetics
Because we cannot directly watch the molecules as they react, the best we can usually do is to infer a reaction mechanism from experimental data, particularly that which relates to the rate of the reaction as it is influenced by the concentrations of the reactants. This entirely experimental area of chemical dynamics is known as kinetics.
Reaction rates, as they are called, vary immensely: some reactions are completed in microseconds, others may take years; many are so slow that their rates are essentially zero. To make things even more interesting, there is no relation between reaction rates and "tendency to react" as governed by the factors in the top half of the above diagram; the latter can be accurately predicted from energetic data on the substances (the properties we mentioned in the previous screen), but reaction rates must be determined by experiment.
Catalysts
Catalysts can make dramatic changes in rates of reactions, especially in those whose un-catalyzed rate is essentially zero. Consider, for example, this rate data on the decomposition of hydrogen peroxide. H2O2 is a by-product of respiration that is poisonous to living cells which have, as a consequence, evolved a highly efficient enzyme (a biological catalyst) that is able to destroy peroxide as quickly as it forms. Catalysts work by enabling a reaction to proceed by an alternative mechanism.
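The reaction in question here is the decomposition of hydrogen peroxide into water and oxygen:

$\ce{2 H2O2 -> 2 H2O + O2}$

Uncatalyzed, this reaction is immeasurably slow at room temperature even though it is strongly favored thermodynamically. The enzyme mentioned above (catalase), and, far less efficiently, inorganic catalysts such as iodide ion or manganese dioxide, speed it up by many orders of magnitude.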
In some reactions, even light can act as a catalyst. For example, the gaseous elements hydrogen and chlorine can remain mixed together in the dark indefinitely without any sign of a reaction, but in the sunlight they combine explosively.
Currents of modern Chemistry
In the preceding section we looked at chemistry from a conceptual standpoint. If this can be considered a "macroscopic" view of chemistry, what is the "microscopic" view? It would likely be what chemists actually do. Because a thorough exploration of this would lead us into far more detail than we can accommodate here, we will mention only a few of the areas that have emerged as being especially important in modern chemistry.
Separation science
A surprisingly large part of chemistry has to do with isolating one component from a mixture. This may occur at any number of stages in a manufacturing process, including the very critical steps involved in removing toxic, odiferous, or otherwise undesirable by-products from a waste stream. But even in the research lab, a considerable amount of effort is often devoted to separating the desired substance from the many components of a reaction mixture, or in separating a component from a complex mixture (for example, a drug metabolite from a urine sample), prior to measuring the amount present.
Distillation - separation of liquids having different boiling points. This ancient technique (believed to date back to around 3500 BCE) is still one of the most widely employed operations both in the laboratory and in industrial processes such as oil refining.
Solvent extraction - separation of substances based on their differing solubilities. A common laboratory tool for isolating substances from plants and chemical reaction mixtures. Practical uses include processing of radioactive wastes and decaffeination of coffee beans. The separatory funnel shown here is the simplest apparatus for liquid-liquid extraction; for solid-liquid extraction, the Soxhlet apparatus is commonly used.
Chromatography - This extremely versatile method depends on the tendency of different kinds of molecules to adsorb (attach) to different surfaces as they travel along a "column" of the adsorbent material. Just as the progress of people walking through a shopping mall depends on how long they spend looking in the windows they pass, those molecules that adsorb more strongly to a material will emerge from the chromatography column more slowly than molecules that are not so strongly adsorbed.
Paper chromatography of plant juice
Gel electrophoresis - a powerful method for separating and "fingerprinting" macromolecules such as nucleic acids or proteins on the basis of physical properties such as size and electric charge.
Identification and assay
What do the following people have in common?
• A plant manager deciding on whether to accept a rail tank car of vinyl chloride for manufacture into plastic pipe
• An agricultural chemist who wants to know about the vitamin content of a new vegetable hybrid
• The manager of a city water-treatment plant who needs to make sure that the carbonate content of the water is maintained high enough to prevent corrosion, but low enough to prevent scale build-up
The answer is that all depend on analytical techniques — measurements of the nature or quantity ("assays") of some substance of interest, sometimes at very low concentrations. A large amount of research is devoted to finding more accurate and convenient means of making such measurements. Many of these involve sophisticated instruments; among the most widely used are the following:
Spectrophotometers that examine the ways that light of various wavelengths is absorbed, emitted, or altered by atomic and molecular species.
Mass spectrometers that break up molecules into fragments that can be characterized by electrical methods.
Instruments (NMR spectrometers) that analyze the action of radio waves and magnetic fields on atomic nuclei in order to examine the nature of the chemical bonds attached to a particular kind of atom.
"In the early 1900's a chemist could analyze about 200 samples per year for the major rock-forming elements. Today, using X-ray fluorescence spectrometry, two chemists can perform the same type of analysis on 7,000 samples per year."
Materials, polymers, and nanotechnology
Materials science attempts to relate the physical properties and performance of engineering materials to their underlying chemical structure with the aim of developing improved materials for various applications.
Polymer chemistry - developing polymeric ("plastic") materials for industrial uses. Connecting individual polymer molecules by cross-links (red) increases the strength of the material. Thus ordinary polyethylene is a fairly soft material with a low melting point, but the cross-linked form is more rigid and resistant to heat.
Organic semiconductors offer a number of potential advantages over conventional metalloid-based devices.
Fullerenes, nanotubes, and nanowires - Fullerenes were first identified in 1985 as products of experiments in which graphite was vaporized using a laser, work for which R. F. Curl, Jr., R. E. Smalley, and H. W. Kroto shared the 1996 Nobel Prize in Chemistry. Fullerene research is expected to lead to new materials, lubricants, coatings, catalysts, electro-optical devices, and medical applications.
Nanodevice chemistry — constructing molecular-scale assemblies for specific tasks such as computing, producing motions, etc.
Biosensors and biochips - the surfaces of metals and semiconductors "decorated" with biopolymers can serve as extremely sensitive detectors of biological substances and infectious agents.
Biochemistry and Molecular biology
This field covers a wide range of studies, from fundamental work on the chemistry of gene expression and enzyme-substrate interactions to drug design. Much of the activity in this area is directed toward drug discovery.
Drug screening began as a largely scattershot approach in which a pathogen or a cancer cell line was screened against hundreds or thousands of candidate substances in the hope of finding a few "leads" that might result in a useful therapy. This field is now highly automated and usually involves combinatorial chemistry (see below) combined with innovative separation and assay methods.
Drug design looks at interactions between enzymes and possible inhibitors. Computer-modeling is an essential tool in this work.
Proteomics - This huge field focuses on the relations between structure and function of proteins— of which there are about 400,000 different kinds in humans. Proteomics is related to genetics in that the DNA sequences in genes get decoded into proteins which eventually define and regulate a particular organism.
Chemical genomics explores the chain of events in which signaling molecules regulate gene expression.
Synthesis
In its most general sense, this word refers to any reaction that leads to the formation of a particular molecule. It is both one of the oldest areas of chemistry and one of the most actively pursued. Some of the major threads are
New-molecule synthesis - Chemists are always challenged to come up with molecules containing novel features such as new shapes or unusual types of bonds.
Combinatorial chemistry refers to a group of largely-automated techniques for generating tiny quantities of huge numbers of different molecules ("libraries") and then picking out those having certain desired properties. Although it is a major drug discovery technique, it also has many other applications.
Green chemistry - synthetic methods that focus on reducing or eliminating the use or release of toxic or non-biodegradable chemicals or byproducts.
Process chemistry bridges the gap between chemical synthesis and chemical engineering by adapting synthetic routes into efficient, safe, and environmentally-responsible methods for large-scale synthesis.
Congratulations! You have just covered all of Chemistry, condensed into one quick and painless lesson— the world's shortest Chemistry course! Yes, we left out a lot of the details, the most important of which will take you a few months of happy discovery to pick up. But if you keep in mind the global hierarchy of composition/structure, properties of substances, and change (equilibrium and dynamics) that we have developed in both macroscopic and microscopic views, you will find it much easier to assemble the details as you encounter them and to see where they fit into the bigger picture.
A pseudoscience is a belief or process which masquerades as science in an attempt to claim a legitimacy which it would not otherwise be able to achieve on its own terms; it is often known as fringe or alternative science. The most important of its defects is usually the lack of the carefully controlled and thoughtfully interpreted experiments which provide the foundation of the natural sciences and which contribute to their advancement.
Of course, the pursuit of scientific knowledge usually involves elements of intuition and guesswork; experiments do not always test a theory adequately, and experimental results can be incorrectly interpreted or even wrong. In legitimate science, however, these problems tend to be self-correcting, if not by the original researchers themselves, then through the critical scrutiny of the greater scientific community. Critical thinking is an essential element of science.
Other Types of Defective Science
There have been several well-documented instances in which the correction process referred to above was delayed until after the initial incorrect interpretation became widely publicized, resulting in what has been called pathological science. The best known of these incidents are the "discoveries" of N-rays, of polywater, and of cold fusion. All of these could have been averted if the researchers had not been so enthused with their results that they publicized them before they had received proper review by others. Human nature being what it is, there is always some danger of this happening; to discourage it, most of the prestigious scientific journals will refuse to accept reports of noteworthy work that has already been made public.
Another term, junk science, is often used to describe scientific theories or data which, while perhaps legitimate in themselves, are believed to be mistakenly used to support an opposing position. There is usually an element of political or ideological bias in the use of the term. Thus the arguments in favor of limiting the use of fossil fuels in order to reduce global warming are often characterized as junk science by those who do not wish to see such restrictions imposed, and who claim that other factors may well be the cause of global warming. A wide variety of commercial advertising (ranging from hype to outright fraud) would also fall into this category; at its most egregious it might better be described as deceptive science.
"9944100% Pure: It Floats"
This description of Ivory Soap is a classic example of junk science from the 19th century. Not only is the term "pure" meaningless when applied to an undefined mixture such as bath soap, but the implication that its ability to float is evidence of this purity is deceptive. The low density is achieved by beating air bubbles into it, actually reducing the "purity" of the product and in a sense cheating the consumer.
Hoax science is another category that describes deliberately contrived sensationalist writings that have received wide publicity (and earned substantial royalties for their authors.) Immanuel Velikovsky's Worlds in Collision (1950) is now probably the best known of these, followed by Erich von Däniken's Chariots of the Gods? (1968). Perhaps the most recent contender in this field is David Talbott, who with Wallace Thornhill wrote The Electric Universe and Thunderbolts of the Gods.
Fraudulent science and Scientific Misconduct refer to work that is intentionally fabricated or misrepresented for personal (recognition or career-advancement) or commercial (marketing or regulatory) reasons. The tobacco and pharmaceutical industries have been notoriously implicated in the latter category; the tobacco industry even published a phony "scientific" journal containing articles written by hack authors that disputed warnings about smoking-induced cancer. Suppression of science for political reasons has also occurred, as during the second Bush administration.
Charges of minor fudging go back to the days of Ptolemy, Galileo, and Isaac Newton, but revelations of more contemporary frauds and their contamination of the scientific literature make these far more problematic. Since about 1980, scientists at several major U.S. universities have been compelled to withdraw articles from prestigious journals. One of the most widely-publicized cases was that of the eminent Korean researcher who reported bogus stem cell results. Some rather troubling recent cases involve at least seventy articles describing bogus chemical structures reported by a group of scientists in China, and about an equal number of papers that had been forged or falsified by one or more scientists at a university in India.
Finally, there is just plain bad science, which would logically encompass all of the evils being discussed here, but is commonly used to describe well-intentioned but incorrect, obsolete, incomplete, or over-simplified expositions of scientific ideas. An example would be the statement that electrons revolve in orbits around the atomic nucleus, a picture that was discredited in the 1920's, but is so much more vivid and easily grasped than the one that supplanted it that it shows no sign of dying out.
Note: "It's only a theory"
In ordinary conversation, the word "theory" connotes an opinion, a conjecture, or a supposition. But in science, the term has a much more limited meaning. A scientific theory is an attempt to explain some aspect of the natural world in terms of empirical evidence and observation. It commonly draws upon established principles and knowledge with the aim of extending them in a logical and consistent way that enables one to make useful predictions. All scientific theories are tentative and subject to being tested and modified. As theories become more mature, they grow into more organized bodies of knowledge that enable us to understand and predict a wider range of phenomena. Examples of such theories are quantum theory, Einstein's theories of relativity, and evolution.
Scientific theories fall into two categories:
1. Theories that have been shown to be incorrect, usually because they are not consistent with new observations;
2. All other theories
Hence, theories cannot be proven to be correct; there is always the possibility that further observations will disprove the theory. Furthermore, a theory that cannot be refuted or falsified is not a scientific theory.
For example, the theories that underlie astrology (the doctrine that the positions of the stars can influence one's life) are not falsifiable because they, and the predictions that follow from them, are so vaguely stated that the failure of these predictions can always be "explained away" by assuming that various other influences were not taken into account. It is similarly impossible to falsify so-called "creation science" or "intelligent design" because one can simply invoke "then a miracle occurs" at any desired stage.
Recognizing Pseudoscience?
There is no single test that unambiguously distinguishes between science and pseudoscience, but as the two diverge more and more from one another, certain differences become apparent, and these tend to be remarkably consistent across all fields of interest. In comparing the two, it might be helpful to consider examples of astronomy vs. astrology, or of chemistry vs. alchemy, which at one time were single fields that gradually diverged into sciences and pseudosciences.
Many scientists' first response to pseudoscientific claims is simply to laugh at them. But mythology has always been an important part of human culture, often giving people the illusion of having some direct control over their lives. This can lead believers to become advocates for various kinds of health quackery, for commercial scams, and for cult-like organizations such as Scientology. Worst of all, they can pressure political and educational circles to adopt their ideologies.
Does the "Establishment" Actively Suppress new Ideas?
Anyone who has been around for long enough has encountered statements like these:
• An inventor's design for a device that uses water as a fuel has been bought up and suppressed by the oil companies.
• "Alternative health" techniques (homeopathy, chiropractic, chelation therapy— you name it!) are actively suppressed by the medical profession or the pharmaceutical industry in a desperate attempt to serve their selfish interests.
• Reports of unidentified flying objects (UFO's) are suppressed by the U.S. Government in an attempt to prevent panic and/or to maintain control over citizens.
• Editors of scientific journals and the reviewers they call on to assess the worth of submitted papers reject out-of-hand anything that comes from persons who are not members of the scientific "establishment" or which report results not consistent with presently-accepted science.
Claims of these kinds are frequently made and widely believed, especially by those who are inclined to see conspiracies around every corner. There is little if any evidence for any of these claims. The real reason that new devices or new theories get thrown aside is that the arguments or evidence adduced to support them is inadequate or not credible. The individuals who believe themselves to be unfairly thwarted by the scientific community are very often so isolated from it that they are unable to appreciate its norms of clarity, rigor, and consistency with existing science.
A common refrain is that "they laughed at Galileo, at Thomson, and at Wegener," whose theories were eventually supported. Well, with Galileo, they did not exactly laugh; it was more a case of challenge to religious doctrine that forced him to recant his assertion that the Sun, and not the Earth, is at the center of the solar system. There have been innumerable cases in which the world was simply not ready to accept a new idea. This was especially common before the scientific method had been developed, and before the technology needed to apply it had become available.
When J.J. Thomson discovered evidence that the atom is not the ultimate fundamental particle and could be broken up into smaller units, even Thomson himself was reluctant to accept it, and he became a laughingstock for several years until more definitive evidence became available.
Alfred Wegener's theory of continental drift was bitterly attacked when it was first published in 1915, and it did not become generally accepted until about 50 years later. Others had made similar proposals based on the way the continents of Africa and South America could be fitted together, but Wegener was the first to make a careful study of fossil and geological similarities between the two continents. Nevertheless, the idea that continents could float around was too hard to accept at a time when nothing was known about the interior structure of the Earth, and the evidence he presented was rejected as inadequate.
On the other hand, the even-more-revolutionary concepts of special- and general relativity, and of quantum theory (which developed in several stages), achieved rapid acceptance when they were first presented, as did Louis Pasteur's germ theory of disease. In all of these cases the new theories provided credible explanations for what was previously unexplainable, and the tools for confirming them existed at the time, or in the case of general relativity, would soon become available.
Most courses in Chemistry, especially those at the college/university level, assume that their students have had prior courses in general science, and often in physics, which provide them with an understanding of important concepts such as significant figures, units of measure, treatment of measurement error, density and buoyancy. But if most of that has receded into the fuzzy past, the six sections of this unit will bring you up to speed. Neglect this stuff at your peril — it will come up in one way or another in any Chemistry course you take!
• 2.1: Classification and Properties of Matter
Matter is “anything that has mass and occupies space”. Matter is what chemical substances are composed of. But what do we mean by chemical substances? How do we organize our view of matter and its properties? These very practical questions will be the subjects of this lesson.
• 2.2: Energy, Heat, and Temperature
All chemical changes are accompanied by the absorption or release of heat. The intimate connection between matter and energy has been a source of wonder and speculation from the most primitive times; it is no accident that fire was considered one of the four basic elements (along with earth, air, and water) as early as the fifth century BCE. This unit will cover only the very basic aspects of the subject.
• 2.3: The Measure of Matter
The natural sciences begin with observation and this usually involves numerical measurements of quantities. Most of these quantities have units of some kind associated with them, and these units must be retained when you use them in calculations. All measuring units can be defined in terms of a very small number of fundamental ones that, through "dimensional analysis", provide insight into their derivation and meaning, and must be understood when converting between different unit systems.
• 2.4: The Meaning of Measure
In science, there are numbers and there are "numbers". What we ordinarily think of as a "number" and will refer to here as a pure number is just that: an expression of a precise value. The other kind of numeric quantity that we encounter in the natural sciences is a measured value of something– the length or weight of an object, the volume of a fluid, or perhaps the reading on an instrument. Although we express these values numerically, it would be a mistake to regard them as pure numbers.
02: Essential Background
Learning Objectives
• Give examples of extensive and intensive properties of a sample of matter. Which kind of property is more useful for describing a particular kind of matter?
• Explain what distinguishes heterogeneous matter from homogeneous matter.
• Describe the following separation processes: distillation, crystallization, liquid-liquid extraction, chromatography.
• To the somewhat limited extent to which it is meaningful, classify a given property as a physical or chemical property of matter.
Matter is “anything that has mass and occupies space”, we were taught in school. True enough, but not very satisfying. A really complete answer is unfortunately beyond the scope of this course, but we will offer a hint of it in a later chapter on atomic structure. For the moment, let’s put off trying to define matter and focus on the chemist’s view: matter is what chemical substances are composed of. But what do we mean by chemical substances? How do we organize our view of matter and its properties? These very practical questions will be the subjects of this lesson.
Properties of Matter
The science of chemistry developed from observations made about the nature and behavior of different kinds of matter, which we refer to collectively as the properties of matter. The properties we refer to in this lesson are all macroscopic properties: those that can be observed in bulk matter. At the microscopic level, matter is of course characterized by its structure: the spatial arrangement of the individual atoms in a molecular unit or an extended solid. By observing a sample of matter and measuring its various properties, we gradually acquire enough information to characterize it; to distinguish it from other kinds of matter. This is the first step in the development of chemical science, in which interest is focused on specific kinds of matter and the transformations between them.
If you think about the various observable properties of matter, it will become apparent that these fall into two classes. Some properties, such as mass and volume, depend on the quantity of matter in the sample we are studying. Clearly, these properties, as important as they may be, cannot by themselves be used to characterize a kind of matter; to say that “water has a mass of 2 kg” is nonsense, although it may be quite true in a particular instance. Properties of this kind are called extensive properties of matter.
Suppose we take further measurements, and find that the same quantity of water whose mass is 2.0 kg also occupies a volume of 2.0 liters. We have measured two extensive properties (mass and volume) of the same sample of matter. This allows us to define a new quantity, the quotient m/V which defines another property of water which we call the density. Unlike the mass and the volume, which by themselves refer only to individual samples of water, the density (mass per unit volume) is a property of all samples of pure water at the same temperature. Density is an example of an intensive property of matter.
This definition of the density illustrates an important general rule: the ratio of two extensive properties is always an intensive property.
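Putting numbers on the sample above gives a quick worked check of this rule:

$\rho = \dfrac{m}{V} = \dfrac{2.0\ \mathrm{kg}}{2.0\ \mathrm{L}} = 1.0\ \mathrm{kg\ L^{-1}} = 1.0\ \mathrm{g\ mL^{-1}}$

Halving the sample halves both the mass and the volume, so their ratio (the density) is unchanged; that invariance is exactly what makes density an intensive property.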
Intensive properties are extremely important, because every possible kind of matter possesses a unique set of intensive properties that distinguishes it from every other kind of matter. Some intensive properties can be determined by simple observations: color (absorption spectrum), melting point, density, solubility, and acidic or alkaline nature are common examples. Even more fundamental, but less directly observable, is chemical composition.
The more intensive properties we know, the more precisely we can characterize a sample of matter.
In other words, intensive properties serve to characterize matter. Many intensive properties depend on such variables as the temperature and pressure, but the ways in which these properties change with such variables can themselves be regarded as intensive properties.
Example \(1\)
Classify each of the following as an extensive or intensive property.
1. The volume of beer in a mug
2. The percentage of alcohol in the beer
3. The number of calories of energy you derive from eating a banana
4. The number of calories of energy made available to your body when you consume 10.0 g of sugar
5. The mass of iron present in your blood
6. The mass of iron present in 5 mL of your blood
7. The electrical resistance of a piece of 22-gauge copper wire.
8. The electrical resistance of a 1-km length of 22-gauge copper wire
9. The pressure of air in a bicycle tire
Answer a
extensive; depends on size of the mug.
Answer b
intensive; same for any same-sized sample.
Answer c
extensive; depends on size and sugar content of the banana.
Answer d
intensive; same for any 10 g portion of sugar.
Answer e
extensive; depends on volume of blood in the body.
Answer f
intensive; the same for any 5 mL sample.
Answer g
extensive; depends on length of the wire.
Answer h
intensive; same for any 1 km length of the same wire.
Answer i
pressure itself is intensive, but is also dependent on the quantity of air in the tire.
The last example shows that not everything is black or white! But we often encounter matter that is not uniform throughout, whose different parts exhibit different sets of intensive properties. This brings up another distinction that we address immediately below.
How to classify matter?
One useful way of organizing our understanding of matter is to think of a hierarchy that extends down from the most general and complex to the simplest and most fundamental. The orange-colored boxes represent the central realm of chemistry, which deals ultimately with specific chemical substances, but as a practical matter, chemical science extends both above and below this region.
Alternatively, when we are thinking about specific samples of matter, it may be more useful to re-cast our classification in two dimensions:
Notice, in the bottom line of boxes above, that "mixtures" and "pure substances" can fall into either the homogeneous or heterogeneous categories.
Homogeneous and heterogeneous: it's a matter of phases
Homogeneous matter (from the Greek homo = same) can be thought of as being uniform and continuous, whereas heterogeneous matter (hetero = different) implies non-uniformity and discontinuity. To take this further, we first need to define "uniformity" in a more precise way, and this takes us to the concept of phases.
A phase is a region of matter that possesses uniform intensive properties throughout its volume. A volume of water, a chunk of ice, a grain of sand, a piece of copper— each of these constitutes a single phase, and by the above definition, is said to be homogeneous. A sample of matter can contain more than a single phase; a cool drink with ice floating in it consists of at least two phases, the liquid and the ice. If it is a carbonated beverage, you can probably see gas bubbles in it that make up a third phase.
Phase boundaries
Each phase in a multiphase system is separated from its neighbors by a phase boundary, a thin region in which the intensive properties change discontinuously. Have you ever wondered why you can easily see the ice floating in a glass of water although both the water and the ice are transparent? The answer is that when light crosses a phase boundary, its direction of travel is slightly bent, and a portion of the light gets reflected back; it is these reflected and refracted light rays emerging from the mixture that reveal the chunks of ice floating in the liquid.
If, instead of visible chunks of material, the second phase is broken into tiny particles, the light rays usually bounce off the surfaces of many of these particles in random directions before they emerge from the medium and are detected by the eye. This phenomenon, known as scattering, gives multiphase systems of this kind a cloudy appearance, rendering them translucent instead of transparent. Two very common examples are ordinary fog, in which water droplets are suspended in the air, and milk, which consists of butterfat globules suspended in an aqueous solution.
Getting back to our classification, we can say that Homogeneous matter consists of a single phase throughout its volume; heterogeneous matter contains two or more phases.
Dichotomies ("either-or" classifications) often tend to break down when closely examined, and the distinction between homogeneous and heterogeneous matter is a good example; this is really a matter of degree, since at the microscopic level all matter is made up of atoms or molecules separated by empty space! For most practical purposes, we consider matter as homogeneous when any discontinuities it contains are too small to affect its visual appearance.
How large must a molecule or an agglomeration of molecules be before it begins to exhibit the properties of a separate phase? Such particles span the gap between the micro and macro worlds, and have been known as colloids since they began to be studied around 1900. But with the development of nanotechnology in the 1990s, this distinction has become even more fuzzy.
Pure Substances and Mixtures
The air around us, most of the liquids and solids we encounter, and all too much of the water we drink consists not of pure substances, but of mixtures. You probably have a general idea of what a mixture is, and how it differs from a pure substance; what is the scientific criterion for making this distinction?
To a chemist, a pure substance usually refers to a sample of matter that has a distinct set of properties that are common to all other samples of that substance. A good example would be ordinary salt, sodium chloride. No matter what its source (from a mine, evaporated from seawater, or made in the laboratory), all samples of this substance, once they have been purified, possess the same unique set of properties.
A pure substance is one whose intensive properties are the same in any purified sample of that same substance.
A mixture, in contrast, is composed of two or more substances, and it can exhibit a wide range of properties depending on the relative amounts of the components present in the mixture. For example, you can dissolve up to 357 g of salt in one litre of water at room temperature, making possible an infinite variety of "salt water" solutions. For each of these concentrations, properties such as the density, boiling and freezing points, and the vapor pressure of the resulting solution will be different.
Is anything really pure?
Those of us who enjoy peanut butter would never willingly purchase a brand advertised as "impure". But a Consumer Reports article published some years ago showed a table listing the number of "mouse droppings" and "insect parts" (presumably from peanut storage silos) they found in samples of all the major brands. Bon appetit!
Finally, we all prefer to drink "pure" water, but we don't usually concern ourselves with the dissolved atmospheric gases and ions such as Ca2+ and HCO3- that are present in most drinking waters. But these harmless "impurities" are always present in those "pure" spring waters.
The bottom line: To a chemist, the term "pure" has meaning only in the context of a particular application or process.
Operational and conceptual classifications
Since chemistry is an experimental science, we need a set of experimental criteria for placing a given sample of matter in one of these categories. There is no single experiment that will always succeed in unambiguously deciding this kind of question. However, there is one principle that will always work in theory, if not in practice. This is based on the fact that the various components of a mixture can, in principle, always be separated into pure substances.
Consider a heterogeneous mixture of salt water and sand. The sand can be separated from the salt water by the mechanical process of filtration. Similarly, the butterfat contained in milk may be separated from the water by a process known as churning, in which mechanical agitation forces the butterfat droplets to coalesce into the solid mass we know as butter. These examples illustrate the general principle that heterogeneous matter may be separated into homogeneous matter by mechanical means.
Turning this around, we have an operational definition of heterogeneous matter: If, by some mechanical operation we can separate a sample of matter into two or more other kinds of matter, then our original sample was heterogeneous. To find a similar operational definition for homogeneous mixtures, consider how we might separate the two components of a solution of salt water. The most obvious way would be to evaporate off the water, leaving the salt as a solid residue. Thus a homogeneous mixture can be separated into pure substances by undergoing appropriate partial changes of state— that is, by evaporation, freezing, etc.
Note the term partial in the above sentence; in the last example, we evaporate only the water, not the salt (which would be very difficult to do anyway!) The idea is that one component of the mixture is preferentially affected by the process we are carrying out. This principle will be emphasized in the following examples.
Separating homogeneous mixtures
Some common methods of separating homogeneous mixtures into their components are outlined below.
Distillation
A mixture of two volatile liquids is partly boiled away; the first portions of the condensed vapor will be enriched in the component having the lower boiling point. Note that if all the liquid were boiled away, the distillate would be identical with the original liquid. But if, say, half of the liquid is distilled, the distillate would contain a larger fraction of the more volatile component. If the distillate is then re-distilled, it can be further enriched in the low-boiling liquid. By repeating this process many times (aided by the fractionating column above the boiling vessel), a high degree of separation can be achieved.
Fractional crystallization
A hot saturated solution containing two or more dissolved solids is allowed to cool slowly; the least-soluble material crystallizes out first, and can be separated by filtration. This process is widely employed both in the laboratory and, on a much larger scale, in industry.
Similarly, a molten mixture of several components, when slowly cooled, will first yield crystals of the material having the highest melting point. This process occurs on a huge scale in nature when molten magma from the earth's mantle rises into the lithosphere and cools underground — a process that can take up to a million years. This is how the common rock known as granite is formed. Eventually these rocks rise and become exposed on the earth's surface.
Liquid-liquid Extraction
Two mutually-insoluble liquids, one containing two or more solutes (dissolved substances), are shaken together in a separatory funnel. Each solute will concentrate in the liquid in which it is more soluble. The two solutions are then separated by opening the stopcock at the bottom, allowing the more dense solution to drain out.
Solid-liquid Extraction
In working with natural products such as plant materials, a first step is often to extract soluble substances from the plant parts. This, and similar extractions of the soluble components of complex solids, is carried out in an apparatus known as a Soxhlet extractor.
The idea is to continuously percolate an appropriate hot solvent through the material, which is contained in a porous paper "thimble". Hot vapor from the boiling flask bypasses the extraction chamber through the arm at the left (labeled "vapor" in the illustration) and into the condenser, from which it drips down into the extraction chamber, where a portion of the soluble material mixes with the solvent. When the condensate reaches the top of the chamber, it flows out through the siphon arm, emptying its contents into the boiling flask, which becomes increasingly concentrated in the extracted material.
The advantage of this arrangement is that the percolation-and-extraction process can be repeated indefinitely (usually hours to days) without much attention.
Chromatography
As a liquid or gaseous mixture flows along a column containing an adsorbant material, the more strongly-adsorbed components tend to move more slowly and emerge later than the less-strongly adsorbed components. In this example, an extract made from plant leaves is separated into its principal components: carotene, xanthophyll, and chlorophylls A and B.
Although chromatography originated in the mid-19th Century, it was not widely employed until the 1950's. Since that time, it has encompassed a huge variety of techniques and is no longer limited to colored substances. Chromatography is now one of the most widely-employed methods for the analysis and separation of complex mixtures of liquids and gases.
Physical and Chemical Properties
Since chemistry is partly the study of the transformations that matter can undergo, we can also assign to any substance a set of chemical properties that express the various changes of composition the substance is known to undergo. Chemical properties also include the conditions of temperature, etc., required to bring about the change, and the amount of energy released or absorbed as the change takes place.
The properties that we described above are traditionally known as physical properties, and are to be distinguished from chemical properties that usually refer to changes in composition that a substance can undergo. For example, we can state some of the more distinctive physical and chemical properties of the element sodium:
Physical properties (25 °C)
Chemical properties
• appearance: a soft, shiny metal
• density: 0.97 g/cm3
• melting point: 97.5 °C
• boiling point: 883 °C
• forms an oxide Na2O and a hydride NaH
• burns in air to form sodium peroxide Na2O2
• reacts violently with water to release hydrogen gas
• dissolves in liquid ammonia to form a deep blue solution
The more closely one looks at the distinction between physical and chemical properties, the more blurred this distinction becomes. For example, the high boiling point of water compared to that of methane, CH4, is a consequence of the electrostatic attractions between O-H bonds in adjacent molecules, in contrast to those between C-H bonds; at this level, we are really getting into chemistry! So although you will likely be expected to "distinguish between" physical and chemical properties on an exam, don't take it too seriously — this turns out to be a rather dubious dichotomy, loved by teachers, but of limited usefulness!
Learning Objectives
• Explain the difference between kinetic energy and potential energy.
• Define chemical energy and thermal energy.
• Define heat and work, and describe an important limitation in their interconversion.
• Describe the physical meaning of temperature.
• Explain the meaning of a temperature scale and describe how a particular scale is defined.
• Convert a temperature expressed in Fahrenheit or Celsius to the other scale.
• Describe the Kelvin temperature scale and its special significance.
• Define heat capacity and specific heat, and explain how they can be measured.
All chemical changes are accompanied by the absorption or release of heat. The intimate connection between matter and energy has been a source of wonder and speculation from the most primitive times; it is no accident that fire was considered one of the four basic elements (along with earth, air, and water) as early as the fifth century BCE. This unit will cover only the very basic aspects of the subject, just enough to get you started; there is a far more complete set of lessons on chemical energetics elsewhere.
What is Energy?
Energy is one of the most fundamental and universal concepts of physical science, but one that is remarkably difficult to define in way that is meaningful to most people. This perhaps reflects the fact that energy is not a “thing” that exists by itself, but is rather an attribute of matter (and also of electromagnetic radiation) that can manifest itself in various ways. It can be observed and measured only indirectly through its effects on matter that acquires, loses, or possesses it. Energy can take many forms: mechanical, chemical, electrical, radiation (light), and thermal. You also know that energy is conserved; it can be passed from one object or place to another, but it can never simply disappear.
In the 17th Century, the great mathematician Gottfried Leibniz (1646-1716) suggested the distinction between vis viva ("live energy") and vis mortua ("dead energy"), which later became known as kinetic energy and potential energy. Except for radiant energy that is transmitted through an electromagnetic field, most practical forms of energy we encounter are of two kinds: kinetic and potential.
• Kinetic energy is associated with the motion of an object; a body with a mass, m, and moving at a velocity, v, possesses the kinetic energy $\frac{1}{2} mv^2$. This "v-squared" part is important; if you double your speed, you consume four times as much fuel (glucose for the runner, gasoline or electricity for your car); a short numerical sketch follows this list.
• Potential energy is energy a body has by virtue of its location in a force field— a gravitational, electrical, or magnetic field. For example, if an object of mass m is raised off the floor to a height h, its potential energy increases by mgh, where g is a proportionality constant known as the acceleration of gravity. Similarly, the potential energy of a particle having an electric charge q depends on its location in an electrostatic field.
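Here is the numerical sketch promised above; a minimal illustration of the two formulas, with the mass, velocities, and height chosen arbitrarily:

```python
g = 9.8  # acceleration of gravity, m s^-2

def kinetic_energy(m, v):
    """KE = (1/2) m v^2, in joules (m in kg, v in m/s)."""
    return 0.5 * m * v**2

def potential_energy(m, h):
    """PE = m g h, in joules, relative to an arbitrary zero at h = 0."""
    return m * g * h

print(kinetic_energy(1.0, 2.0))    # 2.0 J
print(kinetic_energy(1.0, 4.0))    # 8.0 J: doubling v quadruples the kinetic energy
print(potential_energy(1.0, 1.0))  # 9.8 J for a 1-m lift
```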
Kinetic and potential energy are freely interconvertible
Pick up a book and hold it above the table top; you have just increased its potential energy in the force field of the earth's gravity. Now let it drop. Its newly-acquired potential energy begins to re-appear as kinetic energy as it accelerates downward at a velocity increasing by 9.8 m/sec every second (9.8 m sec–2 or 32 ft sec–2). At the instant it strikes the surface, the potential energy you supplied to the book has been entirely converted into kinetic energy.
And what happens to that kinetic energy after the book stops moving? It is still there, but you can no longer see its effect; it has now become dispersed as thermal kinetic energy ("heat") into the molecules of the book, the table top, and, ultimately, into the surroundings, including the air.
The more you think about it, the more examples of kinetic-potential conversion you will find in everyday life. In many other instances, however, the energy of an object can be seen to repeatedly alternate between potential and kinetic forms. Left alone, the process continues indefinitely until friction has dissipated the energy into the surroundings.
Energy's graveyard: Thermal energy
Energy is conserved: it can neither be created nor destroyed. But it can, and eventually always will, disappear from our view and into the microscopic world of individual molecular particles. All molecules are in a continual state of motion, and they therefore possess kinetic energy. But unlike the motion of a massive body such as a baseball or a car that is moving along a defined trajectory, the motions of individual atoms or molecules are random and chaotic, forever changing in magnitude and direction as they collide with each other or (as in the case of a gas) with the walls of the container.
The sum total of all of this microscopic-scale randomized kinetic energy within a body is given a special name, thermal energy. Although we cannot directly see thermal energy in action, we can certainly feel it; as we will see further, it correlates directly with the temperature of an object.
The chemistry connection
Atoms and molecules are the principal actors of thermal energy, but they possess other kinds of energy as well that play a major role in chemistry.
Bond energy
H2+ is energetically stable enough to exist as an identifiable entity, and thus fits the definition of a molecule. But it is also extremely reactive, so it does not sit around for very long. It can only be observed when a high-voltage electrical discharge is passed through hydrogen gas; the blue glow one sees represents its demise as it picks up electrons and reverts to the far more stable dihydrogen molecule H2.
Consider, for example, the simplest possible molecule. This is the hydrogen molecule ion, H2+, in which a single electron simultaneously attracts two protons. These protons, having identical charges, repel each other, but this is overcome by the electron-proton attractions, leading to a net decrease in potential energy when an electron combines with two protons. This potential energy decrease is sufficient to enable H2+ to exist as a discrete molecule which we can represent as [H—H]+ in order to explicitly depict the chemical bond that joins the two atoms.
The strength of a chemical bond increases as the potential energy associated with its formation becomes more negative.
Chemical bonds also possess some kinetic energy that is associated with the "motion" of the electron as it spreads itself into the extended space it occupies in what we call the "bond". This is a quantum effect that has no classical counterpart. The kinetic energy has only half the magnitude of the potential energy and works against it; the total bond energy is the sum of the two energies.
Chemical energy
The chemical bonds in the glucose molecules store the energy that fuels our bodies.
Molecules are vehicles both for storing and transporting energy, and the means of converting it from one form to another when the formation, breaking, or rearrangement of the chemical bonds within them is accompanied by the uptake or release of energy, most commonly in the form of heat.
Chemical energy refers to the potential and kinetic energy associated with the chemical bonds in a molecule. Consider what happens when hydrogen and oxygen combine to form water. The reactants H2 and O2 contain more bond energy than H2O, so when they combine, the excess energy is given off in the form of thermal energy, or "heat".
By convention, the energy content of the chemical elements in their natural state (H2 and O2 in this example) is defined as "zero". This makes calculations much easier, and gives most compounds negative "energies of formation" (see below).
Chemical energy manifests itself in many different ways:
• chemical → thermal → kinetic
• chemical → thermal → kinetic + radiant
• chemical → electrical → kinetic (nerve function, muscle movement)
• chemical → electrical
Energy scales are always arbitrary
You might at first think that a book sitting on the table has zero kinetic energy since it is not moving. In truth, however, the earth itself is moving; it is spinning on its axis, it is orbiting the sun, and the sun itself is moving away from the other stars in the general expansion of the universe. Since these motions are normally of no interest to us, we are free to adopt an arbitrary scale in which the velocity of the book is measured with respect to the table; on this so-called laboratory coordinate system, the kinetic energy of the book can be considered zero.
We do the same thing with potential energy. If we define the height of the table top as the zero of potential energy, then an object having a mass $m$ suspended at a height h above the table top will have a potential energy of mgh. Now let the object fall; as it accelerates in the earth's gravitational field, its potential energy changes into kinetic energy. An instant before it strikes the table top, this transformation is complete and the kinetic energy $\frac{1}{2}mv^2$ is identical with the original mgh. As the object comes to rest, its kinetic energy appears as heat (in both the object itself and in the table top) as the kinetic energy becomes randomized as thermal energy.
Energy units
Energy is measured in terms of its ability to perform work or to transfer heat. Mechanical work is done when a force f displaces an object by a distance d:
$W = f\cdot d$
The basic unit of energy is the joule. One joule is the amount of work done when a force of 1 newton acts over a distance of 1 m; thus 1 J = 1 N-m. One newton is the amount of force required to accelerate a 1-kg mass by 1 m per second each second (an acceleration of 1 m sec–2), so the basic dimensions of the joule are kg m2 s–2. The other two units of energy in wide use are the calorie and the BTU (British thermal unit); these are defined in terms of the heating effect on water. For the moment, we will confine our attention to the joule and the calorie.
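As a quick check on these units, consider lifting a 1.0-kg book through 0.50 m against gravity:

$w = f \cdot d = (1.0\ \mathrm{kg} \times 9.8\ \mathrm{m\ s^{-2}}) \times 0.50\ \mathrm{m} = 4.9\ \mathrm{N \cdot m} = 4.9\ \mathrm{J}$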
Heat and work are both measured in energy units, but they do not constitute energy itself. As we will explain below, they refer to processes by which energy is transferred to or from something— a block of metal, a motor, or a cup of water.
Heat
When a warmer body is brought into contact with a cooler body, thermal energy flows from the warmer one to the cooler until their two temperatures are identical. The warmer body loses a quantity of thermal energy ΔE, and the cooler body acquires the same amount of energy. We describe this process by saying that "ΔE joules of heat has passed from the warmer body to the cooler one." It is important, however, to understand that Heat is the transfer of energy due to a difference in temperature.
Heat does NOT flow
We often refer to a "flow" of heat, recalling the 18th-century notion that heat was an actual substance called “caloric” that could flow like a liquid. This is a misnomer; heat is a process and is not something that can be contained or stored in a body. It is important that you understand this, because the use of the term in our ordinary conversation ("the heat is terrible today") tends to make us forget this distinction.
There are basically three mechanisms by which heat can be transferred: conduction, radiation, and convection. The latter process occurs when temperature differences cause different parts of a fluid to have different densities, setting up bulk circulation of the fluid.
Work
Work is the transfer of energy by any process other than heat.
Work, like energy, can take various forms: mechanical, electrical, gravitational, etc. All have in common the fact that they are the product of two factors, an intensity term and a capacity term. For example, the simplest form of mechanical work arises when an object moves a certain distance against an opposing force. Electrical work is done when a body having a certain charge moves through a potential difference.
| type of work | intensity factor | capacity factor | formula |
| --- | --- | --- | --- |
| mechanical | force | change in distance | $f\Delta x$ |
| gravitational | gravitational potential (a function of height) | mass | $mgh$ |
| electrical | potential difference | quantity of charge | $Q\Delta V$ |
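Each row of the table is just the product of its two factors. A minimal sketch in Python (all numerical values below are arbitrary illustrations):

```python
def mechanical_work(f, dx):
    """Work = force (N) x distance moved (m), in joules."""
    return f * dx

def gravitational_work(m, h, g=9.8):
    """Work = mass (kg) x g (m s^-2) x height change (m), in joules."""
    return m * g * h

def electrical_work(q, dV):
    """Work = charge (C) x potential difference (V), in joules."""
    return q * dV

print(mechanical_work(2.0, 3.0))     # 6.0 J
print(gravitational_work(1.0, 2.0))  # 19.6 J
print(electrical_work(0.5, 12.0))    # 6.0 J
```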
Performance of work involves a transformation of energy; thus when a book drops to the floor, gravitational work is done (a mass moves through a gravitational potential difference), and the potential energy the book had before it was dropped is converted into kinetic energy which is ultimately dispersed as thermal energy.
Mechanical work is the product of the force exerted on a body and the distance it is moved: 1 N-m = 1 J.
Heat and work are best thought of as processes by which energy is exchanged, rather than as energy itself. That is, heat “exists” only when it is flowing, work “exists” only when it is being done.
When two bodies are placed in thermal contact and energy flows from the warmer body to the cooler one, we call the process “heat”. A transfer of energy to or from a system by any means other than heat is called “work”.
So you can think of heat and work as just different ways of accomplishing the same thing: the transfer of energy from one place or object to another.
To make sure you understand this, suppose you are given two identical containers of water at 25°C. Into one container you place an electrical immersion heater until the water has absorbed 100 joules of heat. The second container you stir vigorously until 100 J of work has been performed on it. At the end, both samples of water will have been warmed to the same temperature and will contain the same increased quantity of thermal energy. There is no way you can tell which contains "more work" or "more heat".
An important limitation on energy conversion
A gas engine converts the chemical energy available in its fuel into thermal energy. Only a part of this is available to perform work; the remainder is dispersed into the surroundings through the exhaust. This limitation is the essence of the Second Law of Thermodynamics, which we will get to much later in this course.
Thermal energy is very special in one crucial way. All other forms of energy are interconvertible: mechanical energy can be completely converted to electrical energy, and the latter can be completely converted to thermal, as in the water-heating example described above. But although work can be completely converted into thermal energy, complete conversion of thermal energy into work is impossible. A device that partially accomplishes this conversion is known as a heat engine; a steam engine, a jet engine, and the internal combustion engine in a car are well-known examples.
Temperature and its meaning
We all have a general idea of what temperature means, and we commonly associate it with "heat", which, as we noted above, is a widely misunderstood word. Both relate to what we described above as thermal energy—the randomized kinetic energy associated with the various motions of matter at the atomic and molecular levels.
Heat, you will recall, is not something that is "contained within" a body, but is rather a process in which [thermal] energy enters or leaves a body as the result of a temperature difference.
So when you warm up your cup of tea by allowing it to absorb 1000 J of heat from the stove, you can say that the water has acquired 1000 J of energy — but not of heat. If, instead, you "heat" your tea in a microwave oven, the water acquires its added energy by direct absorption of electromagnetic energy; because this process is not driven by a temperature difference, heat was not involved at all!
Thermometry
We commonly measure temperature by means of a thermometer — a device that employs some material possessing a property that varies in direct proportion to the temperature. The most common of these properties are the density of a liquid, the thermal expansion of a metal, or the electrical resistance of a material.
The ordinary thermometer we usually think of employs a reservoir of liquid whose thermal expansion (decrease in density) causes it to rise in a capillary tube. Metallic mercury has traditionally been used for this purpose, as has an alcohol (usually isopropyl) containing a red dye.
Mercury was the standard thermometric liquid of choice for more than 200 years, but its use for this purpose has been gradually phased out owing to its neurotoxicity. Although coal-burning, disposal of fluorescent lamps, incineration and battery disposal are major sources of mercury input to the environment, broken thermometers have long been known to release hundreds of tons of mercury. Once spilled, tiny drops of the liquid metal tend to lodge in floor depressions and cracks where they can emit vapor for years.
Temperature
Temperature is a measure of the average kinetic energy of the molecules within a body. You can think of temperature as an expression of the "intensity" with which the thermal energy in a body manifests itself in terms of chaotic, microscopic molecular motion.
• Heat is the quantity of thermal energy that enters or leaves a body.
• Temperature measures the average translational kinetic energy of the molecules in a body.
This animation depicts thermal translational motions of molecules in a gas. In liquids and solids, there is very little empty space between molecules, and they mostly just bump against and jostle one another.
You will notice that we have sneaked the word "translational" into this definition of temperature. Translation refers to a change in location: in this case, molecules moving around in random directions. This is the major form of thermal energy under ordinary conditions, but molecules can also undergo other kinds of motion, namely rotations and internal vibrations. These latter two forms of thermal energy are not really "chaotic" and do not contribute to the temperature.
Energy is measured in joules, and temperature in degrees. This difference reflects the important distinction between energy and temperature:
• We can say that 100 g of hot water contains more energy (not heat!) than 100 g of cold water. And because energy is an extensive quantity, we know that a 10-g portion of this hot water contains only ten percent as much energy as the entire 100-g amount.
• Temperature, by contrast, is not a measure of quantity; being an intensive property, it is more of a "quality" that describes the "intensity" with which thermal energy manifests itself. So both the 100-g and 10-g portions of the hot water described above possess the same temperature.
Temperature scales
Temperature is measured by observing its effect on some temperature-dependent variable such as the volume of a liquid or the electrical resistance of a solid. In order to express a temperature numerically, we need to define a scale which is marked off in uniform increments which we call degrees. The nature of this scale — its zero point and the magnitude of a degree, are completely arbitrary.
Although rough means of estimating and comparing temperatures have been around since AD 170, the first mercury thermometer and temperature scale were introduced in Holland in 1714 by Gabriel Daniel Fahrenheit.
Fahrenheit established three fixed points on his thermometer. Zero degrees was the temperature of an ice, water, and salt mixture, which was about the coldest temperature that could be reproduced in a laboratory of the time. When he omitted salt from the slurry, he reached his second fixed point when the water-ice combination stabilized at "the thirty-second degree." His third fixed point was "found as the ninety-sixth degree, and the spirit expands to this degree when the thermometer is held in the mouth or under the armpit of a living man in good health." After Fahrenheit died in 1736, his thermometer was recalibrated using 212 degrees, the temperature at which water boils, as the upper fixed point. Normal human body temperature registered 98.6 rather than 96.
Belize and the U.S.A. are the only countries that still use the Fahrenheit scale!
In 1743, the Swedish astronomer Anders Celsius devised the aptly-named centigrade scale that places exactly 100 degrees between the two reference points defined by the freezing- and boiling points of water.
For reasons best known to Celsius, he assigned 100 degrees to the freezing point of water and 0 degrees to its boiling point, resulting in an inverted scale that nobody liked. After his death a year later, the scale was put the other way around. The revised centigrade scale was quickly adopted everywhere except in the English-speaking world, and became the metric unit of temperature. In 1948 it was officially renamed as the Celsius scale.
Temperature comparisons and conversions
When we say that the temperature is so many degrees, we must specify the particular scale on which we are expressing that temperature. A temperature scale has two defining characteristics, both of which can be chosen arbitrarily:
• The temperature that corresponds to 0° on the scale;
• The magnitude of the unit increment of temperature– that is, the size of the degree.
In order to express a temperature given on one scale in terms of another, it is necessary to take both of these factors into account.
Converting between Celsius and Fahrenheit is easy if you bear in mind that between the so-called ice- and steam points of water there are 180 Fahrenheit degrees, but only 100 Celsius degrees, making the F° 100/180 = 5/9 the magnitude of the C°.
Because the ice point is at 32 °F, the two scales are offset by this amount. If you remember this, there is no need to memorize a conversion formula; you can work it out whenever you need it. Note the distinction between “°C” (a temperature) and “C°” (a temperature increment).
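Here is a minimal Python sketch of that conversion, built from just these two facts; the function names are our own, chosen for illustration:

```python
def f_to_c(temp_f):
    """Convert °F to °C: remove the 32° offset, then rescale by 100/180 = 5/9."""
    return (temp_f - 32) * 5 / 9

def c_to_f(temp_c):
    """Convert °C to °F: rescale by 9/5, then add the 32° offset."""
    return temp_c * 9 / 5 + 32

print(f_to_c(212))   # 100.0, the steam point
print(c_to_f(0))     # 32.0, the ice point
print(f_to_c(-40))   # -40.0, the one temperature at which the two scales agree
```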
Absolute temperature scales
Near the end of the 19th Century, when the physical significance of temperature began to be understood, the need was felt for a temperature scale whose zero really means zero — that is, the complete absence of thermal motion. This gave rise to the absolute temperature scale, whose zero point is –273.15 °C but which retains the same degree magnitude as the Celsius scale. The scale was eventually renamed in honor of Lord Kelvin (William Thomson); thus the Celsius degree became the kelvin. It is now common to express an increment such as five C° as "five kelvins".
In 1859 the Scottish engineer and physicist William J.M. Rankine proposed an absolute temperature scale based on the Fahrenheit degree. Absolute zero (0° Ra) corresponds to –459.67°F. The Rankine scale has been used extensively by those same American and British engineers who delight in expressing energies in units of BTUs and masses in pounds.
The importance of absolute temperature scales is that absolute temperatures can be entered directly in all the fundamental formulas of physics and chemistry in which temperature is a variable. Perhaps the most common example, known to all beginning students, is the ideal gas equation of state:
$PV = nRT$
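As a quick numerical illustration (our own sketch, not part of the original text), rearranging for V with R = 8.314 J K–1 mol–1 reproduces the familiar molar volume of an ideal gas at 0 °C and 1 atm; note that the absolute temperature is required:

```python
R = 8.314            # gas constant, J K⁻¹ mol⁻¹ (assumed value)
n = 1.0              # moles
T = 273.15           # kelvins (0 °C); absolute temperature must be used
P = 1.013e5          # pascals (1 atm)

V = n * R * T / P    # volume in m³, from PV = nRT
print(f"{V * 1000:.1f} L")   # ≈ 22.4 L, the familiar molar volume
```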
Heat capacity
As a body loses or gains heat, its temperature changes in direct proportion to the amount of thermal energy q transferred:
$q= C\Delta T$
The proportionality constant $C$ is known as the heat capacity:
$C = \frac{q}{\Delta T}$
If ΔT is expressed in kelvins (degrees) and q in joules, the units of C are J K–1. In other words, the heat capacity tells us how many joules of energy it takes to change the temperature of a body by 1 C°. The greater the value of C, the smaller will be the effect of a given energy change on the temperature.
It should be clear that C is an extensive property— that is, it depends on the quantity of matter. Everyone knows that a much larger amount of energy is required to bring about a 10 C° change in the temperature of 1 L of water compared to 10 mL of water. For this reason, it is customary to express C in terms of unit quantity, such as per gram, in which case it becomes the specific heat capacity, commonly referred to as the "specific heat", with units J K–1 g–1.
Thus if identical quantities of heat flow into two bodies having different heat capacities, the one having the smaller heat capacity will undergo the greater change in temperature. (You might find it helpful to think of heat capacity as a measure of a body's ability to resist a change of temperature when absorbing or losing heat.) Note: you are expected to know the units of specific heat. The advantage of doing so is that you need not learn a "formula" for solving specific heat problems.
Example $1$
How many joules of heat must flow into 150 mL of water at 0 °C to raise its temperature to 25 °C?
Solution
The mass of the water is (150 mL) × (1.00 g mL–1) = 150 g. The specific heat of water is 4.18 J K–1 g–1. From the definition of specific heat, the quantity of energy
q = ΔE is (150 g)(25.0 K)(4.18 J K–1 g–1) = 15,675 J, or about 15,700 J.
How can I rationalize this procedure? It should be obvious that the greater the mass of water and the greater the temperature change, the more heat will be required, so these two quantities go in the numerator. Similarly, the energy required varies directly with the specific heat, which therefore also goes in the numerator.
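The same reasoning, expressed as a short Python sketch (the function name is ours, and the 4.18 J K–1 g–1 value for water is assumed):

```python
def heat(mass_g, delta_t_K, specific_heat=4.18):
    """q = m × c × ΔT, with c in J K⁻¹ g⁻¹, so the result is in joules."""
    return mass_g * specific_heat * delta_t_K

q = heat(150, 25.0)
print(f"q = {q:.0f} J")   # q = 15675 J, about 15.7 kJ
```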
Table $1$: Specific heat capacities of some common substances
Substance C (J K–1 g–1)
Aluminum 0.900
Copper 0.386
Lead 0.128
Mercury 0.140
Zinc 0.387
Alcohol (ethanol) 2.4
Water 4.18
Ice (–10° C) 2.05
Gasoline (n-octane) 0.53
Glass 0.84
Carbon (graphite / diamond) 0.710 / 0.509
Sodium chloride 0.854
Rock (granite) 0.790
Air 1.01
Note especially the following:
• The molar heat capacities of the metallic elements are almost identical. This is the basis of the Law of Dulong and Petit, which served as an important tool for estimating the atomic weights of some elements.
• The intermolecular hydrogen bonding in water and alcohols results in anomalously high heat capacities for these liquids; the same is true for ice, compared to other solids.
• The values for graphite and diamond are consistent with the principle that solids that are more “ordered” tend to have smaller heat capacities.
Example $2$
A piece of nickel weighing 2.40 g is heated to 200.0 °C, and is then dropped into 10.0 mL of water at 15.0 °C. The temperature of the metal falls and that of the water rises until thermal equilibrium is attained and both are at 18.0 °C. What is the specific heat of the metal?
Solution
The mass of the water is (10 mL) × (1.00 g mL–1) = 10 g. The specific heat of water is 4.18 J K–1 g–1 and its temperature increased by 3.0 C°, indicating that it absorbed (10 g)(3 K)(4.18 J K–1 g–1) = 125 J of energy. The metal sample lost this same quantity of energy, undergoing a temperature drop of 182 C° as the result. The specific heat capacity of the metal is:
(125 J) / (2.40 g)(182 K) = 0.287 J K–1 g–1.
Notice that no "formula" is required here as long as you know the units of specific heat; you simply place the relevant quantities in the numerator or denominator to make the units come out correctly.
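A short Python sketch of the same heat balance (the variable names are ours, and water's specific heat of 4.18 J K–1 g–1 is assumed):

```python
c_water = 4.18                        # J K⁻¹ g⁻¹
m_water, m_metal = 10.0, 2.40         # grams
t_water, t_metal, t_final = 15.0, 200.0, 18.0   # °C

q = m_water * c_water * (t_final - t_water)      # heat gained by the water, J
c_metal = q / (m_metal * (t_metal - t_final))    # the metal lost the same q
print(f"q = {q:.0f} J, c(metal) = {c_metal:.3f} J K⁻¹ g⁻¹")   # 125 J, 0.287
```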
Learning Objectives
• Describe the names and abbreviations of the SI base units and the SI decimal prefixes.
• Define the liter and the metric ton in these units.
• Explain the meaning and use of unit dimensions; state the dimensions of volume.
• State the quantities that are needed to define a temperature scale, and show how these apply to the Celsius, Kelvin, and Fahrenheit temperature scales.
• Explain how a Torricellian barometer works.
The natural sciences begin with observation, and this usually involves numerical measurements of quantities such as length, volume, density, and temperature. Most of these quantities have units of some kind associated with them, and these units must be retained when you use them in calculations. All measuring units can be defined in terms of a very small number of fundamental ones that, through "dimensional analysis", provide insight into their derivation and meaning, and must be understood when converting between different unit systems.
Units of Measure
Have you ever estimated a distance by “stepping it off”— that is, by counting the number of steps required to take you a certain distance? Or perhaps you have used the width of your hand, or the distance from your elbow to a fingertip to compare two dimensions. If so, you have engaged in what is probably the first kind of measurement ever undertaken by primitive mankind.
Leonardo da Vinci - Vitruvian Man
The results of a measurement are always expressed on some kind of a scale that is defined in terms of a particular kind of unit. The first scales of distance were likely related to the human body, either directly (the length of a limb) or indirectly (the distance a man could walk in a day). As civilization developed, a wide variety of measuring scales came into existence, many for the same quantity (such as length), but adapted to particular activities or trades. Eventually, it became apparent that in order for trade and commerce to be possible, these scales had to be defined in terms of standards that would allow measures to be verified, and, when expressed in different units (bushels and pecks, for example), to be correlated or converted.
Over the centuries, hundreds of measurement units and scales have developed in the many civilizations that achieved some literate means of recording them. Some, such as those used by the Aztecs, fell out of use and were largely forgotten as these civilizations died out. Other units, such as the various systems of measurement that developed in England, achieved prominence through extension of the Empire and widespread trade; many of these were confined to specific trades or industries. The examples shown here are only some of those that have been used to measure length or distance. The history of measuring units provides a fascinating reflection on the history of industrial development.
The most influential event in the history of measurement was undoubtedly the French Revolution and the Age of Enlightenment that followed. This led directly to the metric system that attempted to do away with the confusing multiplicity of measurement scales by reducing them to a few fundamental ones that could be combined in order to express any kind of quantity. The metric system spread rapidly over much of the world, and eventually even to England and the rest of the U.K. when that country established closer economic ties with Europe in the latter part of the 20th Century. The United States is presently the only major country in which “metrication” has made little progress within its own society, probably because of its relative geographical isolation and its vibrant internal economy.
The SI Units
Science, being a truly international endeavor, adopted metric measurement very early on; engineering and related technologies have been slower to make this change, but are gradually doing so. Even the within the metric system, however, a variety of units were employed to measure the same fundamental quantity; for example, energy could be expressed within the metric system in units of ergs, electron-volts, joules, and two kinds of calories. This led, in the mid-1960s, to the adoption of a more basic set of units, the Systeme Internationale (SI) units that are now recognized as the standard for science and, increasingly, for technology of all kinds.
In principle, any physical quantity can be expressed in terms of only seven base units. Each base unit is defined by a standard which is described in the NIST Web site.
The seven base units in the SI system
Observable Base Unit Abbreviation
length meter m
mass kilogram kg
time second s
temperature (absolute) kelvin K
amount of substance mole mol
electric current ampere A
luminous intensity candela cd
A few special points about some of these units are worth noting
• The base unit of mass is unique in that a decimal prefix (see below) is built-in to it; that is, it is not the gram, as you might expect.
• The base unit of time is the only one that is not metric. Numerous attempts to make it so have never garnered any success; we are still stuck with the 24:60:60 system that we inherited from ancient times. (The ancient Egyptians of around 1500 BC invented the 12-hour day, and the 60:60 part is a remnant of the base-60 system that the Sumerians used for their astronomical calculations in the third millennium BCE.)
• Of special interest to Chemistry is the mole, the base unit for expressing the quantity of matter. One mole contains exactly $6.02214076 \times 10^{23}$ elementary entities of anything; this number is known as Avogadro’s number.
Owing to the wide range of values that quantities can have, it has long been the practice to employ prefixes such as milli and mega to indicate decimal fractions and multiples of metric units. As part of the SI standard, this system has been extended and formalized.
The SI decimal prefixes
prefix abbreviation multiplier -- prefix abbreviation multiplier
peta P 10¹⁵ deci d 10⁻¹
tera T 10¹² centi c 10⁻²
giga G 10⁹ milli m 10⁻³
mega M 10⁶ micro μ 10⁻⁶
kilo k 10³ nano n 10⁻⁹
hecto h 10² pico p 10⁻¹²
deca da 10 femto f 10⁻¹⁵
There is a category of units that are “honorary” members of the SI in the sense that it is acceptable to use them along with the base units defined above. These include such mundane units as the hour, minute, and degree (of angle), etc., but the three shown here are of particular interest to chemistry, and you will need to know them.
liter (litre) L 1 L = 1 dm³ = 10⁻³ m³
metric ton t 1 t = 10³ kg
unified atomic mass unit u 1 u = 1.66054 × 10⁻²⁷ kg
SI-Derived Units and Dimensional Analysis
Most of the physical quantities we actually deal with in science and also in our daily lives, have units of their own: volume, pressure, energy and electrical resistance are only a few of hundreds of possible examples. It is important to understand, however, that all of these can be expressed in terms of the SI base units; they are consequently known as derived units. In fact, most physical quantities can be expressed in terms of one or more of the following five fundamental units:
mass (M), length (L), time (T), electric charge (Q), and temperature (Θ, theta).
Dimensional analysis is an important tool in working with and converting units in calculations. Consider, for example, the unit of volume, which we denote as $V$. To measure the volume of a rectangular box, we need to multiply the lengths as measured along the three coordinates:
$V = x · y · z \label{eq20}$
We say, therefore, that volume has the dimensions of length-cubed:
$dim.V = L^3$
Thus the units of volume will be m³ (in the SI) or cm³, ft³ (English), etc. Moreover, any formula that calculates a volume must contain within it the $L^3$ dimension; thus the volume of a sphere is $\frac{4}{3}\pi r^3$.
Units and their Ranges in Chemistry
In this section, we will look at some of the quantities that are widely encountered in Chemistry, and at the units in which they are commonly expressed. In doing so, we will also consider the actual range of values these quantities can assume, both in nature in general, and also within the subset of nature that chemistry normally addresses. In looking over the various units of measure, it is interesting to note that their unit values are set close to those encountered in everyday human experience.
Ranges of Mass and Weight in Chemistry
These two quantities are widely confused. Although they are often used synonymously in informal speech and writing, they have different dimensions: weight is the force exerted on a mass by the local gravitational field:
$f = m a = m g$
where g is the acceleration of gravity. While the nominal value of the latter quantity is 9.80 m s–2 at the Earth’s surface, its exact value varies locally. Because it is a force, the proper SI unit of weight is the newton, but it is common practice (except in physics classes!) to use the terms "weight" and "mass" interchangeably, so the units kilograms and grams are acceptable in almost all ordinary laboratory contexts.
The ranges described below are best appreciated on a logarithmic scale; the mass of the electron, for example, is about 10⁻³⁰ kg.
The range of masses spans 90 orders of magnitude, more than any other unit. The range that chemistry ordinarily deals with has greatly expanded since the days when a microgram was an almost inconceivably small amount of material to handle in the laboratory; this lower limit has now fallen to the atomic level with the development of tools for directly manipulating these particles. The upper level reflects the largest masses that are handled in industrial operations, but in the recently developed fields of geochemistry and environmental chemistry, the range can be extended indefinitely. Flows of elements between the various regions of the environment (atmosphere to oceans, for example) are often quoted in teragrams.
Range of Distances Encountered in Chemistry
Chemists tend to work mostly in the moderately-small part of the distance range. Those who live in the lilliputian world of crystal- and molecular structures and atomic radii find the picometer a convenient currency, but one still sees the older non-SI unit called the Ångstrom used in this context;
$1\,\unicode{x212B} = 10^{–10}\,\text{m} = 100\,\text{pm}.$
Nanotechnology, the rage of the present era, also resides in this realm. The largest polymeric molecules and colloids define the top end of the particulate range; beyond that, in the normal world of doing things in the lab, the centimeter and occasionally the millimeter commonly rule.
For humans, time moves by the heartbeat; beyond that, it is the motions of our planet that count out the hours, days, and years that eventually define our lifetimes. Beyond the few thousands of years of history behind us, those years-to-the-powers-of-tens that are the fare for such fields as evolutionary biology, geology, and cosmology, cease to convey any real meaning to us. Perhaps this is why so many people are not very inclined to accept the validity of these sciences.
Most of what actually takes place in the chemist’s test tube operates on a far shorter time scale, although there is no limit to how slow a reaction can be; the upper limits of those we can directly study in the lab are in part determined by how long a graduate student can wait around before moving on to gainful employment.
Looking at the microscopic world of atoms and molecules themselves, the time scale again shifts us into an unreal world where numbers tend to lose their meaning. You can gain some appreciation of the duration of a nanosecond by noting that this is about how long it takes a beam of light to travel between your two outstretched hands. In a sense, the material foundations of chemistry itself are defined by time: neither a new element nor a molecule can be recognized as such unless it lasts long enough to have its “picture” taken through measurement of its distinguishing properties.
Range of Temperatures in Chemistry
Temperature, the measure of thermal intensity, spans the narrowest range of any of the base units of the chemist’s measure. The reason for this is tied into temperature’s meaning as an indicator of the intensity of thermal kinetic energy. Chemical change occurs when atoms are jostled into new arrangements, and the increasing weakness of these motions brings most chemistry to a halt as absolute zero is approached. At the upper end of the scale, thermal motions become sufficiently vigorous to shake molecules into atoms, and eventually, as in stars, strip off the electrons, leaving an essentially reaction-less gaseous fluid, or plasma, of bare nuclei (ions) and electrons.
We all know that temperature is expressed in degrees. What we frequently forget is that the degree is really an increment of temperature, a fixed fraction of the distance between two defined reference points on a temperature scale.
Range of Pressures in Chemistry
Pressure is the measure of the force exerted on a unit area of surface. Its SI units are therefore newtons per square meter, but we make such frequent use of pressure that a derived SI unit, the pascal, is commonly used:
$1\, \text{Pa} = 1\, \text{N}\, \text{m}^{–2}$
Pressure of the Atmosphere
The concept of pressure first developed in connection with studies relating to the atmosphere and vacuum that were first carried out in the 17th century. The molecules of a gas are in a state of constant thermal motion, moving in straight lines until experiencing a collision that exchanges momentum between pairs of molecules and sends them bouncing off in other directions.
This leads to a completely random distribution of the molecular velocities both in speed and direction— or it would in the absence of the Earth’s gravitational field which exerts a tiny downward force on each molecule, giving motions in that direction a very slight advantage. In an ordinary container this effect is too small to be noticeable, but in a very tall column of air the effect adds up: the molecules in each vertical layer experience more downward-directed hits from those above it. The resulting force is quickly randomized, resulting in an increased pressure in that layer which is then propagated downward into the layers below.
At sea level, the total mass of the sea of air pressing down on each 1 cm² of surface is about 1034 g; equivalently, about 10,340 kg presses down on each square meter. The force (weight) that the Earth’s gravitational acceleration $g$ exerts on this mass is:
\begin{align} f &= ma \nonumber \\[4pt] &= mg \nonumber \\[4pt] &= (10340\text{ kg})(9.81\text{ m s}^{–2}) \nonumber \\[4pt] &= 1.013 \times 10^5\text{ kg m s}^{–2} \nonumber \\[4pt] &= 1.013 \times 10^5 \text{ newtons} \end{align}
resulting in a pressure of
$1.013 \times 10^5\, \text{N}\, \text{m}^{–2} = 1.013 \times 10^5 \text{ Pa}.$
The actual pressure at sea level varies with atmospheric conditions, so it is customary to define standard atmospheric pressure as 1 atm = 1.013 × 10⁵ Pa, or 101 kPa.
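The arithmetic above is easy to reproduce; a short Python sketch, using the nominal g = 9.81 m s–2:

```python
g = 9.81                  # m s⁻², nominal acceleration of gravity
air_mass_per_m2 = 10340   # kg of air above each square meter at sea level

pressure = air_mass_per_m2 * g        # force per unit area, N m⁻² = Pa
print(f"{pressure:.3e} Pa")           # ≈ 1.014e5 Pa, essentially one atmosphere
print(f"{pressure / 1000:.0f} kPa")   # ≈ 101 kPa
```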
Although the standard atmosphere (atm) is not an SI unit, it is still widely employed. In meteorology, the bar, exactly 1.000 × 10⁵ Pa = 0.987 atm, is often used.
The Barometer
In the early 17th century, the Italian physicist and mathematician Evangelista Torricelli invented a device to measure atmospheric pressure. The Torricellian barometer consists of a vertical glass tube closed at the top and open at the bottom. It is filled with a liquid, traditionally mercury, and is then inverted, with its open end immersed in the container of the same liquid. The liquid level in the tube will fall under its own weight until the downward force is balanced by the vertical force transmitted hydrostatically to the column by the downward force of the atmosphere acting on the liquid surface in the open container. Torricelli was also the first to recognize that the space above the mercury constituted a vacuum, and is credited with being the first to create a vacuum.
One standard atmosphere will support a column of mercury that is 76 cm high, so the “millimeter of mercury”, now more commonly known as the torr, has long been a common pressure unit in the sciences: 1 atm = 760 torr.
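As a check on the 76-cm figure, the hydrostatic relation P = ρgh can be inverted for the column height; a Python sketch, assuming a density of mercury of about 13.55 × 10³ kg m–3 (a value not given in the text):

```python
rho_hg = 13.55e3    # kg m⁻³, approximate density of mercury (assumed)
g = 9.81            # m s⁻²
P = 1.013e5         # Pa, one standard atmosphere

h = P / (rho_hg * g)            # height of the supported column, from P = ρgh
print(f"{h * 100:.0f} cm Hg")   # ≈ 76 cm, i.e. 760 mm = 760 torr
```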
Learning Objectives
• Give an example of a measured numerical value, and explain what distinguishes it from a "pure" number.
• Give examples of random and systematic errors in measurements.
• Find the mean value of a series of similar measurements.
• State the principal factors that affect the difference between the mean value of a series of measurements, and the "true value" of the quantity being measured.
• Calculate the absolute and relative precisions of a given measurement, and explain why the latter is generally more useful.
• Distinguish between the accuracy and the precision of a measured value, and on the roles of random and systematic error.
In science, there are numbers and there are "numbers". What we ordinarily think of as a "number" and will refer to here as a pure number is just that: an expression of a precise value. The first of these you ever learned were the counting numbers, or integers; later on, you were introduced to the decimal numbers, to rational numbers such as 1/3, and to irrational numbers such as π (pi), which cannot be expressed as exact decimal values. The other kind of numeric quantity that we encounter in the natural sciences is a measured value of something– the length or weight of an object, the volume of a fluid, or perhaps the reading on an instrument. Although we express these values numerically, it would be a mistake to regard them as the kind of pure numbers described above.
Confusing? Suppose our instrument has a pointer that moves up and down a graduated scale so as to display the measured value. What number would you write in your notebook when recording this measurement? Suppose the value is clearly somewhere between 130 and 140 on the scale, and the graduations enable us to be more exact and place it between 134 and 135. If the pointer lies closer to the latter value, we can go one more step by estimating the value as perhaps 134.8, so this is the value you would report for this measurement.
Now here’s the important thing to understand: although “134.8” is itself a number, the quantity we are measuring is almost certainly not 134.8 — at least, not exactly. The reason is obvious if you note that the instrument scale is such that we are barely able to distinguish between 134.7, 134.8, and 134.9. In reporting the value 134.8 we are effectively saying that the value is probably somewhere within the range 134.75 to 134.85. In other words, there is an uncertainty of ±0.05 unit in our measurement.
All measurements of quantities that can assume a continuous range of values (lengths, masses, volumes, etc.) consist of two parts: the reported value itself (never an exactly known number), and the uncertainty, or "error", associated with the measurement. By “error”, we do not mean just outright mistakes, such as incorrect use of an instrument or failure to read a scale properly; although such gross errors do sometimes happen, they usually yield results that are sufficiently unexpected to call attention to themselves.
Scale-reading error
When you measure a volume or weight, you observe a reading on a scale of some kind, such as the one described above. Scales, by their very nature, are limited to fixed increments of value, indicated by the division marks. The actual quantities we are measuring, in contrast, can vary continuously, so there is an inherent limitation in how finely we can discriminate between two values that fall between the marked divisions of the measuring scale. Scale-reading error is often classified as random error (see below), but it occurs so commonly that we treat it separately here.
The same problem remains if we substitute an instrument with a digital display; there will always be a point at which some value that lies between the two smallest divisions must arbitrarily toggle between two numbers on the readout display. This introduces an element of randomness into the value we observe, even if the "true" value remains unchanged. The more sensitive the measuring instrument, the less likely it is that two successive measurements of the same sample will yield identical results. In the example we discussed above, distinguishing between the values 134.8 and 134.9 may be too difficult to do in a consistent way, so two independent observers may record different values even when viewing the same reading.
Parallax error
One form of scale-reading error that often afflicts beginners in the science laboratory is failure to properly align the eye with the part of the scale you are reading. This gives rise to parallax error. Parallax refers to the change in the apparent position of an object when viewed from different points.
The most notorious example encountered in the introductory chemistry laboratory is failure to read the volume of a liquid properly in a graduated cylinder or burette. Getting all of their students trained to make sure their eye is level with the bottom of the meniscus is the lab instructors' hope and despair.
Proper use of a measuring device can help reduce the possibility of parallax error. For example, a length scale should be in direct contact with the object (left), not above it as on the right.
Analog meters (those having pointer needles) are most accurate when read at about 2/3 of the length of the scale. Analog-type meters, unlike those having digital readouts, are also subject to parallax error. Those intended for high-accuracy applications often have a mirrored arc along the scale in which a reflection of the pointer needle can be seen if the viewer is not properly aligned with the instrument.
• Random (indeterminate) error: Each measurement is also influenced by a myriad of minor events, such as building vibrations, electrical fluctuations, motions of the air, and friction in any moving parts of the instrument. These tiny influences constitute a kind of "noise" that also has a random character. Whether we are conscious of it or not, all measured values contain an element of random error.
• Systematic error: Suppose that you weigh yourself on a bathroom scale, not noticing that the dial reads “1.5 kg” even before you have placed your weight on it. Similarly, you might use an old ruler with a worn-down end to measure the length of a piece of wood. In both of these examples, all subsequent measurements, either of the same object or of different ones, will be off by a constant amount. Unlike random error, which is impossible to eliminate, systematic error (also known as determinate error) is usually quite easy to avoid or compensate for, but only by a conscious effort in the conduct of the observation, usually by proper zeroing and calibration of the measuring instrument. However, once systematic error has found its way into the data, it can be very hard to detect.
Accuracy and precision
We tend to use these two terms interchangeably in our ordinary conversation, but in the context of scientific measurement, they have very different meanings:
• Accuracy refers to how closely the measured value of a quantity corresponds to its “true” value.
• Precision expresses the degree of reproducibility, or agreement between repeated measurements.
Accuracy, of course, is the goal we strive for in scientific measurements. Unfortunately, however, there is no obvious way of knowing how closely we have achieved it; the “true” value, whether it be of a well-defined quantity such as the mass of a particular object, or an average that pertains to a collection of objects, can never be known — and thus we can never recognize it if we are fortunate enough to find it.
Four Scenarios
A target on a dart board serves as a convenient analogy. The results of four sets of measurements (or four dart games) are illustrated below. Each set is made up of ten observations (or throws of darts.) Each red dot corresponds to the point at which a dart has hit the target — or alternatively, to the value of an individual observation. For measurements, assume the true value of the quantity being measured lies at the center of each target. Now consider the following four sets of results:
Right on! You win the dart game, and get an A grade on your measurement results.
Your results are beautifully replicable, but your measuring device may not have been calibrated properly or your observations suffer from a systematic error of some kind. Accuracy: F, Precision, A; overall grade C.
Extremely unlikely, and probably due to pure luck; the only reason for the accurate mean is that your misses mostly canceled out. Grade D.
Pretty sad; consider switching to music or politics — or have your eyes examined.
Note
When we make real measurements, there is no dart board or target that enables one to immediately judge the quality of the result. If we make only a few observations, we may be unable to distinguish between any of these scenarios.
The "true value" of a desired measurement can be quite elusive, and may not even be definable at all. This is a very common difficulty in both the social sciences (as in opinion surveys), in medicine (evaluating the efficacy of a drug or other treatment), and in all other natural sciences. The proper treatment of such problems is to make multiple observations of individual instances of what is being measured, and then use statistical methods to evaluate the results. In this introductory unit on measurement, we will defer discussion of concepts such as standard deviation and confidence intervals which become essential in courses at the second-year level and beyond. We will restrict our treatment here to the elementary considerations that are likely to be needed in a typical first-year course.
How many measurements do I need?
One measurement may be enough. If you wish to measure your height to the nearest centimeter or inch, or the volume of a liquid cooking ingredient to the nearest 1/8 “cup”, you don't ordinarily worry about random error. The error will still be present, but its magnitude will be such a small fraction of the value that it will not significantly affect whatever we are trying to achieve. Thus random error is not something we are concerned about in our daily lives. In the scientific laboratory, there are many contexts in which a single observation of a volume, mass, or instrument reading makes perfect sense; part of the "art" of science lies in making an informed judgment of how exact a given measurement must be. If we are measuring a directly observable quantity such as the weight of a solid or volume of a liquid, then a single measurement, carefully done and reported to a precision that is consistent with that of the measuring instrument, will usually be sufficient.
However more measurements are needed when there is no clearly-defined "true" value. A collection of objects (or of people) is known in statistics as a population. There is often a need to determine some quantity that describes a collection of objects. For example, a pharmaceutical researcher will need to determine the time required for half of a standard dose of a certain drug to be eliminated by the body, or a manufacturer of light bulbs might want to know how many hours a certain type of light bulb will operate before it burns out. In these cases a value for any individual sample can be determined easily enough, but since no two samples (patients or light bulbs) are identical, we are compelled to repeat the same measurement on multiple objects. And naturally, we get a variety of results, usually referred to as scatter. Even for a single object, there may be no clearly defined "true" value.
Suppose that you wish to determine the diameter of a certain type of coin. You make one measurement and record the results. If you then make a similar measurement along a different cross-section of the coin, you will likely get a different result. The same thing will happen if you make successive measurements on other coins of the same kind.
Here we are faced with two kinds of problems. First, there is the inherent limitation of the measuring device: we can never reliably measure more finely than the marked divisions on the ruler. Secondly, we cannot assume that the coin is perfectly circular; careful inspection will likely reveal some distortion resulting from a slight imperfection in the manufacturing process. In these cases, it turns out that there is no single, true value of the quantity we are trying to measure.
Mean, median, and range of a series of observations
There are a variety of ways to express the average, or central tendency of a series of measurements, with mean (more precisely, arithmetic mean) being most commonly employed. Our ordinary use of the term "average" also refers to the mean. These concepts are usually all you need as a first step in the analysis of data you are likely to collect in a first-year chemistry laboratory course.
The mean and its meaning
In our ordinary speech, the term "average" is synonymous with "mean". In statistics, however, "average" is a more general term that can refer to the median or the mode, as well as to the mean. When we obtain more than one result for a given measurement (either made repeatedly on a single sample, or more commonly, on different samples of the same material), the simplest procedure is to report the mean, or average value. The mean is defined mathematically as the sum of the values, divided by the number of measurements:
$x_m = \dfrac{\sum_{i=1}^n x_i}{n}$
If you are not familiar with this notation, don’t let it scare you! It's no different from the average that you are likely already familiar with. Take a moment to see how it expresses the previous sentence; if there are $n$ measurements, each yielding a value $x_i$, then we sum over all $i$ and divide by $n$ to get the mean value $x_m$. For example, if there are only two measurements, $x_1$ and $x_2$, then the mean is
$x_m = \dfrac{x_1 + x_2}{2}$
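A minimal Python sketch of the same definition; the sample values are hypothetical coin-diameter readings in millimeters, not data from the text:

```python
def mean(values):
    """Arithmetic mean: the sum of the values divided by their number."""
    return sum(values) / len(values)

diameters = [24.25, 24.29, 24.22, 24.26, 24.28]   # hypothetical data, mm
print(f"{mean(diameters):.3f} mm")                # 24.260 mm
```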
The general problem of determining the uncertainty of a calculated result turns out to be rather more complicated than you might think, and will not be treated here. There are, however, some very simple rules that are sufficient for most practical purposes.
Absolute and Relative Uncertainty
If you weigh out 74.1 mg of a solid sample on a laboratory balance that is accurate to within 0.1 milligram, then the actual weight of the sample is likely to fall somewhere in the range of 74.0 to 74.2 mg; the absolute uncertainty in the weight you observe is 0.2 mg, or ±0.1 mg. If you use the same balance to weigh out 3.2914 g of another sample, the actual weight is between 3.2913 g and 3.2915 g, and the absolute uncertainty is still ±0.1 mg.
Although the absolute uncertainties in these two examples are identical, we would probably consider the second measurement to be more precise because the uncertainty is a smaller fraction of the measured value. The relative uncertainties of the two results would be
0.2 ÷ 74.1 = 0.0027 (about 3 parts in 1000 (PPT), or 0.3%)
0.0002 ÷ 3.2914 = 0.000061 (about 0.06 PPT, or 0.006 %)
Relative uncertainties are widely used to express the reliability of measurements, even those for a single observation, in which case the uncertainty is that of the measuring device. Relative uncertainties can be expressed as parts per hundred (percent), per thousand (PPT), per million, (PPM), and so on.
Estimating the uncertainty of a calculated result
1. Addition and subtraction, both numbers have uncertainties
The simplest method is to just add the absolute uncertainties.
Example: (6.3 ± 0.05 cm) – (2.1 ± 0.05 cm) = 4.2 ± 0.10 cm
However, this tends to over-estimate the uncertainty by assuming the worst possible case in which the error in one of the quantities is at its maximum positive value, while that of the other quantity is at its most negative value.
Statistical theory informs us that a more realistic value for the uncertainty of a sum or difference is to add the squares of each absolute uncertainty, and then take the square root of this sum. Applying this to the above values, we have
[(0.05)² + (0.05)²]^½ = 0.07, so the result is 4.2 ± 0.07 cm.
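A minimal Python sketch of this rule; math.hypot computes the square root of the sum of squares:

```python
import math

# (6.3 ± 0.05 cm) − (2.1 ± 0.05 cm): absolute uncertainties add in quadrature
u = math.hypot(0.05, 0.05)
print(f"4.2 ± {u:.2f} cm")   # 4.2 ± 0.07 cm
```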
2. Multiplication or division, both numbers have uncertainties.
Convert the absolute uncertainties into relative uncertainties, and add these. Or better, add their squares and take the square root of the sum.
Example $3$
Estimate the absolute error in the density calculated by dividing (12.7 ± .05 g) by (10.0 ± 0.02 mL).
Solution: Relative uncertainty of the mass: 0.05 / 12.7 = 0.0039 = 0.39%
Relative uncertainty of the volume: 0.02 / 10.0 = 0.002 = 0.2%
Relative uncertainty of the density: [(0.39)² + (0.2)²]^½ = 0.44%
Mass ÷ volume: (12.7 g) ÷ (10.0 mL) = 1.27 g mL–1
Absolute uncertainty of the density: (±0.0044) × (1.27 g mL–1) = ±0.006 g mL–1
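The whole calculation, as a Python sketch using the values of the example (the variable names are ours):

```python
import math

m, u_m = 12.7, 0.05    # mass in g, with its absolute uncertainty
v, u_v = 10.0, 0.02    # volume in mL, with its absolute uncertainty

density = m / v
rel_u = math.hypot(u_m / m, u_v / v)   # relative uncertainties add in quadrature
print(f"{density:.2f} ± {rel_u * density:.3f} g/mL")   # 1.27 ± 0.006 g/mL
```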
3. Multiplication or division by a pure number
Trivial case; multiply or divide the uncertainty by the pure number.
The natural sciences begin with observation, and this usually involves numerical measurements of quantities such as length, volume, density, and temperature. Most of these quantities have units of some kind associated with them, and these units must be retained when you use them in calculations. All measuring units can be defined in terms of a very small number of fundamental ones that, through "dimensional analysis", provide insight into their derivation and meaning, and must be understood when converting between different unit systems.
• 3.1: Units and Dimensions
The natural sciences begin with observation, and this usually involves numerical measurements of quantities such as length, volume, density, and temperature. Most of these quantities have units of some kind associated with them, and these units must be retained when you use them in calculations. All measuring units can be defined in terms of a very small number of fundamental ones that, through "dimensional analysis", provide insight into their derivation and meaning.
• 3.2: The Meaning of Measure
The "true value" of a measured quantity, if it exists at all, will always elude us; the best we can do is learn how to make meaningful use of the numbers we read off of our measuring devices. The other kind of numeric quantity that we encounter in the natural sciences is a measured value of something– the length or weight of an object, the volume of a fluid, or perhaps the reading on an instrument. Although we express these values numerically, it would be a mistake to regard them as the kind of
• 3.3: Significant Figures and Rounding off
The numerical values we deal with in science (and in many other aspects of life) represent measurements whose values are never known exactly. Our pocket-calculators or computers don't know this; they treat the numbers we punch into them as "pure" mathematical entities, with the result that the operations of arithmetic frequently yield answers that are physically ridiculous even though mathematically correct.
• 3.4: Reliability of a measurement
In this day of pervasive media, we are continually being bombarded with data of all kinds— public opinion polls, advertising hype, government reports etc. Often. the purveyors of this information are hoping to “sell” us on a product (known as “spin”.) In Science, we do not have this option: we collect data and make measurements in order to get closer to whatever “truth” we are seeking, but it's not really "science" until others can have confidence in the reliability of our measurements.
• 3.5: Drawing Conclusions from Data
This final lesson on measurement will examine these questions and introduce you to some of the methods of dealing with data. This stuff is important not only for scientists, but also for any intelligent citizen who wishes to independently evaluate the flood of numbers served up by advertisers, politicians, "experts", and yes— by other scientists.
03: Measuring Matter
Learning Objectives
Make sure you thoroughly understand the following essential ideas which have been presented above. It is especially important that you know the precise meanings of all the italicized terms in the context of this topic.
• Describe the names and abbreviations of the SI base units and the SI decimal prefixes.
• Define the liter and the metric ton in these units.
• Explain the meaning and use of unit dimensions; state the dimensions of volume.
• State the quantities that are needed to define a temperature scale, and show how these apply to the Celsius, Kelvin, and Fahrenheit temperature scales.
• Explain how a Torricellian barometer works.
Have you ever estimated a distance by “stepping it off”— that is, by counting the number of steps required to take you a certain distance? Or perhaps you have used the width of your hand, or the distance from your elbow to a fingertip to compare two dimensions. If so, you have engaged in what is probably the first kind of measurement ever undertaken by primitive mankind. The results of a measurement are always expressed on some kind of a scale that is defined in terms of a particular kind of unit. The first scales of distance were likely related to the human body, either directly (the length of a limb) or indirectly (the distance a man could walk in a day).
As civilization developed, a wide variety of measuring scales came into existence, many for the same quantity (such as length), but adapted to particular activities or trades. Eventually, it became apparent that in order for trade and commerce to be possible, these scales had to be defined in terms of standards that would allow measures to be verified, and, when expressed in different units (bushels and pecks, for example), to be correlated or converted.
Over the centuries, hundreds of measurement units and scales have developed in the many civilizations that achieved some literate means of recording them. Some, such as those used by the Aztecs, fell out of use and were largely forgotten as these civilizations died out. Other units, such as the various systems of measurement that developed in England, achieved prominence through extension of the Empire and widespread trade; many of these were confined to specific trades or industries. The examples shown here are only some of those that have been used to measure length or distance. The history of measuring units provides a fascinating reflection on the history of industrial development.
The most influential event in the history of measurement was undoubtedly the French Revolution and the Age of Rationality that followed. This led directly to the metric system that attempted to do away with the confusing multiplicity of measurement scales by reducing them to a few fundamental ones that could be combined in order to express any kind of quantity. The metric system spread rapidly over much of the world, and eventually even to England and the rest of the U.K. when that country established closer economic ties with Europe in the latter part of the 20th Century. The United States is presently the only major country in which “metrication” has made little progress within its own society, probably because of its relative geographical isolation and its vibrant internal economy.
Science, being a truly international endeavor, adopted metric measurement very early on; engineering and related technologies have been slower to make this change, but are gradually doing so. Even the within the metric system, however, a variety of units were employed to measure the same fundamental quantity; for example, energy could be expressed within the metric system in units of ergs, electron-volts, joules, and two kinds of calories. This led, in the mid-1960s, to the adoption of a more basic set of units, the Systeme Internationale (SI) units that are now recognized as the standard for science and, increasingly, for technology of all kinds.
The SI base Units
In principle, any physical quantity can be expressed in terms of only seven base units. Each base unit is defined by a standard which is described in the NIST Web site.
The SI base units
length meter m
mass kilogram kg
time second s
temperature (absolute) kelvin K
amount of substance mole mol
electric current ampere A
luminous intensity candela cd
A few special points about some of these units are worth noting:
• The base unit of mass is unique in that a decimal prefix (see below) is built-in to it; that is, it is not the gram, as you might expect.
• The base unit of time is the only one that is not metric. Numerous attempts to make it so have never garnered any success; we are still stuck with the 24:60:60 system that we inherited from ancient times. (The ancient Egyptians of around 1500 BC invented the 12-hour day, and the 60:60 part is a remnant of the base-60 system that the Sumerians used for their astronomical calculations in the third millennium BCE.)
• Of special interest to Chemistry is the mole, the base unit for expressing the quantity of matter. Although the number is not explicitly mentioned in the official definition, chemists define the mole as Avogadro’s number (approximately $6.02 \times 10^{23}$) of anything.
The SI decimal prefixes
Owing to the wide range of values that quantities can have, it has long been the practice to employ prefixes such as milli and mega to indicate decimal fractions and multiples of metric units. As part of the SI standard, this system has been extended and formalized.
prefix abbreviation multiplier prefix abbreviation multiplier
exa E 10¹⁸ deci d 10⁻¹
peta P 10¹⁵ centi c 10⁻²
tera T 10¹² milli m 10⁻³
giga G 10⁹ micro μ 10⁻⁶
mega M 10⁶ nano n 10⁻⁹
kilo k 10³ pico p 10⁻¹²
hecto h 10² femto f 10⁻¹⁵
deca da 10 atto a 10⁻¹⁸
For a more complete table, see the NIST page on SI prefixes
Non-SI Units
There is a category of units that are “honorary” members of the SI in the sense that it is acceptable to use them along with the base units defined above. These include such mundane units as the hour, minute, and degree (of angle), etc., but the three shown here are of particular interest to chemistry, and you will need to know them.
• liter ($L$) $1\, L = 1\, dm^3 = 10^{–3} m^3 \nonumber$
• metric ton ($t$) $1\, t = 10^3 kg \nonumber$
• atomic mass unit ($u$) $1\, u = 1.66054×10^{–27}\, kg \nonumber$
Most of the physical quantities we actually deal with in science and also in our daily lives, have units of their own: volume, pressure, energy and electrical resistance are only a few of hundreds of possible examples. It is important to understand, however, that all of these can be expressed in terms of the SI base units; they are consequently known as derived units.
In fact, most physical quantities can be expressed in terms of one or more of the following five fundamental units:
mass (M), length (L), time (T), electric charge (Q), and temperature (Θ, theta).
Consider, for example, the unit of volume, which we denote as V. To measure the volume of a rectangular box, we need to multiply the lengths as measured along the three coordinates:
$V = x · y · z$
We say, therefore, that volume has the dimensions of length-cubed:
$dim.V = L^3$
Thus the units of volume will be m³ (in the SI) or cm³, ft³ (English), etc. Moreover, any formula that calculates a volume must contain within it the $L^3$ dimension; thus the volume of a sphere is $\frac{4}{3}\pi r^3$.
Example $1$: Energy Units
Find the dimensions of energy.
Solution
When mechanical work is performed on a body, its energy increases by the amount of work done, so the two quantities are equivalent and we can concentrate on work. The latter is the product of the force applied to the object and the distance it is displaced. From Newton’s law, force is the product of mass and acceleration, and the latter is the rate of change of velocity, typically expressed in meters per second per second. Combining these quantities and their dimensions yields $\dim E = (M\,L\,T^{-2})(L) = M\,L^2\,T^{-2}$, corresponding to the SI unit kg m² s⁻², the joule.
Units and Their Ranges in Chemistry
In this section, we will look at some of the quantities that are widely encountered in Chemistry, and at the units in which they are commonly expressed. In doing so, we will also consider the actual range of values these quantities can assume, both in nature in general, and also within the subset of nature that chemistry normally addresses. In looking over the various units of measure, it is interesting to note that their unit values are set close to those encountered in everyday human experience.
Mass and Weight
These two quantities are widely confused. Although they are often used synonymously in informal speech and writing, they have different dimensions: weight is the force exerted on a mass by the local gravitational field:
$f = m a = m g$
where g is the acceleration of gravity. While the nominal value of the latter quantity is 9.80 m s–2 at the Earth’s surface, its exact value varies locally. Because it is a force, the SI unit of weight is properly the newton, but it is common practice (except in physics classes!) to use the terms "weight" and "mass" interchangeably, so the units kilograms and grams are acceptable in almost all ordinary laboratory contexts.
The range of masses spans 90 orders of magnitude, more than any other unit. The range that chemistry ordinarily deals with has greatly expanded since the days when a microgram was an almost inconceivably small amount of material to handle in the laboratory; this lower limit has now fallen to the atomic level with the development of tools for directly manipulating these particles. The upper level reflects the largest masses that are handled in industrial operations, but in the recently developed fields of geochemistry and environmental chemistry, the range can be extended indefinitely. Flows of elements between the various regions of the environment (atmosphere to oceans, for example) are often quoted in teragrams.
Length
Chemists tend to work mostly in the moderately-small part of the distance range. Those who live in the lilliputian world of crystal- and molecular structures and atomic radii find the picometer a convenient currency, but one still sees the older non-SI unit called the Ångstrom used in this context; 1 Å = 10⁻¹⁰ m = 100 pm. Nanotechnology, the rage of the present era, also resides in this realm. The largest polymeric molecules and colloids define the top end of the particulate range; beyond that, in the normal world of doing things in the lab, the centimeter and occasionally the millimeter commonly rule.
Time
For humans, time moves by the heartbeat; beyond that, it is the motions of our planet that count out the hours, days, and years that eventually define our lifetimes. Beyond the few thousands of years of history behind us, those years-to-the-powers-of-tens that are the fare for such fields as evolutionary biology, geology, and cosmology, cease to convey any real meaning for us. Perhaps this is why so many people are not very inclined to accept their validity.
Most of what actually takes place in the chemist’s test tube operates on a far shorter time scale, although there is no limit to how slow a reaction can be; the upper limits of those we can directly study in the lab are in part determined by how long a graduate student can wait around before moving on to gainful employment. Looking at the microscopic world of atoms and molecules themselves, the time scale again shifts us into an unreal world where numbers tend to lose their meaning. You can gain some appreciation of the duration of a nanosecond by noting that this is about how long it takes a beam of light to travel between your two outstretched hands. In a sense, the material foundations of chemistry itself are defined by time: neither a new element nor a molecule can be recognized as such unless it lasts long enough to have its “picture” taken through measurement of its distinguishing properties.
Temperature
Temperature, the measure of thermal intensity, spans the narrowest range of any of the base units of the chemist’s measure. The reason for this is tied into temperature’s meaning as a measure of the intensity of thermal kinetic energy. Chemical change occurs when atoms are jostled into new arrangements, and the weakness of these motions brings most chemistry to a halt as absolute zero is approached. At the upper end of the scale, thermal motions become sufficiently vigorous to shake molecules into atoms, and eventually, as in stars, strip off the electrons, leaving an essentially reaction-less gaseous fluid, or plasma, of bare nuclei (ions) and electrons.
We all know that temperature is expressed in degrees. What we frequently forget is that the degree is really an increment of temperature, a fixed fraction of the distance between two defined reference points on a temperature scale.
Pressure
Pressure is the measure of the force exerted on a unit area of surface. Its SI units are therefore newtons per square meter, but we make such frequent use of pressure that a derived SI unit, the pascal, is commonly used:
1 Pa = 1 N m–2
The concept of pressure first developed in connection with studies relating to the atmosphere and vacuum that were first carried out in the 17th century. The molecules of a gas are in a state of constant thermal motion, moving in straight lines until experiencing a collision that exchanges momentum between pairs of molecules and sends them bouncing off in other directions. This leads to a completely random distribution of the molecular velocities both in speed and direction— or it would in the absence of the Earth’s gravitational field which exerts a tiny downward force on each molecule, giving motions in that direction a very slight advantage. In an ordinary container this effect is too small to be noticeable, but in a very tall column of air the effect adds up: the molecules in each vertical layer experience more downward-directed hits from those above it. The resulting force is quickly randomized, resulting in an increased pressure in that layer which is then propagated downward into the layers below.
At sea level, the total mass of the sea of air pressing down on each 1 cm² of surface is about 1034 g; equivalently, about 10,340 kg presses down on each square meter. The force (weight) that the Earth’s gravitational acceleration g exerts on this mass is
f = ma = mg = (10,340 kg)(9.81 m s–2) = 1.013 × 10⁵ kg m s–2 = 1.013 × 10⁵ newtons
resulting in a pressure of
$1.013 \times 10^5\, N\, m^{-2} = 1.013 \times 10^5\, Pa. \nonumber$
The actual pressure at sea level varies with atmospheric conditions, so it is customary to define standard atmospheric pressure as 1 atm = 1.013 × 10⁵ Pa, or about 101 kPa. Although the standard atmosphere is not an SI unit, it is still widely employed. In meteorology, the bar, exactly 1.000 × 10⁵ Pa = 0.987 atm, is often used.
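These unit relationships are easy to check numerically. Below is a minimal Python sketch; the 10,340 kg figure is the air-column mass quoted above, and the conversion factors are standard values:

```python
# Sea-level atmospheric pressure from the mass of air above one square meter
m_air = 10340          # kg of air above each square meter of surface
g = 9.81               # gravitational acceleration, m s^-2

P = m_air * g          # force per square meter = pressure in pascals
print(f"P = {P:.4g} Pa")                    # ~1.014e5 Pa

# Express the same pressure in other common units
print(f"P = {P / 1.01325e5:.3f} atm")       # 1 standard atmosphere = 1.01325e5 Pa
print(f"P = {P / 1e5:.3f} bar")             # 1 bar = 1.000e5 Pa exactly
print(f"P = {P / 1.01325e5 * 760:.0f} torr")
```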
In the mid-17th century, the Italian physicist and mathematician Evangelista Torricelli invented a device to measure atmospheric pressure. The Torricellian barometer consists of a vertical glass tube closed at the top and open at the bottom. It is filled with a liquid, traditionally mercury, and is then inverted, with its open end immersed in a container of the same liquid. The liquid level in the tube will fall under its own weight until the downward force is balanced by the vertical force transmitted hydrostatically up the column by the weight of the atmosphere pressing on the liquid surface in the open container. Torricelli was also the first to recognize that the space above the mercury constituted a vacuum, and is credited with being the first to create a vacuum.
One standard atmosphere will support a column of mercury that is 76 cm high, so the “millimeter of mercury”, now more commonly known as the torr, has long been a common pressure unit in the sciences: 1 atm = 760 torr.
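The 76-cm figure follows from balancing the atmospheric pressure against the hydrostatic pressure ρgh of the liquid column. A quick sketch, taking the density of mercury as roughly 13,600 kg m⁻³:

```python
# Height of a liquid column supported by one standard atmosphere: P = rho * g * h
P = 1.01325e5          # Pa
g = 9.81               # m s^-2
rho_mercury = 13600    # kg m^-3 (approximate)

h = P / (rho_mercury * g)
print(f"mercury column: {100 * h:.1f} cm")   # ~76 cm, i.e. 760 torr

# The same balance shows why a water barometer would need a ~10 m tube:
rho_water = 1000       # kg m^-3
print(f"water column: {P / (rho_water * g):.1f} m")
```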
Learning Objectives
Make sure you thoroughly understand the following essential ideas which have been presented above. It is especially important that you know the precise meanings of all the highlighted terms in the context of this topic.
• Give an example of a measured numerical value, and explain what distinguishes it from a "pure" number.
• Give examples of random and systematic errors in measurements.
• Find the mean value of a series of similar measurements.
• State the principal factors that affect the difference between the mean value of a series of measurements, and the "true value" of the quantity being measured.
• Calculate the absolute and relative precisions of a given measurement, and explain why the latter is generally more useful.
• Distinguish between the accuracy and the precision of a measured value, and explain the roles that random and systematic errors play in each.
The exact distance between the upper lip and the tip of the dorsal fin will forever be hidden in a fog of uncertainty. The angle at which we hold the calipers and the force with which we close them on the object will never be exactly reproducible. A more fundamental limitation occurs whenever we try to compare a continuously-varying quantity such as distance with the fixed intervals on a measuring scale; between 59 and 60 mils there is the same infinity of distances that exists between 59 and 60 miles!
Image by Stephen Winsor; used with permission of the artist.
The "true value" of a measured quantity, if it exists at all, will always elude us; the best we can do is learn how to make meaningful use (and to avoid mis-use!) of the numbers we read off of our measuring devices.
Uncertainty is Certain!
In science, there are numbers and there are "numbers". What we ordinarily think of as a "number" and will refer to here as a pure number is just that: an expression of a precise value. The first of these you ever learned were the counting numbers, or integers; later on, you were introduced to the decimal numbers, and eventually to quantities such as 1/3 and π (pi) that cannot be expressed as exact, finite decimal values.
The other kind of numeric quantity that we encounter in the natural sciences is a measured value of something– the length or weight of an object, the volume of a fluid, or perhaps the reading on an instrument. Although we express these values numerically, it would be a mistake to regard them as the kind of pure numbers described above.
Confusing? Suppose our instrument has an indicator such as you see here. The pointer moves up and down so as to display the measured value on this scale. What number would you write in your notebook when recording this measurement? Clearly, the value is somewhere between 130 and 140 on the scale, but the graduations enable us to be more exact and place the value between 134 and 135. The indicator points more closely to the latter value, and we can go one more step by estimating the value as perhaps 134.8, so this is the value you would report for this measurement.
Now here’s the important thing to understand: although “134.8” is itself a number, the quantity we are measuring is almost certainly not exactly 134.8. The reason is obvious if you note that the instrument scale is such that we are barely able to distinguish between 134.7, 134.8, and 134.9. In reporting the value 134.8 we are effectively saying that the value is probably somewhere within the range 134.75 to 134.85. In other words, there is an uncertainty of ±0.05 unit in our measurement.
All measurements of quantities that can assume a continuous range of values (lengths, masses, volumes, etc.) consist of two parts: the reported value itself (never an exactly known number), and the uncertainty associated with the measurement.
Errors in Measurements
All measurements are subject to error which contributes to the uncertainty of the result. By “error”, we do not mean just outright mistakes, such as incorrect use of an instrument or failure to read a scale properly; although such gross errors do sometimes happen, they usually yield results that are sufficiently unexpected to call attention to themselves.
When you measure a volume or weight, you observe a reading on a scale of some kind. Scales, by their very nature, are limited to fixed increments of value, indicated by the division marks. The actual quantities we are measuring, in contrast, can vary continuously, so there is an inherent limitation in how finely we can discriminate between two values that fall between the marked divisions of the measuring scale. The same problem remains if we substitute an instrument with a digital display; there will always be some point at which some value that lies between the two smallest divisions must arbitrarily toggle between two numbers on the readout display. This introduces an element of randomness into the value we observe, even if the "true" value remains unchanged.
The more sensitive the measuring instrument, the less likely it is that two successive measurements of the same sample will yield identical results. In the example we discussed above, distinguishing between the values 134.8 and 134.9 may be too difficult to do in a consistent way, so two independent observers may record different values even when viewing the same reading. Each measurement is also influenced by a myriad of minor events, such as building vibrations, electrical fluctuations, motions of the air, and friction in any moving parts of the instrument. These tiny influences constitute a kind of "noise" that also has a random character. Whether we are conscious of it or not, all measured values contain an element of random error.
Suppose that you weigh yourself on a bathroom scale, not noticing that the dial reads “1.5 kg” even before you have placed your weight on it. Similarly, you might use an old ruler with a worn-down end to measure the length of a piece of wood. In both of these examples, all subsequent measurements, either of the same object or of different ones, will be off by a constant amount. Unlike random error, which is impossible to eliminate, these systematic errors are usually quite easy to avoid or compensate for, but only by a conscious effort in the conduct of the observation, usually by proper zeroing and calibration of the measuring instrument. However, once systematic error has found its way into the data, it can be very hard to detect.
The Difference Between Accuracy and Precision
We tend to use these two terms interchangeably in our ordinary conversation, but in the context of scientific measurement, they have very different meanings:
• Accuracy refers to how closely the measured value of a quantity corresponds to its “true” value.
• Precision expresses the degree of reproducibility, or agreement between repeated measurements.
Accuracy, of course, is the goal we strive for in scientific measurements. Unfortunately, however, there is no obvious way of knowing how closely we have achieved it; the “true” value, whether it be of a well-defined quantity such as the mass of a particular object, or an average that pertains to a collection of objects, can never be known– and thus we can never recognize it if we are fortunate enough to find it.
Note carefully that when we make real measurements, there is no dart board or target that enables one to immediately judge the quality of the result; if we make only a few observations, we cannot distinguish between scenarios such as these simply by examining the values themselves. We can, however, judge the precision of the results, and then apply simple statistics to estimate how closely the mean value is likely to reflect the true value in the absence of systematic error.
More than One Answer: Replicate Measurements
If you wish to measure your height to the nearest centimeter or inch, or the volume of a liquid cooking ingredient to the nearest “cup”, you can probably do so without having to worry about random error. The error will still be present, but its magnitude will be such a small fraction of the value that it will not be detected. Thus random error is not something we worry about too much in our daily lives.
If we are making scientific observations, however, we need to be more careful, particularly if we are trying to exploit the full sensitivity of our measuring instruments in order to achieve a result that is as reliable as possible. If we are measuring a directly observable quantity such as the weight or volume of an object, then a single measurement, carefully done and reported to a precision that is consistent with that of the measuring instrument, will usually be sufficient.
More commonly, however, we are called upon to find the value of some quantity whose determination depends on several other measured values, each of which is subject to its own sources of error. Consider a common laboratory experiment in which you must determine the percentage of acid in a sample of vinegar by observing the volume of sodium hydroxide solution required to neutralize a given volume of the vinegar. You carry out the experiment and obtain a value. Just to be on the safe side, you repeat the procedure on another identical sample from the same bottle of vinegar. If you have actually done this in the laboratory, you will know it is highly unlikely that the second trial will yield the same result as the first. In fact, if you run a number of replicate (that is, identical in every way) determinations, you will probably obtain a scatter of results.
To understand why, consider all the individual measurements that go into each determination; the volume of the vinegar sample, your judgment of the point at which the vinegar is neutralized, and the volume of solution used to reach this point. And how accurately do you know the concentration of the sodium hydroxide solution, which was made up by dissolving a measured weight of the solid in water and then adding more water until the solution reaches some measured volume. Each of these many observations is subject to random error; because such errors are random, they can occasionally cancel out, but for most trials we will not be so lucky– hence the scatter in the results.
A similar difficulty arises when we need to determine some quantity that describes a collection of objects. For example, a pharmaceutical researcher will need to determine the time required for half of a standard dose of a certain drug to be eliminated by the body, or a manufacturer of light bulbs might want to know how many hours a certain type of light bulb will operate before it burns out. In these cases a value for any individual sample can be determined easily enough, but since no two samples (patients or light bulbs) are identical, we are compelled to repeat the same measurement on multiple samples, and once again, are faced with a scattering of results.
As a final example, suppose that you wish to determine the diameter of a certain type of coin. You make one measurement and record the results. If you then make a similar measurement along a different cross-section of the coin, you will likely get a different result. The same thing will happen if you make successive measurements on other coins of the same kind.
Here we are faced with two kinds of problems. First, there is the inherent limitation of the measuring device: we can never reliably measure more finely than the marked divisions on the ruler. Secondly, we cannot assume that the coin is perfectly circular; careful inspection will likely reveal some distortion resulting from a slight imperfection in the manufacturing process. In these cases, it turns out that there is no single, true value of either quantity we are trying to measure.
Mean, Median, and Range of a Series of Observations
There are a variety of ways to express the average, or central tendency of a series of measurements, with mean (more precisely, arithmetic mean) being most commonly employed. Our ordinary use of the term "average" also refers to the mean. When we obtain more than one result for a given measurement (either made repeatedly on a single sample, or more commonly, on different samples), the simplest procedure is to report the mean, or average value. The mean is defined mathematically as the sum of the values, divided by the number of measurements:
$x_m = \dfrac{\displaystyle \sum_i x_i}{n} \label{mean}$
If you are not familiar with this notation, don’t let it scare you! Take a moment to see how it expresses the previous sentence; if there are $n$ measurements, each yielding a value $x_i$, then we sum over all $i$ and divide by $n$ to get the mean value $x_m$. For example, if there are only two measurements, $x_1$ and $x_2$, then the mean is $(x_1 + x_2)/2$.
Example $1$
Calculate the mean value of the set of eight measurements illustrated here.
Solution
There are eight data points (10.4 was found in three trials, 10.5 in two), so $n=8$. The mean is (via Equation \ref{mean}):
$\dfrac{10.2+10.3+(3 \times 10.4) + 10.5+10.5+10.8}{8} = 10.4. \nonumber$
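A quick check of this arithmetic, assuming the eight data values listed in the solution:

```python
# Mean of the eight measurements in the example above
data = [10.2, 10.3, 10.4, 10.4, 10.4, 10.5, 10.5, 10.8]
mean = sum(data) / len(data)
print(mean)    # 10.4375, which rounds to 10.4
```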
Range
The range of a data set is the difference between its smallest and largest values. As such, its value reflects the precision of the result. For example, the following data sets have the same average, but the one having the smaller range is clearly more precise.
Median
If you arrange the list of measured values in order of their magnitude, the median is the one that has as many values above it as below it.
For an odd number of values n, the median is the [(n+1)/2]th member of the set. Thus for [22 23 23 24 27], (n+1)/2 = 3, so the third value, 23, is the median.
For an even number of values, the median is the mean of the two middle members. Thus for [22 23 23 24 26 28], the median is (23+24)/2 = 23.5.
Mode
This refers to the value that is observed most frequently in a series of measurements. If two or more values tie for the highest frequency, then there can be multiple modes. Mode is most useful in describing larger data sets.
Example: for the data set [22 23 23 24 26 26] the modes are 23 and 26.
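Python's standard library computes all three measures of central tendency directly; this short sketch reproduces the examples above (statistics.multimode requires Python 3.8 or later):

```python
import statistics

# Median: middle member for an odd count, mean of the middle pair for an even count
print(statistics.median([22, 23, 23, 24, 27]))         # 23
print(statistics.median([22, 23, 23, 24, 26, 28]))     # 23.5

# Mode: multimode() returns every value tied for the highest frequency
print(statistics.multimode([22, 23, 23, 24, 26, 26]))  # [23, 26]
```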
The more observations, the more reliable the mean value. If this is not immediately obvious, think about it this way. You would not want to predict the outcome of the next election on the basis of interviews with only two or three voters; you would want a sample of ten to twenty at a minimum, and if the election is an important national one, a fair sample would require hundreds to thousands of people distributed over the entire geographic area and representing a variety of socio-economic groups. Similarly, you would want to test a large number of light bulbs in order to estimate the mean lifetime of bulbs of that type.
Statistical theory tells us that the more samples we have, the greater will be the chance that the mean of the results will correspond to the “true” value, which in this case would be the mean obtained if samples could be taken from the entire population (of people or of light bulbs.)
This point can be better appreciated by examining the two sets of data shown here. The set on the left consists of only three points (shown in orange), and gives a mean that is quite far removed from the "true" value, which is arbitrarily chosen for this example.
In the data set on the right, composed of nine measurements, the deviation of the mean from the true value is much smaller.
Deviation of the mean from the "true value" becomes smaller when more measurements are made.
Plots and points
A similar problem arises when you try to fit a curve to a series of plotted points. Suppose, for example, that curve 1 (red) represents the true relationship between the quantities indicated on the y-axis (dependent variable) and those on the x-axis (independent variable). This curve is derived from the seven points indicated on the plot.
Contrast this curve with the false straight-line relationships that might be obtained if only four or three points had been recorded.
Absolute and Relative Uncertainty
If you weigh out 74.1 mg of a solid sample on a laboratory balance that is accurate to within 0.1 milligram, then the actual weight of the sample is likely to fall somewhere in the range of 74.0 to 74.2 mg; the absolute uncertainty in the weight you observe is 0.2 mg, or ±0.1 mg. If you use the same balance to weigh out 3.2914 g of another sample, the actual weight is between 3.2913 g and 3.2915 g, and the absolute uncertainty is still ±0.1 mg. Thus the absolute uncertainty is unrelated to the magnitude of the observed value.
When expressing the uncertainty of a value given in scientific notation, the exponential part should include both the value itself and the uncertainty. An example of the proper form would be (3.19 ± 0.02) × 10⁴ m.
Although the absolute uncertainties in these two examples are identical, we would probably consider the second measurement to be more precise because the uncertainty is a smaller fraction of the measured value. A quantity calculated in this way is known as the relative uncertainty.
Example $2$
Calculate the relative uncertainties corresponding to the following measured values and absolute uncertainties:
1. 74.1 ± 0.1 mg,
2. 3.2914 g ± 0.1 mg.
Solution
1. $\dfrac{0.2\, mg}{74.1\, mg} = 0.0027\, \text{or} \, 0.003 \nonumber$ (note that the quotient is dimensionless); this can be expressed as 0.3%, or 3 parts per thousand.
2. $\dfrac{0.0002 \,g}{3.2914\, g} = 6.1 \times 10^{-5} \, \text{or roughly} \,6 \times 10^{-5} \nonumber$, which we can express as $6 \times 10^{-3}\%$ (0.006 parts per hundred), or about 60 parts per million (PPM).
Relative uncertainties are widely used to express the reliability of measurements, even those for a single observation, in which case the uncertainty is that of the measuring device. Relative uncertainties can be expressed as parts per hundred (percent), per thousand (PPT), per million, (PPM), and so on.
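A one-line helper makes these conversions routine; the hypothetical function below is applied to the two cases of the example above:

```python
def relative_uncertainty(value, absolute):
    """Return the relative uncertainty as a dimensionless fraction."""
    return absolute / value

# 74.1 mg known to within a 0.2 mg range (i.e. +/- 0.1 mg)
r1 = relative_uncertainty(74.1, 0.2)
print(f"{r1:.4f}  =  {100 * r1:.1f}%  =  {1000 * r1:.0f} parts per thousand")

# 3.2914 g known to within a 0.0002 g range
r2 = relative_uncertainty(3.2914, 0.0002)
print(f"{r2:.1e}  =  {1e6 * r2:.0f} PPM")
```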
Propagation of Error
We are often called upon to find the value of some quantity whose determination depends on several other measured values, each of which is subject to its own sources of error.
The vinegar-titration experiment described earlier is a typical case: the volume of the vinegar sample, the judgment of the endpoint, the volume of titrant delivered, and the concentration of the sodium hydroxide solution itself are all separate measurements, each contributing its own random error to the final result.
Rules for estimating errors in calculated results
Suppose you measure the mass and volume of a sample, and are required to calculate its density by dividing one quantity by the other:
$d = m / V. \nonumber$
Both components of this quotient have uncertainties associated with them, and you wish to attach an uncertainty to the calculated density. The general problem of determining the uncertainty of a calculated result turns out to be rather more complicated than you might think, and will not be treated here. There are, however, some very simple rules that are sufficient for most practical purposes.
• Addition and subtraction, both numbers have uncertainties: The simplest method is to just add the absolute uncertainties.
• Multiplication or division, both numbers have uncertainties: Convert the absolute uncertainties into relative uncertainties, and add these. Or better, add their squares and take the square root of the sum.
• Multiplication or division by a pure number: Trivial case; multiply or divide the uncertainty by the pure number.
Example $3$: Addition and Subtraction of Numbers with Uncertainties
$(6.3 ± 0.05 \,cm) – (2.1 ± 0.05 \,cm) = 4.2 ± 0.10\,cm \nonumber$
However, this tends to over-estimate the uncertainty by assuming the worst possible case, in which the error in one of the quantities is at its maximum positive value while that of the other quantity is at its maximum negative value.
Statistical theory informs us that a more realistic value for the uncertainty of a sum or difference is to add the squares of each absolute uncertainty, and then take the square root of this sum. Applying this to the above values, we have
$\sqrt{(0.05)^2 + (0.05)^2} = 0.07 \nonumber$
so the result is 4.2 ± 0.07 cm.
Example $4$
Estimate the absolute error in the density calculated by dividing (12.7 ± 0.05 g) by (10.0 ± 0.02 mL).
Solution
Relative uncertainty of the mass:
$\dfrac{0.05}{12.7} = 0.0039 = 0.39\% \nonumber$
Relative uncertainty of the volume:
$\dfrac{0.02}{10.0} = 0.002 = 0.2\% \nonumber$
Relative uncertainty of the density:
$\sqrt{ (0.39)^2 + (0.2)^2} = 0.44 \% \nonumber$
Mass ÷ volume:
$(12.7\, g) ÷ (10.0 \,mL) = 1.27 \,g \,mL^{–1} \nonumber$
Absolute uncertainty of the density:
$(± 0.0044) \times (1.27 \,g \,mL^{–1}) = ±0.006\, g\, mL^{–1} \nonumber$
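These steps are easy to automate. A minimal sketch of the quadrature rule for a quotient, applied to the numbers of this example (the function name is illustrative):

```python
import math

def quotient_uncertainty(a, da, b, db):
    """Value and absolute uncertainty of a/b, combining relative
    uncertainties in quadrature."""
    q = a / b
    rel = math.sqrt((da / a) ** 2 + (db / b) ** 2)
    return q, q * rel

d, dd = quotient_uncertainty(12.7, 0.05, 10.0, 0.02)
print(f"density = {d:.2f} +/- {dd:.3f} g/mL")   # 1.27 +/- 0.006 g/mL
```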
Learning Objectives
Make sure you thoroughly understand the following essential ideas which have been presented above. It is especially important that you know the precise meanings of all the highlighted terms in the context of this topic.
• Give an example of a measurement whose number of significant digits is clearly too great, and explain why.
• State the purpose of rounding off, and describe the information that must be known to do it properly.
• Round off a number to a specified number of significant digits.
• Explain how to round off a number whose second-most-significant digit is 9.
• Carry out a simple calculation that involves two or more observed quantities, and express the result in the appropriate number of significant figures.
The numerical values we deal with in science (and in many other aspects of life) represent measurements whose values are never known exactly. Our pocket-calculators or computers don't know this; they treat the numbers we punch into them as "pure" mathematical entities, with the result that the operations of arithmetic frequently yield answers that are physically ridiculous even though mathematically correct. The purpose of this unit is to help you understand why this happens, and show you what to do about it.
Digits: significant and otherwise
• "The population of our city is 157,872."
• "The number of registered voters as of Jan 1 was 27,833."
Consider the two statements shown above. Which of these would you be justified in dismissing immediately? Certainly not the second one, because it probably comes from a database which contains one record for each voter, so the number is found simply by counting the number of records.
The first statement cannot possibly be correct. Even if a city’s population could be defined in a precise way (Permanent residents? Warm bodies?), how can we account for the minute-by minute changes that occur as people are born and die, or move in and move away?
What is the difference between the two population numbers stated above? The first one expresses a quantity that cannot be known exactly– that is, it carries with it a degree of uncertainty. It is quite possible that the last census yielded precisely 157,872 records, and that this might be the “population of the city” for legal purposes, but it is surely not the “true” population. To better reflect this fact, one might list the population (in an atlas, for example) as 157,900 or even 158,000. These two quantities have been rounded off to four and three significant figures, respectively, and they have the following meanings:
• 157900 (four significant digits) implies that the population is believed to be within the range of about 157850 to about 157950. In other words, the population is 157900±50. The “plus-or-minus 50” appended to this number means that we consider the absolute uncertainty of the population measurement to be 50 – (–50) = 100. We can also say that the relative uncertainty is 100/157900, which we can also express as 1 part in 1579, or 1/1579 = 0.000633, or about 0.06 percent.
• The value 158000 implies that the population is likely between about 157500 and 158500, or 158000±500. The absolute uncertainty of 1000 translates into a relative uncertainty of 1000/158000 or 1 part in 158, or about 0.6 percent.
Which of these two values we would report as “the population” will depend on the degree of confidence we have in the original census figure; if the census was completed last week, we might round to four significant digits, but if it was a year or so ago, rounding to three places might be a more prudent choice. In a case such as this, there is no really objective way of choosing between the two alternatives.
This illustrates an important point: the concept of significant digits has less to do with mathematics than with our confidence in a measurement. This confidence can often be expressed numerically (for example, the height of a liquid in a measuring tube can be read to ±0.05 cm), but when it cannot, as in our population example, we must depend on our personal experience and judgment.
So, what is a significant digit? According to the usual definition, it is all the numerals in a measured quantity (counting from the left) whose values are considered as known exactly, plus one more whose value could be one more or one less:
• In “157900” (four significant digits), the leftmost three digits are known exactly, but the fourth digit, “9” could well be “8” if the “true value” is within the implied range of 157850 to 157950.
• In “158000” (three significant digits), the leftmost two digits are known exactly, while the third digit could be either “7” or “8” if the true value is within the implied range of 157500 to 158500.
Although rounding off always leads to the loss of numeric information, what we are getting rid of can be considered to be “numeric noise” that does not contribute to the quality of the measurement.
The purpose in rounding off is to avoid expressing a value to a greater degree of precision than is consistent with the uncertainty in the measurement.
Implied Uncertainty and Round-off error
If you know that a balance is accurate to within 0.1 mg, say, then the uncertainty in any measurement of mass carried out on this balance will be ±0.1 mg. Suppose, however, that you are simply told that an object has a length of 0.42 cm, with no indication of its precision. In this case, all you have to go on is the number of digits contained in the data. Thus the quantity “0.42 cm” is specified to 0.01 unit in 0.42, or one part in 42. The implied relative uncertainty in this figure is 1/42, or about 2%. The precision of any numeric answer calculated from this value is therefore limited to about the same amount.
It is important to understand that the number of significant digits in a value provides only a rough indication of its precision, and that information is lost when rounding off occurs.
Suppose, for example, that we measure the weight of an object as 3.28 g on a balance believed to be accurate to within ±0.05 gram. The resulting value of 3.28±.05 gram tells us that the true weight of the object could be anywhere between 3.23 g and 3.33 g. The absolute uncertainty here is 0.1 g (±0.05 g), and the relative uncertainty is 1 part in 32.8, or about 3 percent.
How many significant digits should there be in the reported measurement? Since only the leftmost “3” in “3.28” is certain, you would probably elect to round the value to 3.3 g. So far, so good. But what is someone else supposed to make of this figure when they see it in your report? The value “3.3 g” suggests an implied uncertainty of 3.3±0.05 g, meaning that the true value is likely between 3.25 g and 3.35 g. This range sits 0.02 g above that associated with the original measurement, and so rounding off has introduced a bias of this amount into the result. Since this is less than half of the ±0.05 g uncertainty in the weighing, it is not a very serious matter in itself. However, if several values that were rounded in this way are combined in a calculation, the rounding-off errors could become significant.
The standard rules for rounding off are well known. Before we set them out, let us agree on what to call the various components of a numeric value.
• The most significant digit is the leftmost digit (not counting any leading zeros which function only as placeholders and are never significant digits.)
• If you are rounding off to n significant digits, then the least significant digit is the nth digit from the most significant digit. The least significant digit can be a zero.
• The first non-significant digit is the n+1th digit.
• If the first non-significant digit is less than 5, then the least significant digit remains unchanged.
• If the first non-significant digit is greater than 5, the least significant digit is incremented by 1.
• If the first non-significant digit is 5, the least significant digit can either be incremented or left unchanged (see below!)
• All non-significant digits are removed.
Students are sometimes told to increment the least significant digit by 1 if it is odd, and to leave it unchanged if it is even. One wonders if this reflects some idea that even numbers are somehow “better” than odd ones! (The ancient superstition is just the opposite, that only the odd numbers are "lucky".)
In fact, you could do it equally the other way around, incrementing only the even numbers. If you are only rounding a single number, it doesn’t really matter what you do. However, when you are rounding a series of numbers that will be used in a calculation, if you treated each first-nonsignificant 5 in the same way, you would be over- or underestimating the value of the rounded number, thus accumulating round-off error. Since there are equal numbers of even and odd digits, incrementing only the one kind will keep this kind of error from building up.
You could do just as well, of course, by flipping a coin!
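This "round-half-to-even" convention is in fact built into many computing environments. Python's built-in round() behaves this way, for example:

```python
# Python's round() uses the round-half-to-even ("banker's") rule, so a long
# series of first-nonsignificant 5s does not accumulate a systematic bias.
print(round(2.5))      # 2  (tie goes to the even neighbor)
print(round(3.5))      # 4
print(round(0.25, 1))  # 0.2
# Caveat: many decimals (e.g. 0.35) are not stored exactly in binary, so the
# "tie" the rule sees may not be the tie you typed.
print(round(0.35, 1))  # 0.3
```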
Table: Examples of rounding-off
number to round / no. of sig. digits → result (comment)
34.216 / 3 → 34.2 (first non-significant digit, 1, is less than 5, so the number is simply truncated)
2.252 / 2 → 2.2 or 2.3 (first non-significant digit is 5, so the least significant digit can either remain unchanged or be incremented)
39.99 / 3 → 40.0 (crossing a “decimal boundary”, so all digits change)
85,381 / 3 → 85,400 (the two zeros are just placeholders)
0.04597 / 3 → 0.0460 (the two leading zeros are not significant digits)
Rounding Up The Nines
Suppose that an object is found to have a weight of 3.98 ± 0.05 g. This would place its true weight somewhere in the range of 3.93 g to 4.03 g. In judging how to round this number, you count the number of digits in “3.98” that are known exactly, and you find none! Since the “4” is the leftmost digit whose value is uncertain, this would imply that the result should be rounded to one significant figure and reported simply as 4 g. An alternative would be to bend the rule and round off to two significant digits, yielding 4.0 g. How can you decide what to do?
In a case such as this, you should look at the implied uncertainties in the two values, and compare them with the uncertainty associated with the original measurement.
rounded value | implied max | implied min | absolute uncertainty | relative uncertainty
3.98 g | 3.985 g | 3.975 g | ±.005 g or 0.01 g | 1 in 400, or 0.25%
4 g | 4.5 g | 3.5 g | ±.5 g or 1 g | 1 in 4, or 25%
4.0 g | 4.05 g | 3.95 g | ±.05 g or 0.1 g | 1 in 40, or 2.5%
Clearly, rounding off to two digits is the only reasonable course in this example.
The same kind of thing could happen if the original measurement was 9.98 ± 0.05 g. Again, the true value is believed to be in the range of 9.93 g to 10.03 g. The fact that no digit is certain here is an artifact of decimal notation. The absolute uncertainty in the observed value is 0.1 g, so the value itself is known to about 1 part in 100, or 1%. Rounding this value to three digits yields 10.0 g with an implied uncertainty of ±.05 g, or 1 part in 100, consistent with the uncertainty in the observed value.
Observed values should be rounded off to the number of digits that most accurately conveys the uncertainty in the measurement.
• Usually, this means rounding off to the number of significant digits in the quantity; that is, the number of digits (counting from the left) that are known exactly, plus one more (a code sketch of this rule appears after this list).
• When this cannot be applied (as in the example above, when addition or subtraction of the absolute uncertainty bridges a power of ten), then we round in such a way that the relative implied uncertainty in the result is as close as possible to that of the observed value.
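A minimal sketch of the basic rule, rounding a value to n significant digits; round_sig is a hypothetical helper, and ties are resolved by Python's half-to-even convention discussed earlier:

```python
import math

def round_sig(x, n):
    """Round x to n significant digits."""
    if x == 0:
        return 0
    # Position of the most significant digit relative to the decimal point
    exponent = math.floor(math.log10(abs(x)))
    return round(x, n - 1 - exponent)

print(round_sig(34.216, 3))   # 34.2
print(round_sig(0.04597, 3))  # 0.046 (the trailing zero is not displayed)
print(round_sig(85381, 3))    # 85400
print(round_sig(39.99, 3))    # 40.0  (crossing a decimal boundary)
```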
Rounding off the Results of Calculations
When carrying out calculations that involve multiple steps, you should avoid doing any rounding until you obtain the final result. In science, we frequently need to carry out calculations on measured values. For example, you might use your pocket calculator to work out the area of a rectangle:
Your calculator is of course correct as far as the pure numbers go, but you would be wrong to write down 1.57676 cm² as the answer. Two possible options for rounding off the calculator answer are shown below:
rounded value | precision
1.58 | 1 part in 158, or 0.6%
1.6 | 1 part in 16, or 6%
It is clear that neither option is entirely satisfactory; rounding to 3 significant digits leaves the answer too precisely specified, whereas following the rule and rounding to 2 digits has the effect of throwing away some precision. In this case, it could be argued that rounding to three digits is justified because the implied relative uncertainty in the answer, 0.6%, is more consistent with those of the two factors.
The above example is intended to point out that the rounding-off rules, although convenient to apply, do not always yield the most desirable result. When in doubt, it is better to rely on relative implied uncertainties.
Addition and Subtraction
When adding or subtracting, we go by the number of decimal places rather than by the number of significant digits. Identify the quantity having the smallest number of decimal places, and use this number to set the number of decimal places in the answer.
Multiplication and Division
The result must contain the same number of significant figures as in the value having the least number of significant figures.
Logarithms and Antilogarithms
Express the base-10 logarithm of a value using the same number of significant figures as is present in the normalized form of that value. Similarly, for antilogarithms (numbers expressed as powers of 10), use the same number of significant figures as are in that power.
What does “normalized form” mean?
If a number is expressed in the form a × 10ᵇ (“scientific notation”) with the additional restriction that the coefficient a is no less than 1 and less than 10, the number is in its normalized form.
More Rounding Examples
The following examples will illustrate the most common problems you are likely to encounter in rounding off the results of calculations. They deserve your careful study!
rounded result | remarks
1.6 | Rounding to two significant figures yields an implied uncertainty of 1/16, or 6%, three times greater than that in the least-precisely known factor. This is a good illustration of how rounding can lead to the loss of information.
1.9E6 | The “3.1” factor is specified to 1 part in 31, or 3%. In the answer 1.9, the value is expressed to 1 part in 19, or 5%. These precisions are comparable, so the rounding-off rule has given us a reasonable result.
2810 mm (the height of a stack of 24 identical books, each 117 mm thick) | The “24” and the “1” are exact, so the only uncertain value is the thickness of each book, given to 3 significant digits. The trailing zero in the answer is only a placeholder.
10.4 | In addition or subtraction, look for the term having the smallest number of decimal places, and round off the answer to the same number of places.
23 cm | [see below]
The last of the examples shown above represents the very common operation of converting one unit into another. There is a certain amount of ambiguity here; if we take "9 in" to mean a distance in the range 8.5 to 9.5 in, then the uncertainty is ±0.5 in, which is 1 part in 18, or about ±6%. The relative uncertainty in the answer must be the same, since all the values are multiplied by the same factor, 2.54 cm/in. In this case we are justified in writing the answer to two significant digits, 23 cm, whose implied uncertainty of ±0.5 cm (about ±2%) comes closest to that of the original measurement; if we had used the answer "20 cm" (one significant digit), its implied uncertainty would be ±5 cm, or ±25%.
When the appropriate number of significant digits is in question, calculating the relative uncertainty can help you decide.
Learning Objectives
• Explain the distinction between the mean value of a series of measurements and the population mean.
• What quantity besides the mean value do we need in order to evaluate the quality of a series of measurements?
• Explain the meaning and significance of the dispersion of the mean, and state what factor controls it.
• Explain the distinction between determinate and indeterminate error.
• Describe the purpose and process of using a blank and a control value when making a series of measurements. What principal assumption must be made in doing this?
In this day of pervasive media, we are continually being bombarded with data of all kinds— public opinion polls, advertising hype, government reports and statements by politicians. Very frequently, the purveyors of this information are hoping to “sell” us on a product, an idea, or a way of thinking about someone or something, and in doing so, they are all too often willing to take advantage of the average person’s inability to make informed judgments about the reliability of the data, especially when it is presented in a particular context (popularly known as “spin”.) In Science, we do not have this option: we collect data and make measurements in order to get closer to whatever “truth” we are seeking, but it's not really "science" until others can have confidence in the reliability of our measurements.
Attributes of a measurement
The kinds of measurements we will deal with here are those in which a number of separate observations are made on individual samples taken from a larger population.
Population, when used in a statistical context, does not necessarily refer to people, but rather to the set of all members of the group of objects under consideration.
For example, you might wish to determine the amount of nicotine in a manufacturing run of one million cigarettes. Because no two cigarettes are likely to be exactly identical, and even if they were, random error would cause each analysis to yield a different result, the best you can do would be to test a representative sample of, say, twenty to one hundred cigarettes. You take the average (mean) of these values, and are then faced with the need to estimate how closely this sample mean is likely to approximate the population mean. The latter is the “true value” we can never know; what we can do, however, is make a reasonable estimate of the likelihood that the sample mean does not differ from the population mean by more than a certain amount.
The attributes we can assign to an individual set of measurements of some quantity x within a population are listed below. It is important that you learn the meaning of these terms:
Number of measurements
This quantity is usually represented by n.
Mean
The mean value $x_m$ (commonly known as the average), defined as
$x_m = \dfrac{\displaystyle \sum_i x_i}{n} \nonumber$
Don't let this notation scare you! It just means that you add up all the values and divide by the number of values.
Median
The median value, which we will not deal with in this brief presentation, is essentially the one in the middle of the list resulting from writing the individual values in order of increasing or decreasing magnitude.
Range
The range is the difference between the largest and smallest value in the set.
Problem example:
Find the mean value and range of the set of measurements depicted here.
Solution: This set contains 8 measurements. The range is
(10.7 – 10.3) = 0.4, and the mean value is
More than One Answer: Dispersion of the Mean
"Dispersion" means "spread-outedness". If you make a few measurements and average them, you get a certain value for the mean. But if you make another set of measurements, the mean of these will likely be different. The greater the difference between the means, the greater is their dispersion.
Suppose that instead of taking all of the measurements as in the above example, we had made only two observations which, by chance, yielded the values that are highlighted here. This would result in a sample mean of 10.45. Of course, any number of other pairs of values could equally well have been observed, including multiple occurrences of any single value, such as 10.6.
Shown at the left are the results of two possible pairs of observations, each giving rise to its own sample mean. Assuming that all observations are subject only to random error, it is easy to see that successive pairs of experiments could yield many other sample means. The range of possible sample means is known as the dispersion of the mean.
It is clear that both of the two sample means cannot correspond to the population mean, whose value we are really trying to discover. In fact, it is quite likely that neither sample mean is the “correct” one in this sense. It is a fundamental principle of statistics, however, that the more observations we make in order to obtain a sample mean, the smaller will be the dispersion of the sample means that result from repeated sets of the same number of observations. (This is important; please read the preceding sentence at least three times to make sure you understand it!)
How the dispersion of the mean depends on the number of observations
The difference between the sample mean (blue) and the population mean (the "true value", green) is the error of the measurement. It is clear that this error diminishes as the number of observations is made larger.
What is stated above is just another way of saying what you probably already know: larger samples produce more reliable results. This is the same principle that tells us that flipping a coin 100 times will be more likely to yield a 50:50 ratio of heads to tails than will be found if only ten flips (observations) are made.
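You can watch this effect in a short simulation: the sketch below draws repeated samples of various sizes from the same noisy "population" and reports the spread of the resulting sample means (the population parameters are arbitrary choices for illustration):

```python
import random
import statistics

random.seed(1)
TRUE_MEAN, SPREAD = 40.0, 10.0   # hypothetical population mean and spread

def sample_mean(n):
    """Mean of n observations drawn from the noisy population."""
    return statistics.mean(random.gauss(TRUE_MEAN, SPREAD) for _ in range(n))

for n in (2, 10, 100):
    means = [sample_mean(n) for _ in range(1000)]
    print(f"n = {n:3d}:  dispersion of the mean ~ {statistics.stdev(means):.2f}")
# The dispersion shrinks roughly as 1/sqrt(n): about 7, 3, and 1 here.
```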
The reason for this inverse relation between the sample size and the dispersion of the mean is that if the factors giving rise to the different observed values are truly random, then the more samples we observe, the more likely will these errors cancel out. It turns out that if the errors are truly random, then as you plot the number of occurrences of each value, the results begin to trace out a very special kind of bell-shaped curve, the Gaussian (or normal) curve.
The significance of this is much greater than you might at first think, because the Gaussian curve has special mathematical properties that we can exploit, through the methods of statistics, to obtain some very useful information about the reliability of our data. This will be the major topic of the next lesson in this set.
For now, however, we need to establish some important principles regarding measurement error.
Systematic error
The scatter in measured results that we have been discussing arises from random variations in the myriad of events that affect the observed value, and over which the experimenter has no or only limited control. If we are trying to determine the properties of a collection of objects (nicotine content of cigarettes or lifetimes of lamp bulbs), then random variations between individual members of the population are an ever-present factor. This type of error is called random or indeterminate error, and it is the only kind we can deal with directly by means of statistics.
There is, however, another type of error that can afflict the measuring process. It is known as systematic or determinate error, and its effect is to shift an entire set of data points by a constant amount. Systematic error, unlike random error, is not apparent in the data itself, and must be explicitly looked for in the design of the experiment.
One common source of systematic error is failure to use a reliable measuring scale, or to misread a scale. For example, you might be measuring the length of an object with a ruler whose left end is worn, or you could misread the volume of liquid in a burette by looking at the top of the meniscus rather than at its bottom, or not having your eye level with the object being viewed against the scale, thus introducing parallax error.
Blanks and controls
Many kinds of measurements are made by devices that produce a response of some kind (often an electric current) that is directly proportional to the quantity being measured. For example, you might determine the amount of dissolved iron in a solution by adding a reagent that reacts with the iron to give a red color, which you measure by observing the intensity of green light that passes through a fixed thickness of the solution. In a case such as this, it is common practice to make two additional kinds of measurements:
One measurement is done on a solution as similar to the unknowns as possible except that it contains no iron at all. This sample is called the blank. You adjust a control on the photometer to set its reading to zero when examining the blank.
The other measurement is made on a sample containing a known concentration of iron; this is usually called the control. You adjust the sensitivity of the photometer to produce a reading of some arbitrary value (50, say) with the control solution. Assuming the photometer reading is directly proportional to the concentration of iron in the sample (this might also have to be checked, in which case a calibration curve must be constructed), the photometer reading can then be converted into iron concentration by simple proportion.
The standard deviation
Consider the two pairs of observations depicted here:
Notice that the sample means happen to have the same value of “40” (pure luck!), but the difference in the precisions of the two measurements makes it obvious that the set shown on the right is more reliable. How can we express this fact in a succinct way? We might say that one experiment yields a value of 40 ±20, and the other 40 ±5. Although this information might be useful for some purposes, it is unable to provide an answer to such questions as "how likely is it that another independent set of measurements would yield a mean value within a certain range of values?" The answer to this question is perhaps the most meaningful way of assessing the "quality" or reliability of experimental data, but obtaining such an answer requires that we employ some formal statistics.
Deviations from the mean
We begin by looking at the differences between the sample mean and the individual data values used to compute the mean. These differences are known as deviations from the mean, $x_i - x_m$. These values are depicted below; note that the only difference from the plots above is placement of the mean value at 0 on the horizontal axis.
The variance and its square root
Next, we need to find the average of these deviations. Taking a simple average, however, will not distinguish between these two particular sets of data, because both deviations average out to zero. We therefore take the average of the squares of the deviations (squaring makes the signs of the deviations disappear so they cannot cancel out). Also, we compute the average by dividing by one less than the number of measurements, that is, by n–1 rather than by n. The result, usually denoted by $S^2$, is known as the variance:
$S^2 = \dfrac{\displaystyle \sum_i (x_i - x_m)^2}{n-1} \nonumber$
Finally, we take the square root of the variance to obtain the standard deviation S:
$S = \sqrt{\dfrac{\displaystyle \sum_i (x_i - x_m)^2}{n-1}} \nonumber$
This is the most important formula in statistics; it is so widely used that most scientific calculators provide built-in means of calculating S from the raw data.
Problem example: Calculate the variance and standard deviation for each of the two data sets shown above.
Solution: Substitution into the two formulas yields the following results:
data values | 20, 60 | 35, 45
sample mean | 40 | 40
variance S² | 800 | 50
standard deviation S | 28 | 7.1
Comment: Notice how the contrasting values of S reflect the difference in the precisions of the two data sets— something that is entirely lost if only the two means are considered.
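The two columns above are easy to verify; Python's statistics module uses the same n−1 denominator as the formulas in this section:

```python
import statistics

for data in ([20, 60], [35, 45]):
    S2 = statistics.variance(data)   # divides by n - 1
    S = statistics.stdev(data)       # square root of the variance
    print(f"{data}: mean = {statistics.mean(data):g}, "
          f"variance = {S2:g}, std dev = {S:.1f}")
# [20, 60]: variance = 800, std dev = 28.3
# [35, 45]: variance = 50,  std dev = 7.1
```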
Now that we have developed the very important concept of standard deviation, we can employ it in the next section to answer practical questions about how to interpret the results of a measurement.
Learning Objectives
Make sure you thoroughly understand the following essential ideas which have been presented above. It is especially important that you know the precise meanings of all the italicized terms in the context of this topic.
• What is the deviation from the population mean, why can we not know its value, and why is it nevertheless a fundamentally important quantity in statistics?
• Sketch out a Gaussian curve, and label the two axes, showing on the x-axis the deviation from the mean in terms of standard deviations. Shade in the area corresponding to the 95.4-percent confidence level.
• State the meaning of a confidence interval and how it relates to the standard deviation on a plot of the Gaussian curve.
• What is the distinction between a confidence interval and the confidence level?
• Describe the circumstances when a Student's t statistic is useful.
• Describe some of the major problems that can cause statistics to be erroneous or misleading.
OK, you have collected your data, so what does it mean? This question commonly arises when measurements made on different samples yield different values. How well do measurements of mercury concentrations in ten cans of tuna reflect the composition of the factory's entire output? Why can't you just use the average of these measurements? How much better would the results of 100 such tests be? This final lesson on measurement will examine these questions and introduce you to some of the methods of dealing with data. This stuff is important not only for scientists, but also for any intelligent citizen who wishes to independently evaluate the flood of numbers served up by advertisers, politicians, "experts", and yes— by other scientists.
The Standard Deviation
Each of these sets has the same mean value of 40, but the "quality" of the set shown on the right is greater because the data points are less scattered; the precision of the result is greater.
The quantitative measure of this precision is given by the standard deviation
$S = \sqrt{\dfrac{\displaystyle \sum_i (x_i - x_m)^2}{n-1}} \nonumber$
whose value works out to 28 and 7 for the two sets illustrated above. A data set containing only two values is far too small for a proper statistical analysis— you would not want to judge the average mercury content of canned tuna on the basis of only two samples, for instance. Suppose, then, for purposes of illustration, that we have accumulated many more data points but the standard deviations of the two sets remain at 28 and 7 as before. What conclusions can we draw about how close the mean value of 40 is likely to come to the "true value" (the population mean μ) in each case?
Although we cannot ordinarily know the value of μ, we can assign to each data point $x_i$ the quantity $(x_i - x_m)$, the deviation from the mean, which serves as an index of how far each data point differs from the elusive “true value”. We now divide this deviation from the mean by the standard deviation of the entire data set:
$z = \dfrac{x_i - x_m}{S} \nonumber$
If we plot the values of z that correspond to each data point, we obtain the following curves for the two data sets we are using as examples:
Bear in mind that we cannot actually plot these curves from our experimental data points because we don't know the value of the population mean μ (if we did, there would be no need to make the measurements in the first place!), and we are unlikely to have enough data points to obtain a smooth curve anyway.
We won’t attempt to prove it here, but the mathematical properties of a Gaussian curve are such that its shape depends on the scale of units along the x-axis and on the standard deviation of the corresponding data set. In other words, if we know the standard deviation of a data set, we can construct a plot of z that shows how the measurements would be distributed
• if the number of observations is very large
• if the different values are due only to random error
An important corollary to the second condition is that if the data points do not approximate the shape of this curve, then it is likely that the sample is not representative, or that some complicating factor is involved. The latter often happens when a teacher plots a set of student exam scores, and gets a curve having two peaks instead of one— representing perhaps the two sub-populations of students who devote their time to studying and partying.
This minor gem was devised by the statistician W. J. Youden and appears in The Visual Display of Quantitative Information, an engaging book by Edward R. Tufte (Graphics Press, Cheshire CT, 1983).
Confidence intervals
Clearly, the sharper and more narrow the standard error curve for a set of measurements, the more likely it will be that any single observed value approximates the true value we are trying to find. Because the shape of the curve is determined by S, we can make quantitative predictions about the reliability of our data from its standard deviation. In particular, if we plot z as a function of the number of standard deviations from the mean (rather than as the number of absolute deviations from the mean as was done above), the shape of the curve depends only on the value of S. That is, the dependence on the particular units of measurement is removed.
Moreover, it can be shown that if all measurement error is truly random, 68.3 percent (about two-thirds) of the data points will fall within one standard deviation of the population mean, while 95.4 percent of the observations will differ from the population mean by no more than two standard deviations. This is extremely important, because it allows us to express the reliability of a measurement quantitatively, in terms of confidence intervals.
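In symbols, for purely random (Gaussian) error:

$P(|x - \mu| \le S) \approx 0.683, \qquad P(|x - \mu| \le 2S) \approx 0.954$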
You might occasionally see or hear a news report stating that the results of a certain public opinion poll are considered reliable to within, say, 5%, "nineteen times out of twenty". This is just another way of saying that the confidence level of the poll is 95% (the standard deviation is then about 2.5% of the stated result), and that there is no more than a 5% chance that an identical poll carried out on another set of randomly-selected individuals from the same population would yield a result differing from this one by more than 5%. This is as close to "the truth" as we can get in scientific measurements.
Note carefully: Confidence interval (CI) and confidence level (CL) are not the same!
A given CI (denoted by the shaded range of 18-33 ppm in the diagram) is always defined in relation to some particular CL; specifying the first without the second is meaningless. If the CI illustrated here is at the 90% CL, then a CI for a higher CL would be wider, while that for a smaller CL would encompass a smaller range of values.
The units of CI are those of the measurement (e.g., ppm); CL itself is usually expressed in percent.
How the confidence interval depends on the number of measurements
The more measurements we make, the more closely their average value is likely to approximate the true value. The width of the confidence interval (expressed in the actual units of measurement) is directly proportional to the standard deviation S and to the value of z (both of these terms are defined above). The confidence interval of a single measurement, in terms of these quantities and of the observed sample mean, is given by:
$\mathrm{CI} = x_m \pm zS$
If n replicate measurements are made, the confidence interval becomes smaller:
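Because the standard error of the mean of n measurements is $S/\sqrt{n}$, the standard relation is

$\mathrm{CI} = x_m \pm \dfrac{zS}{\sqrt{n}}$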
This relation is often used “in reverse”, that is, to determine how many replicate measurements n must be carried out in order to obtain a value within a desired confidence interval.
As we pointed out above, any relation involving the quantity z (which the standard error curve is a plot of) is of limited use unless we have some idea of the value of the population mean μ. If we make a very large number of measurements (100 to 1000, for example), then we can expect that our observed sample mean approximates μ quite closely, so there is no difficulty.
The shaded area in each plot shows the fraction of measurements that fall within two standard deviations (2S) of the "true" value (that is, the population mean μ). It is evident that the width of the confidence interval diminishes as the number of measurements becomes greater. This is basically a result of the fact that relatively large random errors tend to be less common than smaller ones, and are therefore less likely to cancel out if only a small number of measurements is made.
Dealing with small data sets
OK, so larger data sets are better than small ones. But what if it is simply not practical to measure the mercury content of 10,000 cans of tuna? Or if you were carrying out a forensic examination of a tiny chip of paint, you might have only enough sample (or enough time) to do two or three replicate analyses. There are two common ways of dealing with such a difficulty.
One way of getting around this is to use pooled data; that is, to rely on similar prior determinations, carried out on other comparable samples, to arrive at a standard deviation that is representative of this particular type of determination. The other common way of dealing with small numbers of replicate measurements is to look up, in a table, a quantity t, whose value depends on the number of measurements and on the desired confidence level. For example, for a confidence level of 95%, t would be 4.3 for three samples and 2.8 for five. The magnitude of the confidence interval is then given by
$\mathrm{CI} = \pm\, tS$
This procedure is not black magic, but is based on a careful analysis of the way that the Gaussian curve becomes distorted as the number of samples diminishes. Why was the t-test invented in a brewery, and why does it have such a funny name? It was developed by W.S. Gosset, a chemist at the Guinness brewery in Dublin; because his employer would not allow him to publish under his own name, he wrote under the pseudonym "Student", and the method is still known as Student's t-test.
Using statistical tests to make decisions
Once we have obtained enough information on a given sample to evaluate parameters such as means and standard deviations, we are often faced with the necessity of comparing that sample (or the population it represents) with another sample or with some kind of a standard. The following sections paraphrase some of the typical questions that can be decided by statistical tests based on the quantities we have defined above. It is important to understand, however, that because we are treating the questions statistically, we can only answer them in terms of statistics— that is, to a given confidence level.
The usual approach is to begin by assuming that the answer to any of the questions given below is “no” (this is called the null hypothesis), and then use the appropriate statistical test to judge the validity of this hypothesis to the desired confidence level. Because our purpose here is to show you what can be done rather than how to do it, the following sections do not present formulas or example calculations, which are covered in most textbooks on analytical chemistry. You should concentrate here on trying to understand why questions of this kind are of importance.
“Should I throw this measurement out?”
That is, is it likely that something other than ordinary indeterminate error is responsible for this suspiciously different result? Anyone who collects data of almost any kind will occasionally be faced with this question. Very often, ordinary common sense will be sufficient, but if you need some help, two statistical tests, called the Q test and the T test, are widely employed for this purpose.
We won't describe them here, but both tests involve computing a quantity (Q or T) for a particular result by means of a simple formula, and then consulting a table to determine the likelihood that the value being questioned is a member of the population represented by the other values in the data set.
“Does this method yield reliable results?"
This must always be asked when trying a new method for the first time; it is essentially a matter of testing for determinate error. The answer can only be had by running the same procedure on a sample whose composition is known. The deviation of the mean value of the "known", $x_m$, from its true value μ is used to compute a Student's t for the desired confidence level. You then apply this value of t to the measurements on your unknown samples.
“Are these two samples identical?”
You wish to compare the means $x_{m1}$ and $x_{m2}$ from two sets of measurements in order to assess whether their difference could be due to indeterminate error. Suppose, for example, that you are comparing the percent of chromium in a sample of paint removed from a car's fender with a sample found on the clothing of a hit-and-run victim. You run replicate analyses on both samples, and obtain different mean values, but the confidence intervals overlap. What are the chances that the two samples are in fact identical, and that the difference in the means is due solely to indeterminate error?
A fairly simple formula, using Student’s t, the standard deviation, and the numbers of replicate measurements made on both samples, provides an answer to this question, but only to a specified confidence level. If this is a forensic investigation that you will be presenting in court, be prepared to have your testimony demolished by the opposing lawyer if the CL is less than 99%.
“What is the smallest quantity I can detect?”
This is just a variant of the preceding question. Estimation of the detection limit of a substance by a given method begins with a set of measurements on a blank, that is, a sample in which the substance of question is assumed to be absent, but is otherwise as similar as possible to the actual samples to be tested. We then ask if any difference between the mean of the blank measurements and of the sample replicates can be attributed to indeterminate error at a given confidence level.
For example, a question that arises at every Olympics: what is the minimum level of a drug metabolite that can be detected in an athlete's urine? Many sensitive methods are subject to random errors that can lead to a non-zero result even in a sample known to be entirely free of what is being tested for. So how far from "zero" must the mean value of a test be in order to be certain that the drug was present in a particular sample? A similar question comes up very frequently in environmental pollution studies.
How to Lie with Statistics
How to Lie with Statistics is the title of an amusing book by Darrell Huff (Norton, 1954), with illustrations by Irving Geis.
Throwing away “wrong” answers.
It occasionally happens that a few data values are so greatly separated from the rest that they cannot reasonably be regarded as representative. If these “outliers” clearly fall outside the range of reasonable statistical error, they can usually be disregarded as likely due to instrumental malfunctions or external interferences such as mechanical jolts or electrical fluctuations.
Some care must be exercised when data are thrown away, however. There have been a number of well-documented cases in which investigators who had certain anticipations about the outcome of their experiments were able to bring these expectations about by removing conflicting results from the data set on the grounds that these particular data "had to be wrong".
Beware of too-small samples
The probability of ten successive flips of a coin yielding 8 heads is given by
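The missing expression here is presumably the binomial probability of obtaining exactly 8 heads in 10 tosses of a fair coin:

$P = \dbinom{10}{8}\left(\dfrac{1}{2}\right)^{10} = \dfrac{45}{1024} \approx 0.044$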
... indicating that it is not very likely, but can be expected to happen about 44 times in a thousand runs. But there is no law of nature that says it cannot happen on your first run, so it would clearly be foolish to cry "Eureka" and stop the experiment after one, or even a few, tries. Nor should you forget about the runs that did not turn up 8 heads!
Perils of dubious "correlations"
The fact that two sets of statistics show the same trend does not prove they are connected, even in cases where a logical correlation could be argued. Thus it has been suggested, on the basis of plots of the two quantities, that "In relative terms, the global temperature seems to be tracking the average global GDP quite nicely over the last 70 years."
The difference between confidence levels of 90% and 95% may not seem like much, but getting it wrong can transform science into junk science — a not-unknown practice by special interests intent on manipulating science to influence public policy; see the excellent 2008 book by David Michaels, "Doubt is Their Product: How Industry's Assault on Science Threatens Your Health".
The chapters in this unit are absolutely essential for anyone embarking on the serious study of Chemistry. The material covered here will be needed in virtually every topic you will encounter in the remainder of your first-year course, as well as in subsequent Chemistry courses — so you might as well master it now!
• 4.1: Atoms, Elements, and the Nucleus
The parallel concepts of the element and the atom constitute the very foundations of chemical science. The concept of the element is a macroscopic one that relates to the world that we can observe with our senses. The atom is the microscopic realization of this concept; that is, it is the actual physical particle that is unique to each chemical element. Their very small size has long prevented atoms from being observable by direct means, so their existence was not universally accepted until the late 19th Century.
• 4.2: Avogadro's Number and the Mole
The chemical changes we observe always involve discrete numbers of atoms that rearrange themselves into new configurations. These numbers are far too large in magnitude for us to count, but they are still numbers, and we need to have a way to deal with them. We also need a bridge between these numbers, which we are unable to measure directly, and the weights of substances, which we do measure and observe. The mole concept provides this bridge, and is key to all of quantitative chemistry.
• 4.3: Formulas and Their Meaning
At the heart of chemistry are substances — elements or compounds — which have a definite composition that is expressed by a chemical formula. In this unit you will learn how to write and interpret chemical formulas both in terms of moles and masses, and to go in the reverse direction, in which we use experimental information about the composition of a compound to work out a formula.
• 4.4: Chemical Equations and Stoichiometry
A chemical equation expresses the net change in composition associated with a chemical reaction by showing the number of moles of reactants and products. But because each component has its own molar mass, equations also implicitly define the way in which the masses of products and reactants are related. In this unit we will concentrate on understanding and making use of these mass relations.
• 4.5: Introduction to Chemical Nomenclature
Chemical nomenclature is far too big a topic to treat comprehensively, and it would be a useless diversion to attempt to do so in a beginning course; most chemistry students pick up chemical names and the rules governing them as they go along. But we can hardly talk about chemistry without mentioning some chemical substances, all of which do have names— and often, more than one!
• 4.6: Significant Figures and Rounding
The numerical values we deal with in science (and in many other aspects of life) represent measurements whose values are never known exactly. Our pocket-calculators or computers don't know this; they treat the numbers we punch into them as "pure" mathematical entities, with the result that the operations of arithmetic frequently yield answers that are physically ridiculous even though mathematically correct.
Thumbnail: Spinning Buckminsterfullerene (\(\ce{C60}\)). (CC BY-SA 3.0; unported; Sponk).
04: The Basics of Chemistry
Learning Objectives
Make sure you thoroughly understand the following essential ideas:
• Give a chemical definition of element, and comment on the distinction between the terms atom and element.
• You should know the names and symbols of the more common elements, including those whose symbols are derived from their Latin names.
• Describe, in your own words, the Laws of Chemical Change: mass conservation, constant composition, and multiple proportions.
• Explain how these laws follow from Dalton's atomic theory.
• Describe Rutherford's alpha-ray scattering experiment and how it led to the present model of the atom.
• Define atomic number and mass number, and explain the relation between them.
• Define isotope and nuclide, and write the symbol for a nuclide of a given element with a given number of neutrons.
• Explain the purpose of a mass spectrometer and its general principle of operation.
• Describe the atomic weight scale.
• Find the molecular weight or formula weight from a chemical formula.
• Define the unified atomic mass unit, and state the approximate masses (in u) of the proton, neutron, and electron.
The parallel concepts of the element and the atom constitute the very foundations of chemical science. As such, the concept of the element is a macroscopic one that relates to the world that we can observe with our senses. The atom is the microscopic realization of this concept; that is, it is the actual physical particle that is unique to each chemical element. Their very small size has long prevented atoms from being observable by direct means, so their existence was not universally accepted until the late 19th Century. The fact that we still hear the mention of the "atomic theory of matter" should not imply that there is now any doubt about the existence of atoms. Few theories in the history of science have been as thoroughly validated and are as well understood.
Although the word atom usually refers to a specific kind of particle (an "atom of magnesium", for example), our everyday use of element tends to be more general, referring not only to a substance composed of a particular type of atom ("bromine is one of the few elements that are liquids at room temperature"), but also to atoms in a collective sense ("magnesium is one of the elements having two electrons in its outer shell").
The underlying concept of atoms as the basic building blocks of matter has been around for a long time. As early as 600 BCE, the Gujarati (Indian) philosopher Acharya Kanad wrote that "Every object of creation is made of atoms which in turn connect with each other to form molecules". About a century and a half later, around 460 BCE, the Greek philosopher Democritus reasoned that if you keep breaking a piece of matter into smaller and smaller fragments, there will be some point at which the pieces cannot be made any smaller. He called these ultimate particles atoms, from the Greek word for "indivisible". But this was just philosophy; it would not become science until 1800, when John Dalton showed how the atomic concept followed naturally from the results of quantitative experiments based on weight measurements.
Elements
The element is the fundamental unit of chemical identity. The concept of the element is very ancient. It was developed in many different civilizations in an attempt to rationalize the variety of the world and to understand the nature of change, such as the change that occurs when a piece of wood rots, or is burnt to produce charcoal or ash. Most well known to us are the four elements "earth, air, fire and water" that were popularized by Greek philosophers (principally Empedocles and Aristotle) in the period 500-400 BCE.
To these, Vedic (Hindu) philosophers of India added space, while the ancient Chinese concept of Wu Xing regarded earth, metal, wood, fire and water as fundamental. These basic elements were not generally considered to exist as the actual materials we know as earth, water, etc., but rather to represent the "principles" or essences that the elements conveyed to the various kinds of matter we encounter in the world.
Eventually, practical experience (largely connected with the extraction of metals from ores) and the beginnings of scientific experimentation in the 18th Century led to our modern concept of the chemical element. An element is a substance: the simplest form to which any other chemical substance can be reduced through appropriate thermal or chemical treatment. "Simplest", in the context of experimentation at the time, was defined in terms of weight; cinnabar (mercuric sulfide) can be broken down into two substances, mercury and sulfur, which themselves cannot be reduced to any lighter forms.
Although Lavoisier got many of these right, his famous 1789 table of "simple substances" included a few things that do not quite fit into our modern idea of what constitutes a chemical element: most notably, it listed light (lumière) and caloric (calorique, the supposed substance of heat) among the elements.
Lavoisier's other misassignments, in the bottom section of his table, were not really his fault. Chalk, magnesia, barytes, alumina and silica are highly stable oxygen-containing compounds; the high temperatures required to break them down could not be achieved in Lavoisier's time (magnesia is what fire brick is made of). The proper classification of these substances was delayed until further experimentation revealed their true nature. Ten of the chemical elements have been known since ancient times, and five more were discovered through the 17th Century.
Some frequently-asked questions about elements
• How many elements are there? Ninety-two elements have been found in nature. Around 25 more have been made artificially, but all of these decay into lighter elements, with some of them disappearing in minutes or even seconds.
• Where do the elements come from? The present belief is that helium and a few other very light elements were formed within about three minutes of the "big bang", and that the next 23 elements (up through iron) are formed mostly by nuclear fusion processes within stars, in which lighter nuclei combine into successively heavier elements. Elements heavier than iron cannot be formed in this way; they are produced only during the catastrophic collapse of massive stars (supernova explosions).
• How do the elements vary in abundance? Quite markedly, and very differently in different bodies in the cosmos. Most of the atoms in the universe still consist of hydrogen, with helium being a distant second. On Earth, oxygen, silicon, and aluminum are most abundant. These profiles serve as useful guides for constructing models for the formation of the earth and other planetary bodies.
• The system of element symbols we use today was established by the Swedish chemist Jons Jacob Berzelius in 1814. Prior to that time, graphical alchemical symbols were used, which were later modified and popularized by John Dalton. Fortunately for English speakers, the symbols of most of the elements serve as mnemonics for their names, but this is not true for the seven metals known from antiquity, whose symbols are derived from their Latin names. The other exception is tungsten (a name derived from Swedish), whose symbol W reflects its German name, Wolfram, which is more widely used.
• How are the elements organized? Two general organizing principles developed in the 19th Century: one was based on the increasing relative weights (atomic weights) of the elements, yielding a list that begins this way:
H He Li Be B C N O F Ne Na Mg Al Si P S Cl Ar K Ca...
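(The second organizing principle, the periodic recurrence of chemical and physical properties as the atomic weights increase, was combined with the first to yield the periodic table of the elements.)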
Atoms become real
Throughout most of history the idea that matter is composed of minute particles had languished as a philosophical abstraction known as atomism, and no clear relation between these "atoms" and the chemical "elements" had been established. This began to change in the early 1800's when the development of balances that permitted reasonably precise measurements of the weight changes associated with chemical reactions ushered in a new and fruitful era of experimental chemistry. This resulted in the recognition of several laws of chemical change that laid the groundwork for the atomic theory of matter.
Laws of Chemical Change
Recall that a "law", in the context of science, is just a relationship, discovered through experimentation, that is sufficiently well established to be regarded as beyond question for most practical purposes. Because it is the nature of scientists to question the "unquestionable", it occasionally happens that exceptions do arise, in which case the law must undergo appropriate modification.
Conservation of mass-energy is usually considered the most fundamental law of nature. It is also a good example of a law that had to be modified; it was known simply as Conservation of Mass until Einstein showed that energy and mass are interchangeable. However, the older term is perfectly acceptable within the field of ordinary chemistry, in which energy changes are too small to have a measurable effect on mass relations. Within the context of chemistry, conservation of mass can be thought of as "conservation of atoms": chemical change just shuffles them around into new arrangements.
Mass conservation had special significance in understanding chemical changes involving gases, which were for some time not always regarded as real matter at all. (Owing to their very small densities, carrying out actual weight measurements on gases is quite difficult to do, and was far beyond the capabilities of the early experimenters.) Thus when magnesium metal is burned in air, the weight of the solid product always exceeds that of the original metal, implying that the process is one in which the metal combines with what might have been thought to be a "weightless" component of the air, which we now know to be oxygen.
This experimental result also tells us something very important about the mass of the oxygen atom relative to that of the magnesium atom.
The Law of Definite Proportions, also known as the law of constant composition, states that the proportion by weight of each element present in any pure substance is always the same. This enables us to generalize the relationship illustrated above.
Example $1$: Law of Definite Proportions
How many kilograms of metallic magnesium could theoretically be obtained by decomposing 0.400 kg of magnesium oxide into its elements?
Solution
The mass fraction of Mg in this compound (the mass ratio of Mg to MgO) is
$\dfrac{1}{1.66} = 0.602 \nonumber$
so 0.400 kg of the oxide contains
$(0.400\; kg) \times 0.602 = 0.241\; \text{kg of Mg} \nonumber .$
The fact that we are concerned with the reverse of the reaction cited above is irrelevant.
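Where does the 1.66 come from? Using modern atomic weights (Mg = 24.3, O = 16.0), it is just the mass ratio of the oxide to the metal:

$\dfrac{m_{MgO}}{m_{Mg}} = \dfrac{24.3 + 16.0}{24.3} = \dfrac{40.3}{24.3} = 1.66 \nonumber$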
The Law of Multiple Proportions addresses the fact that many combinations of elements can react to form more than one compound. In such cases, this law states that the weights of one element that combine with a fixed weight of another of these elements are integer multiples of one another. It is easy to say this, but please make sure that you understand how it works. Nitrogen forms a very large number of oxides, five of which are shown below.
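The original table is reconstructed here on the assumption that the five oxides are N2O, NO, N2O3, NO2 and N2O5 (the two members named in the notes below are consistent with this):

| | N2O | NO | N2O3 | NO2 | N2O5 |
|---|---|---|---|---|---|
| Line 1: weight ratio N:O | 28:16 | 14:16 | 28:48 | 14:32 | 28:80 |
| Line 2: mass of O per 1 g of N | 0.571 | 1.143 | 1.714 | 2.286 | 2.857 |
| Line 3: ratio to smallest | 1 | 2 | 3 | 4 | 5 |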
• Line 1 shows the ratio of the relative weights of the two elements in each compound. These ratios were calculated by simply taking the molar mass of each element and multiplying by the number of atoms of that element per mole of the compound. Thus for NO2, we have (1 × 14) : (2 × 16) = 14:32. (These numbers were not known in the early days of chemistry, because the atomic weights (i.e., molar masses) of most elements were not reliably known.)
• The numbers in Line 2 are just the mass ratios of O:N, found by dividing the corresponding ratios in Line 1. But someone who depends solely on experiment would work these out by finding the mass of O that combines with unit mass (1 g) of nitrogen.
• Line 3 is obtained by dividing the figures in the previous line by the smallest O:N ratio in that line, which is the one for N2O. Note that, just as the law of multiple proportions says, the weights of oxygen that combine with unit weight of nitrogen work out to small integers.
• Of course, we could just as easily have illustrated the law by considering the mass of nitrogen that combines with one gram of oxygen; it works both ways!
Example $2$: Law of Multiple Proportions
Nitrogen and hydrogen form many compounds, some of which involve other elements as well. The masses of hydrogen that combine with 1.00 g of nitrogen to form three of these compounds are: urea, 0.1428 g; ammonia, 0.0714 g; ammonium chloride, 0.2857 g. Show that these data are consistent with the Law of Multiple Proportions.
Solution
The "fixed weight" we are considering here is the nitrogen. Inspection of the numbers above shows that the ammonia contains the smallest weight ratio H:N = 0.0714, while the weight ratio of H:N in urea is twice this number, and that in ammonium chloride is four times 0.0714. Thus the H:N ratios are themselves stand in the ratio of 2:1:4, respectively, and the Law is confirmed.
Dalton's Interpretation Established Atomic Theory
The idea that matter is composed of tiny "atoms" of some kind had been around for at least 2000 years. Dalton's accomplishment was to identify atoms with actual chemical elements.
If Nobel prizes had existed in the early 1800's, the English school teacher/meteorologist/chemist John Dalton (1766-1844) would certainly have won one for showing how the experimental information available at that time, as embodied in the laws of chemical change that we have just described, is fully consistent with the hypothesis that atoms are the smallest units of chemical identity. The postulates of Dalton's atomic theory provided satisfactory explanations of all the laws of chemical change noted above:
Dalton's explanation of the Law of Conservation of Mass was that it is really a consequence of "conservation of atoms" which are presumed to be indestructible by chemical means. In chemical reactions, the atoms are simply rearranged, but never destroyed.
Dalton's Explanation of the law of constant composition was that if compounds are made up of definite numbers of atoms, each of which has its own characteristic mass, then the relative mass of each element in a compound must always be the same. Thus the elements must always be present in a pure sample of a compound in the same proportions by mass.
A given set of elements can usually form two or more compounds in which the numbers of atoms of some of the elements are different. Because these numbers must be integers (you can't have "half" an atom!), the masses of one element combined with a fixed mass of any other element in two such compounds must stand in a small whole-number ratio. Thus, for the series of nitrogen-hydrogen compounds cited in the Problem Example above, we have the following relations:
| Compound | Formula | weight ratio H:N | ratio to 0.0714 |
|---|---|---|---|
| urea | CO(NH2)2 | 0.1428 | 2 |
| ammonia | NH3 | 0.0714 | 1 |
| ammonium chloride | NH4Cl | 0.2857 | 4 |
Although Dalton's atomic theory was immediately found to be a useful tool for organizing chemical knowledge, it was some time before it became accepted as a true representation of the world. Thus, as late as 1887, one commentator observed
"Atoms are round bits of wood invented by Mr. Dalton."
These wooden balls have evolved into computer-generated images derived from the atomic force microscope (AFM), an exquisitely sensitive electromechanical device in which the distance between the tip of a submicroscopic wire probe and the surface directly below it is recorded as the probe moves along a surface to which atoms are adsorbed. The general principle of the AFM is quite simple, but its realization in an actual device can appear somewhat intimidating! One such highly specialized atomic force microscope is among several similar devices described by Argonne National Laboratory.
Relative Masses
Dalton's atomic theory immediately led to the realization that although atoms are far too small to be studied directly, their relative masses can be estimated by observing the weights of elements that combine to form similar compounds. These weights are sometimes referred to as combining weights. There is one difficulty, however: we need to know the formulas of the compounds we are considering in order to make valid comparisons. For example, we can find the relative masses of two atoms X and Y that combine with oxygen only if we assume that the values of n in the two formulas $XO_n$ and $YO_n$ are the same. But the very relative masses we are trying to find must be known in order to determine these formulas.
The way to work around this was to focus on binary (two-element) compounds that were assumed to have simple atom ratios such as 1:1, 1:2, etc., and to hope that enough 1:1 compounds would be found to provide a starting point for comparing the various pairs of combining weights. Compounds of oxygen, known as oxides, played an especially important role here, partly because almost all of the chemical elements form compounds with oxygen, and most of them do have very simple formulas.
The first proof that water is composed of hydrogen and oxygen came with the discovery, in 1800, that an electric current can decompose water into these elements. In such an electrolysis apparatus, the two gases collect above the water at the tops of the tubes in a 2:1 volume ratio of hydrogen to oxygen.
Of these oxygen compounds, the one with hydrogen — ordinary water — had been extensively studied. Earlier experiments had given the composition of water as 87.4 percent oxygen and 12.6 percent hydrogen by weight. This means that if the formula of water is assumed [incorrectly] to be HO, then the mass ratio of the two kinds of atoms must be O:H = 87.4/12.6 = 6.9. Later work corrected this figure to 8, but the wrong assumption about the formula of water would remain to plague chemistry for almost fifty years, until studies on gas volumes proved that water is H2O.
Dalton fully acknowledged the tentative nature of weight ratios based on assumed simple formulas such as HO for water, but was nevertheless able to compile in 1810 a list of the relative weights of the atoms of some of the elements he investigated by observing weight changes in chemical reactions.
| element | relative weight |
|---|---|
| hydrogen | 1 |
| nitrogen | 5 |
| carbon | 5.4 |
| oxygen | 7 |
| phosphorus | 9 |
| sulfur | 13 |
| iron | 50 |
| zinc | 56 |
| copper | 56 |
| lead | 95 |
Because hydrogen is the lightest element, it was assigned a relative weight of unity. By assigning definite relative masses to atoms of the different elements, Dalton had given reality to the concept of the atom and established the link between atom and element. Once the correct chemical formulas of more compounds became known, more precise combining-weight studies eventually led to the relative weights of the atoms we know today as the atomic weights, which we discuss farther on.
The Nuclear atom
The precise physical nature of atoms finally emerged from a series of elegant experiments carried out between 1895 and 1915. The most notable of these achievements was Ernest Rutherford's famous 1911 alpha-ray scattering experiment, which established that
• Almost all of the mass of an atom is contained within a tiny (and therefore extremely dense) nucleus which carries a positive electric charge whose value identifies each element and is known as the atomic number of the element.
• Almost all of the volume of an atom consists of empty space in which electrons, the fundamental carriers of negative electric charge, reside. The extremely small mass of the electron (1/1840 the mass of the hydrogen nucleus) causes it to behave as a quantum particle, which means that its location at any moment cannot be specified; the best we can do is describe its behavior in terms of the probability of its manifesting itself at any point in space. It is common (but somewhat misleading) to describe the volume of space in which the electrons of an atom have a significant probability of being found as the electron cloud. The latter has no definite outer boundary, so neither does the atom. The radius of an atom must be defined arbitrarily, such as the boundary in which the electron can be found with 95% probability. Atomic radii are typically 30-300 pm.
Protons and Neutrons
The nucleus is itself composed of two kinds of particles. Protons are the carriers of positive electric charge in the nucleus; the proton charge is exactly the same as the electron charge, but of opposite sign. This means that in any [electrically neutral] atom, the number of protons in the nucleus (often referred to as the nuclear charge) is balanced by the same number of electrons outside the nucleus.
Ions
Because the electrons of an atom are in contact with the outside world, it is possible for one or more electrons to be lost, or some new ones to be added. The resulting electrically-charged atom is called an ion.
The other nuclear particle is the neutron. As its name implies, this particle carries no electrical charge. Its mass is almost the same as that of the proton. Most nuclei contain roughly equal numbers of neutrons and protons, so we can say that these two particles together account for almost all the mass of the atom.
Atomic Number (Z)
What single parameter uniquely characterizes the atom of a given element? It is not the atom's relative mass, as we will see in the section on isotopes below. It is, rather, the number of protons in the nucleus, which we call the atomic number and denote by the symbol Z. Each proton carries an electric charge of +1, so the atomic number also specifies the electric charge of the nucleus. In the neutral atom, the Z protons within the nucleus are balanced by Z electrons outside it.
The English physicist Henry Moseley searched for a measurable property of each element that increases linearly with atomic number. He found this in a class of X-rays emitted by an element when it is bombarded with electrons. The frequencies of these X-rays are unique to each element, and they increase uniformly in successive elements. Moseley found that the square roots of these frequencies give a straight line when plotted against Z; this enabled him to sort the elements in order of increasing atomic number.
You can think of the atomic number as a kind of serial number of an element, commencing at 1 for hydrogen and increasing by one for each successive element. The chemical name of the element and its symbol are uniquely tied to the atomic number; thus the symbol "Sr" stands for strontium, whose atoms all have Z = 38.
Mass number (A)
This is just the sum of the numbers of protons and neutrons in the nucleus. It is sometimes represented by the symbol A, so
$A = Z + N$
in which Z is the atomic number and N is the neutron number.
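For example, the $\ce{^{26}Mg}$ nuclide discussed below has Z = 12 and A = 26, so it contains N = 26 − 12 = 14 neutrons.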
Nuclides and their Symbols
The term nuclide simply refers to any particular kind of nucleus. For example, a nucleus of atomic number 7 is a nuclide of nitrogen. Any nuclide is characterized by the pair of numbers (Z, A). The element symbol depends on Z alone, so the symbol $\ce{^{26}Mg}$ is used to specify the mass-26 nuclide of magnesium, whose name implies Z = 12. A more explicit way of denoting a particular kind of nucleus is to add the atomic number as a subscript, as in $\ce{^{26}_{12}Mg}$. Of course, this is somewhat redundant, since the symbol Mg always implies Z = 12, but it is sometimes a convenience when discussing several nuclides.
Two nuclides having the same atomic number but different mass numbers are known as isotopes. Most elements occur in nature as mixtures of isotopes, but twenty-three of them (beryllium and fluorine among them) are monoisotopic. For example, there are three natural isotopes of magnesium: 24Mg (79% of all Mg atoms), 25Mg (10%), and 26Mg (11%); all three are present in all compounds of magnesium in about these same proportions.
Approximately 290 isotopes occur in nature. The two heavy isotopes of hydrogen are especially important— so much so that they have names and symbols of their own: deuterium, $\ce{^2H}$ (often written D), and tritium, $\ce{^3H}$ (or T).
Deuterium accounts for only about 150 out of every million atoms of hydrogen. Tritium, which is radioactive, is even less abundant; all the tritium on the earth is a by-product of the decay of other radioactive elements.
Atomic Weights
Atoms are of course far too small to be weighed directly; weight measurements can only be made on the massive (but unknown) numbers of atoms that are observed in chemical reactions. The early combining-weight experiments of Dalton and others established that hydrogen is the lightest of the atoms, but the crude nature of the measurements and uncertainties about the formulas of many compounds made it difficult to develop a reliable scale of the relative weights of atoms. Even the most exacting weight measurements we can make today are subject to experimental uncertainties that limit the precision to four significant figures at best.
Weighing atoms: Mass Spectrometry
An alternative way of examining the behavior of individual atomic particles became evident in 1912, when J.J. Thomson and F.W. Aston showed that a stream of gaseous neon atoms, broken up by means of an electrical discharge, yielded two kinds of subatomic particles having opposite electrical charges, as revealed by their deflections in externally-applied magnetic and electrostatic fields. (The deflections themselves could be observed by the spots the particles made when they impinged on a photographic plate.) This, combined with the finding made a year earlier by Wilhelm Wien that the degree of deflection of a particle in these fields is proportional to the ratio of its electric charge to its mass, opened the way to characterizing these otherwise invisible particles.
Neutral atoms, having no charge, cannot be accelerated along a path so as to form a beam, nor can they be deflected. They can, however, be made to acquire electric charges by directing an electron beam at them, and this was the basis of the first mass spectrometer developed by Thomson's former student F.W. Aston (1877-1945, 1922 Nobel Prize) in 1919. This enabled him to quickly identify 212 of the 287 naturally occurring isotopes.
The mass spectrometer has become one of the most widely used laboratory instruments. Mass spectrometry is now mostly used to identify molecules. Ionization usually breaks a molecule up into fragments having different charge-to-mass ratios, each molecule resulting in a unique "fingerprint" of particles whose origin can be deduced by a jigsaw puzzle-like reconstruction. For many years, "mass-spec" had been limited to small molecules, but with the development of novel ways of creating ions from molecules, it has now become a major tool for analyzing materials and large biomolecules, including proteins.
The scale of relative weights (the atomic weight scale) we now use is based on $\ce{^{12}_6C}$, whose relative mass is defined as exactly 12. Atomic weights are the ratios of the weights of an element to the weight of an identical number of $\ce{^{12}_6C}$ atoms. Being ratios, atomic weights are dimensionless.
From 1850 to 1961, the atomic weight scale was defined relative to oxygen = 16.
Example $3$: A Zillion Atoms
A certain number (call it "one zillion") of oxygen atoms weighs 1.200 g. What will be the weight of an equal number of lithium atoms?
Solution
From the atomic weight table, the mass ratio of Li/O = 6.94/16.00, so the weight of one zillion lithium atoms will be
$(1.200\; g) \times \dfrac{6.94}{16.00} = 0.520\; g \nonumber$
You can visualize the atomic weight scale as a long line of numbers that runs from 1 to around 280. The beginning of the scale looks like this:
You will notice that the relative masses of the different elements (shown in the upper part) are not all integers. If the nuclei all differ by integral numbers of protons and neutrons that have virtually identical masses, we would expect the atomic weights to be integers. Some are very close to integers (the reason they are not exactly integral will be explained in the next section), but many are nowhere near integral. This puzzling observation eventually led to the concept of isotopes.
The atomic weights that are determined experimentally and listed in tables are weighted averages of these isotopic mixtures.
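That is, if isotope i of an element has fractional abundance $f_i$ and mass $m_i$, the tabulated atomic weight is

$\text{atomic weight} = \sum_i f_i\, m_i$

as the following example illustrates.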
Example $4$: Estimating Average Atomic Weight
Estimate the average atomic weight of magnesium from its isotopic abundances: $\ce{^{24}Mg}$, 78.99%; $\ce{^{25}Mg}$, 10.00%; $\ce{^{26}Mg}$, 11.01%.
Solution
We just take the weighted average of the mass numbers:
(0.7899 × 24) + (0.1000 × 25) + (0.1101 × 26) = 24.32
Note: The measured atomic weight of Mg (24.305) is slightly smaller than this because atomic masses of nuclear components are not strictly additive, as will be explained further below.
When there are only two significantly abundant isotopes, you can estimate the relative abundances from the mass numbers and the average atomic weight. The following is a favorite exam problem:
Example $5$: Relative Abundances
The average atomic weight of chlorine is 35.45 and the element has two stable isotopes $\ce{^{35}_{17}Cl}$ and $\ce{^{37}_{17}Cl}$. Estimate the relative abundances of these two isotopes.
Solution
Here you finally get to put your high-school algebra to work! If we let x represent the fraction of $\ce{^{35}Cl}$, then (1-x) gives the fraction of $\ce{^{37}Cl}$. The weighted average atomic weight is then
35x + 37(1-x) = 35.45
Solving for x gives 2x = 1.55, x = 0.775, so the abundances are 77.5% $\ce{^{35}Cl}$ and 22.5% $\ce{^{37}Cl}$.
Example $6$: Mass Spectra
Elemental chlorine, Cl2, is made up of the two isotopes mentioned in the previous example. How many peaks would you expect to observe in the mass spectrum of Cl2?
Solution
The mass spectrometer will detect a peak for each possible combination of the two isotopes in dichlorine: 35Cl-35Cl, 35Cl-37Cl, and 37Cl-37Cl. Three peaks would therefore be observed, at nominal masses 70, 72, and 74.
Tables of Atomic Weights
Atomic weight tables are updated every few years as better data become available.
One peculiarity you might notice is that the number of significant figures varies from element to element. It tends to be highest for monoisotopic elements such as beryllium and fluorine. For some elements, the isotopic abundances vary slightly depending on the source; this variance reduces the useful precision of a value.
Atomic weights, molecular weights and formula weights
Molecules are composed of atoms, so a molecular weight is just the sum of the atomic weights of the elements it contains.
Example $7$: Molecular Weight of Sulfuric Acid
What is the molecular weight of sulfuric acid, $H_2SO_4$?
Solution
The atomic weights of hydrogen and of oxygen are 1.01 and 16.00, respectively (you should have these common values memorized.) From a table, you can find that the atomic weight of sulfur is 32.06. Adding everything up, we have
$(2 \times 1.01) + 32.06 + (4 \times 16.00) = 98.08$
Because some solids are not made up of discrete molecules (sodium chloride, NaCl, and silica, SiO2 are common examples), the term formula weight is often used in place of molecular weight. In general, the terms molecular weight and formula weight are interchangeable.
Isotopic Fractionation
The isotopes of a given element are so similar in their chemical behavior that what small differences may exist can be considered negligible for most practical purposes. However, heavier isotopes do tend to react or evaporate slightly more slowly than lighter ones, so that given enough time, various geochemical processes can result in an enrichment of one isotope over the other, an effect known as geochemical isotopic fractionation.
What differences do exist are most evident in the lighter elements, and especially in hydrogen, whose three isotopes differ in mass by relatively large amounts. Thus "heavy water", D2O (2H2O), is not decomposed by electrolysis quite as rapidly as is 1H2O, so it becomes enriched in the un-decomposed portion of the water in an electrolysis apparatus. Its boiling point is 101.7°C, and it freezes at 3.8°C. Animals will die if they drink heavy water in place of ordinary water.
The minute differences between the behaviors of most isotopes constitute an invaluable tool for research in geochemistry. For example, the tiny fraction of water molecules containing $\ce{^{18}O}$ evaporates more slowly than the lighter (and far more abundant) $\ce{H2^{16}O}$. But the ratio of $\ce{^{18}O}$ to $\ce{^{16}O}$ in the water that evaporates depends on the temperature at which this process occurs. By observing this ratio in glacial ice cores and in marine carbonate deposits, it is possible to determine the average temperature of the earth at various times in the past.
Atomic masses
Here again is the beginning of the atomic weight scale that you saw above:
You understand by now that atomic weights are relative weights, based on a scale defined by $\ce{^{12}_6C}$ = 12. But what is the absolute weight of an atom, expressed in grams or kilograms? In other words, what actual mass does each unit on the atomic weight scale represent?
The answer is 1.66053886 × 10–27 kg. This quantity is known as the unified atomic mass unit, denoted by the abbreviation u or amu, and is defined as 1/12 of the mass of one atom of carbon-12. Fortunately, you do not need to memorize its value, because you can easily calculate it from Avogadro's number, NA, which you are expected to know:
$1\, u = \dfrac{1}{N_A} \;g = \dfrac{1}{1000\; N_A} \;kg$
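Numerically,

$1\, u = \dfrac{1}{6.022 \times 10^{23}}\; g = 1.66 \times 10^{–24}\; g = 1.66 \times 10^{–27}\; kg$

in agreement with the value quoted above.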
Note: Definition of Atomic Mass Unit
The unified atomic mass unit is defined as 1/12 of the mass of one atom of carbon-12.
Masses of the Subatomic Particles
Atoms are composed of protons, neutrons, and electrons, whose properties are shown below:
Table $1$: Masses of Subatomic Particles
| particle | mass, g | mass, u | charge | symbol |
|---|---|---|---|---|
| electron | 9.1093897 × 10–28 | 5.48579903 × 10–4 | 1– | $\ce{^0_{-1}e}$ |
| proton | 1.6726231 × 10–24 | 1.007276470 | 1+ | $\ce{^1_1H^{+}}$ or $\ce{^1_1p}$ |
| neutron | 1.6749286 × 10–24 | 1.008664904 | 0 | $\ce{^1_0n}$ |
Two important points should be noted from Table $1$:
• The mass of the electron is negligible compared to that of the two nuclear particles;
• The proton and neutron have masses that are almost, but not exactly, identical.
Nuclear Masses
As we mentioned in one of the problem examples above, the mass of a nucleus is always slightly different from the sum of the masses of the nucleons (protons and neutrons) of which it is composed. The difference, known as the mass defect, is related to the energy associated with the formation of the nucleus through Einstein's famous formula $E = mc^2$. This is the one instance in chemistry in which conservation of mass-energy, rather than of mass alone, must be taken into account. But there is no need for you to be concerned with this in this part of the course.
For all practical purposes, until you come to the section of the course on nuclear chemistry, you can consider that the proton and neutron have masses of about 1 u, and that the mass of an atom (in u) is just the sum of the neutron and proton numbers.
Learning Objectives
Make sure you thoroughly understand the following essential ideas:
• Define Avogadro's number and explain why it is important to know.
• Define the mole. Be able to calculate the number of moles in a given mass of a substance, or the mass corresponding to a given number of moles.
• Define molecular weight, formula weight, and molar mass; explain how the latter differs from the first two.
• Be able to find the number of atoms or molecules in a given weight of a substance.
• Find the molar volume of a solid or liquid, given its density and molar mass.
• Explain how the molar volume of a metallic solid can lead to an estimate of atomic diameter.
The chemical changes we observe always involve discrete numbers of atoms that rearrange themselves into new configurations. These numbers are HUGE— far too large in magnitude for us to count or even visualize, but they are still numbers, and we need to have a way to deal with them. We also need a bridge between these numbers, which we are unable to measure directly, and the weights of substances, which we do measure and observe. The mole concept provides this bridge, and is central to all of quantitative chemistry.
Counting Atoms: Avogadro's Number
Owing to their tiny size, atoms and molecules cannot be counted by direct observation. But much as we do when "counting" beans in a jar, we can estimate the number of particles in a sample of an element or compound if we have some idea of the volume occupied by each particle and the volume of the container. Once this has been done, we know the number of formula units (to use the most general term for any combination of atoms we wish to define) in any arbitrary weight of the substance. The number will of course depend both on the formula of the substance and on the weight of the sample. However, if we consider a weight of substance that is the same as its formula (molecular) weight expressed in grams, we have only one number to know: Avogadro's number.
Avogadro's number
Avogadro's number is known to ten significant digits:
$N_A = 6.022141527 \times 10^{23}.$
However, you only need to know it to three significant figures:
$N_A \approx 6.02 \times 10^{23}. \label{3.2.1}$
So $6.02 \times 10^{23}$ of what? Well, of anything you like: apples, stars in the sky, burritos. However, the only practical use for $N_A$ is to have a more convenient way of expressing the huge numbers of the tiny particles such as atoms or molecules that we deal with in chemistry. Avogadro's number is a collective number, just like a dozen. Students can think of $6.02 \times 10^{23}$ as the "chemist's dozen".
Before getting into the use of Avogadro's number in problems, take a moment to convince yourself of the reasoning embodied in the following examples.
Example $1$: Mass ratio from atomic weights
The atomic weights of oxygen and carbon are 16.0 and 12.0 atomic mass units ($u$), respectively. How much heavier is the oxygen atom in relation to carbon?
Solution
Atomic weights represent the relative masses of different kinds of atoms. This means that the atom of oxygen has a mass that is
$\dfrac{16\, \cancel{u}}{12\, \cancel{u}} = \dfrac{4}{3} ≈ 1.33 \nonumber$
as great as the mass of a carbon atom.
Example $2$: Mass of a single atom
The absolute mass of a carbon atom is 12.0 unified atomic mass units ($u$). How many grams will a single oxygen atom weigh?
Solution
The absolute mass of a carbon atom is 12.0 $u$ or
$12\,\cancel{u} \times \dfrac{1.6605 \times 10^{–24}\, g}{1 \,\cancel{u}} = 1.99 \times 10^{–23} \, g \text{ (per carbon atom)} \nonumber$
The mass of the oxygen atom will be 4/3 greater (from Example $1$):
$\left( \dfrac{4}{3} \right) 1.99 \times 10^{–23} \, g = 2.66 \times 10^{–23} \, g \text{ (per oxygen atom)} \nonumber$
Alternatively we can do the calculation directly like with carbon:
$16\,\cancel{u} \times \dfrac{1.6605 \times 10^{–24}\, g}{1 \,\cancel{u}} = 2.66 \times 10^{–23} \, g \text{ (per oxygen atom)} \nonumber$
Example $3$: Relative masses from atomic weights
Suppose that we have $N$ carbon atoms, where $N$ is a number large enough to give us a pile of carbon atoms whose mass is 12.0 grams. How much would the same number, $N$, of oxygen atoms weigh?
Solution
We use the results from Example $1$ again. The collection of $N$ oxygen atoms would have a mass of
$\dfrac{4}{3} \times 12\, g = 16.0\, g. \nonumber$
Exercise $1$
What is the numerical value of $N$ in Example $3$?
Answer
Using the results of Examples $2$ and $3$.
$N \times 1.99 \times 10^{–23} \, g \text{ (per carbon atom)} = 12\, g \nonumber$
or
$N = \dfrac{12\, \cancel{g}}{1.99 \times 10^{–23} \, \cancel{g} \text{ (per carbon atom)}} = 6.03 \times 10^{23} \text{atoms} \nonumber$
There are a lot of atoms in 12 g of carbon.
Things to understand about Avogadro's number
• It is a number, just as is "dozen", and thus is dimensionless.
• It is a huge number, far greater in magnitude than we can visualize
• Its practical use is limited to counting tiny things like atoms, molecules, "formula units", electrons, or photons.
• The value of NA can be known only to the precision that the number of atoms in a measurable weight of a substance can be estimated. Because large numbers of atoms cannot be counted directly, a variety of ingenious indirect measurements have been made involving such things as Brownian motion and X-ray scattering.
• The current value was determined by measuring the distances between the atoms of silicon in an ultrapure crystal of this element that was shaped into a perfect sphere. (The measurement was made by X-ray scattering.) When combined with the measured mass of this sphere, it yields Avogadro's number. However, there are two problems with this:
• The silicon sphere is an artifact, rather than being something that occurs in nature, and thus may not be perfectly reproducible.
• The standard of mass, the kilogram, is not precisely known, and its value appears to be changing. For these reasons, there are proposals to revise the definitions of both NA and the kilogram.
Moles and their Uses
The mole (abbreviated mol) is the SI measure of quantity of a "chemical entity", which can be an atom, molecule, formula unit, electron or photon. One mole of anything is just Avogadro's number of that something. Or, if you think like a lawyer, you might prefer the official SI definition:
Definition: The Mole
The mole is the amount of substance of a system which contains as many elementary entities as there are atoms in 0.012 kilogram of carbon 12
Avogadro's number (Equation \ref{3.2.1}) like any pure number, is dimensionless. However, it also defines the mole, so we can also express NA as 6.02 × 1023 mol–1; in this form, it is properly known as Avogadro's constant. This construction emphasizes the role of Avogadro's number as a conversion factor between number of moles and number of "entities".
Example $4$: number of moles in N particles
How many moles of nickel atoms are there in 80 nickel atoms?
Solution
$\dfrac{80 \;atoms}{6.02 \times 10^{23} \; atoms\; mol^{-1}} = 1.33 \times 10^{-22} mol \nonumber$
Is this answer reasonable? Yes, because 80 is an extremely small fraction of $N_A$.
Molar Mass
The atomic weight, molecular weight, or formula weight of one mole of the fundamental units (atoms, molecules, or groups of atoms that correspond to the formula of a pure substance) is the ratio of its mass to 1/12 the mass of one mole of $\ce{^{12}C}$ atoms, and being a ratio, is dimensionless. But at the same time, this molar mass (as many now prefer to call it) is also the observable mass of one mole (NA) of the substance, so we frequently emphasize this by stating it explicitly as so many grams (or kilograms) per mole: g mol–1.
It is important always to bear in mind that the mole is a number and not a mass. But each individual particle has a mass of its own, so a mole of any specific substance will always correspond to a certain mass of that substance.
Example $5$: Boron content of borax
Borax is the common name of sodium tetraborate, $\ce{Na2B4O7}$.
1. how many moles of boron are present in 20.0 g of borax?
2. how many grams of boron are present in 20.0 g of borax?
Solution
The formula weight of $\ce{Na2B4O7}$ is found by summing the atomic weights of its constituent atoms:
$(2 \times 23.0) + (4 \times 10.8) + (7 \times 16.0) = 201.2 \nonumber$
1. 20 g of borax contains (20.0 g) ÷ (201 g mol–1) = 0.10 mol of borax, and thus 0.40 mol of B.
2. 0.40 mol of boron has a mass of (0.40 mol) × (10.8 g mol–1) = 4.3 g.
Example $6$: Magnesium in chlorophyll
The plant photosynthetic pigment chlorophyll contains 2.68 percent magnesium by weight. How many atoms of Mg will there be in 1.00 g of chlorophyll?
Solution
Each gram of chlorophyll contains 0.0268 g of Mg, atomic weight 24.3.
• Number of moles in this weight of Mg: (0.0268 g) / (24.3 g mol–1) = 0.00110 mol
• Number of atoms: (0.00110 mol) × $(6.02 \times 10^{23}\; mol^{–1}) = 6.64 \times 10^{20}$
Is this answer reasonable? (Always be suspicious of huge-number answers!) Yes, because we would expect to have huge numbers of atoms in any observable quantity of a substance.
Molar Volume
This is the volume occupied by one mole of a pure substance. Molar volume depends on the density of a substance and, like density, varies with temperature owing to thermal expansion, and also with the pressure. For solids and liquids, these variables ordinarily have little practical effect, so the values quoted for 1 atm pressure and 25°C are generally useful over a fairly wide range of conditions. This is definitely not the case with gases, whose molar volumes must be calculated for a specific temperature and pressure.
Example $7$: Molar Volume of a Liquid
Methanol, CH3OH, is a liquid having a density of 0.79 g per milliliter. Calculate the molar volume of methanol.
Solution
The molar volume will be the volume occupied by one molar mass (32 g) of the liquid. Expressing the density in grams per liter instead of grams per milliliter, we have
$V_M = \dfrac{32\; g\; mol^{–1}}{790\; g\; L^{–1}}= 0.0405 \;L \;mol^{–1} \nonumber$
The molar volume of a metallic element allows one to estimate the size of the atom. The idea is to mentally divide a piece of the metal into as many little cubic boxes as there are atoms, and then calculate the length of each box. Assuming that an atom sits in the center of each box and that each atom is in direct contact with its six neighbors (two along each dimension), this gives the diameter of the atom. The manner in which atoms pack together in actual metallic crystals is usually more complicated than this and it varies from metal to metal, so this calculation only provides an approximate value.
Example $8$: Radius of a Strontium Atom
The density of metallic strontium is 2.60 g cm–3. Use this value to estimate the radius of the atom of Sr, whose atomic weight is 87.6.
Solution
The molar volume of Sr is:
$\dfrac{87.6 \; g \; mol^{-1}}{2.60\; g\; cm^{-3}} = 33.7\; cm^3\; mol^{–1}$
The volume of each "box" is:
$\dfrac{33.7\; cm^3 mol^{–1}} {6.02 \times 10^{23}\; mol^{–1}} = 5.60 \times 10^{-23}\; cm^3$
The side length of each box will be the cube root of this value, $3.83 \times 10^{–8}\; cm$. The atomic radius will be half this value, or
$1.9 \times 10^{–8}\; cm = 1.9 \times 10^{–10}\; m = 190 pm$
Note: Your calculator probably has no cube root button, but you are expected to be able to find cube roots; you can usually use the x^y button with y = 0.333. You should also be able to estimate the magnitude of this value for checking. The easiest way is to express the number so that the exponent is a multiple of 3. Take $54 \times 10^{-24}$, for example. Since $3^3 = 27$ and $4^3 = 64$, you know that the cube root of 54 will be between 3 and 4, so the cube root should be a bit less than $4 \times 10^{–8}$.
So how good is our atomic radius? Standard tables give the atomic radius of strontium as being in the range 192–220 pm.
Learning Objectives
Make sure you thoroughly understand the following essential concepts that have been presented above.
• Explain why the symbol of an element often differs from the formula of the element.
• Define an ion, and explain the meaning of its formula.
• Find the simplest ("empirical") formula of a substance from a more complex molecular formula. Explain the meaning of the formula of an ionic solid such as NaCl.
• Define molecular weight, formula weight, and molar mass. Calculate any of these from any chemical formula.
• Given a chemical formula, express the mole ratios of any two elements, or the mole fraction of one of its elements.
• Find the percentage composition of a compound from its formula.
• Calculate the mass ratio of any two elements present in a compound from its formula.
• Find the empirical formula of a binary compound from the mole ratio of its two elements, expressed as a decimal number.
• Find the empirical formula of a binary compound from the mass ratio of its two elements.
• Find the empirical formula of a compound from its mass- or percentage composition.
At the heart of chemistry are substances — elements or compounds — which have a definite composition, expressed by a chemical formula. In this unit you will learn how to write and interpret chemical formulas both in terms of moles and masses, and how to go in the reverse direction, using experimental information about the composition of a compound to work out its formula.
The formula of a compound specifies the number of each kind of atom present in one molecular unit of a compound. Since every unique chemical substance has a definite composition, every such substance must be describable by a chemical formula.
Example \(1\): Writing a Molecular Formula
The well-known alcohol ethanol is composed of molecules containing two atoms of carbon, six atoms of hydrogen, and one atom of oxygen. What is its molecular formula?
Solution
Just write the symbol of each element, followed by a subscript indicating the number of atoms if more than one is present. Thus: C2H6O
Note that:
• The number of atoms of each element in a molecular formula is written as a subscript;
• When only a single atom of an element in a molecular formula is present, the subscript is omitted.
• In the case of organic (carbon-containing) compounds, it is customary to place the symbols of the elements C, H, (and if present,) O, N in this order in the formula.
Formulas of Elements and Ions
The symbol of an element is the one- or two-letter combination that represents the atom of a particular element, such as Au (gold) or O (oxygen). The symbol can be used as an abbreviation for an element name (it is easier to write "Mo" instead of "molybdenum"!) In more formal chemical use, an element symbol can also stand for one atom, or, depending on the context, for one mole of atoms of the element.
Some of the non-metallic elements exist in the form of molecules containing two or more atoms of the element. These molecules are described by formulas such as N2, S6, and P4. Some of these elements can form more than one kind of molecule; the best-known example of this is oxygen, which can exist as O2 (the common form that makes up 21% of the molecules in air), and also as O3, an unstable and highly reactive molecule known as ozone. The soccer-ball-shaped carbon molecules sometimes called buckyballs have the formula C60.
Allotropes
Different molecular forms of the same element (such as \(\ce{O_2}\) and \(\ce{O_3})\) are called allotropes.
Ions are atoms or molecules that carry an electrical charge. These charges are represented as superscripts in the ionic formulas. Thus:
\(\ce{Cl^{-}}\) the chloride ion, with one negative charge per atom
\(\ce{S^{2-}}\) the sulfide ion carries two negative charges
\(\ce{CO3^{2-}}\) the carbonate ion— a molecular ion
\(\ce{NH4^{+}}\) the ammonium ion
Note that the number of charges (in units of the electron charge) should always precede the positive or negative sign, but this number is omitted when the charge is ±1.
Formulas of Extended Solids
In solid CdCl2, the Cl and Cd atoms are organized into sheets that extend indefinitely. Each atom is surrounded by six atoms of the opposite kind, so one can arbitrarily select any Cl–Cd–Cl grouping as the "molecular unit" — but such a unit does not constitute a discrete "molecule" of CdCl2.
Many apparently "simple" solids exist only as ionic solids (such as NaCl) or as extended solids (such as CuCl2) in which no discrete molecules can be identified. The formulas we write for these compounds simply express relative numbers of the different kinds of atoms in the compound in the smallest possible integer numbers. These are identical with the empirical or "simplest" formulas that we discuss further on.
Many minerals and most rocks contain varying ratios of certain elements and can only be precisely characterized at the structural level. Because these are usually not pure substances, the "formulas" conventionally used to describe them have limited meanings. For example the common rock olivine, which can be considered a solid solution of Mg2SiO4 and Fe2SiO4, can be represented by (Mg,Fe)2SiO4. This implies that the ratio of the metals to SiO4 is constant, and that magnesium is usually present in greater amount than iron.
Empirical Formulas
Empirical formulas give the relative numbers of the different elements in a sample of a compound, expressed in the smallest possible integers. The term empirical refers to the fact that formulas of this kind are determined experimentally; such formulas are also commonly referred to as simplest formulas.
Example \(2\): Empirical formula from molecular formula
Glucose (the "fuel" your body runs on) is composed of molecular units having the formula C6H12O6. What is the empirical formula of glucose?
Solution
The glucose molecule contains twice as many atoms of hydrogen as carbons or oxygens, so we divide through by 6 to get CH2O.
Note: this empirical formula, which applies to all 6-carbon sugars, indicates that these compounds are "composed" of carbon and water, which explains why sugars are known as carbohydrates.
Some solid compounds do not exist as discrete molecular units, but are built up as extended two- or three-dimensional lattices of atoms or ions. The compositions of such compounds are commonly described by their empirical formulas. In the very common case of ionic solids, such a formula also expresses the minimum numbers of positive and negative ions required to produce an electrically neutral unit, as in NaCl or CuCl2.
Example \(3\): Molecular formula from ionic charges
1. Write the formula of ferric bromide, given that the ferric (iron-III) ion is Fe3+ and the bromide ion carries a single negative charge.
2. Write the formula of bismuth sulfide, formed when the ions Bi3+ and S2– combine.
Solution:
1. Three Br– ions are required to balance the three positive charges of Fe3+, hence the formula FeBr3.
2. The only way to get equal numbers of opposite charges is to have six charges of each sign, so the formula will be Bi2S3.
What formulas do not tell us
The formulas we ordinarily write convey no information about the compound's structure— that is, the order in which the atoms are connected by chemical bonds or are arranged in three-dimensional space. This limitation is especially significant in organic compounds, in which hundreds if not thousands of different molecules may share the same empirical formula. For example, ethanol and dimethyl ether both have the formula C2H6O, but their structural formulas reveal the very different nature of these two molecules.
More Complex Formulas
It is often useful to write formulas in such a way as to convey at least some information about the structure of a compound. For example, the formula of the solid (NH4)2CO3 is immediately identifiable as ammonium carbonate, essentially a compound of ammonium and carbonate ions in a 2:1 ratio, whereas the simplest or empirical formula N2H8CO3 obscures this information.
Similarly, the distinction between ethanol and dimethyl ether can be made by writing the formulas as C2H5OH and CH3–O–CH3, respectively. Although neither of these formulas specifies the structures precisely, anyone who has studied organic chemistry can work them out, and will immediately recognize the –OH (hydroxyl) group which is the defining characteristic of the large class of organic compounds known as alcohols. The –O– atom linking two carbons is similarly the defining feature of ethers.
Several related terms are used to express the mass of one mole of a substance.
• Molecular weight: This is analogous to atomic weight: it is the relative weight of one formula unit of the compound, based on the carbon-12 scale. The molecular weight is found by adding the atomic weights of all the atoms present in the formula unit. Molecular weights, like atomic weights, are dimensionless; i.e., they have no units.
• Formula weight: The same thing as molecular weight. This term is sometimes used in connection with ionic solids and other substances in which discrete molecules do not exist.
• Molar mass: The mass (in grams, kilograms, or any other unit) of one mole of particles or formula units. When expressed in grams, the molar mass is numerically the same as the molecular weight, but it must be accompanied by the mass unit.
Example \(4\): Formula weight and molar mass
1. Calculate the formula weight of copper(II) chloride, \(\ce{CuCl2}\).
2. How would you express this same quantity as a molar mass?
Solution
1. The atomic weights of Cu and Cl are, respectively, 63.55 and 35.45; the sum of each atomic weight, multiplied by the number of each kind of atom in the formula unit, yields: \[ 63.55 + 2(35.45) = 134.45.\]
2. The masses of one mole of Cu and Cl atoms are, respectively, 63.55 g and 35.45 g; the mass of one mole of CuCl2 units is: \[(63.55\, g) + 2(35.45\, g) = 134.45\, g.\]
Interpreting formulas in terms of mole ratios and mole fractions
The information contained in formulas can be used to compare the compositions of related compounds as in the following example:
Example \(5\): mole ratio calculation
The ratio of hydrogen to carbon is often of interest in comparing different fuels. Calculate these ratios for methanol (CH3OH) and ethanol (C2H5OH).
Solution
The H:C ratios for the two alcohols are 4:1 = 4.0 for methanol and 6:2 = 3.0 for ethanol.
Alternatively, one sometimes uses mole fractions to express the same thing. The mole fraction of an element M in a compound is just the number of atoms of M divided by the total number of atoms in the formula unit.
Example \(6\): mole fraction and mole percent
Calculate the mole fraction and mole-percent of carbon in ethanol (C2H5OH).
Solution
The formula unit contains nine atoms, two of which are carbon. The mole fraction of carbon in the compound is 2/9 = 0.22. Thus 22 percent of the atoms in ethanol are carbon.
Interpreting formulas in terms of masses of the elements
Since the formula of a compound expresses the ratio of the numbers of its constituent atoms, a formula also conveys information about the relative masses of the elements it contains. But in order to make this connection, we need to know the relative masses of the different elements.
Example \(7\): mass of each element in a given mass of compound
Find the masses of carbon, hydrogen and oxygen in one mole of ethanol (C2H5OH).
Solution
Using the atomic weights (molar masses) of these three elements, we have
• carbon: (2 mol)(12.0 g mol–1) = 24 g of C
• hydrogen: (6 mol)(1.01 g mol–1) = 6 g of H
• oxygen: (1 mol)(16.0 g mol–1) = 16 g of O
The mass fraction of an element in a compound is just the ratio of the mass of that element to the mass of the entire formula unit. Mass fractions are always between 0 and 1, but are frequently expressed as percent.
Example \(8\): Mass fraction & Mass percent of an element in a Compound
Find the mass fraction and mass percentage of oxygen in ethanol (C2H5OH)
Solution
Using the information developed in the preceding example, the molar mass of ethanol is (24 + 6 + 16)g mol–1 = 46 g mol–1. Of this, 16 g is due to oxygen, so its mass fraction in the compound is (16 g)/(46 g) = 0.35 which corresponds to 35%.
Finding the percentage composition of a compound from its formula is a fundamental calculation that you must master; the technique is exactly as shown above. Finding a mass fraction is often the first step in solving related kinds of problems:
Example \(9\): Mass of an Element in a Given mass of Compound
How many tons of potassium are contained in 10 tons of KCl?
Solution
The mass fraction of K in KCl is 39.1/74.6 = 0.524; 10 tons of KCl contains (39.1/74.6) × 10 tons of K, or 5.24 tons of K. (Atomic weights: K = 39.1, Cl = 35.5.)
Note that there is no need to deal explicitly with moles, which would require converting tons to kg.
Example \(10\): Mass of compound containing given mass of an element
How many grams of KCl will contain 10 g of potassium?
Solution
The mass ratio of KCl/K is 74.6 ÷ 39.1; 10 g of potassium will be present in (74.6/39.1) × 10 grams of KCl, or 19 grams.
Mass ratios of two elements in a compound can be found directly from the mole ratios that are expressed in formulas.
Example \(11\): Mass ratio of elements from formula
Molten magnesium chloride (MgCl2) can be decomposed into its elements by passing an electric current through it. How many kg of chlorine will be released when 2.5 kg of magnesium is formed? (Mg = 24.3, Cl = 35.5)
Solution
The mass ratio of Cl/Mg is (35.5 × 2)/24.3, or 2.9; thus 2.9 kg of chlorine will be released for every kg of Mg, or (2.9 × 2.5) = 7.3 kg of chlorine for 2.5 kg of Mg. (Note that it is not necessary to know the formula of elemental chlorine (Cl2) in order to solve this problem.)
Empirical formulas from Experimental data
As was explained above, the empirical formula (or simplest formula) is one in which the relative numbers of the various elements are expressed in the smallest possible whole numbers. Aluminum chloride, for example, exists in the form of structural units having the composition Al2Cl6; the empirical formula of this substance is AlCl3. Some methods of analysis provide information about the relative numbers of the different kinds of atoms in a compound. The process of finding the formula of a compound from an analysis of its composition depends on your ability to recognize the decimal equivalents of common integer ratios such as 2:3, 3:2, 4:5, etc.
Example \(12\): Empirical formula from mole ratio
Analysis of an aluminum compound showed that 1.7 mol of Al is combined with 5.1 mol of chlorine. Write the empirical formula of this compound.
Solution
The formula Al1.7Cl5.1 expresses the relative numbers of moles of the two elements in the compound. It can be converted into the empirical formula by dividing both subscripts by the smaller one, yielding AlCl3 .
More commonly, an arbitrary mass of a compound is found to contain certain masses of its elements. These must be converted to moles in order to find the formula.
Example \(13\): Empirical formula from combustion masses
In a student lab experiment, it was found that 0.5684 g of magnesium burns in air to form 0.9426 g of magnesium oxide. Find the empirical formula of this compound. Atomic weights: Mg = 24.305, O=16.00.
Solution
The mass of oxygen that combined with the magnesium is (0.9426 – 0.5684) g = 0.3742 g.
• moles of magnesium: (0.5684 g)/(24.305 g/mol) = 0.02339 mol Mg
• moles of oxygen: (0.3742 g)/(16.00 g/mol) = 0.02339 mol O
• mole ratio of Mg/O = 0.02339/0.02339 = 1.0;
this corresponds to the empirical formula MgO.
Example \(14\): Empirical formula from element masses
A 4.67-g sample of an aluminum compound was found to contain 0.945 g of Al and 3.72 g of Cl. Find the empirical formula of this compound. Atomic weights: Al = 27.0, Cl=35.45.
Solution
The sample contains (0.945 g)/(27.0 g mol–1) = 0.035 mol of aluminum and (3.72 g)/(35.45 g mol–1) = 0.105 mol of chlorine. The formula Al0.035Cl0.105 expresses the relative numbers of moles of the two elements in the compound. It can be converted into the empirical formula by dividing both subscripts by the smaller one, yielding AlCl3.
The composition of a binary (two-element) compound is sometimes expressed as a mass ratio. The easiest approach here is to treat the numbers that express the ratio as masses, thus turning the problem into the kind described immediately above.
Example \(15\): Empirical Formula from element mass ratio
A compound composed of only carbon and oxygen contains these two elements in a mass ratio C:O of 0.375. Find the empirical formula.
Solution
Express this ratio as 0.375 g of C to 1.00 g of O.
• moles of carbon: (0.375 g)/(12 g/mol) = 0.03125 mol C;
• moles of oxygen: (1.00 g)/(16 g/mol) = 0.0625 mol O
• mole ratio of C/O = 0.03125/0.0625 = 0.5;
this corresponds to the formula C0.5O, which we express in integers as CO2.
The composition-by-mass of a compound is most commonly expressed as weight percent (grams per 100 grams of compound). The first step is again to convert these to relative numbers of moles of each element in a fixed mass of the compound. Although this fixed mass is completely arbitrary (there is nothing special about 100 grams!), the ratios of the mole amounts of the various elements are not arbitrary: these ratios must be expressible as integers, since they represent ratios of integral numbers of atoms.
Example \(16\): Empirical Formula from mass-percent composition
Find the empirical formula of a compound having the following mass-percent composition. Atomic weights are given in parentheses: 36.4 % Mn (54.9), 21.2 % S (32.06), 42.4 % O (16.0)
Solution
100 g of this compound contains:
• Mn: (36.4 g) / (54.9 g mol–1) = 0.663 mol
• S: (21.2 g) / (32.06 g mol–1) = 0.660 mol
• O: (42.4 g) / (16.0 g mol–1) = 2.65 mol
The formula Mn0.663S0.660O2.65 expresses the relative numbers of moles of the three elements in the compound. It can be converted into the empirical formula by dividing all subscripts by the smallest one, yielding Mn1.00S1.00O4.01, which we write as MnSO4.
Note: because experimentally-determined masses are subject to small errors, it is usually necessary to neglect small deviations from integer values.
Example \(17\): Empirical Formula from mass-percent composition
Find the empirical formula of a compound having the following mass-percent composition. Atomic weights are given in parentheses: 27.6 % Mn (54.9), 24.2 % S (32.06), 48.2 % O (16.0).
Solution
A preliminary formula based on 100 g of this compound can be written as Mn0.503S0.754O3.01.
Dividing through by the smallest subscript yields Mn1S1.5O6 . Inspection of this formula suggests that multiplying each subscript by 2 yields the all-integer formula Mn2S3O12 .
Notes on experimental methods
One of the most fundamental operations in chemistry consists of breaking down a compound into its elements (a process known as analysis) and then determining the empirical formula from the relative amounts of each kind of atom present in the compound. In only a very few cases is it practical to carry out such a process directly: thus heating mercury(II) oxide results in its direct decomposition:
\[\ce{2 HgO -> 2Hg + O2}.\]
Similarly, electrolysis of water produces the gases H2 and O2 in a 2:1 volume ratio.
Most elemental analyses must be carried out indirectly, however. The most widely used of these methods has traditionally been the combustion analysis of organic compounds. An unknown compound CaHbOc can be characterized by heating it in an oxygen stream so that it is completely decomposed into gaseous CO2 and H2O. These gases are passed through tubes containing substances which absorb each gas selectively. By carefully weighing each tube before and after the combustion process, the subscripts a and b for carbon and hydrogen, respectively, can be calculated. The subscript c for oxygen is found by subtracting the calculated masses of carbon and hydrogen from that of the original sample.
Since the 1970s, it has been possible to carry out combustion analyses with automated equipment; modern instruments can also determine nitrogen and sulfur.
Measurements of mass or weight have long been the principal tool for understanding chemical change in a quantitative way. Balances and weighing scales have been in use for commercial and pharmaceutical purposes since the beginning of recorded history, but these devices lacked the 0.001-g precision required for quantitative chemistry and elemental analysis carried out on the laboratory scale.
It was not until the mid-18th century that the Scottish chemist Joseph Black invented the equal arm analytical balance. The key feature of this invention was a lightweight, rigid beam supported on a knife-edged fulcrum; additional knife-edges supported the weighing pans. The knife-edges greatly reduced the friction that limited the sensitivity of previous designs; it is no coincidence that accurate measurements of combining weights and atomic weights began at about this time.
Analytical balances are enclosed in a glass case to avoid interference from air currents, and the calibrated weights are handled with forceps to prevent adsorption of moisture or oils from bare fingers.
Anyone who was enrolled in college-level general chemistry up through the 1960's will recall the training (and tedium) associated with these devices. These could read directly to 1 milligram and allow estimates to ±0.1 mg. Later technical refinements added magnetic damping of beam swinging, pan brakes, and built-in weight sets operated by knobs. The very best research-grade balances achieved precisions of 0.001 mg.
Beginning in the 1970's, electronic balances have come into wide use, with single-pan types being especially popular. A single-pan balance eliminates the need for comparing the weight of the sample with that of calibrated weights. Addition of a sample to the pan causes a displacement of a load cell, which generates a compensating electromagnetic field of sufficient magnitude to raise the pan to its original position. The current required to accomplish this is sensed and converted into a weight measurement. The best research-grade electronic balances can read to 1 microgram, but 0.1-mg sensitivities are more common for student laboratory use.
Learning Objectives
Make sure you thoroughly understand the following essential ideas
• Given the formulas of reactants and products, write a balanced chemical equation for the reaction.
• Given the relative solubilities, write a balanced net ionic equation for a reaction between aqueous solutions of two ionic compounds.
• Write appropriate chemical conversion factors to calculate the masses of all components of a chemical reaction when the mass of any single component is specified in any system of units.
• Given the amounts of two reactants, identify the limiting reactant and find the masses of all components present when the reaction is complete.
• Describe the manner in which the concept of limiting reactant relates to combustion and human exercise.
A chemical equation expresses the net change in composition associated with a chemical reaction by showing the number of moles of reactants and products. But because each component has its own molar mass, equations also implicitly define the way in which the masses of products and reactants are related. In this unit we will concentrate on understanding and making use of these mass relations.
In a chemical reaction, one or more reactants are transformed into products:
reactants → products
The purpose of a chemical equation is to express this relation in terms of the formulas of the actual reactants and products that define a particular chemical change. For example, the reaction of mercury with oxygen to produce mercuric oxide would be expressed by the equation
2 Hg + O2 → 2 HgO
Sometimes, for convenience, it is desirable to indicate the physical state (gas, liquid or solid) of one or more of the species by appropriate abbreviations:
2 Hg(l) + O2(g) → 2 HgO(s)
C(graphite) + O2(g) → CO2(g)
C(diamond) + O2(g) → CO2(g)
However, this is always optional.
How to read and write chemical equations
A chemical equation is a statement of fact: it expresses the net change that occurs as the result of a chemical reaction. In doing so, it must be consistent with the law of conservation of mass:
In the context of an ordinary chemical reaction, conservation of mass means that atoms are neither created nor destroyed. This requirement is easily met by making sure that there are equal numbers of all atoms on both sides of the equation.
When we balance an equation, we simply make it consistent with the observed fact that individual atoms are conserved in chemical changes. There is no set “recipe’’ for balancing ordinary chemical equations; it is best to begin by carefully studying selected examples such as those given below.
Example $1$: Combustion of Propane
Write a balanced equation for the combustion of propane C3H8 in oxygen O2. The products are carbon dioxide CO2 and water H2O.
Solution
Begin by writing the unbalanced equation
$C_3H_8 + O_2 → CO_2 + H_2O \nonumber$
It is usually best to begin by balancing compounds containing the least abundant element, so we first balance the equation for carbon:
C3H8 + O2 → 3 CO2 + H2O
In balancing the oxygen, we see that there is no way that an even number of O2 molecules on the left can yield the uneven number of O atoms shown on the right. Don't worry about this now— just use the appropriate fractional coefficient:
C3H8 + 3 ½ O2 → 3 CO2 + H2O
Finally, we balance the hydrogens by adding more waters on the right:
C3H8 + 7/2 O2 → 3 CO2 + 4 H2O
Ah, but now the oxygens are off again — fixing this also allows us to get rid of the fraction on the left side:
C3H8 + 5 O2 → 3 CO2 + 4 H2O
It often happens, however, that we do end up with a fractional coefficient, as in this variant of the above example.
Example $2$: combustion of ethane
Write a balanced equation for the combustion of ethane C2H6 in oxygen O2. The products are carbon dioxide CO2 and water H2O.
Solution
Begin by writing the unbalanced equation
C2H6 + O2 → CO2 + H2O
...then balance the carbon:
C2H6 + O2 → 2 CO2 + H2O
Let's balance the hydrogen next:
C2H6 + O2 → 2 CO2 + 3 H2O
...but now we need a non-integral number of dioxygen molecules on the left:
C2H6 + 7/2 O2 → 2 CO2 + 3 H2O
My preference is to simply leave it in this form; there is nothing wrong with 7/2 = 3 ½ moles of O2, and little to be gained by multiplying every term by two— not unless your teacher is a real stickler for doing it "by the book", in which case you had better write
2 C2H6 + 7 O2 → 4 CO2 + 6 H2O
Net Ionic Equations
Ionic compounds are usually dissociated in aqueous solution; thus if we combine solutions of silver nitrate AgNO3 and sodium chloride NaCl we are really combining four different species: the cations (positive ions) Ag+ and Na+ and the anions (negative ions) NO3– and Cl–. It happens that when the ions Ag+ and Cl– are brought together, they will combine to form an insoluble precipitate of silver chloride. The net equation for this reaction is
$Ag^+_{(aq)} + Cl^–_{(aq)} → AgCl_{(s)}$
Note that
• the ions NO3– and Na+ are not directly involved in this reaction; the equation expresses only the net change, which is the removal of the silver and chloride ions from the solution to form an insoluble solid.
• the symbol (aq) signifies that the ions are in aqueous solution, and thus are hydrated, or attached to water molecules.
• the symbol (s) indicates that the substance AgCl exists as a solid. When a solid is formed in a reaction that takes place in solution, it is known as a precipitate. The formation of a precipitate is often indicated by underscoring.
From the above example involving silver chloride, it is clear that a meaningful net ionic equation can be written only if two ions combine to form an insoluble compound. To make this determination, it helps to know the solubility rules— which all students of chemistry were at one time required to memorize, but are nowadays usually obtained from tables such as the one shown below.
Anion (negative ion) | Cation (positive ion) | Soluble?
any anion | alkali metal ions (Li+, Na+, K+, etc.) | yes
nitrate, NO3– | any cation | yes
acetate, CH3COO– | any cation except Ag+ | yes
halide ions Cl–, Br–, or I– | Ag+, Pb2+, Hg22+, Cu+ | no
halide ions Cl–, Br–, or I– | any other cation | yes
sulfate, SO42– | Ca2+, Sr2+, Ba2+, Ag+, Pb2+ | no
sulfate, SO42– | any other cation | yes
sulfide, S2– | alkali metal ions or NH4+ | yes
sulfide, S2– | Be2+, Mg2+, Ca2+, Sr2+, Ba2+, Ra2+ | yes
sulfide, S2– | any other cation | no
hydroxide, OH– | alkali metal ions or NH4+ | yes
hydroxide, OH– | Sr2+, Ba2+, Ra2+ | slightly
hydroxide, OH– | any other cation | no
phosphate PO43–, carbonate CO32– | alkali metal ions or NH4+ | yes
phosphate PO43–, carbonate CO32– | any other cation | no
Example $3$: net ionic equations
Write net ionic equations for what happens when aqueous solutions of the following salts are combined:
1. PbCl2 + K2SO4
2. K2CO3 + Sr(NO3)2
3. AlCl3 + CaSO4
4. Na3PO4 + CaCl2
Use the solubility rules table (above) to find the insoluble combinations:
1. Pb2+(aq) + SO42–(aq) → PbSO4(s)
2. Sr2+(aq) + CO32–(aq) → SrCO3(s)
3. no net reaction
4. 3 Ca2+(aq) + 2 PO43–(aq) → Ca3(PO4)2(s)
Note the need to balance the electric charges.
Mass Relations in Chemical Equations
A balanced chemical equation expresses the relative number of moles of each component (product or reactant), but because each formula in the equation implies a definite mass of the substance (its molar mass), the equation also implies that certain weight relations exist between the components. For example, the equation describing the combustion of carbon monoxide to carbon dioxide
$2 CO + O_2 → 2 CO_2$
implies the following relations:
2 mol CO (56 g) + 1 mol O2 (32 g) → 2 mol CO2 (88 g)
The relative masses shown in the bottom line establish the stoichiometry of the reaction, that is, the relations between the masses of the various components. Since these masses vary in direct proportion to one another, we can define what amounts to a conversion factor (sometimes referred to as a chemical factor) that relates the mass of any one component to that of any other component.
Example $4$: chemical factor and mass conversion
Evaluate the chemical factor and the conversion factor that relates the mass of carbon dioxide to that of the CO consumed in the reaction.
Solution
From the mass relation above, the mass ratio of CO2 to CO in this reaction is 88/56 = 1.57; this is the chemical factor for the conversion of CO into CO2. The conversion factor is just 1.57/1 with the mass units explicitly stated:
$\dfrac{1.57\; g\; CO_2}{ 1\; g\; CO} = 1$
Example $5$: mass-mass calculations in various units
1. How many tons of CO2 can be obtained from the combustion of 10 tons of CO?
2. How many kg of CO must be burnt to produce 20 kg of CO2?
Solutions
1. (1.57 T CO2 / 1 T CO) × (10 T CO) = 15.7 T CO2
2. Notice the answer to this one must refer to carbon monoxide, not CO2, so we write the conversion factor in reverse:
(1 kg CO / 1.57 kg CO2) × (20 kg CO2) = (20/1.57) kg CO = 12.7 kg CO.
Is this answer reasonable? Yes, because the mass of CO must always be smaller than that of CO2 in this reaction.
More mass-mass problems
Don't expect to pass Chemistry unless you can handle problems such as the ones below; they come up frequently in all kinds of contexts. If you feel the need for more guidance, see one of the video tutorials listed near the bottom of this page.
Example $6$
The ore FeS2 can be converted into the important industrial chemical sulfuric acid H2SO4 by a series of processes. Assuming that the conversion is complete, how many liters of sulfuric acid (density 1.86 kg L–1) can be made from 50 kg of ore?
Solution
As with most problems, this breaks down into several simpler ones. We begin by working out the stoichiometry on the assumption that all the sulfur in the ore ends up as H2SO4, allowing us to write
FeS2 → 2 H2SO4
Although this expression is not balanced overall, it is balanced with respect to the two components of interest, and this is all we need here. The molar masses of the two components are 120.0 and 98 g mol–1, respectively, so the equation can be interpreted in terms of masses as
[120 mass units] FeS2 → [2 × 98 mass units] H2SO4
Thus 50 kg of ore will yield (50 kg) × (196/120) = 81.7 kg of product.
[Check: is this answer reasonable? Yes, because the factor (196/120) is close to (200/120) = 5/3, so the mass of product should be about two-thirds greater than the mass of ore consumed.]
From the density information we find that the volume of liquid H2SO4 is
(81.7 kg) ÷ (1.86 kg L–1) = 43.9 L
[Check: is this answer reasonable? Yes, because density tells us that the number of liters of acid will be slightly greater than half of its weight.]
Example $7$
Barium chloride forms a crystalline hydrate, BaCl2·xH2O, in which x molecules of water are incorporated into the crystalline solid for every unit of BaCl2. This water can be driven off by heat; if 1.10 g of the hydrated salt is heated and reweighed several times until no further loss of weight (i.e., loss of water) occurs, the final weight of the sample is 0.937 g. What is the value of x in the formula of the hydrate?
Solution
The first step is to find the number of moles of BaCl2 (molecular weight 208.2) from the mass of the dehydrated sample.
(0.937 g) / (208.2 g mol–1) = 0.00450 mol
Now find the moles of H2O (molecular weight 18) lost when the sample was dried:
(1.10 – 0.937) g / (18 g mol–1) = 0.00905 mol
Allowing for a reasonable amount of measurement error, it is apparent that the mole ratio of BaCl2:H2O = 1:2. The formula of the hydrate is BaCl2·2H2O.
Limiting Reactants
Most chemical reactions that take place in the real world begin with more or less arbitrary amounts of the various reactants; we usually have to make a special effort if we want to ensure that stoichiometric amounts of the reactants are combined. This means that one or more reactants will usually be present in excess; there will be more present than can react, and some will remain after the reaction is over. At the same time, one reactant will be completely used up; we call this the limiting reactant because the amount of this substance present will control, or limit, the quantities of the other reactants that are consumed as well as the amounts of products produced.
Limiting reactant problems are handled in the same way as ordinary stoichiometry problems with one additional preliminary step: you must first determine which of the reactants is limiting— that is, which one will be completely used up. To start you off, consider the following very simple example
Example $8$
For the hypothetical reaction 3 A + 4 B → [products], determine which reactant will be completely consumed when we combine
1. equimolar quantities of A and B;
2. 0.57 mol of A and 0.68 mol of B.
Solution
a) Simple inspection of the equation shows clearly that more moles of B are required, so this component will be completely consumed (and is thus the limiting reactant), leaving ¼ of the original amount of A unreacted.
b) How many moles of B would be needed to react with 0.57 mol of A? The answer is (4/3) × 0.57 mol = 0.76 mol. Because this exceeds the 0.68 mol of B actually available, B will be the limiting reactant, and you must continue the problem on the basis of the amount of B present: all 0.68 mol of B will react, leaving some of the A in excess. Work it out!
Example $9$
Sulfur and copper, when heated together, react to form copper(I) sulfide, Cu2S. How many grams of Cu2S can be made from 10 g of sulfur and 15 g of copper?
Solution
From the atomic weights of Cu (63.55) and S (32.06) we can interpret the reaction 2 Cu + S → Cu2S as
[2 × 63.55 = 127.1 mass units] Cu + [32.06 mass units] S
→ [159.2 mass units] Cu2S
Thus 10 g of S will require
(10 g S) × (127.1 g Cu)/(32.06 g S) = 39.6 g Cu
...which is a lot more than what is available, so copper is the limiting reactant here.
[Check: is this answer reasonable? Yes, because the chemical factor (127/32) works out to about 4, indicating that sulfur reacts with about four times its weight of copper.]
The mass of copper sulfide formed will be determined by the mass of copper available:
(15 g Cu) × (159.2 g Cu2S) / (127.1 g Cu) = 18.8 g Cu2S
[Check: is this answer reasonable? Yes, because the chemical factor (159.2/127.1) is just a bit greater than unity, indicating that the mass of the product will slightly exceed that of the copper consumed.]
The concept of limiting reactants touches us all in our everyday lives — and as we will show in the second example below, even in the maintenance of life itself!
Air-to-fuel ratios in combustion
Combustion is an exothermic process in which a fuel is combined with oxygen; complete combustion of a hydrocarbon fuel such as methane or gasoline yields carbon dioxide and water:
CH4 + 2 O2 → CO2 + 2 H2O(g)
Example $10$: Oxygen-to-fuel mass ratio in combustion of methane
Calculate the mass ratio of O2 to CH4 required for complete combustion.
Solution
This is just the ratio of the mass of two moles of dioxygen (2 × 32 g = 64 g) to the molar mass of CH4 (16 g):
Thus (64 g) / (16 g) = 4/1 = 4.0.
Complete combustion of each kg of methane consumes 4 kg of dioxygen, which is supplied by the air. In the classic Bunsen burner, this air is admitted through an adjustable opening near the base. When it is fully open, the flame is blue in color and achieves its maximum temperature, indicating that combustion is approximately stoichiometric. If the opening is gradually closed, the flame becomes yellow, luminous, and sooty. Under these conditions, oxygen becomes the limiting reactant and combustion is incomplete.
Incomplete combustion is generally undesirable because it wastes fuel, produces less heat, and releases pollutants such as carbon soot. Energy-producing combustion processes should always operate in fuel-limited mode.
In ordinary combustion processes, the source of oxygen is air. Because only about 20 percent of the molecules in dry air consist of O2, the volume of air that must be supplied is five times greater than what would be required for pure O2. Calculation of the air-to-fuel mass ratio ("A/F ratio") employed by combustion engineers is complicated by the differing molar masses of dioxygen and air. For methane combustion, the A/F ratio works out to about 17.2. A/F ratios which exceed the stoichiometric values are said to be lean, while those in which air becomes the limiting component are characterized as rich. In order to ensure complete combustion, it is common practice to maintain a slightly lean mixture. The quantities of so-called excess air commonly admitted to burners vary from 5-10% for natural gas to up to 100% for certain grades of coal.
For internal combustion engines fueled by gasoline (roughly equivalent to C7H14), the stoichiometric A/F ratio is about 15:1. However, practical considerations necessitate differing ratios at various stages of operation. Typical values vary from a rich ratio for starting or acceleration to slightly lean ratios for ordinary driving. In modern vehicles these ratios are set by the fuel-injection system under the control of the engine computer and the exhaust-line oxygen sensor; in earlier vehicles they were set by the carburetor and a manual choke.
Aerobic and anaerobic respiration
Our bodies require a continual supply of energy in order to maintain neural activity, synthesize proteins and other essential biochemical components, replace cells, and power muscular action. The "fuel" — the carrier of chemical energy — is glucose, a simple sugar which is released as needed from the starch-like polymer glycogen, the form in which the energy we derive from food is stored. Arterial blood carries dissolved glucose along with hemoglobin-bound dioxygen to individual cells which are the sites of glucose "combustion":
$C_6H_{12}O_6 + 6 O_2 → 6 CO_2 + 6 H_2O$
The net reaction and the quantity of energy released are the same as if the glucose were burned in the open air, but within the cells the reaction proceeds in a series of tiny steps which capture most of this energy for the body's use, liberating only a small fraction of it as thermal energy (heat). Because this process utilizes oxygen from the air we breathe, it is known as aerobic respiration. And as with any efficient combustion process, glucose is the limiting reactant here.
However, there are times when vigorous physical activity causes muscles to consume glucose at a rate that exceeds the capacity of the blood to deliver the required quantity of oxygen. Under these conditions, cellular respiration shifts to an alternative anaerobic mode:
$C_6H_{12}O_6 → 2 CH_3CH(OH)COOH$
As you can see from this equation, glucose is only partially broken down (into lactic acid), and thus only part of its chemical energy is captured by the body. There are numerous health benefits to aerobic exercise, including an increased ability of the body to maintain an aerobic condition. But if you are into short-distance running (sprinting) or being pursued by a tiger, the reduced efficiency of anaerobic exercise may be a small price to pay.
Learning Objectives
Different instructors set out widely varying requirements for chemical nomenclature. The following are probably the most commonly expected:
• You should know the name and symbols of at least the first twenty elements, as well as all of the halogen and noble gas groups (groups 17-18).
• Name any binary molecule, using the standard prefixes for 1-10.
• Name all of the commonly-encountered ions.
• Salts and other ion-derived compounds, including the acids listed here. In some courses you will not need to know the -ous/-ic names for salts of copper, iron, etc., but in others you will.
• Find out from your instructor which organic compounds you must be able to name.
Chemical nomenclature is far too big a topic to treat comprehensively, and it would be a useless diversion to attempt to do so in a beginning course; most chemistry students pick up chemical names and the rules governing them as they go along. But we can hardly talk about chemistry without mentioning some chemical substances, all of which do have names— and often, more than one! All we will try to do here is cover what you need to know to make sense of first-year chemistry. For those of you who plan to go on in chemistry, the really fun stuff comes later!
There are more than 100 million named chemical substances. Who thinks up the names for all these chemicals? Are we in danger of running out of new names? The answer to the last question is "no", for the simple reason that the vast majority of the names are not "thought up"; there are elaborate rules for assigning names to chemical substances on the basis of their structures. These are called systematic names; they may be a bit ponderous, but they uniquely identify a given substance. The rules for these names are defined by an international body, the International Union of Pure and Applied Chemistry (IUPAC). But in order to make indexing and identification easier, every known chemical substance has its own numeric "personal ID", known as a CAS registry number. For example, caffeine is uniquely identified by the registry number 58-08-2. About 15,000 new numbers are issued every day.
Common Names vs. Systematic Names
Many chemicals are so much a part of our life that we know them by their familiar names, just like our other friends. A given substance may have several common or trivial names; ordinary cane sugar, for example, is more formally known as "sucrose", but asking for it at the dinner table by that name will likely be a conversation-stopper, and I won't even venture to predict the outcome if you try using its systematic name in the same context:
"please pass the α-D-glucopyranosyl-(1,2)- β-D-fructofuranoside!"
But "sucrose" would be quite appropriate if you need to distinguish this particular sugar from the hundreds of other named sugars. The only place you would come across a systematic name like the rather unwieldy one mentioned here is when referring (in print or in a computer data base) to a sugar that has no common name.
Chemical substances have been a part the fabric of civilization and culture for thousands of years, and present-day chemistry retains a lot of this ancient baggage in the form of terms whose hidden cultural and historic connections add color and interest to the subject. Many common chemical names have reached us only after remarkably long journeys through time and place, as the following two examples illustrate.
Ammonia
Most people can associate the name ammonia (\(NH_3\)) with a gas having a pungent odor; the systematic name "nitrogen trihydride" (which is rarely used) will tell you its formula. What it will not tell you is that smoke from burning camel dung (the staple fuel of North Africa) condenses on cool surfaces to form a crystalline deposit. The ancient Romans first noticed this on the walls and ceiling of the temple that the Egyptians had built to the Sun-god Amun in Thebes, and they named the material sal ammoniac, meaning "salt of Amun". In 1774, Joseph Priestley (the discoverer of oxygen) found that heating sal ammoniac produced a gas with a pungent odor, which the Swedish chemist Torbern Bergman named "ammonia" eight years later.
Alcohol
Alcohol entered the English language in the 17th Century with the meaning of a "sublimated" substance, then became the "pure spirit" of anything, and only became associated with "spirit of wine" in 1753. Finally, in 1852, it become a part of chemical nomenclature that denoted a common class of organic compound. But it's still common practice to refer to the specific substance CH3CH2OH as "alcohol" rather then its systematic name ethanol.
Arabic alchemy has given us a number of chemical terms; for example, alcohol is believed to derive from the Arabic al-kuḥl, whose original meaning was a metallic powder used to darken women's eyelids (kohl).
The general practice among chemists is to use the more common chemical names whenever it is practical to do so, especially in spoken or informal written communication. For many of the very simplest compounds (including most of those you will encounter in a first-year course), the systematic and common names are the same, but where there is a difference and if the context permits it, the common name is usually preferred.
Many of the "common" names we refer to in this lesson are known and used mainly by the scientific community. Chemical substances that are employed in the home, the arts, or in industry have acquired traditional or "popular" names that are still in wide use. Many, like sal ammoniac mentioned above, have fascinating stories to tell.
Table \(1\): Popular names of some common chemicals
popular name | chemical name | formula
borax | sodium tetraborate decahydrate | Na2B4O7·10H2O
calomel | mercury(I) chloride | Hg2Cl2
milk of magnesia | magnesium hydroxide | Mg(OH)2
muriatic acid | hydrochloric acid | HCl(aq)
oil of vitriol | sulfuric acid | H2SO4
saltpeter | potassium nitrate | KNO3
slaked lime | calcium hydroxide | Ca(OH)2
Minerals: Minerals are solid materials that occur in the earth which are classified and named according to their compositions (which often vary over a continuous range) and the arrangement of the atoms in their crystal lattices. There are about 4000 named minerals. Many are named after places, people, or properties, and most frequently end with -ite.
Proprietary names: Chemistry is a major industry, so it is not surprising that many substances are sold under trademarked names. This is especially common in the pharmaceutical industry, which uses computers to churn out names that they hope will distinguish a new product from those of its competitors. Perhaps the most famous of these is Aspirin, whose name was coined by the German company Bayer in 1899. This trade name was seized by the U.S. government following World War I, and is no longer a protected trade mark in that country.
Names and symbols of the Elements
Naming of chemical substances begins with the names of the elements. The discoverer of an element has traditionally had the right to name it, and one can find some interesting human and cultural history in these names, many of which refer to the element's properties or to geographic locations. Only some of the more recently-discovered (and artificially produced) elements are named after people. Some elements were not really "discovered", but have been known since ancient times; many of these have symbols that are derived from the Latin names of the elements. There are nine elements whose Latin-derived symbols you are expected to know (Table \(2\)).
Table \(2\): Elements with Latin-derived symbols
element name | symbol | Latin name
antimony | Sb | stibium
copper | Cu | cuprum
gold | Au | aurum
iron | Fe | ferrum
lead | Pb | plumbum
mercury | Hg | hydrargyrum
potassium | K | kalium
sodium | Na | natrium
tin | Sn | stannum
There is a lot of history and tradition in many of these names. For example, the Latin name for mercury, hydrargyrum, means "water silver", or quicksilver. The appellation "quack", as applied to an incompetent physician, is a corruption of the Flemish word for quicksilver, and derives from the use of mercury compounds in 17th century medicine. The name "mercury" is of alchemical origin and is of course derived from the name of the Greek god after whom the planet is named; the enigmatic properties of the element, at the same time metallic, fluid, and vaporizable, suggest the same messenger with the winged feet who circles through the heavens close to the sun.
Naming the binary molecules
The system used for naming chemical substances depends on the nature of the molecular units making up the compound. These are usually either ions or molecules; different rules apply to each. In this section, we discuss the simplest binary (two-atom) molecules.
It is often necessary to distinguish between compounds in which the same elements are present in different proportions; carbon monoxide CO and carbon dioxide CO2 are familiar to everyone. Chemists, perhaps hoping it will legitimize them as scholars, employ Greek (or sometimes Latin) prefixes to designate numbers within names; you will encounter these frequently, and you should know them:
Table \(3\): Numeric prefixes
½ hemi · 1 mono · 2 di · 3 tri · 4 tetra · 5 penta · 6 hexa · 7 hepta · 8 octa · 9 nona · 10 deca
You will occasionally see names such as dihydrogen and dichlorine used to distinguish the common forms of these elements (H2, Cl2) from the atoms that have the same name when it is required for clarity.
Examples:
• N2O4 - dinitrogen tetroxide [note the missing a preceding the vowel]
• N2O - dinitrogen oxide [more commonly, nitrous oxide]
• SF6 - sulfur hexafluoride
• P4S3 - tetraphosphorus trisulfide [more commonly, phosphorus sesquisulfide]
• Na2HPO4 - disodium hydrogen phosphate
• H2S - hydrogen sulfide [we skip both the di and mono]
• CO - carbon monoxide [mono- to distinguish it from the dioxide]
• CaSO4·½H2O - calcium sulfate hemihydrate [In this solid, two CaSO4 units share one water of hydration between them; more commonly called Plaster of Paris]
It will be apparent from these examples that chemists are in the habit of taking a few liberties in applying the strict numeric prefixes to the more commonly known substances.
These two-element compounds are usually quite easy to name because most of them follow the systematic rule of adding the suffix -ide to the root name of the second element, which is normally the more "negative" one. Several such examples are shown above. But as noted above, there are some important exceptions in which common or trivial names take precedence over systematic names (see the sketch after this list):
• H2O (water, not dihydrogen oxide)
• H2O2 (hydrogen peroxide, not dihydrogen dioxide)
• H2S (hydrogen sulfide, not dihydrogen sulfide)
• NH3 (ammonia, not nitrogen trihydride)
• NO (nitric oxide, not nitrogen monoxide)
• N2O (nitrous oxide, not dinitrogen oxide)
• CH4 (methane, not carbon tetrahydride)
Naming the ions
An ion is an electrically charged atom or molecule— that is, one in which the number of electrons differs from the number of nuclear protons. Many simple compounds can be regarded, at least in a formal way, as being made up of a pair of ions having opposite charge signs. The positive ions, also known as cations, are mostly those of metallic elements which simply take the name of the element itself.
calcium Ca2+ · sodium Na+ · magnesium Mg2+ · cadmium Cd2+ · potassium K+
The only important non-metallic cations you need to know about are
hydrogen H+ · hydronium H3O+ · ammonium NH4+
(Later on, when you study acids and bases, you will learn that the first two represent the same chemical species.)
Some of the metallic ions are multivalent, meaning that they can exhibit more than one electric charge. For these there are systematic names that use Roman numerals, and the much older and less cumbersome common names that mostly employ the Latin names of the elements, using the endings -ous and -ic to denote the lower and higher charges, respectively (Table \(4\)). (In cases where more than two charge values are possible, the systematic names are used.)
Table \(4\): Names of the multivalent metal ions
\(Cu^+\) copper(I) cuprous
\(Cu^{2+}\) copper(II) cupric
\(Fe^{2+}\) iron(II) ferrous
\(Fe^{3+}\) iron(III) ferric
\(^*Hg_2^{2+}\) mercury(I) mercurous
\(Hg^{2+}\) mercury(II) mercuric
\(Sn^{2+}\) tin(II) stannous
\(Sn^{4+}\) tin(IV) stannic
* The mercurous ion is a unique double cation that is sometimes incorrectly represented as Hg+.
The non-metallic elements generally form negative ions (anions). The names of the monatomic anions all end with the -ide suffix:
Cl– chloride
S2– sulfide
O2– oxide
C4– carbide
I– iodide
H– hydride
There are a number of important polyatomic anions which, for naming purposes, can be divided into several categories. A few follow the pattern for the monatomic anions:
OH– hydroxide
CN– cyanide
O22– peroxide
Oxyanions
The most common oxygen-containing anions (oxyanions) have names ending in -ate, but if a variant containing a small number of oxygen atoms exists, it takes the suffix -ite.
CO32– carbonate
NO3– nitrate
NO2– nitrite
SO42– sulfate
SO32– sulfite
PO43– phosphate
The above ions (with the exception of nitrate) can also combine with H+ to produce "acid" forms having smaller negative charges. For rather obscure historic reasons, some of them have common names beginning with bi- which, although officially discouraged, are still in wide use:
ion      systematic name       common name
HCO3–    hydrogen carbonate    bicarbonate
HSO4–    hydrogen sulfate      bisulfate
HSO3–    hydrogen sulfite      bisulfite
Chlorine, and to a smaller extent bromine and iodine, form a more extensive series of oxyanions that requires a somewhat more intricate naming convention:
ClO– hypochlorite
ClO2– chlorite
ClO3– chlorate
ClO4– perchlorate
Ion-derived compounds
These compounds are formally derived from positive ions (cations) and negative ions (anions) in a ratio that gives an electrically neutral unit. Salts, of which ordinary "salt" (sodium chloride) is the most common example, are all solids under ordinary conditions. A small number of these (such as NaCl) do retain their component ions and are properly called "ionic solids". In many cases, however, the ions lose their electrically charged character and form largely non-ionic solids such as CuCl2. The term "ion-derived solids" encompasses both of these classes of compounds.
Most of the cations and anions described above can combine to form solid compounds that are usually known as salts. The one overriding requirement is that the resulting compound must be electrically neutral: thus the ions Ca2+ and Br– combine only in a 1:2 ratio to form calcium bromide, CaBr2. Because no other simplest formula is possible, there is no need to name it "calcium dibromide".
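The neutrality requirement amounts to a simple calculation: the smallest whole-number ratio comes from dividing the two charges by their greatest common divisor. A minimal Python sketch (the function name is ours, for illustration):

```python
from math import gcd

def salt_ratio(cation_charge: int, anion_charge: int) -> tuple[int, int]:
    """Smallest cation:anion ratio giving an electrically neutral formula unit."""
    g = gcd(cation_charge, abs(anion_charge))
    return abs(anion_charge) // g, cation_charge // g

print(salt_ratio(2, -1))  # Ca2+ and Br-  -> (1, 2), i.e. CaBr2
print(salt_ratio(3, -2))  # Fe3+ and S2-  -> (2, 3), i.e. Fe2S3
```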
Since some metallic elements form cations having different positive charges, the names of ionic compounds derived from these elements must contain some indication of the cation charge. The older method uses the suffixes -ous and -ic to denote the lower and higher charges, respectively. In the cases of iron and copper, the Latin names of the elements are used: ferrous, cupric.
This system is still widely used, although it has been officially supplanted by the more precise, if slightly cumbersome Stock system in which one indicates the cationic charge (actually, the oxidation number) by means of Roman numerals following the symbol for the cation. In both systems, the name of the anion ends in -ide.
Table \(5\): Ion-derived compounds
formula   systematic name   common name
CuCl copper(I) chloride cuprous chloride
CuCl2 copper(II) chloride cupric chloride
Hg2Cl2 mercury(I) chloride mercurous chloride
HgO mercury(II) oxide mercuric oxide
FeS iron(II) sulfide ferrous sulfide
Fe2S3 iron(III) sulfide ferric sulfide
Acids
Most acids can be regarded as a combination of a hydrogen ion H+ with an anion; the name of the anion is reflected in the name of the acid. Notice, in the case of the oxyacids, how the anion suffixes -ate and -ite become -ic and -ous, respectively, in the acid name.
Table \(6\): Acids and their related anions
anion     anion name      acid      acid name
Cl–       chloride ion    HCl       hydrochloric acid
CO32–     carbonate ion   H2CO3     carbonic acid
NO2–      nitrite ion     HNO2      nitrous acid
NO3–      nitrate ion     HNO3      nitric acid
SO32–     sulfite ion     H2SO3     sulfurous acid
SO42–     sulfate ion     H2SO4     sulfuric acid
CH3COO–   acetate ion     CH3COOH   acetic acid
Organic compounds
Since organic (carbon) compounds constitute the vast majority of all known chemical substances, organic nomenclature is a huge subject in itself. We present here only the very basic part of it that you need to know in first-year chemistry; much more awaits those of you who are to experience the pleasures of an organic chemistry course later on. The simplest organic compounds are built of straight chains of carbon atoms which are named by means of prefixes that denote the number of carbons in the chain. Using the convention Cn to denote a straight chain of n atoms (don't even ask about branched chains!), the prefixes for chain lengths from 1 through 10 are given here:
C1 C2 C3 C4 C5 C6 C7 C8 C9 C10
meth- eth- prop- but- pent- hex- hept- oct- non- dec-
As you can see, chains from C5 onward use Greek number prefixes, so you don't have a lot new to learn here. The simplest of these compounds are hydrocarbons having the general formula CnH2n+2. They are known generically as alkanes, and their names all combine the appropriate numerical prefix with the ending -ane:
CH4     C                 methane
C2H6    C–C               ethane
C3H8    C–C–C             propane
C8H18   C–C–C–C–C–C–C–C   octane
All carbon atoms must have four bonds attached to them; notice the common convention of not showing hydrogen atoms explicitly.
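Because both the name and the molecular formula of a straight-chain alkane follow directly from the number of carbons, they are easy to generate. A small illustrative sketch in Python (the helper name and table are ours):

```python
PREFIX = {1: "meth", 2: "eth", 3: "prop", 4: "but", 5: "pent",
          6: "hex", 7: "hept", 8: "oct", 9: "non", 10: "dec"}

def alkane(n: int) -> tuple[str, str]:
    """Name and molecular formula (CnH2n+2) of the straight-chain alkane."""
    formula = "CH4" if n == 1 else f"C{n}H{2 * n + 2}"
    return PREFIX[n] + "ane", formula

print(alkane(1))  # ('methane', 'CH4')
print(alkane(8))  # ('octane', 'C8H18')
```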
Functional groups
When one of the hydrogen atoms of an alkane is replaced by a functional group, a new compound results. In chains of three carbons and higher, the substituent can be in more than one location, thus giving rise to numerous isomers.
Alcohols: the hydroxyl group
formula    common name      systematic name
CH3OH      methyl alcohol   methanol
CH3CH2OH   ethyl alcohol    ethanol
C8H17OH    octyl alcohol    octanol
Acids: The carboxyl group
formula    common name    systematic name
HCOOH      formic acid    methanoic acid
CH3COOH    acetic acid    ethanoic acid
C3H7COOH   butyric acid   butanoic acid
A few others...
class    example                    formula
amine    methylamine                CH3NH2
ketone   acetone (dimethyl ketone)  CH3-CO-CH3
ether    diethyl ether              C2H5-O-C2H5
Learning Objectives
• Give an example of a measurement whose number of significant digits is clearly too great, and explain why.
• State the purpose of rounding off, and describe the information that must be known to do it properly.
• Round off a number to a specified number of significant digits.
• Explain how to round off a number whose second-most-significant digit is 9.
• Carry out a simple calculation that involves two or more observed quantities, and express the result in the appropriate number of significant figures.
The numerical values we deal with in science (and in many other aspects of life) represent measurements whose values are never known exactly. Our pocket-calculators or computers don't know this; they treat the numbers we punch into them as "pure" mathematical entities, with the result that the operations of arithmetic frequently yield answers that are physically ridiculous even though mathematically correct. The purpose of this unit is to help you understand why this happens, and to show you what to do about it.
Digits: Significant and otherwise
Consider the two statements shown below:
• "The population of our city is 157,872."
• "The number of registered voters as of Jan 1 was 27,833.
Which of these would you be justified in dismissing immediately? Certainly not the second one, because it probably comes from a database which contains one record for each voter, so the number is found simply by counting the number of records. The first statement cannot possibly be correct. Even if a city’s population could be defined in a precise way (Permanent residents? Warm bodies?), how can we account for the minute-by minute changes that occur as people are born and die, or move in and move away?
What is the difference between the two population numbers stated above? The first one expresses a quantity that cannot be known exactly; that is, it carries with it a degree of uncertainty. It is quite possible that the last census yielded precisely 157,872 records, and that this might be the "population of the city" for legal purposes, but it is surely not the "true" population. To better reflect this fact, one might list the population (in an atlas, for example) as 157,900 or even 158,000. These two quantities have been rounded off to four and three significant figures, respectively, and they have the following meanings:
• 157900 (the significant digits are underlined here) implies that the population is believed to be within the range of about 157850 to about 157950. In other words, the population is 157900±50. The “plus-or-minus 50” appended to this number means that we consider the absolute uncertainty of the population measurement to be 50 – (–50) = 100. We can also say that the relative uncertainty is 100/157900, which we can also express as 1 part in 1579, or 1/1579 = 0.000633, or about 0.06 percent.
• The value 158000 implies that the population is likely between about 157500 and 158500, or 158000±500. The absolute uncertainty of 1000 translates into a relative uncertainty of 1000/158000 or 1 part in 158, or about 0.6 percent.
Which of these two values we would report as “the population” will depend on the degree of confidence we have in the original census figure; if the census was completed last week, we might round to four significant digits, but if it was a year or so ago, rounding to three places might be a more prudent choice. In a case such as this, there is no really objective way of choosing between the two alternatives.
This illustrates an important point: the concept of significant digits has less to do with mathematics than with our confidence in a measurement. This confidence can often be expressed numerically (for example, the height of a liquid in a measuring tube can be read to ±0.05 cm), but when it cannot, as in our population example, we must depend on our personal experience and judgment.
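Since relative uncertainties of this kind come up constantly, it may help to see the arithmetic spelled out. A minimal Python sketch (the function name is ours) that reproduces the population figures above:

```python
def relative_uncertainty(value: float, absolute: float) -> str:
    """Express an absolute uncertainty as '1 part in N' and as a percentage."""
    parts = value / absolute
    return f"1 part in {parts:.0f}, or about {100 / parts:.2g}%"

print(relative_uncertainty(157_900, 100))   # 1 part in 1579, or about 0.063%
print(relative_uncertainty(158_000, 1000))  # 1 part in 158, or about 0.63%
```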
So, what is a significant digit? According to the usual definition, it is all the numerals in a measured quantity (counting from the left) whose values are considered as known exactly, plus one more whose value could be one more or one less:
• In “157900” (four significant digits), the left most three digits are known exactly, but the fourth digit, “9” could well be “8” if the “true value” is within the implied range of 157850 to 157950.
• In “158000” (three significant digits), the left most two digits are known exactly, while the third digit could be either “7” or “8” if the true value is within the implied range of 157500 to 158500.
Although rounding off always leads to the loss of numeric information, what we are getting rid of can be considered to be “numeric noise” that does not contribute to the quality of the measurement. The purpose in rounding off is to avoid expressing a value to a greater degree of precision than is consistent with the uncertainty in the measurement.
Implied Uncertainty
If you know that a balance is accurate to within 0.1 mg, say, then the uncertainty in any measurement of mass carried out on this balance will be ±0.1 mg. Suppose, however, that you are simply told that an object has a length of 0.42 cm, with no indication of its precision. In this case, all you have to go on is the number of digits contained in the data. Thus the quantity "0.42 cm" is specified to 0.01 unit in 0.42, or one part in 42. The implied relative uncertainty in this figure is 1/42, or about 2%. The precision of any numeric answer calculated from this value is therefore limited to about the same amount.
Rounding Error
It is important to understand that the number of significant digits in a value provides only a rough indication of its precision, and that information is lost when rounding off occurs. Suppose, for example, that we measure the weight of an object as 3.28 g on a balance believed to be accurate to within ±0.05 gram. The resulting value of 3.28±.05 gram tells us that the true weight of the object could be anywhere between 3.23 g and 3.33 g. The absolute uncertainty here is 0.1 g (±0.05 g), and the relative uncertainty is 1 part in 32.8, or about 3 percent.
How many significant digits should there be in the reported measurement? Since only the leftmost "3" in "3.28" is certain, you would probably elect to round the value to 3.3 g. So far, so good. But what is someone else supposed to make of this figure when they see it in your report? The value "3.3 g" suggests an implied uncertainty of 3.3±0.05 g, meaning that the true value is likely between 3.25 g and 3.35 g. This range is shifted 0.02 g above the one associated with the original measurement, and so rounding off has introduced a bias of this amount into the result. Since this is less than half of the ±0.05 g uncertainty in the weighing, it is not a very serious matter in itself. However, if several values that were rounded in this way are combined in a calculation, the rounding-off errors could become significant.
Rules for Rounding
The standard rules for rounding off are well known. Before we set them out, let us agree on what to call the various components of a numeric value.
• The most significant digit is the left most digit (not counting any leading zeros which function only as placeholders and are never significant digits.)
• If you are rounding off to n significant digits, then the least significant digit is the nth digit from the most significant digit. The least significant digit can be a zero.
• The first non-significant digit is the n+1th digit.
Rounding-off rules
• If the first non-significant digit is less than 5, then the least significant digit remains unchanged.
• If the first non-significant digit is greater than 5, the least significant digit is incremented by 1.
• If the first non-significant digit is 5, the least significant digit can either be incremented or left unchanged (see below!)
• All non-significant digits are removed.
Fantasies about fives
Students are sometimes told to increment the least significant digit by 1 if it is odd, and to leave it unchanged if it is even. One wonders if this reflects some idea that even numbers are somehow “better” than odd ones! (The ancient superstition is just the opposite, that only the odd numbers are "lucky".)
In fact, you could do it equally the other way around, incrementing only the even numbers. If you are only rounding a single number, it doesn't really matter what you do. However, when you are rounding a series of numbers that will be used in a calculation, if you treated each first nonsignificant 5 in the same way, you would be over- or understating the value of the rounded number, thus accumulating round-off error. Since there are equal numbers of even and odd digits, incrementing only the one kind will keep this kind of error from building up. You could do just as well, of course, by flipping a coin!
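This round-half-to-even scheme ("banker's rounding") is in fact built into many computing environments. Python's decimal module, for example, implements it directly, as this short demonstration shows:

```python
from decimal import Decimal, ROUND_HALF_EVEN

# Rounding each value to one decimal place; halves go to the even digit,
# so the round-off errors cancel out over a long series of values.
for s in ("2.25", "2.35", "2.45", "2.55"):
    print(s, "->", Decimal(s).quantize(Decimal("0.1"), rounding=ROUND_HALF_EVEN))
# 2.25 -> 2.2,  2.35 -> 2.4,  2.45 -> 2.4,  2.55 -> 2.6
```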
Table \(1\): Examples of rounding-off
number to round   number of significant digits   result   comment
34.216 3 34.2 First non-significant digit (1) is less than 5, so number is simply truncated.
2.252 2 2.2 or 2.3 First non-significant digit is 5, so least sig. digit can either remain unchanged or be incremented.
39.99 3 40.0 Crossing "decimal boundary", so all numbers change.
85,381 3 85,400 The two zeros are just placeholders
0.04597 3 0.0460 The two leading zeros are not significant digits.
Rounding up the Nines
Suppose that an object is found to have a weight of 3.98 ± 0.05 g. This would place its true weight somewhere in the range of 3.93 g to 4.03 g. In judging how to round this number, you count the number of digits in “3.98” that are known exactly, and you find none! Since the “4” is the left most digit whose value is uncertain, this would imply that the result should be rounded to one significant figure and reported simply as 4 g. An alternative would be to bend the rule and round off to two significant digits, yielding 4.0 g. How can you decide what to do? In a case such as this, you should look at the implied uncertainties in the two values, and compare them with the uncertainty associated with the original measurement.
Table \(2\)
rounded value   implied max   implied min   absolute uncertainty   relative uncertainty
3.98 g 3.985 g 3.975 g ±.005 g or 0.01 g 1 in 400, or 0.25%
4 g 4.5 g 3.5 g ±.5 g or 1 g 1 in 4, 25%
4.0 g 4.05 g 3.95 g ±.05 g or 0.1 g 1 in 40, 2.5%
Clearly, rounding off to two digits is the only reasonable course in this example. Observed values should be rounded off to the number of digits that most accurately conveys the uncertainty in the measurement.
• Usually, this means rounding off to the number of significant digits in the quantity; that is, the number of digits (counting from the left) that are known exactly, plus one more.
• When this cannot be applied (as in the example above, when addition or subtraction of the absolute uncertainty bridges a power of ten), then we round in such a way that the relative implied uncertainty in the result is as close as possible to that of the observed value.
Rounding the Results of Calculations
When carrying out calculations that involve multiple steps, you should avoid doing any rounding until you obtain the final result. Suppose you use your calculator to work out the area of a rectangle:
rounded value   relative implied uncertainty
1.58            1 part in 158, or 0.6%
1.6             1 part in 16, or 6%
Note
Your calculator is of course correct as far as the pure numbers go, but you would be wrong to write down "1.57676 cm2" as the answer. Two possible options for rounding off the calculator answer are shown in the table above.
It is clear that neither option is entirely satisfactory; rounding to 3 significant digits overstates the precision of the answer, whereas following the rule and rounding to the two digits in ".42" has the effect of throwing away some precision. In this case, it could be argued that rounding to three digits is justified because the implied relative uncertainty in the answer, 0.6%, is more consistent with those of the two factors.
The "rules" for rounding off are generally useful, convenient guidelines, but they do not always yield the most desirable result. When in doubt, it is better to rely on relative implied uncertainties.
Addition and Subtraction
In operations involving significant figures, the answer is reported in such a way that it reflects the reliability of the least precise operation. An answer is no more precise than the least precise number used to get the answer. When adding or subtracting, we go by the number of decimal places (i.e., the number of digits on the right side of the decimal point) rather than by the number of significant digits. Identify the quantity having the smallest number of decimal places, and use this number to set the number of decimal places in the answer.
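For example, in the sum 12.11 + 0.3 + 18.456 = 30.866, the term 0.3 is known only to one decimal place, so the result should be reported as 30.9.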
Multiplication and Division
The result must contain the same number of significant figures as in the value having the least number of significant figures.
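Calculators and programming languages round to decimal places rather than to significant figures, so a small helper is often handy. A sketch in Python (the function name is ours):

```python
import math

def round_sig(x: float, sig: int) -> float:
    """Round x to the given number of significant figures."""
    if x == 0:
        return 0.0
    # The position of the most significant digit sets how many decimals to keep.
    return round(x, sig - int(math.floor(math.log10(abs(x)))) - 1)

print(round_sig(1.57676, 3))  # 1.58
print(round_sig(85381, 3))    # 85400
```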
Logarithms and antilogarithms
If a number is expressed in the form a × 10b ("scientific notation") with the additional restriction that the coefficient a is no less than 1 and less than 10, the number is in its normalized form. Express the base-10 logarithm of a value using the same number of significant figures as is present in the normalized form of that value. Similarly, for antilogarithms (numbers expressed as powers of 10), use the same number of significant figures as are in that power.
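For example, 0.000374 in normalized form is 3.74 × 10–4 (three significant figures), so its base-10 logarithm is reported as –3.427: three figures in the decimal part of the logarithm, which is the part that carries the significant-figure information.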
Examples \(1\)
The following examples will illustrate the most common problems you are likely to encounter in rounding off the results of calculations. They deserve your careful study!
rounded result   remarks
1.6 Rounding to two significant figures yields an implied uncertainty of 1/16 or 6%, three times greater than that in the least-precisely known factor. This is a good illustration of how rounding can lead to the loss of information.
1.9E6 The "3.1" factor is specified to 1 part in 31, or 3%. In the answer 1.9, the value is expressed to 1 part in 19, or 5%. These precisions are comparable, so the rounding-off rule has given us a reasonable result.
A certain book has a thickness of 117 mm; find the height of a stack of 24 identical books:
2810 mm The “24” and the “1” are exact, so the only uncertain value is the thickness of each book, given to 3 significant digits. The trailing zero in the answer is only a placeholder.
10.4 In addition or subtraction, look for the term having the smallest number of decimal places, and round off the answer to the same number of places.
23 cm see below
The last of the examples shown above represents the very common operation of converting one unit into another. There is a certain amount of ambiguity here; if we take "9 in" to mean a distance in the range 8.5 to 9.5 inches, then the implied uncertainty is ±0.5 in, which is 1 part in 18, or about ± 6%. The relative uncertainty in the answer must be the same, since all the values are multiplied by the same factor, 2.54 cm/in. In this case we are justified in writing the answer to two significant digits, yielding an uncertainty of about ±1 cm; if we had used the answer "20 cm" (one significant digit), its implied uncertainty would be ±5 cm, or ±25%.
When the appropriate number of significant digits is in question, calculating the relative uncertainty can help you decide.
Everything you need to know in a first-year college course about the principal concepts of quantum theory as applied to the atom, and how this determines the organization of the periodic table.
• 5.1: Primer on Quantum Theory
A quantum catechism: elementary introduction to quantum theory in the form of a question-and-answer "primer", emphasizing the concepts with a minimum of mathematics.
• 5.2: Quanta - A New View of the World
The fact is, however, that it is not only for real, but serves as the key that unlocks even some of the simplest aspects of modern Chemistry. Our goal in this lesson is to introduce you to this new reality, and to provide you with a conceptual understanding of it that will make Chemistry a more meaningful part of your own personal world.
• 5.3: Light, Particles, and Waves
Our intuitive view of the "real world" is one in which objects have definite masses, sizes, locations and velocities. Once we get down to the atomic level, this simple view begins to break down. It becomes totally useless when we move down to the subatomic level and consider the lightest of all chemically-significant particles, the electron. The chemical properties of a particular kind of atom depend on the arrangement and behavior of the electrons which make up almost the entire volume of the a
• 5.4: The Bohr Atom
Our goal in this unit is to help you understand how the arrangement of the periodic table of the elements must follow as a necessary consequence of the fundamental laws of the quantum behavior of matter. The modern theory of the atom makes full use of the wave-particle duality of matter. We will therefore present the theory in a semi-qualitative manner, emphasizing its results and their applications, rather than its derivation.
• 5.5: The Quantum Atom
The picture of the atom that Niels Bohr developed in 1913 served as the starting point for modern atomic theory, but it was not long before Bohr himself recognized that the advances in quantum theory that occurred through the 1920's required an even more revolutionary change in the way we view the electron as it exists in the atom. This lesson will attempt to show you this view— or at least the portion of it that can be appreciated without the aid of more than a small amount of mathematics.
• 5.6: Atomic Electron Configurations
According to the Pauli exclusion principle, no two electrons in the same atom can have the same set of quantum numbers (n,l,m,s). This limits the number of electrons in a given orbital to two (s = ±1), and it requires that atom containing more then two electrons must place them in standing wave patterns corresponding to higher principal quantum numbers n, which means that these electrons will be farther from the nucleus and less tightly bound by it.
• 5.7: Periodic Properties of the Elements
The periodic table in the form originally published by Dmitri Mendeleev in 1869 was an attempt to list the chemical elements in order of their atomic weights, while breaking the list into rows in such a way that elements having similar physical and chemical properties would be placed in each column. The shape and organization of the modern periodic table are direct consequences of the atomic electronic structure of the elements.
• 5.8: Why Don't Electrons Fall into the Nucleus?
The picture of electrons "orbiting" the nucleus like planets around the sun remains an enduring one, not only in popular images of the atom but also in the minds of many of us who know better. The proposal, first made in 1913, that the centrifugal force of the revolving electron just exactly balances the attractive force of the nucleus (in analogy with the centrifugal force of the moon in its orbit exactly counteracting the pull of the Earth's gravity) is a nice picture, but is simply untenable.
05: Atoms and the Periodic Table
Part 1: Particles and waves
Q1. What is a particle?
A particle is a discrete unit of matter having the attributes of mass, momentum (and thus kinetic energy) and optionally of electric charge.
Q2. What is a wave?
A wave is a periodic variation of some quantity as a function of location or time. For example, the wave motion of a vibrating guitar string is defined by the displacement of the string from its center as a function of distance along the string. A sound wave consists of variations in the pressure with location.
A wave is characterized by its wavelength $λ$ (lambda) and frequency $\nu$ (nu), which are connected by the relation
$\lambda = \dfrac{u}{\nu}$

in which $u$ is the velocity of propagation of the disturbance in the medium.

Example: The velocity of sound in the air is 330 m s–1. What is the wavelength of A440 on the piano keyboard?

Solution: \[ \lambda =\dfrac{330\, m\, s^{-1}}{440 \,s^{-1}} = 0.75\,m\]
Two other attributes of waves are the amplitude (the height of the wave crests with respect to the base line) and the phase, which measures the position of a crest with respect to some fixed point. The square of the amplitude gives the intensity of the wave (the energy transmitted per unit time).
A unique property of waves is their ability to combine constructively or destructively, depending on the relative phases of the combining waves.
Q3. What is light?
Phrasing the question in this way reflects the deterministic mode of Western thought which assumes that something cannot "be" two quite different things at the same time. The short response to this question is that all we know about light (or anything else, for that matter) are the results of experiments, and that some kinds of experiments show that light behaves like particles, and that other experiments reveal light to have the properties of waves.
Q4. What is the wave theory of light?
In the early 19th century, the English scientist Thomas Young carried out the famous two-slit experiment which demonstrated that a beam of light, when split into two beams and then recombined, will show interference effects that can only be explained by assuming that light is a wavelike disturbance. By 1820, Augustin Fresnel had put this theory on a sound mathematical basis, but the exact nature of the waves remained unclear until the 1860's when James Clerk Maxwell developed his electromagnetic theory.
From the laws of electromagnetic induction that were discovered in the period 1820-1830 by Hans Christian Oersted and Michael Faraday, it was known that a moving electric charge gives rise to a magnetic field, and that a changing magnetic field can induce electric charges to move. Maxwell showed theoretically that when an electric charge is accelerated (by being made to oscillate within a piece of wire, for example), electrical energy will be lost, and an equivalent amount of energy is radiated into space, spreading out as a series of waves extending in all directions.
What is "waving" in electromagnetic radiation? According to Maxwell, it is the strengths of the electric and magnetic fields as they travel through space. The two fields are oriented at right angles to each other and to the direction of travel.
As the electric field changes, it induces a magnetic field, which then induces a new electric field, etc., allowing the wave to propagate itself through space
These waves consist of periodic variations in the electric and magnetic field strengths. These variations occur at right angles to each other. Each electric component of the wave induces a magnetic component, which then creates a new electric component, so that the wave, once formed, continues to propagate through space, essentially feeding on itself. In one of the most brilliant mathematical developments in the history of science, Maxwell expounded a detailed theory, and even showed that these waves should travel at about 3E8 m s–1, a value which experimental observations had shown corresponded to the speed of light. In 1887, the German physicist Heinrich Hertz demonstrated that an oscillating electric charge (in what was in essence the world's first radio transmitting antenna) actually does produce electromagnetic radiation just as Maxwell had predicted, and that these waves behave exactly like light.
It is now understood that light is electromagnetic radiation that falls within a range of wavelengths that can be perceived by the eye. The entire electromagnetic spectrum runs from radio waves at the long-wavelength end, through infrared (heat), visible light, ultraviolet, and X-rays, to gamma radiation.
Part 2: Quantum theory of light
Q5. How did the quantum theory of light come about?
It did not arise from any attempt to explain the behavior of light itself; by 1890 it was generally accepted that the electromagnetic theory could explain all of the properties of light that were then known.
Certain aspects of the interaction between light and matter that were observed during the next decade proved rather troublesome, however. The relation between the temperature of an object and the peak wavelength emitted by it was established empirically by Wilhelm Wien in 1893. This put on a quantitative basis what everyone knows: the hotter the object, the "bluer" the light it emits.
Q6. What is black body radiation?
All objects above the temperature of absolute zero emit electromagnetic radiation consisting of a broad range of wavelengths described by a distribution curve whose peak wavelength λ at absolute temperature T for a "perfect radiator" known as a black body is given by Wien's law:

$\lambda_{peak} = \dfrac{2.9 \times 10^{-3}\, m\, K}{T}$
At ordinary temperatures this radiation is entirely in the infrared region of the spectrum, but as the temperature rises above about 1000K, more energy is emitted in the visible wavelength region and the object begins to glow, first with red light, and then shifting toward the blue as the temperature is increased.
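As a quick worked example (taking the surface temperature of the sun to be about 5800 K):

\[ \lambda_{peak} = \dfrac{2.9 \times 10^{-3}\, m\, K}{5800\, K} \approx 5.0 \times 10^{-7}\, m = 500\, nm\]

which falls near the middle of the visible spectrum.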
This type of radiation has two important characteristics. First, the spectrum is a continuous one, meaning that all wavelengths are emitted, although with intensities that vary smoothly with wavelength. The other curious property of black body radiation is that it is independent of the composition of the object; all that is important is the temperature.
Q7. How did black body radiation lead to quantum physics?
Black body radiation, like all electromagnetic radiation, must originate from oscillations of electric charges which in this case were assumed to be the electrons within the atoms of an object acting somewhat as miniature Hertzian oscillators. It was presumed that since all wavelengths seemed to be present in the continuous spectrum of a glowing body, these tiny oscillators could send or receive any portion of their total energy. However, all attempts to predict the actual shape of the emission spectrum of a glowing object on the basis of classical physical theory proved futile.
In 1900, the great German physicist Max Planck (who earlier in the same year had worked out an empirical formula giving the detailed shape of the black body emission spectrum) showed that the shape of the observed spectrum could be exactly predicted if the energies emitted or absorbed by each oscillator were restricted to integral values of hν, where ν ("nu") is the frequency and h is a constant 6.626E–34 J s which we now know as Planck's Constant. The allowable energies of each oscillator are quantized, but the emission spectrum of the body remains continuous because of differences in frequency among the uncountable numbers of oscillators it contains.This modification of classical theory, the first use of the quantum concept, was as unprecedented as it was simple, and it set the stage for the development of modern quantum physics.
Q8. What is the photoelectric effect?
Shortly after J.J. Thomson's experiments led to the identification of the elementary charged particles we now know as electrons, it was discovered that the illumination of a metallic surface by light can cause electrons to be emitted from the surface. This phenomenon, the photoelectric effect, is studied by illuminating one of two metal plates in an evacuated tube.
The kinetic energy of the photoelectrons causes them to move to the opposite electrode, thus completing the circuit and producing a measurable current. However, if an opposing potential (the retarding potential) is imposed between the two plates, the kinetic energy can be reduced to zero so that the electron current is stopped. By observing the value of the retarding potential Vr, the kinetic energy of the photoelectrons can be calculated from the electron charge e, its mass m, and the frequency ν of the incident light:

$eV_r = \dfrac{1}{2}mv^2 = h\nu - h\nu_0$

in which ν0 is the critical frequency below which no photoelectrons are ejected.
[Figure, after Joseph Alward, University of the Pacific: the kinetic energy of the photoelectrons falls to zero at the critical wavelength corresponding to frequency f0.]
Q9. What peculiarity of the photoelectric effect led to the photon?
Although the number of electrons ejected from the metal surface per second depends on the intensity of the light, as expected, the kinetic energies of these electrons (as determined by measuring the retarding potential needed to stop them) do not, and this was definitely not expected. Just as a more intense physical disturbance will produce higher energy waves on the surface of the ocean, it was supposed that a more intense light beam would confer greater energy on the photoelectrons. But what was found, to everyone's surprise, is that the photoelectron energy is controlled by the wavelength of the light, and that there is a critical wavelength above which no photoelectrons are emitted at all.
Albert Einstein quickly saw that if the kinetic energy of the photoelectrons depends on the wavelength of the light, then so must its energy. Further, if Planck was correct in supposing that energy must be exchanged in packets restricted to certain values, then light must similarly be organized into energy packets. But a light ray consists of electric and magnetic fields that spread out in a uniform, continuous manner; how can a continuously-varying wave front exchange energy in discrete amounts? Einstein's answer was that the energy contained in each packet of the light must be concentrated into a tiny region of the wave front. This is tantamount to saying that light has the nature of a quantized particle whose energy is given by the product of Planck's constant and the frequency:

$E = h\nu$
Einstein's publication of this explanation in 1905 led to the rapid acceptance of Planck's idea of energy quantization, which had not previously attracted much support from the physics community of the time. It is interesting to note, however, that this did not make Planck happy at all. Planck, ever the conservative, had been reluctant to accept that his own quantized-energy hypothesis was much more than an artifice to explain black-body radiation; to extend it to light seemed an absurdity that would negate the well-established electromagnetic theory and would set science back to the time before Maxwell.
Q10. Where does relativity come in?
Einstein's special theory of relativity arose from his attempt to understand why the laws of physics that describe the current induced in a fixed conductor when a magnet moves past it are not formulated in the same way as the ones that describe the magnetic field produced by a moving conductor. The details of this development are not relevant to our immediate purpose, but some of the conclusions that this line of thinking led to very definitely are. Einstein showed that the velocity of light, unlike that of a material body, has the same value no matter what velocity the observer has. Further, the mass of any material object, which had previously been regarded as an absolute, is itself a function of the velocity of the body relative to that of the observer (hence "relativity"), the relation being given by

$m = \dfrac{m_0}{\sqrt{1 - (v/c)^2}}$

in which m0 is the rest mass of the particle, v is its velocity with respect to the observer, and c is the velocity of light.
According to this formula, the mass of an object increases without limit as the velocity approaches that of light. Where does the increased mass come from? Einstein's answer was that the increased mass is that of the kinetic energy of the object; that is, energy itself has mass, so that mass and energy are equivalent according to the famous formula

$E = mc^2$
The only particle that can move at the velocity of light is the photon itself, due to its zero rest mass.
Q11. Can the mass-less photon have momentum?
Although the photon has no rest mass, its energy, given by $E = h\nu$, confers upon it an effective mass of

$m_{eff} = \dfrac{h\nu}{c^2}$

and a momentum of

$p = m_{eff}\,c = \dfrac{h\nu}{c} = \dfrac{h}{\lambda}$
Q12. If waves can be particles, can particles be waves?
In 1924, the French physicist Louis de Broglie proposed (in his doctoral thesis) that just as light possesses particle-like properties, so should particles of matter exhibit a wave-like character. Within two years this hypothesis had been confirmed experimentally by observing the diffraction (a wave interference effect) produced by a beam of electrons as they were scattered by the row of atoms at the surface of a metal.
deBroglie showed that the wavelength of a particle is inversely proportional to its momentum:

$\lambda = \dfrac{h}{mv}$

Notice that the wavelength of a stationary particle is infinitely large, while that of a particle of large mass approaches zero. For most practical purposes, the only particle of interest to chemistry that is sufficiently small to exhibit wavelike behavior is the electron (mass 9.11E–31 kg).
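As a quick estimate (assuming an electron velocity of 1E6 m s–1, typical of an electron beam):

\[ \lambda = \dfrac{h}{mv} = \dfrac{6.63 \times 10^{-34}\, J\, s}{(9.11 \times 10^{-31}\, kg)(10^{6}\, m\, s^{-1})} \approx 7.3 \times 10^{-10}\, m\]

a wavelength comparable to the dimensions of an atom, which is why a beam of electrons can be diffracted by the atoms of a crystal.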
Q13. Exactly what is it that is "waving"?
We pointed out earlier that a wave is a change that varies with location in a periodic, repeating way. What kind of a change do the crests and hollows of a "matter wave" trace out? The answer is that the wave represents the value of a quantity whose square is a measure of the probability of finding the particle in that particular location. In other words, what is "waving" is the value of a mathematical probability function.
Q14. What is the uncertainty principle?
In 1927, Werner Heisenberg proposed that certain pairs of properties of a particle cannot simultaneously have exact values. In particular, the position and the momentum of a particle have associated with them uncertainties Δx and Δp given by

$\Delta x\, \Delta p \geq \dfrac{h}{4\pi}$
As with the de Broglie particle wavelength, this has practical consequences only for electrons and other particles of very small mass. It is very important to understand that these "uncertainties" are not merely limitations related to experimental error or observational technique, but instead they express an underlying fact that Nature does not allow a particle to possess definite values of position and momentum at the same time. This principle (which would be better described by the term "indeterminacy" than "uncertainty") has been thoroughly verified and has far-reaching practical consequences which extend to chemical bonding and molecular structure.
Q15. Is the uncertainty principle consistent with particle waves?
Yes; either one really implies the other. Consider the following two limiting cases:
• A particle whose velocity is known to within a very small uncertainty will have a sharply-defined energy (because its kinetic energy is known) which can be represented by a probability wave having a single, sharply-defined frequency. A "monochromatic" wave of this kind must extend infinitely in space. But if the peaks of the wave represent locations at which the particle is most likely to manifest itself, we are forced to the conclusion that it can "be" virtually anywhere, since the number of such peaks is infinite!
• Now think of the opposite extreme: a particle whose location is closely known. Such a particle would be described by a short wave train having only a single peak; the smaller the uncertainty in position, the more narrow the peak.
To help you see how waveforms of different wavelength combine, two such combinations are shown below:
It is apparent that as more waves of different frequency are mixed, the regions in which they add constructively diminish in extent. The extreme case would be a wave train in which destructive interference occurs at all locations except one, resulting in a single pulse:
Is such a wave possible, and if so, what is its wavelength? Such a wave is possible, but only as the sum (interference) of other waves whose wavelengths are all slightly different. Each component wave possesses its own energy (momentum), and adds that value to the range of momenta carried by the particle, thus increasing the uncertainty Δp. In the extreme case of a quantum particle whose location is known exactly, the probability wavelet would have zero width, which could be achieved only by combining waves of all wavelengths: an infinite number of wavelengths, and thus an infinite range of momentum Δp and thus of kinetic energy.
Q16. Are they particles or are they waves?
Suppose we direct a beam of photons (or electrons; the experiment works with both) toward a piece of metal having a narrow opening. On the other side there are two more openings, or slits. Finally the particles impinge on a photographic plate or some other recording device. Taking into account their wavelike character, we would expect the probability waves to produce an interference pattern of the kind that is well known for sound and light waves, and this is exactly what is observed; the plate records a series of alternating dark and light bands, thus demonstrating beyond doubt that electrons and light have the character of waves.
Now let us reduce the intensity of the light so that only one photon at a time passes through the apparatus (it is experimentally possible to count single photons, so this is a practical experiment). Each photon passes through the first slit, and then through one or the other of the second set of slits, eventually striking the photographic film where it creates a tiny dot. If we develop the film after a sufficient number of photons have passed through, we find the very same interference pattern we obtained previously.
There is something strange here. Each photon, acting as a particle, must pass through one or the other of the pair of slits, so we would expect to get only two groups of spots on the film, each opposite one of the two slits. Instead, it appears that the each particle, on passing through one slit, "knows" about the other, and adjusts its final trajectory so as to build up a wavelike interference pattern.
It gets even stranger: suppose that we set up a detector to determine which slit a photon is heading for, and then block off the other slit with a shutter. We find that the photon sails straight through the open slit and onto the film without trying to create any kind of an interference pattern. Apparently, any attempt to observe the photon as a discrete particle causes it to behave like one.
The only conclusion possible is that quantum particles have no well defined paths; each photon (or electron) seems to have an infinity of paths which thread their way through space, seeking out and collecting information about all possible routes, and then adjusting its behavior so that its final trajectory, when combined with that of others, produces the same overall effect that we would see from a train of waves of wavelength λ= h/mv.
Q17. What are line spectra?
We have already seen that a glowing body (or actually, any body whose temperature is above absolute zero) emits and absorbs radiation of all wavelength in a continuous spectrum. In striking contrast is the spectrum of light produced when certain substances are volatilized in a flame, or when an electric discharge is passed through a tube containing gaseous atoms of an element. The light emitted by such sources consists entirely of discrete wavelengths. This kind of emission is known as a discrete spectrum or line spectrum (the "lines" that appear on photographic images of the spectrum are really images of the slit through which the light passes before being dispersed by the prism in the spectrograph).
Every element has its own line spectrum which serves as a sensitive and useful tool for detecting the presence and relative abundance of the element, not only in terrestrial samples but also in stars. (As a matter of fact, the element helium was discovered in the sun, through its line spectrum, before it had been found on Earth.) In some elements, most of the energy in the visible part of the emission spectrum is concentrated into just a few lines, giving their light characteristic colors: yellow-orange for sodium, blue-green for mercury (these are commonly seen in street lights) and orange for neon.
Line spectra were well known early in the 19th century, and were widely used for the analysis of ores and metals. The German spectroscopist R.W. Bunsen, now famous for his gas burner, was then best known for discovering two new elements, rubidium and cesium, from the line spectrum he obtained from samples of mineral spring waters.
Q18. How are line spectra organized?
Until 1885, line spectra were little more than "fingerprints" of the elements; extremely useful in themselves, but incapable of revealing any more than the identity of the individual atoms from which they arise. In that year a Swiss school teacher named Johann Balmer published a formula that related the wavelengths of the four known lines in the emission spectrum of hydrogen in a simple way. Balmer's formula was not based on theory; it was probably a case of cut-and-try, but it worked: he was able to predict the wavelength of a fifth, yet-to-be discovered emission line of hydrogen, and as spectroscopic and astronomical techniques improved (the only way of observing highly excited hydrogen atoms at the time was to observe the solar spectrum during an eclipse), a total of 35 lines were discovered, all having wavelengths given by the formula which we write in the modern manner as

$\dfrac{1}{\lambda} = R\left(\dfrac{1}{m^2} - \dfrac{1}{n^2}\right)$

in which m = 2 and R is a constant (the Rydberg constant, after the Swedish spectroscopist) whose value is 1.09678E7 m–1. The variable n is an integer greater than m; the successive values n = 3, 4, etc. give the wavelengths of the different lines.
It was soon discovered that by replacing m with integers other than 2, other series of hydrogen lines could be accounted for. These series, which span the wavelength region from the ultraviolet through infrared, are named after their discoverers.
name of series   when discovered   value of m
Lyman            1906-14           1
Balmer           1885              2
Paschen          1908              3
Brackett         1922              4
Pfund            1924              5
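The Rydberg formula is easy to check numerically. A minimal Python sketch (the function name is ours) that reproduces the four visible Balmer lines:

```python
R = 1.09678e7  # Rydberg constant, m^-1

def line_wavelength_nm(m: int, n: int) -> float:
    """Wavelength in nm of the hydrogen line for series value m and integer n > m."""
    inv_wavelength = R * (1 / m**2 - 1 / n**2)  # 1/lambda, in m^-1
    return 1e9 / inv_wavelength

for n in range(3, 7):  # Balmer series: m = 2
    print(n, round(line_wavelength_nm(2, n), 1))
# 3 656.5, 4 486.3, 5 434.2, 6 410.3  (nm)
```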
Attempts to adapt Balmer's formula to describe the spectra of atoms other than hydrogen generally failed, although certain lines of some of the spectra seemed to fit this same scheme, with the same value of R.
Q19. How large can n be?
There is no limit; values in the hundreds have been observed, although doing so is very difficult because of the increasingly close spacing of successive levels as n becomes large. Atoms excited to very high values of n are said to be in Rydberg states.
Q20. Why do line spectra become continuous at short wavelengths?
As n becomes larger, the spacing between neighboring levels diminishes and the discrete lines merge into a continuum. This can mean only one thing: the energy levels converge as n approaches infinity. This convergence limit corresponds to the energy required to completely remove the electron from the atom; it is the ionization energy.
At energies in excess of this, the electron is no longer bound to the rest of the atom, which is now of course a positive ion. But an unbound system is not quantized; the kinetic energy of the ion and electron can now have any value in excess of the ionization energy. When such an ion and electron pair recombine to form a new atom, the light emitted will have a wavelength that falls in the continuum region of the spectrum. Spectroscopic observation of the convergence limit is an important method of measuring the ionization energies of atoms.
Q21. What were the problems with the planetary model of the atom?
Rutherford's demonstration that the mass and the positive charge of the atom is mostly concentrated in a very tiny region called the nucleus, forced the question of just how the electrons are disposed outside the nucleus. By analogy with the solar system, a planetary model was suggested: if the electrons were orbiting the nucleus, there would be a centrifugal force that could oppose the electrostatic attraction and thus keep the electrons from falling into the nucleus. This of course is similar to the way in which the centrifugal force produced by an orbiting planet exactly balances the force due to its gravitational attraction to the sun.
The planetary model suffers from one fatal weakness: electrons, unlike planets, are electrically charged. An electric charge revolving in an orbit is continually undergoing a change of direction, that is, acceleration. It has been well known since the time of Hertz that an accelerating electric charge radiates energy. We would therefore expect all atoms to act as miniature radio stations. Even worse, conservation of energy requires that any energy that is radiated must be at the expense of the kinetic energy of the orbital motion of the electron. Thus the electron would slow down, reducing the centrifugal force and allowing the electron to spiral closer and closer to the nucleus, eventually falling into it. In short, no atom that operates according to the planetary model would last long enough for us to talk about it.
As if this were not enough, the planetary model was totally unable to explain any of the observed properties of atoms, including their line spectra.
Q22. How did Bohr's theory save the planetary model... for a while?
Niels Bohr was born in the same year (1885) that Balmer published his formula for the line spectrum of hydrogen. Beginning in 1913, the brilliant Danish physicist published a series of papers that would ultimately derive Balmer's formula from first principles.
Bohr's first task was to explain why the orbiting electron does not radiate energy as it moves around the nucleus. This energy loss, if it were to occur at all, would do so gradually and smoothly. But Planck had shown that black body radiation could only be explained if energy changes were limited to jumps instead of gradual changes. If this were a universal characteristic of energy- that is, if all energy changes were quantized, then very small changes in energy would be impossible, so that the electron would in effect be "locked in" to its orbit.
From this, Bohr went on to propose that there are certain stable orbits in which the electron can exist without radiating and thus without falling into a "death spiral". This supposition was a daring one at the time because it was inconsistent with classical physics, and the theory which would eventually lend it support would not come along until the work of de Broglie and Heisenberg more than ten years later.
Since Planck's quanta came in multiples of h, Bohr restricted his allowed orbits to those in which the angular momentum rmv of the electron (which has the same units as h, J s) is an integral multiple of h divided by 2π:

2πrmv = nh (n = 1, 2, 3, . .)
Each orbit corresponds to a different energy, with the electron normally occupying the one having the lowest energy, which would be the innermost orbit of the hydrogen atom.
Taking the lead from Einstein's explanation of the photoelectric effect, Bohr assumed that each spectral line emitted by an atom that has been excited by absorption of energy from an electrical discharge or a flame represents a change in energy given by ΔE = hν = hc/λ, the energy lost when the electron falls from a higher orbit (value of n) into a lower one.
Finally, as a crowning triumph, Bohr derived an expression giving the radius of the nth orbit for the electron in hydrogen as

$r_n = \dfrac{n^2 h^2 \varepsilon_0}{\pi m e^2}$

Substitution of the observed values of the electron mass and electron charge into this equation yielded a value of 0.529E–10 m for the radius of the first orbit, a value that corresponds to the radius of the hydrogen atom obtained experimentally from the kinetic theory of gases. Bohr was also able to derive a formula giving the value of the Rydberg constant, and thus in effect predict the entire emission spectrum of the hydrogen atom.
Q23. What were the main problems with Bohr's theory?
There were two kinds of difficulties. First, there was the practical limitation that it only works for atoms that have one electron-- that is, for H, He+, Li2+, etc. The second problem was that Bohr was unable to provide any theoretical justification for his assumption that electrons in orbits described by the preceding equation would not lose energy by radiation. This reflects the fundamental underlying difficulty: because de Broglie's picture of matter waves would not come until a decade later, Bohr had to regard the electron as a classical particle traversing a definite orbital path.
Q24. How did the wave picture of the electron save Bohr's theory?
Once it became apparent that the electron must have a wavelike character, things began to fall into place. The possible states of an electron confined to a fixed space are in many ways analogous to the allowed states of a vibrating guitar string. These states are described as standing waves that must possess integral numbers of nodes. The states of vibration of the string are described by a series of integral numbers n = 1,2,... which we call the fundamental, first overtone, second overtone, etc. The energy of vibration is proportional to n2. Each mode of vibration contains one more complete wave than the one below it.
In exactly the same way, the mathematical function that defines the probability of finding the electron at any given location within a confined space possesses n peaks and corresponds to states in which the energy is proportional to n2.
The electron in a hydrogen atom is bound to the nucleus by its spherically symmetrical electrostatic charge, and should therefore exhibit a similar kind of wave behavior. This is most easily visualized in a two-dimensional cross section that corresponds to the conventional electron orbit. But if the particle picture is replaced by de Broglie's probability wave, this wave must follow a circular path, and, most important of all, its wavelength (and consequently its energy) is restricted: only an integral number n = 1, 2, ... of whole wavelengths can fit around the circumference,

2πr = nλ

for otherwise the wave would collapse owing to self-interference. That is, the energy of the electron must be quantized; what Bohr had taken as a daring but arbitrary assumption was now seen as a fundamental requirement. Indeed the above equation can be derived very simply by combining Bohr's quantum condition 2πrmv = nh with the expression mv = h/λ for the deBroglie wavelength of a particle.
Viewing the electron as a standing-wave pattern also explains its failure to lose energy by radiating. Classical theory predicts that an accelerating electric charge will act as a radio transmitter; an electron traveling around a circular wire would certainly act in this way, and so would one rotating in an orbit around the nucleus. In a standing wave, however, the charge is distributed over space in a regular and unchanging way; there is no motion of the charge itself, and thus no radiation.
Q25. What is an orbital?
Because the classical view of an electron as a localizable particle is now seen to be untenable, so is the concept of a definite trajectory, or "orbit". Instead, we now use the word orbital to describe the state of existence of an electron. An orbital is really no more than a mathematical function describing the standing wave that gives the probability of the electron manifesting itself at any given location in space. More commonly (and loosely) we use the word to describe the region of space in which an electron is likely to be found. Each kind of orbital is characterized by a set of quantum numbers n, l, and m. These relate, respectively, to the average distance of the electron from the nucleus, to the shape of the orbital, and to its orientation in space.
Q26. If the electron cannot be localized, can it be moving?
In its lowest state in the hydrogen atom (in which l = 0) the electron has zero angular momentum, so electrons in s orbitals are not in orbital motion around the nucleus. In orbitals for which l > 0 the electron does have an effective angular momentum, and since the electron also has a definite rest mass, me = 9.11E–31 kg, it must possess an effective velocity. Its value can be estimated from the Uncertainty Principle; if the region in which the electron is confined is about 10–10 m across, then the uncertainty in its momentum is at least h/(10–10 m) = 6.6E–24 kg m s–1, which implies an effective velocity of around 7E6 m s–1, a few percent of the velocity of light.
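The estimate just described is easy to reproduce; the sketch below is a rough order-of-magnitude calculation using Δp ≈ h/Δx, as in the text, with rounded constants.

```python
h = 6.626e-34     # Planck's constant, J s
m_e = 9.109e-31   # electron rest mass, kg

delta_x = 1e-10               # size of the confinement region, m
delta_p = h / delta_x         # minimum momentum uncertainty, kg m/s
v_eff = delta_p / m_e         # implied effective velocity, m/s

print(f"delta_p ~ {delta_p:.1e} kg m/s")   # about 6.6e-24
print(f"v_eff   ~ {v_eff:.1e} m/s")        # about 7e6, a few percent of c
```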
The stronger the electrostatic force of attraction by the nucleus, the faster the effective electron velocity. In fact, the innermost electrons of the heavier elements have effective velocities so high that relativistic effects set in; that is, the effective mass of the electron significantly exceeds its rest mass. This has direct chemical effects; it is the cause, for example, of the low melting point of metallic mercury and of the color of gold.
Q27. Why does the electron not fall into the nucleus?
The negatively-charged electron is attracted to the positive charge of the nucleus. What prevents it from falling in? This question can be answered in various ways at various levels. All start with the statement that the electron, being a quantum particle, has a dual character and cannot be treated solely by the laws of Newtonian mechanics.
We saw above that in its wavelike guise, the electron exists as a standing wave which must circle the nucleus at a sufficient distance to allow at least one wavelength to fit on its circumference. This means that the smaller the radius of the circle, the shorter must be the wavelength of the electron, and thus the higher the energy. Thus it ends up "costing" the electron energy if it gets too close to the nucleus. The normal orbital radius represents the balance between the electrostatic force trying to pull the electron in, and what we might call the "confinement energy" that opposes the electrostatic energy. This confinement energy can be related to both the particle and wave character of the electron.
If the electron as a particle were to approach the nucleus, the uncertainty in its position would become so small (owing to the very small volume of space close to the nucleus) that the momentum, and therefore the energy, would have to become very large. The electron would, in effect, be "kicked out" of the nuclear region by the confinement energy.
The standing-wave patterns of an electron in a box can be calculated quite easily. For a spherical enclosure of diameter d, the energy is given by

$E_n = \dfrac{n^2h^2}{8md^2}$

in which n = 1, 2, 3, etc.
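Using the formula just given, with an assumed enclosure size d = 1E–10 m (about one atomic diameter), a short sketch produces the first few levels; note how the energies climb as $n^2$.

```python
h = 6.626e-34     # Planck's constant, J s
m_e = 9.109e-31   # electron mass, kg
d = 1e-10         # assumed enclosure diameter, m

for n in (1, 2, 3):
    E = n**2 * h**2 / (8 * m_e * d**2)     # E_n = n^2 h^2 / (8 m d^2)
    print(f"n = {n}: E = {E:.2e} J = {E / 1.602e-19:.0f} eV")
# Roughly 38, 150, and 338 eV: order-of-magnitude illustrations only.
```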
Q28. What is electron spin?
Each electron in an atom has associated with it a magnetic field whose direction is quantized; only two values are possible, pointing in opposite directions. We usually refer to these as "up" and "down", but the actual directions are parallel and antiparallel to the local magnetic field associated with the orbital motion of the electron.
The term spin implies that this magnetic moment is produced by the electron charge as the electron rotates about its own axis. Although this conveys a vivid mental picture of the source of the magnetism, the electron is not an extended body and its rotation is meaningless. Electron spin has no classical counterpart and no simple explanation; the magnetic moment is a consequence of relativistic shifts in local space and time due to the high effective velocity of the electron in the atom. This effect was predicted theoretically by P.A.M. Dirac in 1928.
Learning Objectives
Make sure you thoroughly understand the following essential ideas which have been presented above. It is especially important that you know the precise meanings of all the highlighted terms in the context of this topic.
• What was the caloric theory of heat, and how did Rumford's experiments in boring cannon barrels lead to its overthrow?
• Describe thermal radiation and the "scandal of the ultraviolet", and explain the role Max Planck played in introducing the quantum concept.
• What is the photoelectric effect? Describe the crucial insight that led Einstein to the concept of the photon.
What we call "classical" physics is based on our experience of what we perceive as the "real world". Even without knowing the details of Newton's laws of motion that describe the behavior of macroscopic bodies, we have all developed an intuitive understanding of this behavior; it is a part of everyone's personal view of the world. By extension, we tend to view atoms and molecules in much the same way, that is, simply as miniature versions of the macroscopic objects we know from everyday life. It turns out, however, that our everyday view of the macroscopic world is only a first approximation of the reality that becomes apparent at the atomic level. Many of those who first encounter this microscopic world of quantum weirdness find it so foreign to prior experience that their first reaction is to dismiss it as pure fantasy.
The fact is, however, that it is not only for real, but serves as the key that unlocks even some of the simplest aspects of modern Chemistry. Our goal in this lesson is to introduce you to this new reality, and to provide you with a conceptual understanding of it that will make Chemistry a more meaningful part of your own personal world.
The Limits of Classical Physics
Near the end of the nineteenth century, the enormous success of the recently developed kinetic molecular theory of gases had dispelled most doubts about the atomic nature of matter; the material world was seen to consist of particles that had distinct masses and sizes, and which moved in trajectories just as definite as those of billiard balls.
In the 1890s, however, certain phenomena began to be noticed that seemed to be inconsistent with this dichotomy of particles and waves. This prompted further questions and further experiments which led eventually to the realization that classical physics, while it appears to be "the truth", is by no means the whole truth. In particular, it cannot accurately describe the behavior of objects that are extremely small or fast-moving.
Chemistry began as an entirely empirical, experimental science, dealing with the classification and properties of substances and with their transformations in chemical reactions. As this large body of facts developed into a science (one of whose functions is always to explain and correlate known facts and to predict new ones), it has become necessary to focus increasingly on the nature and behavior of individual atoms and of their own constituent parts, especially the electrons. Owing to their extremely small masses, electrons behave as quantum particles which do not obey the rules of classical physics.
The purpose of this introductory unit is to summarize the major ideas of quantum theory that will be needed to treat atomic and molecular structure later on in the course.
Quantum theory can be presented simply as a set of assumptions which are developed through mathematical treatment. This is in fact the best route to take if one is to use quantum mechanics as a working tool. More than this, however, quantum theory brings with it a set of concepts that have far-reaching philosophical implications and which should be a part of the intellectual equipment of anyone who claims to have a general education in the sciences. A major objective of this chapter will be to introduce you to "the quantum way of thinking" and to show how this led to a profound break with the past, and a shift in our way of viewing the world that has no parallel in Western intellectual history.
Light
The development of our ideas about light and radiation was not quite as direct. In the 18th century, heat was regarded as a substance called caloric whose invisible atoms could flow from one object to another, thus explaining thermal conduction. This view of heat as a material fluid seemed to be confirmed by the observation that heat can pass through a vacuum, a phenomenon that we now call radiant heat. Isaac Newton, whose experiments with a prism in 1672 led to his famous textbook "Opticks", noted that light seemed to react with green plants to produce growth, and must therefore be a "substance" having atoms of its own. By 1800, the corpuscular (particle) theory of light was generally accepted.
And yet there were questions. Count Rumford's observation that the drill bits employed in boring cannons produced more frictional heat when they were worn and dull led to the overthrow of the caloric theory.
The caloric theory of heat assumed that small particles are able to contain more heat than large ones, so that when a material is sawn or drilled, some of its heat is released as the filings are produced. A dull drill produces few filings, and according to this theory, should produce little heat, but Rumford was able to show that the amount of heat produced is in fact independent of the state of the drill, and depends only on the amount of mechanical work done in turning it.
Back in the 17th century, Christiaan Huygens had shown how a number of optical effects could be explained if light had a wavelike nature, and this eventually led Fresnel to develop an elaborate wave theory of light. By 1818 the question of "particle or wave" had become so confused that the French Academy held a great debate intended to settle the matter once and for all. The mathematician Poisson pointed out that Fresnel's wave theory had a ridiculous consequence: the shadow cast by a circular disk should have a bright spot of light at its center, where waves arriving in phase would reinforce each other. When the experiment was carried out, Fresnel was entirely vindicated: if the light source is sufficiently point-like (an extended source such as the sun or an ordinary lamp will not work), this diffraction effect is indeed observed.
Heat
By this time it was known that radiant heat and "cold" could be focused and transmitted by mirrors, and in 1800 William Herschel discovered that radiant heat could be sensed in the dark region just beyond the red light refracted by a prism. Light and radiant heat, which had formerly been considered separate, were now recognized as one, although the question of precisely what was doing the "waving" was something of an embarrassment.
The quantum revolution
By 1890, physicists thought they had tidied up the world into the two realms of particulate matter and of wavelike radiant energy, which by then had been shown by James Clerk Maxwell to be forms of electromagnetic energy. No sooner had all this been accomplished, than the cracks began to appear; these quickly widened into chasms, and within twenty years the entire foundations of classical physics had disintegrated; it would not be until the 1920's that anyone with a serious interest in the nature of the microscopic world would find a steady place to stand.
Cathode rays
The atom was the first to go. It had been known for some time that when a high voltage is applied to two separated pieces of metal in an evacuated tube, "cathode rays" pass between them. These rays could be detected by their ability to cause certain materials to give off light, or fluoresce, and were believed to be another form of electromagnetic radiation. Then, in the 1890s, J.J. Thomson and Jean Perrin showed that cathode rays are composed of particles having a measurable mass (less than 1/1000 of that of the hydrogen atom), that they carry a fixed negative electric charge, and that they come from atoms. This last conclusion went so strongly against the prevailing view of atoms as the ultimate, un-cuttable stuff of the world that Thomson only reluctantly accepted it, and having done so, quickly became the object of widespread ridicule.
Radioactivity
But worse was soon to come; not only were atoms shown not to be the smallest units of matter, but the work of the Curies established that atoms are not even immutable; atoms of high atomic weight such as uranium and radium give off penetrating beams of radiation and in the process change into other elements, disintegrating through a series of stages until they turn into lead. Among the various kinds of radiation that accompany radioactive disintegration are the very same cathode rays that had been produced artificially by Thomson, and which we now know as electrons.
Radiation is quantized
The wave theory of radiation was also running into difficulties. Any object at a temperature above absolute zero gives off radiant energy; if the object is moderately warm, we sense this as radiant heat. As the temperature is raised, a larger proportion of shorter-wavelength radiation is given off, so that at sufficiently high temperatures the object becomes luminous. The origin of this radiation was thought to lie in the thermally-induced oscillations of the atoms within the object, and on this basis the mathematical physicist Lord Rayleigh had worked out a formula that related the wavelengths given off to the temperature. Unfortunately, this formula did not work; it predicted that most of the radiation given off at any temperature would be of very short wavelength, which would place it in the ultraviolet region of the spectrum. What was most disconcerting is that no one could say why Rayleigh's formula did not work, based as it was on sound classical physics; this puzzle became known as the "scandal of the ultraviolet".
Quanta
In 1900 the German physicist Max Planck pointed out that one simple change in Rayleigh's argument would produce a formula that accurately describes the radiation spectrum of a perfect radiator, which is known as a "black body". Rayleigh assumed that such an object would absorb and emit radiation in amounts of any magnitude, ranging from minute to very large. This is just what one would expect on the basis of classical mechanics, in which the energy of a system can change by any arbitrary amount. Planck's change, for which he could offer no physical justification other than that it works, was to discard this assumption, and to require that the absorption or emission of radiation occur only in discrete chunks, or quanta. Max Planck had unlocked the door that would lead to the resurrection of the corpuscular theory of radiation. Only a few years later, Albert Einstein would kick the door open and walk through.
The photoelectric effect
By 1900 it was known that a beam of light, falling on a piece of metal, could cause electrons to be ejected from its surface. Evidently the energy associated with the light overcomes the binding energy of the electron in the metal; any energy the light supplies in excess of this binding energy appears as kinetic energy of the emitted electron. What seemed peculiar, however, was that the energy of the ejected electrons did not depend on the intensity of the light as classical physics would predict. Instead, the energy of the photoelectrons (as they are called) varies with the color, or wavelength of the light; the higher the frequency (the shorter the wavelength), the greater the energy of the ejected electrons.
In 1905, Albert Einstein, then an unknown clerk in the Swiss Patent Office, published a remarkable paper in which he showed that if light were regarded as a collection of individual particles, a number of phenomena, including the photoelectric effect, could be explained. Each particle of light, which we now know as a photon, has associated with it a distinct energy that is proportional to the frequency of the light, and which corresponds to Planck's energy quanta. The energy of the photon is given by

$E = h\nu = \dfrac{hc}{\lambda}$

in which h is Planck's constant, 6.63×10–34 J s, ν (Greek nu) is the frequency, λ (lambda) is the wavelength, and c is the velocity of light, 3.00×108 m s–1. The photoelectric effect is seen only if the photon energy E exceeds the binding energy of the electron in the metal; it is clear from the above equation that as the wavelength increases, E decreases, and eventually no electrons will be released. Einstein had in effect revived the corpuscular theory of light, although it would not be until about 1915 that sufficient experimental evidence would be at hand to convince most of the scientific world— but not all of it: Max Planck, whose work had led directly to the revival of the particle theory of light, remained one of the strongest doubters.
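The energy balance described here is easy to explore numerically. In the sketch below, the 2.3-eV binding (work) energy is an assumed value, roughly that of potassium; any photon energy in excess of it appears as kinetic energy of the ejected electron.

```python
h = 6.626e-34    # Planck's constant, J s
c = 3.00e8       # velocity of light, m/s
eV = 1.602e-19   # joules per electron volt
W = 2.3 * eV     # assumed binding energy of the electron in the metal

for wavelength_nm in (400, 500, 600):
    E_photon = h * c / (wavelength_nm * 1e-9)   # E = h*nu = h*c/lambda
    KE = E_photon - W                            # excess appears as kinetic energy
    if KE > 0:
        print(f"{wavelength_nm} nm: electron ejected with {KE/eV:.2f} eV")
    else:
        print(f"{wavelength_nm} nm: photon energy below binding energy; no emission")
```

Past the threshold wavelength (here near 540 nm), emission stops no matter how intense the beam is made, just as Einstein's particle picture requires.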
The 1905 volume of Annalen der Physik is now an expensive collector's item, for in that year Einstein published three major papers, any one of which would have guaranteed him his place in posterity. The first, on the photoelectric effect, eventually won him the Nobel Prize. The second paper, on Brownian motion, amounted to the first direct confirmation of the atomic theory of matter. The third paper, his most famous, "On the electrodynamics of moving bodies", set forth the special theory of relativity.
The confirmation of his general theory of relativity by the solar-eclipse observations of 1919 would finally make Einstein into a reluctant public celebrity and scientific superstar. This theory explained gravity as a consequence of the curvature of space-time.
Matter and energy united
Energy
The concept of energy was slow to develop in science, partly because it was not adequately differentiated from the related quantities of force and motion. It was generally agreed that some agent of motion and change must exist; Descartes suggested, for example, that God, when creating the world, had filled it with "vortices" whose motions never ceased, but which could be transferred to other objects and thus give them motion. Gradually the concepts of vis viva and vis mortua developed; these later became kinetic and potential energy. Later on, the cannon-boring experiments of Benjamin Thompson (Count Rumford) revealed the connections between heat and work. Finally, the invention of the steam engine forced the birth of the science of thermodynamics, whose founding law was that a quantity known as energy can be transferred from one object to another through the processes of heat and work, but that the energy itself is strictly conserved.
Relativity
If Einstein's first 1905 paper put him on the scientific map, the third one made him a scientific celebrity.
In effect, Einstein merely asked a simple question about Faraday's law of electromagnetic induction, which says that a moving electric charge (such as is produced by an electric current flowing in a conductor) will create a magnetic field. Similarly, a moving magnetic field will induce an electric current. In either case, something has to be moving. Why, Einstein asked, does this motion have to be relative to that of the room in which the experiment is performed— that is, relative to the Earth? A stationary charge creates no magnetic field, but we know that there is really no such thing as a stationary charge, since the Earth itself is in motion; what, then, do motion and velocity ultimately relate to?
The answer, Einstein suggested, is that the only constant and unchanging velocity in the universe is that of light. This being so, the beam emitted by the headlight of a moving vehicle, for example, can travel no faster than the light coming from a stationary one. This in turn suggested (through the Lorentz transformation - we are leaving out a few steps here!) that mass, as well as velocity (and thus also, time) are relative in that they depend entirely on the motion of the observer. Two observers, moving at different velocities relative to each other, will report different masses for the same object, and will age at different rates. Further, the faster an object moves with respect to an observer, the greater is its mass, and the harder it becomes to accelerate it to a still greater velocity. As the velocity of an object approaches the speed of light, its mass approaches infinity, making it impossible for an object to move as fast as light.
According to Einstein, the speed of light is really the only speed in the universe. If you are sitting still, you are moving through the time dimension at the speed of light. If you are flying in an airplane, your motion along the three Cartesian dimensions subtracts from that along the fourth (time) coordinate, with the result that time, for you, passes more slowly.
Relativity comes into chemistry in two rather indirect ways: it is responsible for the magnetic moment ("spin") of the electron, and in high-atomic weight atoms in which the electrons have especially high effective velocities, their greater [relativistic] masses cause them to be bound more tightly to the nucleus— accounting, among other things, for the color of gold, and for the unusual physical and chemical properties of mercury.
Mass-energy
Where does the additional mass of a moving body come from? Simply from the kinetic energy of the object; this equivalence of mass and energy, expressed by the famous relation E = mc2, is the best-known consequence of special relativity. The reason that photons alone can travel at the velocity of light is that these particles possess zero rest mass to start with. You can think of ordinary matter as "congealed energy", trapped by its possession of rest mass, whereas light is energy that has been liberated of its mass.
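To get a feeling for these magnitudes, consider the following sketch; the 240 kJ per mole used here is an assumed, typical chemical reaction energy, and the constants are rounded.

```python
c = 3.00e8              # velocity of light, m/s

E_rxn = 240e3           # assumed reaction energy, J per mole
dm = E_rxn / c**2       # mass change accompanying the reaction, kg per mole
print(f"mass equivalent: {dm:.1e} kg per mole")   # about 2.7e-12 kg

print(f"1 kg of matter <-> {1.0 * c**2:.1e} J")   # about 9e16 J
```

The mass changes that accompany ordinary chemical reactions are thus far too small to weigh, which is why chemistry can treat mass as strictly conserved.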
Learning Objectives
Make sure you thoroughly understand the following essential ideas
• Cite two pieces of experimental evidence that demonstrate, respectively, the wave- and particle-like nature of light.
• Define the terms amplitude, wavelength, and frequency as they apply to wave phenomena.
• Give a qualitative description of electromagnetic radiation in terms of electrostatic and magnetic fields.
• Be able to name the principal regions of the electromagnetic spectrum (X-rays, infrared region, etc.) and specify their sequence in terms of either wavelength or energy per photon.
• Describe the difference between line spectra and continuous spectra in terms of both their appearance and their origins.
• What is meant by the de Broglie wavelength of a particle? How will the particle's mass and velocity affect the wavelength?
• State the consequences of the Heisenberg uncertainty principle in your own words.
Our intuitive view of the "real world" is one in which objects have definite masses, sizes, locations and velocities. Once we get down to the atomic level, this simple view begins to break down. It becomes totally useless when we move down to the subatomic level and consider the lightest of all chemically-significant particles, the electron. The chemical properties of a particular kind of atom depend on the arrangement and behavior of the electrons which make up almost the entire volume of the atom. The electronic structure of an atom can only be determined indirectly by observing the manner in which atoms absorb and emit light. Light, as you already know, has wavelike properties, so we need to know something about waves in order to interpret these observations. But because the electrons are themselves quantum particles and therefore have wavelike properties of their own, we will find that an understanding of the behavior of electrons in atoms can only be gained through the language of waves.
The language of light
Atoms are far too small to see directly, even with the most powerful optical microscopes. But atoms do interact with and under some circumstances emit light in ways that reveal their internal structures in amazingly fine detail. It is through the "language of light" that we communicate with the world of the atom. This section will introduce you to the rudiments of this language.
Wave, particle, or what?
In the early 19th century, the English scientist Thomas Young carried out the famous double-slit experiment which demonstrated that a beam of light, when split into two beams and then recombined, will show interference effects that can only be explained by assuming that light is a wavelike disturbance. By 1820, Augustin Fresnel had put this theory on a sound mathematical basis, but the exact nature of the waves remained unclear until the 1860's when James Clerk Maxwell developed his electromagnetic theory.
But Einstein's 1905 explanation of the photoelectric effect showed that light also exhibits a particle-like nature. The photon is the smallest possible packet (quantum) of light; it has zero mass but a definite energy.
When light-wave interference experiments are conducted with extremely low intensities of light, the wave theory breaks down; instead of recording a smooth succession of interference patterns as shown above, an extremely sensitive detector sees individual pulses— that is, individual photons.
Note
Suppose we conduct the double-slit interference experiment using a beam of light so weak that only one photon at a time passes through the apparatus (it is experimentally possible to count single photons, so this is a practical experiment.) Each photon passes through the first slit, and then through one or the other of the second set of slits, eventually striking the photographic film where it creates a tiny dot. If we develop the film after a sufficient number of photons have passed through, we find the very same interference pattern we obtained with higher-intensity light whose behavior could be explained by wave interference.
There is something strange here. Each photon, acting as a particle, must pass through one or the other of the pair of slits, so we would expect to get only two groups of spots on the film, each opposite one of the two slits. Instead, it appears that each particle, on passing through one slit, "knows" about the other, and adjusts its final trajectory so as to build up a wavelike interference pattern.
It gets even stranger: suppose that we set up a detector to determine which slit a photon is heading for, and then block off the other slit with a shutter. We find that the photon sails straight through the open slit and onto the film without trying to create any kind of an interference pattern. Apparently, any attempt to observe the photon as a discrete particle causes it to behave like one.
One well-known physicist (Landé) suggested that perhaps we should coin a new word, wavicle, to reflect this duality.
Later on, virtually the same experiment was repeated with electrons, thus showing that particles can have wavelike properties (as the French physicist Louis de Broglie predicted in 1923), just as what were conventionally thought to be electromagnetic waves possess particle-like properties.
Is it a particle or is it a wave?
For large bodies (most atoms, baseballs, cars) there is no question: the wave properties are insignificant, and the laws of classical mechanics can adequately describe their behaviors. But for particles as tiny as electrons (quantum particles), the situation is quite different: instead of moving along well defined paths, a quantum particle seems to have an infinity of paths which thread their way through space, seeking out and collecting information about all possible routes, and then adjusting its behavior so that its final trajectory, when combined with that of others, produces the same overall effect that we would see from a train of waves of wavelength = h/mv.
Taking this idea of quantum indeterminacy to its most extreme, the physicist Erwin Schrödinger proposed a "thought experiment" in which the radioactive decay of an atom would initiate a chain of events that would lead to the death of a cat placed in a closed box. The atom has a 50% chance of decaying in an hour, meaning that its wave representation will contain both possibilities until an observation is made. The question, then, is will the cat be simultaneously in an alive-and-dead state until the box is opened? If so, this raises all kinds of interesting questions about the nature of being.
What you need to know about waves
We use the term "wave" to refer to a quantity which changes with time. Waves in which the changes occur in a repeating or periodic manner are of special importance and are widespread in nature; think of the motions of the ocean surface, the pressure variations in an organ pipe, or the vibrations of a plucked guitar string. What is interesting about all such repeating phenomena is that they can be described by the same mathematical equations.
Wave motion arises when a periodic disturbance of some kind is propagated through a medium; pressure variations through air, transverse motions along a guitar string, or variations in the intensities of the local electric and magnetic fields in space, which constitutes electromagnetic radiation. For each medium, there is a characteristic velocity at which the disturbance travels.
There are three measurable properties of wave motion: amplitude, wavelength, and frequency (the number of vibrations per second). The relation between the wavelength $\lambda$ (Greek lambda) and frequency $\nu$ (Greek nu) of a wave is determined by the propagation velocity v:

$v = \nu \lambda$
Example $1$
What is the wavelength of the musical note A = 440 hz when it is propagated through air in which the velocity of sound is 343 m s–1?
Solution
$\lambda = \dfrac{v}{\nu} = \dfrac{343\; m \,s^{–1}}{440\, s^{–1}} = 0.78\; m$
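As a quick check of this result, a short Python calculation (speed of sound rounded to 343 m/s) reproduces it and shows how the wavelength scales with frequency.

```python
v_sound = 343.0                    # speed of sound in air, m/s
for freq_hz in (440.0, 880.0):     # A and the A one octave above
    print(f"{freq_hz:.0f} Hz -> {v_sound / freq_hz:.2f} m")
# 440 Hz gives 0.78 m; doubling the frequency halves the wavelength.
```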
Light and electromagnetic radiation
Michael Faraday's discovery that changing magnetic fields could give rise to electric currents, together with the earlier finding that electric currents produce magnetic fields, raised the question of how these effects are transmitted through space. Around 1870, the Scottish physicist James Clerk Maxwell (1831-1879) showed that this electromagnetic radiation can be described as a train of perpendicular oscillating electric and magnetic fields.
Maxwell was able to calculate the speed at which electromagnetic disturbances are propagated, and found that this speed is the same as that of light. He therefore proposed that light is itself a form of electromagnetic radiation whose wavelength range forms only a very small part of the entire electromagnetic spectrum. Maxwell's work served to unify what were once thought to be entirely separate realms of wave motion.
The electromagnetic spectrum
The electromagnetic spectrum is conventionally divided into various parts as depicted in the diagram below, in which the four logarithmic scales correlate the wavelength of electromagnetic radiation with its frequency in hertz (units of s–1) and the energy per photon, expressed both in joules and electron-volts.
The other items shown on the diagram, from the top down, are:
• the names used to denote the various wavelength ranges of radiation (you should know their names and the order in which they appear)
• the principal effects of the radiation on atoms and molecules
• the peaks of thermal radiation emitted by black bodies at three different temperatures
Electromagnetic radiation and chemistry. It's worth noting that radiation in the ultraviolet range can have direct chemical effects by ionizing atoms and disrupting chemical bonds. Longer-wavelength radiation can interact with atoms and molecules in ways that provide a valuable means of identifying them and revealing particular structural features.
Energy units and magnitudes
It is useful to develop some feeling for the various magnitudes of energy that we must deal with. The basic SI unit of energy is the Joule; the appearance of this unit in Planck's constant h allows us to express the energy equivalent of light in joules. For example, light of wavelength 500 nm, which appears blue-green to the human eye, would have a frequency of

$\nu = \dfrac{c}{\lambda} = \dfrac{3.00 \times 10^8\; m\; s^{–1}}{500 \times 10^{–9}\; m} = 6.0 \times 10^{14}\; s^{–1}$

The quantum of energy carried by a single photon of this frequency is

$E = h\nu = (6.63 \times 10^{–34}\; J\; s)(6.0 \times 10^{14}\; s^{–1}) = 4.0 \times 10^{–19}\; J$
Another energy unit that is commonly employed in atomic physics is the electron volt; this is the kinetic energy that an electron acquires upon being accelerated across a 1-volt potential difference. The relationship 1 eV = 1.6022E–19 J gives an energy of 2.5 eV for the photons of blue-green light.
Two small flashlight batteries will produce about 2.5 volts, and thus could, in principle, give an electron about the same amount of kinetic energy that blue-green light can supply. Because the energy produced by a battery derives from a chemical reaction, this quantity of energy is representative of the magnitude of the energy changes that accompany chemical reactions.
In more familiar terms, one mole of 500-nm photons would have an energy equivalent of Avogadro's number times 4E–19 J, or 240 kJ per mole. This is comparable to the amount of energy required to break some chemical bonds. Many substances are able to undergo chemical reactions following light-induced disruption of their internal bonding; such molecules are said to be photochemically active.
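The conversions above can be collected into one short sketch (rounded constants; 500 nm is the blue-green example used here).

```python
h, c = 6.626e-34, 3.00e8   # Planck's constant (J s), velocity of light (m/s)
N_A = 6.022e23             # Avogadro's number, photons per mole
eV = 1.602e-19             # joules per electron volt

E = h * c / 500e-9                   # joules per photon
print(f"{E:.2e} J per photon")       # about 4.0e-19 J
print(f"{E / eV:.2f} eV")            # about 2.5 eV
print(f"{E * N_A / 1e3:.0f} kJ/mol") # about 240 kJ per mole of photons
```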
Continuous spectra
Any body whose temperature is above absolute zero emits radiation covering a broad range of wavelengths. At very low temperatures the predominant wavelengths are in the radio and microwave regions. As the temperature increases, the wavelengths decrease; at room temperature, most of the emission is in the infrared.
At still higher temperatures, objects begin to emit in the visible region, at first in the red, and then moving toward the blue as the temperature is raised. These thermal emission spectra are described as continuous spectra, since all wavelengths within the broad emission range are present.
The source of thermal emission most familiar to us is the Sun. When sunlight is refracted by rain droplets into a rainbow or by a prism onto a viewing screen, we see the visible part of the spectrum.
Red hot, white hot, blue hot... your rough guide to temperatures of hot objects.
Line spectra
Heat a piece of iron up to near its melting point and it will emit a broad continuous spectrum that the eye perceives as orange-yellow. But if you zap the iron with an electric spark, some of the iron atoms will vaporize and have one or more of their electrons temporarily knocked out of them. As they cool down the electrons will re-combine with the iron ions, losing energy as they move in toward the nucleus and giving up this excess energy as light. The spectrum of this light is anything but continuous; it consists of a series of discrete wavelengths which we call lines.
Each chemical element has its own characteristic line spectrum which serves very much like a "fingerprint" capable of identifying a particular element in a complex mixture. Shown below is what you would see if you could look at several different atomic line spectra directly.
Atomic line spectra are extremely useful for identifying small quantities of different elements in a mixture.
• Companies that own large fleets of trucks and buses regularly submit their crankcase engine oil samples to spectrographic analysis. If they find high levels of certain elements (such as vanadium) that occur only in certain alloys, this can signal that certain parts of the engine are undergoing severe wear. This allows the mechanical staff to take corrective action before engine failure occurs.
• Several elements (Rb, Cs, Tl) were discovered by observing spectral lines that did not correspond to any of the then-known elements. Helium, which is present only in traces on Earth, was first discovered by observing the spectrum of the Sun.
• A more prosaic application of atomic spectra is determination of the elements present in stars.
If you live in a city, you probably see atomic line light sources every night! "Neon" signs are the most colorful and spectacular, but high-intensity street lighting is the most widespread source. A look at the emission spectrum (above) of sodium explains the intense yellow color of these lamps. The spectrum of mercury (not shown) similarly has its strongest lines in the blue-green region.
Particles and waves
There is one more fundamental concept you need to know before we can get into the details of atoms and their spectra. If light has a particle nature, why should particles not possess wavelike characteristics? In 1923 a young French physicist, Louis de Broglie, published an argument showing that matter should indeed have a wavelike nature. The de Broglie wavelength of a body is inversely proportional to its momentum mv:
$\lambda =\dfrac{h}{mv}$
If you explore the magnitude of the quantities in this equation (recall that h is around 10–33 J s), it will be apparent that the wavelengths of all but the lightest bodies are insignificantly small fractions of their dimensions, so that the objects of our everyday world all have definite boundaries. Even individual atoms are sufficiently massive that their wave character is not observable in most kinds of experiments. Electrons, however, are another matter; the electron was in fact the first particle whose wavelike character was seen experimentally, following de Broglie's prediction. Its small mass (9.1E–31 kg) made it an obvious candidate, and velocities of around 100 km/s are easily obtained, yielding a value of λ in the above equation that well exceeds what we think of as the "radius" of the electron. At such velocities the electron behaves as if it is "spread out" to atomic dimensions; a beam of these electrons can be diffracted by the ordered rows of atoms in a crystal in much the same way as visible light is diffracted by the closely-spaced grooves of a CD recording.
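To see why wave character matters only for the lightest particles, the sketch below evaluates λ = h/(mv) for an electron and for a macroscopic object; the baseball mass and speed are arbitrary illustrative values.

```python
h = 6.626e-34   # Planck's constant, J s

cases = {
    "electron at 1E5 m/s": (9.11e-31, 1e5),   # mass (kg), velocity (m/s)
    "baseball at 40 m/s": (0.145, 40.0),
}
for label, (m, v) in cases.items():
    print(f"{label}: lambda = {h / (m * v):.2e} m")
# The electron's wavelength (~7e-9 m) far exceeds atomic dimensions, while
# the baseball's (~1e-34 m) is an utterly negligible fraction of its size.
```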
Electron diffraction has become an important tool for investigating the structures of molecules and of solid surfaces.
A more familiar exploitation of the wavelike properties of electrons is seen in the electron microscope, whose utility depends on the fact that the wavelength of the electrons is much less than that of visible light, thus allowing the electron beam to reveal detail on a correspondingly smaller scale.
The uncertainty principle
In 1927, the German physicist Werner Heisenberg pointed out that the wave nature of matter leads to a profound and far-reaching conclusion: no method of observation, however perfectly it is carried out, can reveal both the exact location and momentum (and thus the velocity) of a particle. This is the origin of the widely known concept that the very process of observation will change the value of the quantity being observed. The Heisenberg principle can be expressed mathematically by the inequality
$\Delta{x}\Delta{p} \geq \dfrac{h}{2\pi}$
in which the $\Delta$ (deltas) represent the uncertainties with which the location and momentum are known.
Note
Suppose that you wish to measure the exact location of a particle that is at rest (zero momentum). To accomplish this, you must "see" the particle by illuminating it with light or other radiation. But the light acts like a beam of photons, each of which possesses the momentum h/λ in which λ is the wavelength of the light. When a photon collides with the particle, it transfers some of its momentum to the particle, thus altering both its position and momentum.
Notice how the form of this expression predicts that if the location of an object is known exactly ($\Delta{x} = 0$), then the uncertainty in the momentum must be infinite, meaning that nothing at all about the velocity can be known. Similarly, if the velocity were specified exactly, then the location would be entirely uncertain and the particle could be anywhere. One interesting consequence of this principle is that even at a temperature of absolute zero, the molecules in a crystal must still possess a small amount of zero point vibrational motion, sufficient to limit the precision to which we can measure their locations in the crystal lattice.
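The same inequality puts numbers on this contrast. The sketch below compares the minimum velocity uncertainty of an electron and of a small dust grain (whose mass is an arbitrary illustrative value) confined to an atom-sized region.

```python
from math import pi

h = 6.626e-34      # Planck's constant, J s
delta_x = 1e-10    # size of the confinement region, m

for label, m in (("electron", 9.11e-31), ("dust grain", 1e-9)):
    delta_v = h / (2 * pi * m * delta_x)   # from delta_x*delta_p >= h/(2*pi)
    print(f"{label}: delta_v >= {delta_v:.1e} m/s")
# electron: ~1e6 m/s, enormous on an atomic scale;
# dust grain: ~1e-15 m/s, far too small to ever observe.
```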
An equivalent formulation of the uncertainty principle relates the uncertainties associated with a measurement of the energy of a system to the time $\Delta{t}$ taken to make the measurement:
$\Delta{E}\Delta{t} \geq \dfrac{h}{2 \pi}$
The "uncertainty" referred to here goes much deeper than merely limiting our ability to observe the quantity $\Delta{x}\Delta{p}$ to a greater precision than $h/2\pi$. It means, rather, that this product has no exact value, nor, by extension, do position and momentum on a microscopic scale. A more appropriate term would be indeterminacy, which is closer to Heisenberg's original word Ungenauigkeit.
The revolutionary nature of Heisenberg's uncertainty principle soon extended far beyond the arcane world of physics; its consequences quickly entered the realm of ideas and have inspired numerous creative works in the arts— few of which really have much to do with the Principle! A possible exception is Michael Frayn's widely acclaimed play Copenhagen, which has brought a sense of Heisenberg's thinking to a wide segment of the public.
Learning Objectives
Make sure you thoroughly understand the following essential ideas:
• Describe the Thomson, Rutherford, and early planetary models of the atom, and explain why the latter is not consistent with classical physics.
• State the major concepts that distinguished Bohr's model of the atom from the earlier planetary model.
• Give an example of a mechanical standing wave; state the meaning and importance of its boundary conditions.
• Sketch out a diagram showing how the concept of a standing wave applies to the description of the electron in a hydrogen atom.
• What is an atomic line emission spectrum? What is the significance of the continuum region of an emission spectrum? Sketch out a drawing showing the essentials of such a spectrum, including the ionization limit and the continuum.
• Describe the way in which Bohr's quantum numbers explain the observed spectrum of a typical atom.
• Explain the relation between the absorption and emission spectrum of an atom.
Our goal in this unit is to help you understand how the arrangement of the periodic table of the elements must follow as a necessary consequence of the fundamental laws of the quantum behavior of matter. The modern theory of the atom makes full use of the wave-particle duality of matter. In order to develop and present this theory in a comprehensive way, we would require a number of mathematical tools that lie beyond the scope of this course. We will therefore present the theory in a semi-qualitative manner, emphasizing its results and their applications, rather than its derivation.
Models of the atom
Models are widely employed in science to help understand things that cannot be viewed directly. The idea is to imagine a simplified system or process that might be expected to exhibit the basic properties or behavior of the real thing, and then to test this model against more complicated examples and modify it as necessary. Although one is always on shaky philosophical ground in trying to equate a model with reality, there comes a point when the difference between them becomes insignificant for most practical purposes.
The planetary model
The demonstration by Thomson in 1897 that all atoms contain units of negative electric charge led to the first science-based model of the atom which envisaged the electrons being spread out uniformly throughout the spherical volume of the atom. Ernest Rutherford, a New Zealander who started out as Thomson's student at Cambridge, distrusted this "plum pudding" model (as he called it) and soon put it to rest; Rutherford's famous alpha-ray bombardment experiment (carried out, in 1909, by his students Hans Geiger and Ernest Marsden) showed that nearly all the mass of the atom is concentrated in an extremely small (and thus extremely dense) body called the nucleus. This led him to suggest the planetary model of the atom, in which the electrons revolve in orbits around the nuclear "sun".
Even though the planetary model has long since been discredited, it seems to have found a permanent place in popular depictions of the atom, and certain aspects of it remain useful in describing and classifying atomic structure and behavior. The planetary model of the atom assumed that the electrostatic attraction between the central nucleus and the electron is exactly balanced by the centrifugal force created by the revolution of the electron in its orbit. If this balance were not present, the electron would either fall into the nucleus, or it would be flung out of the atom.
The difficulty with this picture is that it is inconsistent with a well established fact of classical electrodynamics which says that whenever an electric charge undergoes a change in velocity or direction (that is, acceleration, which must happen if the electron circles around the nucleus), it must continually radiate energy. If electrons actually followed such a trajectory, all atoms would act as miniature broadcasting stations. Moreover, the radiated energy would come from the kinetic energy of the orbiting electron; as this energy gets radiated away, there is less centrifugal force to oppose the attractive force due to the nucleus. The electron would quickly fall into the nucleus, following a trajectory that became known as the "death spiral of the electron". According to classical physics, no atom based on this model could exist for more than a brief fraction of a second.
Bohr's Model
Niels Bohr was a brilliant Danish physicist who came to dominate the world of atomic and nuclear physics during the first half of the twentieth century. Bohr suggested that the planetary model could be saved if one new assumption were made: certain "special states of motion" of the electron, corresponding to different orbital radii, would not result in radiation, and could therefore persist indefinitely without the electron falling into the nucleus. Specifically, Bohr postulated that the angular momentum of the electron, mvr (the product of its mass, orbital velocity, and the radius $r$ of its orbit), is restricted to values that are integral multiples of $h/2\pi$. The radius of one of these allowed Bohr orbits is given by
$r=\dfrac{nh}{2\pi m v}$
in which h is Planck's constant, m is the mass of the electron, v is the orbital velocity, and n can have only the integer values 1, 2, 3, etc. The most revolutionary aspect of this assumption was its use of the variable integer n; this was the first application of the concept of the quantum number to matter. The larger the value of n, the larger the radius of the electron orbit, and the greater the potential energy of the electron.
As the electron moves to orbits of increasing radius, it does so in opposition to the restoring force due to the positive nucleus, and its potential energy is thereby raised. This is entirely analogous to the increase in potential energy that occurs when any mechanical system moves against a restoring force— as, for example, when a rubber band is stretched or a weight is lifted.
Thus what Bohr was saying, in effect, is that the atom can exist only in certain discrete energy states: the energy of the atom is quantized. Bohr noted that this quantization nicely explained the observed emission spectrum of the hydrogen atom. The electron is normally in its smallest allowed orbit, corresponding to n = 1; upon excitation in an electrical discharge or by ultraviolet light, the atom absorbs energy and the electron gets promoted to higher quantum levels. These higher excited states of the atom are unstable, so after a very short time (around 10–9 sec) the electron falls into lower orbits and finally into the innermost one, which corresponds to the atom's ground state. The energy lost on each jump is given off as a photon, and the frequency of this light provides a direct experimental measurement of the difference in the energies of the two states, according to the Planck-Einstein relationship E = hν.
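In the Bohr model the orbit energies fall off as $1/n^2$, a result quoted here without derivation. Combining this with the ionization energy of hydrogen (1312 kJ/mol, cited later in this chapter) gives a quick sketch of the energy-level ladder and of a typical jump.

```python
E1 = 1312.0   # ground-state binding energy of hydrogen, kJ/mol

# Energy of each level, relative to the free (ionized) electron at zero:
for n in range(1, 5):
    print(f"n = {n}: E = {-E1 / n**2:8.1f} kJ/mol")

# Energy released in the n = 2 -> 1 jump, emitted as a photon:
print(f"2 -> 1 photon: {E1 * (1 - 1/2**2):.0f} kJ/mol")   # about 984 kJ/mol
```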
Vibrations, standing waves and bound states
Bohr's theory worked; it completely explained the observed spectrum of the hydrogen atom, and this triumph would later win him a Nobel prize. The main weakness of the theory, as Bohr himself was the first to admit, is that it could offer no good explanation of why these special orbits immunized the electron from radiating its energy away. The only justification for the proposal, other than that it seems to work, comes from its analogy to certain aspects of the behavior of vibrating mechanical systems.
Spectrum of a guitar string
In order to produce a tone when plucked, a guitar string must be fixed at each end (that is, it must be a bound system) and must be under some tension. Only under these conditions will a transverse disturbance be countered by a restoring force (the string's tension) so as to set up a sustained vibration. Having the string tied down at both ends places a very important boundary condition on the motion: the only allowed modes of vibration are those whose wavelengths produce zero displacements at the bound ends of the string; if the string breaks or becomes unattached at one end, it becomes silent.
In its lowest-energy mode of vibration there is a single half-wavelength, with its point of maximum displacement at the center of the string. In musical terms, this corresponds to the fundamental note to which the string is tuned; in terms of the theory of vibrations, it corresponds to a "quantum number" of 1. Higher modes, known as overtones (the first of which is the musical octave), contain 2, 3, 4 and more points of maximum displacement (antinodes) spaced evenly along the string, separated by points of zero displacement (nodes). These correspond to successively higher quantum numbers and higher energies.
The vibrational states of the string are quantized in the sense that an integral number of antinodes must be present. Note again that this condition is imposed by the boundary condition that the ends of the string, being fixed in place, must be nodes. Because the locations of the nodes and antinodes do not change as the string vibrates, the vibrational patterns are known as standing waves.
A similar kind of quantization occurs in other musical instruments; in each case the vibrations, whether of a stretched string, a column of air, or a stretched membrane, are constrained by boundary conditions that allow only certain discrete frequencies.
Standing waves live in places other than atoms and musical instruments: every time you turn on your microwave oven, a complex set of standing waves fills the interior. What is "waving" here is the alternating electric field as a function of location; the wave patterns are determined by the dimensions of the oven interior and by the objects placed within it. But the part of a pizza that happens to be located at a node would not get very hot, so all microwave ovens provide a mechanical means of rotating either the food (on a circular platform) or the microwave beam (by means of a rotating deflector) so that all parts will pass through high-amplitude parts of the waves.
Standing waves in the hydrogen atom
The analogy with the atom can be seen by imagining a guitar string that has been closed into a circle. The circle is the electron orbit, and the boundary condition is that the waves must not interfere with themselves along the circle. This condition can only be met if the circumference of an orbit can exactly accommodate an integral number of wavelengths. Thus only certain discrete orbital radii and energies are allowed, as depicted in the two diagrams below.
Unbound states
If a guitar string is plucked so harshly that it breaks, the restoring force and boundary conditions that restricted its motions to a few discrete harmonically related frequencies are suddenly absent; with no constraint on its movement, the string's mechanical energy is dissipated in a random way without musical effect. In the same way, if an atom absorbs so much energy that the electron is no longer bound to the nucleus, then the energy states of the atom are no longer quantized; instead of the line spectrum associated with discrete energy jumps, the spectrum degenerates into a continuum in which all possible electron energies are allowed. The energy at which the ionization continuum of an atom begins is easily observed spectroscopically, and serves as a simple method of experimentally measuring the energy with which the electron is bound to the atom.
Spectrum of the hydrogen atom
Hydrogen, the simplest atom, also has the simplest line spectrum (line spectra were briefly introduced in the previous chapter.) The hydrogen spectrum was the first to be observed (by Anders Ångström in the 1860's). Johann Balmer, a Swiss high school teacher, discovered a simple mathematical formula that related the wavelengths of the various lines that are observable in the visible and near-uv parts of the spectrum. This set of lines is now known as the Balmer Series.
The four lines in the visible spectrum (designated by α through δ) were the first observed by Balmer. Notice how the lines crowd together as they approach the ionization limit in the near-ultraviolet part of the spectrum. Once the electron has left the atom, it is in an unbound state and its energy is no longer quantized. When such electrons return to the atom, they possess random amounts of kinetic energy over and above the binding energy. This reveals itself as the radiation at the short-wavelength end of the spectrum known as the continuum radiation. Other named sets of lines in the hydrogen spectrum are the Lyman series (in the ultraviolet) and the Paschen, Brackett, Pfund and Humphreys series in the infrared.
How the Bohr model explains the hydrogen line spectrum
Each spectral line represents an energy difference between two possible states of the atom. Each of these states corresponds to the electron in the hydrogen atom being in an "orbit" whose radius increases with the quantum number n. The lowest allowed value of n is 1; because the electron is as close to the nucleus as it can get, the energy of the system has its minimum (most negative) value. This is the "normal" (most stable) state of the hydrogen atom, and is called the ground state.
If a hydrogen atom absorbs radiation whose energy corresponds to the difference between that of n=1 and some higher value of n, the atom is said to be in an excited state. Excited states are unstable and quickly decay to the ground state, but not always in a single step. For example, if the electron is initially promoted to the n=3 state, it can decay either to the ground state or to the n=2 state, which then decays to n=1. Thus this single n=1→3 excitation can result in the three emission lines depicted in the diagram above, corresponding to n=3→1, n=3→2, and n=2→1.
If, instead, enough energy is supplied to the atom to completely remove the electron, we end up with a hydrogen ion and an electron. When these two particles recombine (H+ + e → H), the electron can initially find itself in a state corresponding to any value of n, leading to the emission of many lines.
The lines of the hydrogen spectrum can be organized into different series according to the value of n at which the emission terminates (or at which absorption originates.) The first few series are named after their discoverers. The most well-known (and first-observed) of these is the Balmer series, which lies mostly in the visible region of the spectrum. The Lyman lines are in the ultraviolet, while the other series lie in the infrared. The lines in each series crowd together as they converge toward the series limit which corresponds to ionization of the atom and is observed as the beginning of the continuum emission. Note that the ionization energy of hydrogen (from its ground state) is 1312 kJ mol–1. Although an infinite number of n-values are possible, the number of observable lines is limited by our ability to resolve them as they converge into the continuum; this number is around a thousand.
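The wavelengths of these series can be generated from the Rydberg formula, $\dfrac{1}{\lambda} = R\left(\dfrac{1}{n_1^2} - \dfrac{1}{n_2^2}\right)$, which is not derived in this text; R below is the standard Rydberg constant. For the Balmer series, n1 = 2.

```python
R = 1.097e7   # Rydberg constant, m^-1

for n in (3, 4, 5, 6):
    wavelength = 1 / (R * (1/2**2 - 1/n**2))
    print(f"n = {n} -> 2: {wavelength * 1e9:.0f} nm")

limit = 1 / (R * (1/2**2))         # n -> infinity: the series limit
print(f"series limit: {limit * 1e9:.0f} nm")   # onset of the continuum
```

The computed lines (656, 486, 434, and 410 nm) crowd toward the 365-nm limit, exactly the convergence pattern described above.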
Emission and absorption spectra
The line emission spectra we have been discussing are produced when electrons which had previously been excited to values of n greater than 1 fall back to the n=1 ground state, either directly, or by way of intermediate-n states. But if light from a continuous source (a hot body such as a star) passes through an atmosphere of hydrogen (such as the star's outer atmosphere), those wavelengths that correspond to the allowed transitions are absorbed, and appear as dark lines superimposed on the continuous spectrum.
These dark absorption lines were first observed by William Wollaston in his study of the solar spectrum. In 1814, Joseph von Fraunhofer (1787-1826) re-discovered them and made accurate measurements of 814 lines, including the four most prominent of the Balmer lines.
Learning Objectives
Make sure you thoroughly understand the following essential ideas
• State the fundamental distinction between Bohr's original model of the atom and the modern orbital model.
• Explain the role of the uncertainty principle in preventing the electron from falling into the nucleus.
• State the physical meaning of the principal quantum number of an electron orbital, and make a rough sketch of the shape of the probability-vs-distance curve for any value of n.
• Sketch out the shapes of an s, p, or a typical d orbital.
• Describe the significance of the magnetic quantum number as it applies to a p orbital.
• State the Pauli exclusion principle.
The picture of the atom that Niels Bohr developed in 1913 served as the starting point for modern atomic theory, but it was not long before Bohr himself recognized that the advances in quantum theory that occurred through the 1920's required an even more revolutionary change in the way we view the electron as it exists in the atom. This lesson will attempt to show you this view— or at least the portion of it that can be appreciated without the aid of more than a small amount of mathematics.
From Orbits to Orbitals
About ten years after Bohr had developed his theory, de Broglie showed that the electron should have wavelike properties of its own, thus making the analogy with the mechanical theory of standing waves somewhat less artificial. One serious difficulty with the Bohr model still remained, however: it was unable to explain the spectrum of any atom more complicated than hydrogen. A refinement suggested by Sommerfeld assumed that some of the orbits are elliptical instead of circular, and invoked a second quantum number, l, that indicated the degree of ellipticity. This concept proved useful, and it also began to offer some correlation with the placement of the elements in the periodic table.
By 1926, de Broglie's theory of the wave nature of the electron had been experimentally confirmed, and the stage was set for its extension to all matter. At about the same time, three apparently very different theories that attempted to treat matter in general terms were developed. These were Schrödinger's wave mechanics, Heisenberg's matrix mechanics, and a more abstract theory of P.A.M. Dirac. These eventually were seen to be mathematically equivalent, and all continue to be useful.
Of these alternative treatments, the one developed by Schrödinger is the most easily visualized. Schrödinger started with the simple requirement that the total energy of the electron is the sum of its kinetic and potential energies:
$E = \underbrace{\dfrac{mv^2}{2}}_{\text{kinetic energy}} + \underbrace{\dfrac{-e^2}{r}}_{\text{potential energy}} \label{5.6.1}$
The second term represents the potential energy of an electron (whose charge is denoted by e) at a distance r from a proton (the nucleus of the hydrogen atom). In quantum mechanics it is generally easier to deal with equations that use momentum ($p = mv$) rather than velocity, so the next step is to make this substitution:
$E = \dfrac{p^2}{2m} - \dfrac{e^2}{r} \label{5.6.2}$
This is still an entirely classical relation, as valid for the waves on a guitar string as for those of the electron in a hydrogen atom. The third step is the big one: in order to take into account the wavelike character of the hydrogen atom, a mathematical expression that describes the position and momentum of the electron at all points in space is applied to both sides of the equation. The function, denoted by $\Psi$, "modulates" the equation of motion of the electron so as to reflect the fact that the electron manifests itself with greater probability in some locations than at others. This yields the celebrated Schrödinger equation
$\left( \dfrac{p^2}{2m} - \dfrac{e^2}{r} \right) \Psi = E\Psi \label{5.6.3}$
Physical significance of the wavefunction
How can such a simple-looking expression contain within it the quantum-mechanical description of an electron in an atom— and thus, by extension, of all matter? The catch, as you may well suspect, lies in discovering the correct form of Ψ, which is known as the wave function. As this name suggests, the value of Ψ is a function of location in space relative to that of the proton which is the source of the binding force acting on the electron. As in any system composed of standing waves, certain boundary conditions must be applied, and these are also contained in Ψ; the major ones are that the value of Ψ must approach zero as the distance from the nucleus approaches infinity, and that the function be continuous.
When the functional form of Ψ has been worked out, the Schrödinger equation is said to have been solved for a particular atomic system. The details of how this is done are beyond the scope of this course, but the consequences of doing so are extremely important to us. Once the form of Ψ is known, the allowed energies E of an atom can be predicted from the above equation. Soon after Schrödinger's proposal, his equation was solved for several atoms, and in each case the predicted energy levels agreed exactly with the observed spectra.
There is another very useful kind of information contained in Ψ. Recalling that its value depends on the location in space with respect to the nucleus of the atom, the square of this function Ψ2, evaluated at any given point, represents the probability of finding the electron at that particular point. The significance of this cannot be overemphasized; although the electron remains a particle having a definite charge and mass, the question of "where" it is located is no longer meaningful. Any single experimental observation will reveal a definite location for the electron, but this will in itself have little significance; only a large number of such observations (similar to a series of multiple exposures of a photographic film) will yield meaningful results which will show that the electron can "be" anywhere with at least some degree of probability. This does not mean that the electron is "moving around" to all of these places, but that (in accord with the uncertainty principle) the concept of location has limited meaning for a particle as small as the electron. If we count only those locations in space at which the probability of the electron manifesting itself exceeds some arbitrary value, we find that the Ψ function defines a definite three-dimensional region which we call an orbital.
Why doesn't the electron fall into the nucleus?
We can now return to the question which Bohr was unable to answer in 1913. Even the subsequent discovery of the wavelike nature of the electron and the analogy with standing waves in mechanical systems did not really answer the question; the electron is still a particle having a negative charge and is attracted to the nucleus.
The answer comes from the Heisenberg Uncertainty Principle, which says that a quantum particle such as the electron cannot simultaneously have sharply-defined values of location and of momentum (and thus kinetic energy). To understand the implications of this restriction, suppose that we place the electron in a small box. The walls of the box define the precision δx to which the location is known; the smaller the box, the more exactly will we know the location of the electron. But as the box gets smaller, the uncertainty in the electron's kinetic energy will increase. As a consequence of this uncertainty, the electron will at times possess so much kinetic energy (the "confinement energy") that it may be able to penetrate the wall and escape the confines of the box.
This process is known as tunneling; the tunnel effect is exploited in various kinds of semiconductor devices, and it is the mechanism whereby electrons jump between dissolved ions and the electrode in batteries and other electrochemical devices.
The region near the nucleus can be thought of as an extremely small funnel-shaped box, the walls of which correspond to the electrostatic attraction that must be overcome if an electron confined within this region is to escape. As an electron is drawn toward the nucleus by electrostatic attraction, the volume to which it is confined diminishes rapidly. Because its location is now more precisely known, its kinetic energy must become more uncertain; the electron's kinetic energy rises more rapidly than its potential energy falls, so that the electron is pushed back out to an average distance corresponding to one of its allowed values of n.
We can also dispose of the question of why the orbiting electron does not radiate its kinetic energy away as it revolves around the nucleus. The Schrödinger equation completely discards any concept of a definite path or trajectory of a particle; what was formerly known as an "orbit" is now an "orbital", defined as the locations in space at which the probability of finding the electron exceeds some arbitrary value. It should be noted that this wavelike character of the electron coexists with its possession of a momentum, and thus of an effective velocity, even though its motion does not imply the existence of a definite path or trajectory that we associate with a more massive particle.
Orbitals
The modern view of atomic structure dismisses entirely the old but comfortable planetary view of electrons circling around the nucleus in fixed orbits. As so often happens in science, however, the old outmoded theory contains some elements of truth that are retained in the new theory. In particular, the old Bohr orbits still remain, albeit as spherical shells rather than as two-dimensional circles, but their physical significance is different: instead of defining the "paths" of the electrons, they merely indicate the locations in the space around the nucleus at which the probability of finding the electron has higher values. The electron retains its particle-like mass and momentum, but because the mass is so small, its wavelike properties dominate. The latter give rise to patterns of standing waves that define the possible states of the electron in the atom.
The quantum numbers
Modern quantum theory tells us that the various allowed states of existence of the electron in the hydrogen atom correspond to different standing wave patterns. In the preceding lesson we showed examples of standing waves that occur on a vibrating guitar string. The wave patterns of electrons in an atom are different in two important ways:
1. Instead of indicating displacement of a point on a vibrating string, the electron waves represent the probability that an electron will manifest itself (appear to be located) at any particular point in space. (Note carefully that this is not the same as saying that "the electron is smeared out in space"; at any given instant in time, it is either at a given point or it is not.)
2. The electron waves occupy all three dimensions of space, whereas guitar strings vibrate in only two dimensions.
Aside from this, the similarities are striking. Each wave pattern is identified by an integer number n, which in the case of the atom is known as the principal quantum number. The value of $n$ tells how many peaks of amplitude (antinodes) exist in that particular standing wave pattern; the more peaks there are, the higher the energy of the state.
The three simplest orbitals of the hydrogen atom are depicted above in pseudo-3D, in cross-section, and as plots of probability (of finding the electron) as a function of distance from the nucleus. The average radius of the electron probability is shown by the blue circles or plots in the two columns on the right. These radii correspond exactly to those predicted by the Bohr model.
Physical Significance of n
The energy of the electron is given by the formula
$E =\dfrac{-2 \pi^2 e^4 m}{h^2n^2} \label{5.6.4}$
with
• $e$ is the charge of the electron, $m$ is its mass,
• $h$ is Planck's constant, and
• $n$ is the principal quantum number.
The negative sign ensures that the energy is always negative. Notice that this energy is inversely proportional to the square of $n$, so that the energy rises toward zero as $n$ becomes very large, but it can never exceed zero.
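Because the ionization energy of hydrogen from its ground state is 1312 kJ mol–1, evaluating the constants in the formula above is equivalent to writing E = –1312/n2 kJ mol–1. Here is a minimal sketch that tabulates the level energies on that basis; only the 1312 kJ/mol figure from this text is assumed:

```python
# Hydrogen level energies from E_n = -1312/n**2 kJ/mol, which is
# what the formula above reduces to once the constants are evaluated.
IE_H = 1312.0  # ground-state ionization energy of hydrogen, kJ/mol

for n in range(1, 6):
    print(f"n = {n}: E = {-IE_H / n**2:8.1f} kJ/mol")

# Energy released when an excited n=2 atom decays to the ground state:
delta = (-IE_H / 2**2) - (-IE_H / 1**2)
print(f"n=2 -> n=1 transition: {abs(delta):.0f} kJ/mol emitted as light")
```

Notice how quickly the levels crowd together as n grows: three-quarters of the total binding energy separates n=1 from n=2, which is why the Lyman lines lie so far into the ultraviolet.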
Equation $\ref{5.6.4}$ was actually part of Bohr's original theory and is still applicable to the hydrogen atom, although not to atoms with two or more electrons. In the Bohr model, each value of $n$ corresponded to an orbit of a different radius. The larger the orbital radius, the higher the energy of the electron; because the electrostatic potential energy varies inversely with distance, and the orbital radius grows as the square of n, the energy varies inversely with the square of n in the above formula. Although the concept of a definite trajectory or orbit of the electron is no longer tenable, the same orbital radii that relate to the different values of n in Bohr's theory now have a new significance: they give the average distance of the electron from the nucleus. The averaging process must encompass several probability peaks in the case of higher values of $n$. The spatial distribution of these probability maxima defines the particular orbital.
This physical interpretation of the principal quantum number as an index of the average distance of the electron from the nucleus turns out to be extremely useful from a chemical standpoint, because it relates directly to the tendency of an atom to lose or gain electrons in chemical reactions.
The Angular Momentum Quantum Number
The electron wave functions that are derived from Schrödinger's theory are characterized by several quantum numbers. The first one, n, describes the nodal behavior of the probability distribution of the electron, and correlates with its potential energy and average distance from the nucleus as we have just described.
The theory also predicts that orbitals having the same value of n can differ in shape and in their orientation in space. The quantum number l, known as the angular momentum quantum number, determines the shape of the orbital. (More precisely, l determines the number of angular nodes, that is, the number of regions of zero probability encountered in a 360° rotation around the center.)
When l = 0, the orbital is spherical in shape. If l = 1, the orbital is elongated into something resembling a figure-8 shape, and higher values of l correspond to still more complicated shapes— but note that the number of peaks in the radial probability distributions (below) decreases with increasing l. The possible values that l can take are strictly limited by the value of the principal quantum number; l can be no greater than n – 1. This means that for n = 1, l can only have the single value zero which corresponds to a spherical orbital. For historical reasons, the orbitals corresponding to different values of l are designated by letters, starting with s for l = 0, p for l = 1, d for l = 2, and f for l = 3.
The average distance of the electron from the nucleus, as a function of these quantum numbers, is given by

$\bar{r} = (52.9\; pm) \dfrac{n^2}{Z} \left[ \dfrac{3}{2} - \dfrac{l(l+1)}{2n^2}\right]$
in which $Z$ is the nuclear charge of the atom, which of course is unity for hydrogen.
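It is instructive to evaluate this expression for a few combinations of n and l; here is a minimal sketch that does so, assuming only the formula as written above (52.9 pm is the Bohr radius):

```python
# Average electron-nucleus distance from the formula above:
#   r_bar = 52.9 pm * (n**2 / Z) * [3/2 - l*(l+1) / (2*n**2)]
A0 = 52.9  # Bohr radius, pm

def mean_radius(n, l, Z=1):
    """Average radius (pm) of a hydrogenic orbital with quantum numbers n, l."""
    return A0 * n**2 / Z * (1.5 - l * (l + 1) / (2.0 * n**2))

for n in range(1, 4):
    for l in range(n):
        print(f"n={n}, l={l}: r_bar = {mean_radius(n, l):6.1f} pm")
```

The output shows the dominant role of n (the radius grows roughly as n2) and the much smaller contraction that accompanies higher values of l within a given shell.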
The Magnetic Quantum Number
An s-orbital, corresponding to l = 0, is spherical in shape and therefore has no special directional properties. The probability cloud of a p orbital is aligned principally along an axis extending along any of the three directions of space. The additional quantum number m is required to specify the particular direction along which the orbital is aligned.
"Direction in space" has no meaning in the absence of a force field that serves to establish a reference direction. For an isolated atom there is no such external field, and for this reason there is no distinction between the orbitals having different values of $m$. If the atom is placed in an external magnetic or electrostatic field, a coordinate system is established, and the orbitals having different values of m will split into slightly different energy levels. This effect was first seen in the case of a magnetic field, and this is the origin of the term magnetic quantum number. In chemistry, however, electrostatic fields are much more important for defining directions at the atomic level because it is through such fields that nearby atoms in a molecule interact with each other. The electrostatic field created when other atoms or ions come close to an atom can cause the energies of orbitals having different direction properties to split up into different energy levels; this is the origin of the colors seen in many inorganic salts of transition elements, such as the blue color of copper sulfate.
The quantum number m can assume 2l + 1 values for each value of l, from –l through 0 to +l. When l = 0 the only possible value of m will also be zero, and for the p orbital (l = 1), m can be –1, 0, and +1. Higher values of l introduce more complicated orbital shapes which give rise to more possible orientations in space, and thus to more values of m.
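These counting rules are easy to verify by brute force. The following sketch enumerates the allowed (l, m) combinations for each shell and confirms that a shell of principal quantum number n holds n2 orbitals; the factor of two for electron spin anticipates the next section:

```python
# Enumerate the allowed (l, m) combinations for each shell:
# l runs from 0 to n-1, and m runs from -l to +l (2l+1 values).
letters = "spdf"

for n in range(1, 5):
    counts = {letters[l]: 2 * l + 1 for l in range(n)}  # orbitals per l
    total = sum(counts.values())                        # = n**2
    print(f"n = {n}: {counts} -> {total} orbitals "
          f"({2 * total} electrons, two spins per orbital)")
```

The totals 2, 8, 18, 32... are exactly the lengths of the periods of the periodic table, a connection developed in the next lesson.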
Electron Spin and the Exclusion Principle
Certain fundamental particles have associated with them a magnetic moment that can align itself in either of two directions with respect to an external magnetic field. The electron is one such particle, and the direction of its magnetic moment is called its spin.
A basic principle of modern physics states that for particles such as electrons that possess half-integral values of spin, no two of them can be in identical quantum states within the same system. The quantum state of a particle is defined by the values of its quantum numbers, so what this means is that no two electrons in the same atom can have the same set of quantum numbers. This is known as the Pauli exclusion principle, named after the Austrian-born physicist Wolfgang Pauli (1900-1958, Nobel Prize 1945).
The Electron is not Really Spinning
The mechanical analogy implied by the term spin is easy to visualize, but should not be taken literally. Physical rotation of an electron is meaningless. However, the coordinates of the electron's wave function can be rotated mathematically; when this is done, it is found that a rotation of 720° is required to restore the function to its initial value— rather weird, considering that a 360° rotation will leave any extended body unchanged! Electron spin is basically a relativistic effect in which the electron's momentum distorts local space and time. It has no classical counterpart and thus cannot be visualized other than through mathematics.
The exclusion principle was discovered empirically and was placed on a firm theoretical foundation by Pauli in 1925. A complete explanation requires some familiarity with quantum mechanics, so all we will say here is that if two electrons possess the same quantum numbers n, l, m and s (defined below), the wave function that describes the state of existence of the two electrons together becomes zero, which means that this is an "impossible" situation.
A given orbital is characterized by a fixed set of the quantum numbers n, l, and m. The electron spin itself constitutes a fourth quantum number s, which can take the two values +½ and –½. Thus a given orbital can contain two electrons having opposite spins, which "cancel out" to produce zero magnetic moment. Two such electrons in a single orbital are often referred to as an electron pair.
If it were not for the exclusion principle, the atoms of all elements would behave in the same way, and there would be no need for a science of Chemistry!
As we have seen, the lowest-energy standing wave pattern the electron can assume in an atom corresponds to n=1, which describes the state of the single electron in hydrogen, and of the two electrons in helium. Since the quantum numbers m and l are zero for n=1, the pair of electrons in the helium orbital have the values (n, l, m, s) = (1,0,0,+½) and (1,0,0,–½)— that is, they differ only in spin. These two sets of quantum numbers are the only ones that are possible for a n=1 orbital. The additional electrons in atoms beyond helium must go into higher-energy (n>1) orbitals. Electron wave patterns corresponding to these greater values of n are concentrated farther from the nucleus, with the result that these electrons are less tightly bound to the atom and are more accessible to interaction with the electrons of neighboring atoms, thus influencing their chemical behavior. If it were not for the Pauli principle, all the electrons of every element would be in the lowest-energy n=1 state, and the differences in the chemical behavior of the different elements would be minimal. Chemistry would certainly be a simpler subject, but it would not be very interesting!
Learning Objectives
Make sure you thoroughly understand the following essential ideas:
• State the principal feature that distinguishes the energies of the excited states of a single-electron atom from those of atoms containing more than one electron.
• Explain why the first ionization energy of the helium atom is smaller than twice the first ionization energy of the hydrogen atom.
• Be able to write a plausible electron configuration for any atom having an atomic number less than 90.
In the previous section you learned that an electron standing-wave pattern characterized by the quantum numbers (n,l,m) is called an orbital. According to the Pauli exclusion principle, no two electrons in the same atom can have the same set of quantum numbers (n,l,m,s). This limits the number of electrons in a given orbital to two (s = ±½), and it requires that an atom containing more than two electrons must place them in standing wave patterns corresponding to higher principal quantum numbers n, which means that these electrons will be farther from the nucleus and less tightly bound by it.
In this chapter, we will see how the Pauli restrictions on the allowable quantum numbers of electrons in an atom affect the electronic configuration of the different elements, and, by influencing their chemical behavior, governs the structure of the periodic table.
One-electron atoms
Let us begin with atoms that contain only a single electron. Hydrogen is of course the only electrically neutral species of this kind, but by removing electrons from heavier elements we can obtain one-electron ions such as \(He^+\) and \(Li^{2+}\), etc. Each has a ground state configuration of 1s1, meaning that its single electron exhibits a standing wave pattern governed by the quantum numbers n=1, m=0 and l=0, with the spin quantum number s undefined because there is no other electron to compare it with. All have simple emission spectra whose major features were adequately explained by Bohr's model.
The most important feature of a single-electron atom is that the energy of the electron depends only on the principal quantum number n. As the above diagram shows, the quantum numbers l and m have no effect on the energy; we say that all orbitals having a given value of n are degenerate. Thus the emission spectrum produced by exciting the electron to the n=2 level consists of a single line, not four lines. The wavelength of this emission line for the atoms H, He+ and Li2+ will diminish with atomic number because the greater nuclear charge will lower the energies of the various n levels. For the same reason, the energies required to remove an electron from these species increases rapidly as the nuclear charge increases, because the increasing attraction pulls the electron closer to the nucleus, thus producing an even greater attractive force.
Electron-Electron Repulsion
It takes 1312 kJ of energy to remove the electron from a mole of hydrogen atoms. What might we expect this value to be for helium? Helium contains two electrons, but its nucleus contains two protons; each electron "sees" both protons, so we might expect that the electrons of helium would be bound twice as strongly as the electron of hydrogen. The ionization energy of helium should therefore be twice 1312 kJ/mol, or 2624 kJ/mol. However, if one looks at the spectrum of helium, the continuum is seen to begin at a wavelength corresponding to an ionization energy of 2372 kJ/mol, or about 90% of the predicted value.
Why are the electrons in helium bound less tightly than the +2 nuclear charge would lead us to expect? The answer is that there is another effect to consider: the repulsion between the two electrons; the resulting electron-electron repulsion subtracts from the force holding the electron to the nucleus, reducing the local binding of each.
Electron-electron repulsion is a major factor in both the spectra and chemical behavior of the elements heavier than hydrogen. In particular, it acts to "break the degeneracy" (split the energies) of orbitals having the same value of n but different l.
The diagram below shows how the energies of the s- and p-orbitals of different principal quantum numbers get split as the result of electron-electron repulsion. Notice the contrast with the similar diagram for one-electron atoms near the top of this page. The fact that electrons preferentially fill the lowest-energy empty orbitals is the basis of the rules for determining the electron configuration of the elements and of the structure of the periodic table.
The Aufbau rules
The German word Aufbau means "building up", and this term has traditionally been used to describe the manner in which electrons are assigned to orbitals as we carry out the imaginary task of constructing the atoms of elements having successively larger atomic numbers. In doing so, we are effectively "building up" the periodic table of the elements, as we shall shortly see.
How to play the Aufbau game
• Electrons occupy the lowest-energy available orbitals; lower-energy orbitals are filled before the higher ones.
• No more than two electrons can occupy any orbital.
• For the lighter elements, electrons will fill orbitals of the same type only one electron at a time, so that their spins are all unpaired. They will begin to pair up only after all the orbitals are half-filled. This principle, which is a consequence of the electrostatic repulsion between electrons, is known as Hund's rule.
• For the first 18 elements, up to the point where the 3s and 3p levels are completely filled, this scheme is entirely straightforward and leads to electronic configurations which you are expected to be able to work out for each of these elements.
The preceding diagram illustrates the main idea here. Each orbital is represented as a little box that can hold up to two electrons having opposite spins, which we designate by upward- or downward-pointing arrows. Electrons fill the lowest-energy boxes first, so that additional electrons are forced into wave-patterns corresponding to higher (less negative) energies. Thus in the above diagram, the "third" electron of lithium goes into the higher-energy 2s orbital, giving this element an electron configuration which we write 1s2 2s1.
Example \(1\): Phosphorus
What is the electron configuration of the atom of phosphorus, atomic number 15?
The numbers of electrons filling the lowest-energy orbitals are:
1s: 2 electrons, 2s: 2 electrons; 2p: 6 electrons, 3s: 2 electrons. This adds up to 12 electrons. The remaining three electrons go into the 3p orbital, so the complete electron configuration of P is 1s2 2s2 2p6 3s2 3p3.
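This bookkeeping can be automated with the n+l ("Madelung") ordering rule, which reproduces the filling sequence of the diagram discussed below for most elements; it does not capture the d-block anomalies treated later, so treat the sketch as an illustration of the Aufbau procedure rather than a complete predictor:

```python
# Generate a ground-state electron configuration by filling subshells
# in order of increasing n+l, with ties broken by the lower n
# (the Madelung rule, which matches the Aufbau diagram for most elements).
def configuration(z):
    letters = "spdf"
    # All (n, l) subshells up to n=7, l<=3, sorted by the Madelung rule.
    orbitals = sorted(((n, l) for n in range(1, 8) for l in range(min(n, 4))),
                      key=lambda nl: (nl[0] + nl[1], nl[0]))
    config, remaining = [], z
    for n, l in orbitals:
        if remaining <= 0:
            break
        fill = min(remaining, 2 * (2 * l + 1))  # subshell capacity
        config.append(f"{n}{letters[l]}{fill}")
        remaining -= fill
    return " ".join(config)

print(configuration(15))  # phosphorus -> 1s2 2s2 2p6 3s2 3p3
```

Running it for Z = 15 reproduces the configuration worked out by hand above.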
Energies of the highest occupied orbitals of the elements: This diagram illustrates the Aufbau rules as they are applied to all the elements. Note especially how the energies of the nd orbitals fall between those of the (n+1)s and (n+1)p orbitals so, for example, the 3d orbitals begin to fill after the 4s orbital is filled, but before electrons populate the 4p orbitals. A similar relation exists with d- and f-orbitals.
It is very important that you understand this diagram and how it follows from the Pauli exclusion principle: You should be able to reproduce it from memory up to the 6s level, because it forms the fundamental basis of the periodic table of the elements.
Bending the rules
Inspection of a table of electron configurations of the elements reveals a few apparent non-uniformities in the filling of the orbitals, as is illustrated here for the elements of the so-called first transition series in which the 3d orbitals are being populated. These anomalies are a consequence of the very small energy differences between some of the orbitals, and of the reduced electron-electron repulsion when electrons remain unpaired (Hund's rule), as is evident in chromium, which contains six unpaired electrons.
The other anomaly here is copper, which "should" have the outer-shell configuration 3d94s2. The actual configuration of the Cu atom appears to be 3d104s1. Although the 4s orbital is normally slightly below the 3d orbital energy, the two are so close that interactions between the two when one is empty and the other is not can lead to a reversal. Detailed calculations in which the shapes and densities of the charge distributions are considered predict that the relative energies of many orbitals can reverse in this way. It gets even worse when f-orbitals begin to fill!
Because these relative energies can vary even for the same atom in different chemical environments, most instructors will not expect you to memorize them.
This diagram shows how the atomic orbitals corresponding to different principal quantum numbers become interspersed with one another at higher values of n. The actual situation is more complicated than this; calculations show that the energies of d and f orbitals vary with the atomic number of the element.
The Periodic Table
The relative orbital energies illustrated above and the Pauli exclusion principle constitute the fundamental basis of the periodic table of the elements which was of course worked out empirically late in the 19th century, long before electrons had been heard of.
The periodic table of the elements is conventionally divided into sections called blocks, each of which designates the type of "sub-orbital" (s, p, d, f) which contains the highest-energy electrons in any particular element. Note especially that
• The non-metallic elements occur only in the p-block;
• The d-block elements contain the so-called transition elements;
• The f-block elements go in between Groups 3 and 4 of the d-block.
The above diagram illustrates the link between the electron configurations of the elements and the layout of the periodic table. Each row, also known as a period, commences with two s-block elements and continues through the p block. At the end of the rows corresponding to n>1 is an element having a p6 configuration, a so-called noble gas element. At l values of 2 and 3, d- and f-block element sequences are added.
The table shown above is called the long form of the periodic table; for many purposes, we can use a "short form" table in which the d-block is shown below the s- and p- block "representative elements" and the f-block does not appear at all. Note that the "long form" would be even longer if the f-block elements were shown where they actually belong, between La-Hf and Ac-Rf.
Learning Objectives
Make sure you thoroughly understand the following essential ideas:
• You should be able to sketch out the general form of the periodic table, identify the various blocks, and identify the groups corresponding to the alkali metals, the transition elements, the halogens, and the noble gases.
• For the first eighteen elements, you should be able to predict the formulas of typical binary compounds they can be expected to form with hydrogen and with oxygen.
• Comment on the concept of the "size" of an atom, and give examples of how radii are defined in at least two classes of substances.
• Define ionization energy and electron affinity, and explain the periodic general trends.
• State the meaning and significance of electronegativity.
The periodic table in the form originally published by Dmitri Mendeleev in 1869 was an attempt to list the chemical elements in order of their atomic weights, while breaking the list into rows in such a way that elements having similar physical and chemical properties would be placed in each column. At that time, nothing was known about atoms; the development of the table was entirely empirical. Our goal in this lesson is to help you understand how the shape and organization of the modern periodic table are direct consequences of the atomic electronic structure of the elements.
Organization of the Periodic Table
To understand how the periodic table is organized, imagine that we write down a long horizontal list of the elements in order of their increasing atomic number. It would begin this way:
H He Li Be B C N O F Ne Na Mg Al Si P S Cl Ar K Ca...
Now if we look at the various physical and chemical properties of these elements, we would find that their values tend to increase or decrease with Z in a manner that reveals a repeating pattern— that is, a periodicity. For the elements listed above, these breaks can be indicated by the vertical bars shown here in color:
H He | Li Be B C N O F Ne | Na Mg Al Si P S Cl Ar | K Ca ...
Periods: To construct the table, we place each sequence in a separate row, which we call a period. The rows are aligned in such a way that the elements in each vertical column possess certain similarities. Thus the first short-period elements H and He are chemically similar to the elements Li and Ne at the beginning and end of the second period. The first period is split in order to place H above Li and He above Ne.
The "block" nomenclature shown above refers to the sub-orbital type (quantum number l, or s-p-d-f classification) of the highest-energy orbitals that are occupied in a given element. For n=1 there is no p block, and the s block is split so that helium is placed in the same group as the other inert gases, which it resembles chemically. For the second period (n=2) there is a p block but no d block; in the usual "long form" of the periodic table it is customary to leave a gap between these two blocks in order to accommodate the d blocks that occur at n=3 and above. At n=6 we introduce an f block, but in order to hold the table to reasonable dimensions the f blocks are placed below the main body of the table.
Groups: Each column of the periodic table is known as a group. The elements belonging to a given group bear a strong similarity in their chemical behaviors.
In the past, two different systems of Roman numerals and letters were used to denote the various groups. North Americans added the letter B to denote the d-block groups and A for the others; this is the system shown in the table above. The rest of the world used A for the d-block elements and B for the others. In 1985, a new international system was adopted in which the columns were simply labeled 1-18. Although this system has met sufficient resistance in North America to slow its incorporation into textbooks, it seems likely that the "one to eighteen" system will gradually take over.
Families. Chemists have long found it convenient to refer to the elements of different groups, and in some cases spans of groups, by the names indicated in the table shown below. The two of these that are most important for you to know are the noble gases and the transition metals.
The shell model of the atom
The properties of an atom depend ultimately on the number of electrons in the various orbitals, and on the nuclear charge which determines the compactness of the orbitals. In order to relate the properties of the elements to their locations in the periodic table, it is often convenient to make use of a simplified view of the atom in which the nucleus is surrounded by one or more concentric spherical "shells", each of which consists of the highest-principal quantum number orbitals (always s- and p-orbitals) that contain at least one electron. The shell model (as with any scientific model) is less a description of the world than a simplified way of looking at it that helps us to understand and correlate diverse phenomena. The principal simplification here is that it deals only with the main group elements of the s- and p-blocks, omitting the d- and f-block elements whose properties tend to be less closely tied to their group numbers.
The electrons (denoted by the red dots) in the outer-most shell of an atom are the ones that interact most readily with other atoms, and thus play a major role in governing the chemistry of an element. Notice the use of noble-gas symbols to simplify the electron-configuration notation.
In particular, the number of outer-shell electrons (which is given by the rightmost digit in the group number) is a major determinant of an element's "combining power", or valence. The general trend is for an atom to gain or lose electrons, either directly (leading to formation of ions) or by sharing electrons with other atoms so as to achieve an outer-shell configuration of s2p6. This configuration, known as an octet, corresponds to that of one of the noble-gas elements of Group 18.
• the elements in Groups 1, 2 and 13 tend to give up their valence electrons to form positive ions such as Na+, Mg2+ and Al3+, as well as compounds NaH, MgH2 and AlH3. The outer-shell configurations of the metal atoms in these species correspond to that of neon.
• elements in Groups 15-17 tend to acquire electrons, forming ions such as P3–, S2– and Cl– or compounds such as PH3, H2S and HCl. The outer-shell configurations of these elements correspond to that of argon.
• the Group 14 elements do not normally form ions at all, but share electrons with other elements in tetravalent compounds such as CH4.
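The three patterns listed above amount to a simple rule that maps a main-group number onto a typical combining behavior. Here is a minimal sketch covering only the representative main-group elements discussed here, using the 1-18 group numbering:

```python
# Typical ion or bonding behavior of a main-group element, inferred
# from its group number as described above (representative elements only).
def valence_behavior(group):
    outer = group if group <= 2 else group - 10  # outer-shell electrons
    if group in (1, 2, 13):
        return f"loses {outer} electron(s) -> {outer}+ ion"
    if group == 14:
        return "shares electrons (tetravalent compounds such as CH4)"
    if group in (15, 16, 17):
        gain = 18 - group
        return f"gains {gain} electron(s) -> {gain}- ion"
    if group == 18:
        return "already has an s2p6 octet; chemically inert"
    return "d-block: multiple valences possible"

for g in (1, 2, 13, 14, 15, 16, 17, 18):
    print(f"Group {g:2d}: {valence_behavior(g)}")
```

As the code's fall-through case hints, this rule breaks down for the d-block, for the reasons given in the next paragraph.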
The above diagram shows the first three rows of what are known as the representative elements— that is, the s- and p-block elements only. As we move farther down (into the fourth row and below), the presence of d-electrons exerts a complicating influence which allows elements to exhibit multiple valances. This effect is especially noticeable in the transition-metal elements, and is the reason for not including the d-block with the representative elements at all.
Effective nuclear charge
Those electrons in the outmost or valence shell are especially important because they are the ones that can engage in the sharing and exchange that is responsible for chemical reactions; how tightly they are bound to the atom determines much of the chemistry of the element. The degree of binding is the result of two opposing forces: the attraction between the electron and the nucleus, and the repulsions between the electron in question and all the other electrons in the atom. All that matters is the net force, the difference between the nuclear attraction and the totality of the electron-electron repulsions.
We can simplify the shell model even further by imagining that the valence shell electrons are the only electrons in the atom, and that the nuclear charge has whatever value would be required to bind these electrons as tightly as is observed experimentally. Because the number of electrons in this model is less than the atomic number Z, the required nuclear charge will also be smaller, and is known as the effective nuclear charge. Effective nuclear charge is essentially the positive charge that a valence electron "sees".
Z vs. Zeffective
Part of the difference between Z and Zeffective is due to other electrons in the valence shell, but this is usually only a minor contributor because these electrons tend to act as if they are spread out in a diffuse spherical shell of larger radius. The main actors here are the electrons in the much more compact inner shells which surround the nucleus and exert what is often called a shielding or "screening" effect on the valence electrons.
The formula for calculating effective nuclear charge is not very complicated, but we will skip a discussion of it here. An even simpler although rather crude procedure is to just subtract the number of inner-shell electrons from the nuclear charge; the result is a form of effective nuclear charge which is called the core charge of the atom.
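This crude recipe is easily expressed in a few lines; in the sketch below, the inner-shell counts 2, 10, 18, ... are simply the noble-gas core populations:

```python
# Core charge = Z minus the number of inner-shell (noble-gas core) electrons.
NOBLE_GAS_CORES = [2, 10, 18, 36, 54, 86]  # He, Ne, Ar, Kr, Xe, Rn

def core_charge(z):
    """Crude effective nuclear charge seen by the valence electrons."""
    inner = max((core for core in NOBLE_GAS_CORES if core < z), default=0)
    return z - inner

for name, z in [("Na", 11), ("Cl", 17), ("K", 19)]:
    print(f"{name} (Z = {z}): core charge = +{core_charge(z)}")
```

Note how sodium and potassium both come out at +1 despite their very different Z values; it is this core charge, not Z itself, that correlates with their similar chemistry.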
Sizes of atoms and ions
The concept of "size" is somewhat ambiguous when applied to the scale of atoms and molecules. The reason for this is apparent when you recall that an atom has no definite boundary; there is a finite (but very small) probability of finding the electron of a hydrogen atom, for example, 1 cm, or even 1 km from the nucleus. It is not possible to specify a definite value for the radius of an isolated atom; the best we can do is to define a spherical shell within whose radius some arbitrary percentage of the electron density can be found.
When an atom is combined with other atoms in a solid element or compound, an effective radius can be determined by observing the distances between adjacent rows of atoms in these solids. This is most commonly carried out by X-ray scattering experiments. Because of the different ways in which atoms can aggregate together, several different kinds of atomic radii can be defined.
Note
Distances on the atomic scale have traditionally been expressed in Ångstrom units (1Å = 10–8 cm = 10–10 m), but nowadays the picometer is preferred:
1 pm = 10–12 m = 10–10 cm = 10–2 Å, or 1Å = 100 pm.
The radii of atoms and ions are typically in the range 70-400 pm.
A rough idea of the size of a metallic atom can be obtained simply by measuring the density of a sample of the metal. This tells us the number of atoms per unit volume of the solid. The atoms are assumed to be spheres of radius r in contact with each other, each of which sits in a cubic box of edge length 2r. The volume of each box is just the total volume of the solid divided by the number of atoms in that mass of the solid; the atomic radius r is then one-half the cube root of this volume.
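Here is that estimate carried out for metallic sodium; the density and molar mass are standard handbook values rather than figures from this text, so treat the snippet as an illustration of the method:

```python
# Estimate a metallic radius from density: each atom of radius r is
# assumed to sit in a cube of edge 2r, so the volume per atom is (2r)**3.
N_A = 6.022e23  # Avogadro's number, atoms/mol

def radius_pm(density_g_cm3, molar_mass_g_mol):
    vol_per_atom = molar_mass_g_mol / (density_g_cm3 * N_A)  # cm^3
    r_cm = vol_per_atom ** (1 / 3) / 2  # half the cube edge
    return r_cm * 1e10                  # 1 cm = 1e10 pm

# Sodium: density ~0.97 g/cm3, molar mass ~23.0 g/mol (handbook values)
print(f"Na: r = {radius_pm(0.97, 23.0):.0f} pm")
```

The result, about 170 pm, is in reasonable agreement with the tabulated metallic radius of sodium (~186 pm); the discrepancy reflects the crude cubic-box packing assumption.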
Although the radius of an atom or ion cannot be measured directly, in most cases it can be inferred from measurements of the distance between adjacent nuclei in a crystalline solid. This is most commonly carried out by X-ray scattering experiments. Because such solids fall into several different classes, several kinds of atomic radius are defined. Many atoms have several different radii; for example, sodium forms a metallic solid and thus has a metallic radius, it forms a gaseous molecule Na2 in the vapor phase (covalent radius), and of course it forms ionic solids such as NaCl.
Ionic radius is the effective radius of ions in solids such as NaCl. It is easy enough to measure the distance between adjacent rows of Na+ and Cl– ions in such a crystal, but there is no unambiguous way to decide what portions of this distance are attributable to each ion. The best one can do is make estimates based on studies of several different ionic solids (LiI, KI, NaI, for example) that contain one ion in common. Many such estimates have been made, and they turn out to be remarkably consistent.
Periodic trends in atomic size
We would expect the size of an atom to depend mainly on the principal quantum number of the highest occupied orbital; in other words, on the "number of occupied electron shells". Since each row in the periodic table corresponds to an increment in n, atomic radius increases as we move down a column. The other important factor is the nuclear charge; the higher the atomic number, the more strongly will the electrons be drawn toward the nucleus, and the smaller the atom. This effect is responsible for the contraction we observe as we move across the periodic table from left to right.
The figure shows a periodic table in which the sizes of the atoms are represented graphically. The apparent discontinuities in this diagram reflect the difficulty of comparing the radii of atoms of metallic and nonmetallic bonding types. Radii of the noble gas elements are estimates from those of nearby elements.
Ionic radii
A positive ion is always smaller than the neutral atom, owing to the diminished electron-electron repulsion. If a second electron is lost, the ion gets even smaller; for example, the ionic radius of Fe2+ is 76 pm, while that of Fe3+ is 65 pm. If formation of the ion involves complete emptying of the outer shell, then the decrease in radius is especially great.
The hydrogen ion H+ is in a class by itself; having no electron cloud at all, its radius is that of the bare proton, or about 0.001 pm— a contraction of about 99.999%! Because the unit positive charge is concentrated into such a small volume of space, the charge density of the hydrogen ion is extremely high; it interacts very strongly with other matter, including water molecules, and in aqueous solution it exists only as the hydronium ion H3O+.
Negative ions are always larger than the parent atom; the addition of one or more electrons to an existing shell increases electron-electron repulsion which results in a general expansion of the atom.
An isoelectronic series is a sequence of species all having the same number of electrons (and thus the same amount of electron-electron repulsion) but differing in nuclear charge. Of course, only one member of such a sequence can be a neutral atom (neon in the series shown below.) The effect of increasing nuclear charge on the radius is clearly seen.
Periodic Trends in ion formation
Chemical reactions are based largely on the interactions between the most loosely bound electrons in atoms, so it is not surprising that the tendency of an atom to gain, lose or share electrons is one of its fundamental chemical properties.
Ionization Energy
This term always refers to the formation of positive ions. In order to remove an electron from an atom, work must be done to overcome the electrostatic attraction between the electron and the nucleus; this work is called the ionization energy of the atom and corresponds to the endothermic process
\[M_{(g)} → M^+_{(g)} + e^–\]
where \(M_{(g)}\) stands for any isolated (gaseous) atom.
An atom has as many ionization energies as it has electrons. Electrons are always removed from the highest-energy occupied orbital. An examination of the successive ionization energies of the first ten elements (below) provides experimental confirmation that the binding of the two innermost electrons (1s orbital) is significantly different from that of the n=2 electrons. Successive ionization energies of an atom increase rapidly as reduced electron-electron repulsion causes the electron shells to contract, thus binding the electrons even more tightly to the nucleus.
Ionization energies increase with the nuclear charge Z as we move across the periodic table. They decrease as we move down the table because in each period the electron is being removed from a shell one step farther from the nucleus than in the atom immediately above it. This results in the familiar zig-zag lines when the first ionization energies are plotted as a function of Z.
This more detailed plot of the ionization energies of the atoms of the first ten elements reveals some interesting irregularities that can be related to the slightly lower energies (greater stabilities) of electrons in half-filled (spin-unpaired) relative to completely-filled subshells.
Finally, a more comprehensive survey of the ionization energies of the main group elements is shown below.
Some points to note:
• The noble gases have the highest IE's of any element in the period. This has nothing to do with any mysterious "special stability" of the s2p6 electron configuration; it is simply a matter of the high nuclear charge acting on more contracted orbitals.
• IE's (as well as many other properties) tend not to vary greatly amongst the d-block elements. This reflects the fact that as the more-compact d orbitals are being filled, they exert a screening effect that partly offsets the increasing nuclear charge on the outermost s orbitals of higher principal quantum number.
• Each of the Group 13 elements has a lower first-IE than that of the element preceding it. The reversal of the IE trend in this group is often attributed to the easier removal of the single outer-shell p electron compared to that of electrons contained in filled (and thus spin-paired) s- and d-orbitals in the preceding elements.
Electron affinity
Formation of a negative ion occurs when an electron from some external source enters the atom and becomes incorporated into the lowest energy orbital that possesses a vacancy. Because the entering electron is attracted to the positive nucleus, the formation of negative ions is usually exothermic. The energy given off is the electron affinity of the atom. For some atoms, the electron affinity appears to be slightly negative, suggesting that electron-electron repulsion is the dominant factor in these instances.
In general, electron affinities tend to be much smaller than ionization energies, suggesting that they are controlled by opposing factors having similar magnitudes. These two factors are, as before, the nuclear charge and electron-electron repulsion. But the latter, only a minor actor in positive ion formation, is now much more significant. One reason for this is that the electrons contained in the inner shells of the atom exert a collective negative charge that partially cancels the charge of the nucleus, thus exerting a so-called shielding effect which diminishes the tendency for negative ions to form.
Because of these opposing effects, the periodic trends in electron affinities are not as clear as are those of ionization energies. This is particularly evident in the first few rows of the periodic table, in which small effects tend to be magnified anyway because an added electron produces a large percentage increase in the number of electrons in the atom.
In general, we can say that electron affinities become more exothermic as we move from left to right across a period (owing to increased nuclear charge and smaller atom size). There are some interesting irregularities, however:
• In the Group 2 elements, the filled 2s orbital apparently shields the nucleus so effectively that the electron affinities are slightly endothermic.
• The Group 15 elements have rather low values, due possibly to the need to place the added electron in a half-filled p orbital; why the electron affinity of nitrogen should be endothermic is not clear. The vertical trend is for electron affinity to become less exothermic in successive periods owing to better shielding of the nucleus by more inner shells and the greater size of the atom, but here also there are some apparent anomalies.
Electronegativity
When two elements are joined in a chemical bond, the element that attracts the shared electrons more strongly is more electronegative. Elements with low electronegativities (the metallic elements) are said to be electropositive.
Electronegativities are properties of atoms that are chemically bound to each other; there is no way of measuring the electronegativity of an isolated atom.
Moreover, the same atom can exhibit different electronegativities in different chemical environments, so the "electronegativity of an element" is only a general guide to its chemical behavior rather than an exact specification of its behavior in a particular compound. Nevertheless, electronegativity is eminently useful in summarizing the chemical behavior of an element. You will make considerable use of electronegativity when you study chemical bonding and the chemistry of the individual elements.
Because there is no single definition of electronegativity, any numerical scale for measuring it must of necessity be somewhat arbitrary. Most such scales are themselves based on atomic properties that are directly measurable and which relate in one way or the other to electron-attracting propensity. The most widely used of these scales was devised by Linus Pauling and is related to ionization energy and electron affinity. The Pauling scale runs from 0 to 4; the highest electronegativity, 4.0, is assigned to fluorine, while cesium has the lowest value of 0.7. Values less than about 2.2 are usually associated with electropositive, or metallic character. In the representation of the scale shown in the figure, the elements are arranged in rows corresponding to their locations in the periodic table. The correlation is obvious; electronegativity is associated with the higher rows and the rightmost columns.
The location of hydrogen on this scale reflects some of the significant chemical properties of this element. Although it acts like a metallic element in many respects (forming a positive ion, for example), it can also form hydride-ion (H–) solids with the more electropositive elements, and of course its ability to share electrons with carbon and other p-block elements gives rise to a very rich chemistry, including of course the millions of organic compounds.
The picture of electrons "orbiting" the nucleus like planets around the sun remains an enduring one, not only in popular images of the atom but also in the minds of many of us who know better. The proposal, first made in 1913, that the centrifugal force of the revolving electron just exactly balances the attractive force of the nucleus (in analogy with the centrifugal force of the moon in its orbit exactly counteracting the pull of the Earth's gravity) is a nice picture, but is simply untenable.
One reason this hypothesis seems plausible is the similarity of the gravitational and Coulombic interactions. The expression for the force of gravity between two masses (Newton's Law of gravity) is
$F_{gravity} \propto \dfrac{m_1m_2}{r^2}\label{1}$
with
• $m_1$ and $m_2$ representing the mass of object 1 and 2, respectively and
• $r$ representing the distance between the objects' centers
The expression for the Coulomb force between two charged species is
$F_{Coulomb} \propto \dfrac{q_1q_2}{r^2}\label{2}$
with
• $q_1$ and $q_2$ representing the charge of object 1 and 2, respectively and
• $r$ representing the distance between the objects' centers
However, an electron, unlike a planet or a satellite, is electrically charged, and it has been known since the mid-19th century that an electric charge that undergoes acceleration (changes velocity and direction) will emit electromagnetic radiation, losing energy in the process. A revolving electron would transform the atom into a miniature radio station, the energy output of which would be at the cost of the potential energy of the electron; according to classical mechanics, the electron would simply spiral into the nucleus and the atom would collapse.
Quantum theory to the Rescue!
By the 1920's, it became clear that a tiny object such as the electron cannot be treated as a classical particle having a definite position and velocity. The best we can do is specify the probability of its manifesting itself at any point in space. If you had a magic camera that could take a sequence of pictures of the electron in the 1s orbital of a hydrogen atom, and could combine the resulting dots in a single image, you would see something like this. Clearly, the electron is more likely to be found the closer we move toward the nucleus.
This is confirmed by this plot which shows the quantity of electron charge per unit volume of space at various distances from the nucleus. This is known as a probability density plot. The per unit volume of space part is very important here; as we consider radii closer to the nucleus, these volumes become very small, so the amount of electron charge per unit volume increases very rapidly. In this view, it appears as if the electron does fall into the nucleus!
According to classical mechanics, the electron would simply spiral into the nucleus and the atom would collapse. Quantum mechanics is a different story.
The Battle of the Infinities Saves the electron from its death spiral
As you know, the potential energy of an electron becomes more negative as it moves toward the attractive field of the nucleus; in fact, it approaches negative infinity. However, because the total energy remains constant (a hydrogen atom, sitting peacefully by itself, will neither lose nor acquire energy), the loss in potential energy is compensated for by an increase in the electron's kinetic energy (sometimes referred to in this context as "confinement" energy) which determines its momentum and its effective velocity.
So as the electron approaches the tiny volume of space occupied by the nucleus, its potential energy dives down toward minus-infinity, and its kinetic energy (momentum and velocity) shoots up toward positive-infinity. This "battle of the infinities" cannot be won by either side, so a compromise is reached in which theory tells us that the fall in potential energy is just twice the kinetic energy, and the electron dances at an average distance that corresponds to the Bohr radius.
There is still one thing wrong with this picture; according to the Heisenberg uncertainty principle (a better term would be "indeterminacy"), a particle as tiny as the electron cannot be regarded as having either a definite location or momentum. The Heisenberg principle says that either the location or the momentum of a quantum particle such as the electron can be known as precisely as desired, but as one of these quantities is specified more precisely, the value of the other becomes increasingly indeterminate. It is important to understand that this is not simply a matter of observational difficulty, but rather a fundamental property of nature.
What this means is that within the tiny confines of the atom, the electron cannot really be regarded as a "particle" having a definite energy and location, so it is somewhat misleading to talk about the electron "falling into" the nucleus.
Arthur Eddington, a famous physicist, once suggested, not entirely in jest, that a better description of the electron would be "wavicle"!
Probability Density vs. Radial probability
We can, however, talk about where the electron has the highest probability of manifesting itself— that is, where the maximum negative charge will be found.
This is just the curve labeled "probability density"; its steep climb as we approach the nucleus shows unambiguously that the electron is most likely to be found in the tiny volume element at the nucleus. But wait! Did we not just say that this does not happen? What we are forgetting here is that as we move out from the nucleus, the number of these small volume elements situated along any radius increases very rapidly with $r$, going up by a factor of $4πr^2$. So the probability of finding the electron somewhere on a given radius circle is found by multiplying the probability density by $4πr^2$. This yields the curve you have probably seen elsewhere, known as the radial probability, that is shown on the right side of the above diagram. The peak of the radial probability for principal quantum number $n = 1$ corresponds to the Bohr radius.
To sum up, the probability density and radial probability plots express two different things: the first shows the electron density at any single point in the atom, while the second, which is generally more useful to us, tells us the relative electron density summed over all points on a circle of given radius.
The gaseous state of matter is the only one that is based on a simple model that can be developed from first principles. As such, it serves as the starting point for the study of the other states of matter.
• 6.1: Observable Properties of Gas
The invention of the sensitive balance in the early seventeenth century showed once and for all that gases have weight and are therefore matter. Guericke's invention of the air pump (which led directly to his discovery of the vacuum) launched the “pneumatic era" of chemistry long before the existence of atoms and molecules had been accepted. Indeed, the behavior of gases was soon to prove an invaluable tool in the development of the atomic theory of matter.
• 6.2: Ideal Gas Model - The Basic Gas Laws
This chapter covers the following topics: Boyle's, Charles', and Avogadro's (E.V.E.N.) laws, as well as Gay-Lussac's law of combining volumes, and the ideal gas equation of state, including the PVT surface of an ideal gas.
• 6.3: Dalton's Law
Although all gases closely follow the ideal gas law PV = nRT under appropriate conditions, each gas is also a unique chemical substance consisting of molecular units that have definite masses. In this lesson we will see how these molecular masses affect the properties of gases that conform to the ideal gas law. Following this, we will look at gases that contain more than one kind of molecule— in other words, mixtures of gases. We begin with a review of molar volume and the E.V.E.N. principle.
• 6.4: Kinetic Molecular Theory (Overview)
The kinetic molecular theory of gases relates macroscopic properties to the behavior of the individual molecules, which are described by the microscopic properties of matter. This theory applies strictly only to a hypothetical substance known as an ideal gas; we will see, however, that under many conditions it describes the behavior of real gases at ordinary temperatures and pressures quite accurately, and serves as the starting point for dealing with more complicated states of matter.
• 6.5: More on Kinetic Molecular Theory
In this section, we look in more detail at some aspects of the kinetic-molecular model and how it relates to our empirical knowledge of gases. For most students, this will be the first application of algebra to the development of a chemical model; this should be educational in itself, and may help bring that subject back to life for you! As before, your emphasis should be on understanding these models and the ideas behind them; there is no need to memorize any of the formulas.
• 6.6: Real Gases and Critical Phenomena
When the temperature is reduced or the pressure is raised in a gas, the ideal gas model begins to break down and the gas's properties become harder to predict; eventually the gas condenses into a liquid. Understanding this behavior is vital for appreciating the limitations of the scientific model that constitutes the "ideal gas".
06: Properties of Gases
Learning Objectives
Make sure you thoroughly understand the following essential ideas:
• State the three major properties of gases that distinguish them from condensed phases of matter.
• Define pressure, and explain why a gas exerts pressure on the walls of a container.
• Explain the operation of a simple barometer, and why its invention revolutionized our understanding of gases.
• Explain why a barometer that uses water as the barometric fluid is usually less practical than one which employs mercury.
• How are the Celsius and Fahrenheit temperature scales defined? How are the magnitudes of the "degree" on each scale related?
• Why must the temperature and pressure be specified when reporting the volume of a gas?
The invention of the sensitive balance in the early seventeenth century showed once and for all that gases have weight and are therefore matter. Guericke's invention of the air pump (which led directly to his discovery of the vacuum) launched the “pneumatic era" of chemistry long before the existence of atoms and molecules had been accepted. Indeed, the behavior of gases was soon to prove an invaluable tool in the development of the atomic theory of matter.
Introduction
The study of gases allows us to understand the behavior of matter at its simplest: individual particles, acting independently, almost completely uncomplicated by interactions and interferences between each other. This knowledge of gases will serve as the pathway to our understanding of the far more complicated condensed phases (liquids and solids) in which the theory of gases will no longer give us correct answers, but it will still provide us with a useful model that will at least help us to rationalize the behavior of these more complicated states of matter.
First, we know that a gas has no definite volume or shape; a gas will fill whatever volume is available to it. Contrast this to the behavior of a liquid, which always has a distinct upper surface when its volume is less than that of the space it occupies. The other outstanding characteristic of gases is their low densities, compared with those of liquids and solids. One mole of liquid water at 373 K (100°C) and 1 atm pressure occupies a volume of 18.8 cm3, whereas the same quantity of water vapor at the same temperature and pressure has a volume of about 30,600 cm3, more than 1600 times greater.
The most remarkable property of gases, however, is that to a very good approximation, they all behave the same way in response to changes in temperature and pressure, expanding or contracting by predictable amounts. This is very different from the behavior of liquids or solids, in which the properties of each particular substance must be determined individually. We will see later that each of these three macroscopic characteristics of gases follows directly from the microscopic view— that is, from the atomic nature of matter.
The Pressure of a Gas
The molecules of a gas, being in continuous motion, frequently strike the inner walls of their container. As they do so, they immediately bounce off without loss of kinetic energy, but the reversal of direction (acceleration) imparts a force to the container walls. This force, divided by the total surface area on which it acts, is the pressure of the gas.
The pressure of a gas is observed by measuring the pressure that must be applied externally in order to keep the gas from expanding or contracting. To visualize this, imagine some gas trapped in a cylinder having one end enclosed by a freely moving piston. In order to keep the gas in the container, a certain amount of weight (more precisely, a force, f ) must be placed on the piston so as to exactly balance the force exerted by the gas on the bottom of the piston, and tending to push it up. The pressure of the gas is simply the quotient f/A, where A is the cross-section area of the piston.
Pressure Units
The unit of pressure in the SI system is the pascal (Pa), defined as a force of one newton per square meter (1 N m–2 = 1 kg m–1 s–2). At the Earth's surface, the force of gravity acting on a 1 kg mass is 9.81 N. Thus if the weight is 1 kg and the surface area of the piston is 1 m2, the pressure of the gas would be 9.81 Pa. A 1-gram weight acting on a piston of 1 cm2 cross-section would exert a pressure of 98.1 Pa. (If you wonder why the pressure is higher in the second example, consider the number of cm2 contained in 1 m2.)
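A quick numerical check of these two figures (a minimal sketch; the helper function is our own invention for illustration):

```python
g = 9.81  # m s^-2, acceleration of gravity at the Earth's surface

def pressure_pa(mass_kg, area_m2):
    """Pressure (Pa) exerted by a mass resting on a piston of given area."""
    return mass_kg * g / area_m2  # P = f/A, with f = m*g

print(pressure_pa(1.0, 1.0))       # 1 kg on 1 m^2   -> 9.81 Pa
print(pressure_pa(0.001, 1.0e-4))  # 1 g  on 1 cm^2  -> 98.1 Pa
```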
In chemistry, it is common to express pressures in units of atmospheres or torr: 1 atm = 101,325 Pa = 760 torr. The older unit millimeter of mercury (mm Hg) is almost the same as the torr; it is defined as one mm of level difference in a mercury barometer at 0°C. In meteorology, the pressure unit most commonly used is the bar:
1 bar = $10^5$ N m–2 = 750.06 torr = 0.987 atm.
The pressures of gases encountered in nature span an exceptionally wide range, only part of which is ordinarily encountered in chemistry. Such ranges are usually displayed on logarithmic pressure scales, on which a value of 0 on the atm scale means $10^0 = 1$ atm.
Atmospheric Pressure
The column of air above us exerts a force on each 1-cm2 of surface equivalent to a weight of about 1034 g. The higher into the air you go, the smaller the mass of air above you, and hence the lower the pressure.
This figure is obtained by solving Newton's law \(\textbf{F} = m\textbf{a}\) for \(m\), using the standard atmospheric pressure (the force per unit area) and the acceleration of gravity for \(\textbf{a}\):
\[ m = \dfrac{101325\; kg\, m^{-1} \, s^{-2}}{9.8 \, m \, s^{-2}} = 10340 \, kg\, m^{-2} = 1034\; g \; cm^{-2}\]
Example \(1\)
If several kilos of air are constantly pressing down on your body, why do you not feel it?
Solution
Because every other part of your body (including within your lungs and insides) also experiences the same pressure, so there is no net force (other than gravity) acting on you.
This was the crucial first step that led eventually to the concept of gases and their essential role in the early development of Chemistry. In the early 17th century the Italian Evangelista Torricelli invented a device — the barometer — to measure the pressure of the atmosphere. A few years later, the German scientist and some-time mayor of Magdeburg Otto von Guericke devised a method of pumping the air out of a container, thus creating what might be considered the opposite of air: the vacuum.
As with so many advances in science, idea of a vacuum — a region of nothingness — was not immediately accepted. Torricelli's invention overturned the then-common belief that air (and by extension, all gases) are weightless. The fact that we live at the bottom of a sea of air was most spectacularly demonstrated in 1654, when two teams of eight horses were unable to pull apart two 14-inch copper hemispheres (the "Magdeburg hemispheres") which had been joined together and then evacuated with Guericke's newly-invented vacuum pump.
The classical barometer, still used for the most accurate work, measures the height of a column of liquid that can be supported by the atmosphere. As indicated below, this pressure is exerted directly on the liquid in the reservoir, and is transmitted hydrostatically to the liquid in the column.
Metallic mercury, being a liquid of exceptionally high density and low vapor pressure, is the ideal barometric fluid. Its widespread use gave rise to the "millimeter of mercury" (now usually referred to as the "torr") as a measure of pressure.
Example \(2\): Water Barometer
How is the air pressure of 1034 g cm–2 related to the 760-mm height of the mercury column in the barometer? What if water were used in place of mercury?
Solution
The density of Hg is 13.6 g cm–3, so in a column of 1-cm2 cross-section, the height needed to counter the atmospheric pressure would be (1034 g × 1 cm2) / (13.6 g cm–3) = 76 cm.
The density of water is only 1/13.6 that of mercury, so standard atmospheric pressure would support a water column whose height is 13.6 x 76 cm = 1034 cm, or 10.3 m. You would have to read a water barometer from a fourth-story window!
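Both column heights follow from the hydrostatic relation $h = P/(\rho g)$; a short sketch of the arithmetic (the densities are standard handbook values):

```python
P_atm = 101325.0  # Pa, standard atmospheric pressure
g = 9.81          # m s^-2

def column_height_m(density_kg_m3):
    """Height of a liquid column that balances 1 atm: h = P / (rho * g)."""
    return P_atm / (density_kg_m3 * g)

print(column_height_m(13600.0))  # mercury: ~0.76 m (760 mm)
print(column_height_m(1000.0))   # water:   ~10.3 m
```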
Barometers were once employed to measure the elevation of terrain and the heights of buildings before more modern surveying methods were adopted.
A modification of the barometer, the U-tube manometer, provides a simple device for measuring the pressure of any gas in a container. The U-tube is partially filled with mercury; one end is connected to the container, while the other end can either be left open to the atmosphere or sealed off. The pressure inside the container is found from the difference in height between the mercury in the two sides of the U-tube. The two kinds of manometer are described below.
The manometers ordinarily seen in the laboratory come in two flavors: closed-tube and open-tube. In the closed-tube unit, the longer limb of the J-tube is evacuated by filling it with mercury and then inverting it. If the sample container is also evacuated, the mercury level will be the same in both limbs. When gas is let into the container, its pressure pushes the mercury down on one side and up on the other; the difference in levels is the pressure in torr. For practical applications in engineering and industry, especially where higher pressures must be monitored, many types of mechanical and electrical pressure gauges are available.
The Temperature of a Gas
If two bodies are at different temperatures, heat will flow from the warmer to the cooler one until their temperatures are the same. This is the principle on which thermometry is based; the temperature of an object is measured indirectly by placing a calibrated device known as a thermometer in contact with it. When thermal equilibrium is obtained, the temperature of the thermometer is the same as the temperature of the object.
A thermometer makes use of some temperature-dependent quantity, such as the density of a liquid, to allow the temperature to be found indirectly through some easily measured quantity such as the length of a mercury column. The resulting scale of temperature is entirely arbitrary; it is defined by locating its zero point, and the size of the degree unit. At one point in the 18th century, 35 different temperature scales were in use! The Celsius temperature scale (formerly called centigrade) locates the zero point at the freezing temperature of water; the Celsius degree is defined as 1/100 of the difference between the freezing and boiling temperatures of water at 1 atm pressure.
The older Fahrenheit scale placed the zero point at the coldest temperature it was possible to obtain at the time (by mixing salt and ice.) The 100° point was set with body temperature (later found to be 98.6°F.) On this scale, water freezes at 32°F and boils at 212°F. The Fahrenheit scale is a finer one than the Celsius scale; there are 180 Fahrenheit degrees in the same temperature interval that contains 100 Celsius degrees, so 1 F° = 5/9 C°. Since the zero points are also different by 32F°, conversion between temperatures expressed on the two scales requires the addition or subtraction of this offset, as well as multiplication by the ratio of the degree size.
You should be able to derive the formulas for this conversion.
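As a minimal sketch of what such a derivation yields (scale the degree size by 9/5 or 5/9, then apply the 32-degree offset):

```python
def c_to_f(t_c):
    """Celsius to Fahrenheit: rescale the degree, then shift the zero point."""
    return t_c * 9.0 / 5.0 + 32.0

def f_to_c(t_f):
    """Fahrenheit to Celsius: shift the zero point, then rescale the degree."""
    return (t_f - 32.0) * 5.0 / 9.0

print(c_to_f(100.0))  # 212.0 (water boils)
print(f_to_c(98.6))   # 37.0  (body temperature)
```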
Absolute Temperature
In 1787 the French mathematician and physicist Jacques Charles discovered that for each Celsius degree that the temperature of a gas is lowered, the volume of the gas will diminish by 1/273 of its volume at 0°C. The obvious implication of this is that if the temperature could be reduced to –273°C, the volume of the gas would contract to zero. Of course, all real gases condense to liquids before this happens, but at sufficiently low pressures their volumes are linear functions of the temperature (Charles' Law), and extrapolation of a plot of volume as a function of temperature predicts zero volume at -273°C. This temperature, known as absolute zero, corresponds to the total absence of thermal energy.
The temperature scale on which the zero point is –273.15°C was suggested by Lord Kelvin, and is usually known as the Kelvin scale. Since the size of the Kelvin and Celsius degrees are the same, conversion between the two scales is a simple matter of adding or subtracting 273.15; thus room temperature, 20°, is about 293 K.
Because the Kelvin scale is based on an absolute, rather than on an arbitrary, zero of temperature, it has a special significance in scientific calculations; most fundamental physical relations involving temperature are expressed mathematically in terms of absolute temperature. In engineering work, an absolute scale based on the Fahrenheit degree is sometimes used; this is known as the Rankine scale.
The volume occupied by a gas
The volume of a gas is simply the space in which the molecules of the gas are free to move. If we have a mixture of gases, such as air, the various gases will coexist within the same volume. In these respects, gases are very different from liquids and solids, the two condensed states of matter. The volume of a gas can be measured by trapping it above mercury in a calibrated tube known as a gas burette. The SI unit of volume is the cubic meter, but in chemistry we more commonly use the liter and the milliliter (mL). The cubic centimeter (cc) is also frequently used; it is very close to 1 milliliter (mL).
It's important to bear in mind, however, that the volume of a gas varies with both the temperature and the pressure, so reporting the volume alone is not very useful. A common practice is to measure the volume of the gas under the ambient temperature and atmospheric pressure, and then to correct the observed volume to what it would be at standard atmospheric pressure and some fixed temperature, usually 0°C or 25°C.
Learning Objectives
Make sure you thoroughly understand the following essential ideas which have been presented above, and be able to state them in your own words.
• Boyle's Law - The PV product for any gas at a fixed temperature has a constant value. Understand how this implies an inverse relationship between the pressure and the volume.
• Charles' Law - The volume of a gas confined by a fixed pressure varies directly with the absolute temperature. The same is true of the pressure of a gas confined to a fixed volume.
• Avogadro's Law - This is quite intuitive: the volume of a gas confined by a fixed pressure varies directly with the quantity of gas.
• The E.V.E.N. principle - this is just another way of expressing Avogadro's Law.
• Gay-Lussac's Law of Combining Volumes - you should be able to explain how this principle follows from the E.V.E.N. principle and the Law of Combining Weights.
• The ideal gas equation of state - this is one of the very few mathematical relations you must know. Not only does it define the properties of the hypothetical substance known as an ideal gas, but its importance extends well beyond the subject of gases.
The "pneumatic" era of chemistry began with the discovery of the vacuum around 1650 which clearly established that gases are a form of matter. The ease with which gases could be studied soon led to the discovery of numerous empirical (experimentally-discovered) laws that proved fundamental to the later development of chemistry and led indirectly to the atomic view of matter. These laws are so fundamental to all of natural science and engineering that everyone learning these subjects needs to be familiar with them.
Pressure-volume relations: Boyle's law
Robert Boyle (1627-91) showed that the volume of air trapped by a liquid in the closed short limb of a J-shaped tube decreased in exact proportion to the pressure produced by the liquid in the long part of the tube. The trapped air acted much like a spring, exerting a force opposing its compression. Boyle called this effect "the spring of the air", and published his results in a pamphlet of that title.
The difference between the heights of the two mercury columns gives the pressure (76 cm = 1 atm), and the volume of the air is calculated from the length of the air column and the tubing diameter. Some of Boyle's actual data are shown in Table $1$.
Table $1$: Volume vs. Pressure
| volume | pressure | P × V |
|--------|----------|-------|
| 96.0   | 2.00     | 192   |
| 76.0   | 2.54     | 193   |
| 46.0   | 4.20     | 193   |
| 26.0   | 7.40     | 193   |
Boyle's law can be expressed as
$PV = \text{constant} \nonumber$
or, equivalently,
$P_1V_1 = P_2V_2$
These relations hold true only if the number of molecules n and the temperature are constant. This is a relation of inverse proportionality; any change in the pressure is exactly compensated by an opposing change in the volume. As the pressure decreases toward zero, the volume will increase without limit. Conversely, as the pressure is increased, the volume decreases, but can never reach zero. There will be a separate P-V plot for each temperature; a single P-V plot is therefore called an isotherm.
Shown here are some isotherms for one mole of an ideal gas at several different temperatures. Each plot has the shape of a hyperbola — the locus of all points having the property x y = a, where a is a constant. You will see later how the value of this constant (PV=25 for the 300K isotherm shown here) is determined. It is very important that you understand this kind of plot which governs any relationship of inverse proportionality. You should be able to sketch out such a plot when given the value of any one (x,y) pair.
A related type of plot with which you should be familiar shows the product PV as a function of the pressure. You should understand why this yields a straight line, and how this set of plots relates to the one immediately above.
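If you want to generate the points of such an isotherm yourself, a minimal sketch follows (it anticipates the ideal gas equation introduced below, with $R \approx 0.0821$ L atm K⁻¹ mol⁻¹; the loop and variable names are our own):

```python
R = 0.08206  # L atm K^-1 mol^-1
n, T = 1.0, 300.0
PV = n * R * T
print(PV)  # ~24.6 L atm: the constant (~25) quoted for the 300 K isotherm

# A few (P, V) points along the same isotherm; note that P*V stays fixed:
for P in (0.5, 1.0, 2.0, 5.0):
    V = PV / P
    print(P, V, P * V)
```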
Example $1$
In an industrial process, a gas confined to a volume of 1 L at a pressure of 20 atm is allowed to flow into a 12-L container by opening the valve that connects the two containers. What will be the final pressure of the gas?
Solution
The final volume of the gas is (1 + 12) L = 13 L. The pressure falls in inverse proportion to the volume:
$P_2 = (20 \,atm) (1 \,L ÷ 13 \,L) = 1.5 \,atm \nonumber$
Note that there is no need to make explicit use of any "formula" in problems of this kind!
How the temperature affects the volume: Charles' law
All matter expands when heated, but gases are special in that their degree of expansion is independent of their composition. The French scientists Jacques Charles (1746-1823) and Joseph Gay-Lussac (1778-1850) independently found that if the pressure is held constant, the volume of any gas changes by the same fractional amount (1/273 of its value) for each C° change in temperature.
The volume of a gas confined against a constant pressure is directly proportional to the absolute temperature. A graphical expression of the law of Charles and Gay-Lussac can be seen in these plots of the volume of one mole of an ideal gas as a function of its temperature at various constant pressures.
What do these plots show?
The straight-line plots show that the ratio V/T (and thus dV/dT) is a constant at any given pressure. Thus we can express the law algebraically as V/T = constant or V1/T1 = V2/T2
What is the significance of the extrapolation to zero volume?
If a gas contracts by 1/273 of its volume for each degree of cooling, it should contract to zero volume at a temperature of –273°C. This, of course, is the temperature of absolute zero, and this extrapolation of Charles' law is the first evidence of the special significance of this temperature.
Why do the plots for different pressures have different slopes?
The lower the pressure, the greater the volume (Boyle's law), so at low pressures the fraction (V/273) will have a larger value. You might say that the gas must "contract faster" to reach zero volume when its starting volume is larger.
Example $2$
The air pressure in a car tire is 30 psi (pounds per square inch) at 10°C. What will the pressure be after driving has raised its temperature to 45°C? (Assume that the volume remains unchanged.)
Solution
The pressure increases in direct proportion to the ratio of the absolute temperatures:
$P_2 = (30\, psi) × (318\,K ÷ 283\,K) = 33.7\, psi \nonumber$
Historical notes
The relation between the temperature of a gas and its volume has long been known. In 1702, Guillaume Amontons (1663-1705), who is better known for his early studies of friction, devised a thermometer that related the temperature to the volume of a gas. Robert Boyle had noticed the effect of temperature on the volume of a gas as early as 1662, but the lack of any uniform temperature scale at the time prevented them from establishing the relationship as we presently understand it. Jacques Charles discovered the law that is named for him in the 1780s, but did not publish his work. John Dalton published a form of the law in 1801, but the first thorough published presentation was made by Gay-Lussac in 1802, who acknowledged Charles' earlier studies.
The buoyancy that lifts a hot-air balloon into the sky depends on the difference between the density (mass ÷ volume) of the air entrapped within the balloon's envelope, compared to that of the air surrounding it. When a balloon on the ground is being prepared for flight, it is first partially inflated by an external fan, and possesses no buoyancy at all. Once the propane burners are started, this air begins to expand according to Charles' law. After the warmed air has completely inflated the balloon, further expansion simply forces excess air out of the balloon, leaving the weight of the diminished mass of air inside the envelope smaller than that of the greater mass of cooler air that the balloon displaces.
Jacques Charles collaborated with the Montgolfier brothers whose hot-air balloon made the world's first manned balloon flight in June, 1783. Ten days later, Charles himself co-piloted the first hydrogen-filled balloon. Gay-Lussac, who had a special interest in the composition of the atmosphere, also saw the potential of the hot-air balloon, and in 1804 he ascended to a then-record height of 6.4 km.
Volume and the Number of Molecules
Gay-Lussac's Law of Combining Volumes
In the same 1808 article in which Gay-Lussac published his observations on the thermal expansion of gases, he pointed out that when two gases react, they do so in volume ratios that can always be expressed as small whole numbers. This came to be known as the Law of combining volumes. These "small whole numbers" are of course the same ones that describe the "combining weights" of elements to form simple compounds, as described in the lesson dealing with the simplest formulas. The Italian scientist Amedeo Avogadro (1776-1856) drew the crucial conclusion: these volume ratios must be related to the relative numbers of molecules that react, and thus the famous "E.V.E.N principle":
The E.V.E.N Principle
Equal volumes of gases, measured at the same temperature and pressure, contain equal numbers of molecules
Avogadro's law thus predicts a directly proportional relation between the number of moles of a gas and its volume. This relationship, originally known as Avogadro's Hypothesis, was crucial in establishing the formulas of simple molecules at a time (around 1811) when the distinction between atoms and molecules was not clearly understood. In particular, the existence of diatomic molecules of elements such as H2, O2, and Cl2 was not recognized until the results of combining-volume experiments such as those depicted below could be interpreted in terms of the E.V.E.N. principle.
How the E.V.E.N. principle led to the correct formula of water
Early chemists made the mistake of assuming that the formula of water is HO. This led them to miscalculate the molecular weight of oxygen as 8 (instead of 16). If this were true, the E.V.E.N. principle would predict that in the reaction H + O → HO, one volume of hydrogen combines with one volume of oxygen to give one volume of water vapor.
But a similar experiment on the formation of hydrogen chloride from hydrogen and chlorine yielded twice the volume of HCl that was predicted by the assumed reaction H + Cl → HCl. This could be explained only if hydrogen and chlorine were diatomic molecules, in which case one volume of hydrogen and one volume of chlorine combine to give two volumes of HCl.
This made it necessary to re-visit the question of the formula of water. The combining-volume experiment immediately confirmed that the correct formula of water is H2O: two volumes of hydrogen combine with one volume of oxygen to give two volumes of water vapor.
This conclusion was also seen to be consistent with the observation, made a few years earlier by the English chemists Nicholson and Carlisle that the reverse of the above reaction, brought about by the electrolytic decomposition of water, yields hydrogen and oxygen in a 2:1 volume ratio.
Putting it all together: The Ideal Gas Equation of State
If the variables P, V, T and n (the number of moles) have known values, then a gas is said to be in a definite state, meaning that all other physical properties of the gas are also defined. The relation between these state variables is known as an equation of state. By combining the expressions of Boyle's, Charles', and Avogadro's laws (you should be able to do this!) we can write the very important ideal gas equation of state
$PV=nRT$
where the proportionality constant R is known as the gas constant. This is one of the few equations you must commit to memory in this course; you should also know the common value and units of $R$.
An ideal gas is defined as a hypothetical substance that obeys the ideal gas equation of state.
Take note of the word "hypothetical" here. No real gas (whose molecules occupy space and interact with each other) can behave in a truly ideal manner. But we will see that all gases behave more and more like an ideal gas as the pressure approaches zero. A pressure of only 1 atm is sufficiently close to zero to make this relation useful for most gases at this pressure.
Many textbooks show formulas, such as $P_1V_1 = P_2V_2$ for Boyle's law. Don't bother memorizing them; if you really understand the meanings of these laws as stated above, you can easily derive them on the rare occasions when they are needed. The ideal gas equation is the only one you need to know.
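A minimal sketch of this advice in practice: since $PV = nRT$ with $n$ fixed, the combination $PV/T$ is constant, and every "two-state" problem reduces to solving $P_1V_1/T_1 = P_2V_2/T_2$ for the unknown. The helper below, our own illustration rather than a standard routine, recycles Examples 1 and 2 (any temperature can be used for Example 1, since T cancels):

```python
def final_pressure(P1, V1, T1, V2, T2):
    """Solve P1*V1/T1 = P2*V2/T2 for P2."""
    return P1 * V1 / T1 * T2 / V2

print(final_pressure(20.0, 1.0, 298.0, 13.0, 298.0))  # Example 1: ~1.5 atm
print(final_pressure(30.0, 1.0, 283.0, 1.0, 318.0))   # Example 2: ~33.7 psi
```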
PVT surface for an ideal gas
In order to depict the relations between the three variables P, V and T we need a three-dimensional graph.
Example $3$
A biscuit made with baking powder has a volume of 20 mL, of which one-fourth consists of empty space created by gas bubbles produced when the baking powder decomposed to CO2. What weight of $\ce{NaHCO3}$ was present in the baking powder in the biscuit? Assume that the gas reached its final volume during the baking process when the temperature was 400°C.
(Baking powder consists of sodium bicarbonate mixed with some other solid that produces an acidic solution on addition of water, initiating the reaction shown below.)
$\ce{NaHCO3(s) + H^{+} → Na^+ + H2O + CO2} \nonumber$
Solution
Use the ideal gas equation to find the number of moles of CO2 gas; this will be the same as the number of moles of NaHCO3 (84 g mol–1) consumed:
$n=\frac{(1 \mathrm{atm})(0.005 \mathrm{L})}{\left(.082 \mathrm{L} \mathrm{atm} \mathrm{mol}^{-1} \mathrm{K}^{-1}\right)(673 \mathrm{K})}=9.1 \times 10^{-5} \,\mathrm{mol} \nonumber$
$9.1 \times 10^{-5} \mathrm{mol} \times 84 \mathrm{g} \mathrm{mol}^{-1}=0.0076 \mathrm{g} \nonumber$
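A quick check of this arithmetic (a sketch only; the numbers mirror the worked solution above):

```python
R = 0.082  # L atm K^-1 mol^-1

n_CO2 = (1.0 * 0.005) / (R * 673.0)  # n = PV/RT
print(n_CO2)         # ~9.1e-05 mol
print(n_CO2 * 84.0)  # ~0.0076 g of NaHCO3
```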
Learning Objectives
Make sure you thoroughly understand the following essential ideas which have been presented below.
• One mole of a gas occupies a volume of 22.4 L at STP (standard temperature and pressure: 273 K, 1 atm = 101.3 kPa).
• The above fact allows us to relate the measurable property of the density of a gas to its molar mass.
• The composition of a mixture of gases is commonly expressed in terms of mole fractions; be sure you know how to calculate them.
• Dalton's Law of partial pressures says that every gas in a mixture acts independently, so the total pressure a gas exerts against the walls of a container is just the sum of the partial pressures of the individual components.
Although all gases closely follow the ideal gas law PV = nRT under appropriate conditions, each gas is also a unique chemical substance consisting of molecular units that have definite masses. In this lesson we will see how these molecular masses affect the properties of gases that conform to the ideal gas law. Following this, we will look at gases that contain more than one kind of molecule— in other words, mixtures of gases. We begin with a review of molar volume and the E.V.E.N. principle, which is central to our understanding of gas mixtures.
The Molar Volume of a Gas
You will recall that the molar mass of a pure substance is the mass of 6.02 x 1023 (Avogadro's number) of particles or molecular units of that substance. Molar masses are commonly expressed in units of grams per mole (g mol–1) and are often referred to as molecular weights. As was explained in the preceding lesson, equal volumes of gases, measured at the same temperature and pressure, contain equal numbers of molecules (this is the "E.V.E.N." principle, more formally known as Avogadro's law).
The magnitude of this volume will of course depend on the temperature and pressure, so as a means of convenient comparison it is customary to define a set of conditions T = 273 K and P = 1 atm as standard temperature and pressure, usually denoted as STP. Substituting these values into the ideal gas equation of state and solving for V yields a volume of 22.414 liters for 1 mole.
Example $1$
What would the volume of one mole of air be at 20°C on top of Mauna Kea, Hawai'i (altitude 4.2 km), where the air pressure is approximately 60 kPa?
Scoria and cinder cones on Mauna Kea's summit in winter. (Public Domain; USGS)
Solution
Apply Boyle's and Charles' laws as successive correction factors to the standard molar volume of 22.4 L:
$V = (22.4\; L) \times \dfrac{101.3\; kPa}{60\; kPa} \times \dfrac{293\; K}{273\; K} = 40.6\; L \nonumber$
The standard molar volume 22.4 L mol–1 is a value worth memorizing, but remember that it is valid only at STP. The molar volume at other temperatures and pressures can easily be found by simple proportion. The molar volume of a substance can tell us something about how much space each molecule occupies, as the following example shows.
Example $2$
Estimate the average distance between the molecules in a gas at 1 atm pressure and 0°C.
Solution
Consider a 1-cm3 volume of the gas, which will contain
$\dfrac{6.02 \times 10^{23} \;mol^{–1}}{22,400\; cm^3 \;mol^{–1}} = 2.69 \times 10^{19} cm^{-3} \nonumber$
The volume per molecule (not the same as the volume of a molecule, which for an ideal gas is zero!) is just the reciprocal of this, or $3.72 \times 10^{-20}\, cm^3$. Assume that the molecules are evenly distributed so that each occupies an imaginary box having this volume. The average distance between the centers of the molecules will be defined by the length of this box, which is the cube root of the volume per molecule:
$(3.72 \times 10^{–20})^{1/3} = 3.34 \times 10^{–7}\, cm = 3.3\, nm \nonumber$
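The same estimate in a few lines of code (a sketch; the variable names are our own):

```python
N_A = 6.022e23     # molecules per mole
V_molar = 22400.0  # cm^3 per mole at STP

n_per_cm3 = N_A / V_molar          # ~2.69e19 molecules per cm^3
v_per_molecule = 1.0 / n_per_cm3   # ~3.7e-20 cm^3 per molecule
d = v_per_molecule ** (1.0 / 3.0)  # edge of the imaginary box per molecule
print(d)                           # ~3.3e-7 cm, i.e. about 3.3 nm
```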
Under conditions at which the ideal gas model is applicable (that is, almost always unless you are a chemical engineer dealing with high pressures), "a molecule is a molecule", so the volume of Avogadro's number of molecules will be independent of the composition of the gas. The reason, of course, is that the volume of the gas is mostly empty space; the volumes of the molecules themselves are negligible.
Molar Mass and Density of a Gas
The molecular weight (molar mass) of any gas is the mass, expressed in grams, of Avogadro's number of its molecules. This is true regardless of whether the gas is composed of one molecular species or is a mixture. For a mixture of gases, the molar mass will depend on the molar masses of its components, and on the fractional abundance of each kind of molecule in the mixture. The term "average molecular weight" is often used to describe the molar mass of a gas mixture.
The average molar mass ($\bar{m}$) of a mixture of gases is just the sum of the mole fractions of each gas, multiplied by the molar mass of that substance:
$\bar{m}=\sum_i \chi_im_i$
Example $3$
Find the average molar mass of dry air whose volume-composition is O2 (21%), N2 (78%) and Ar (1%).
Solution
The average molecular weight is the mole-fraction-weighted sum of the molecular weights of its components. The mole fractions, of course, are the same as the volume-fractions (E.V.E.N. principle.)
$m = (0.21 \times 32) + (0.78 \times 28) + (0.01 \times 40) = 29\; g\; mol^{–1} \nonumber$
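The mole-fraction-weighted sum generalizes to any mixture; a minimal sketch (the dictionary layout is our own convention):

```python
# (mole fraction, molar mass in g/mol) for each component of dry air
air = {"O2": (0.21, 32.0), "N2": (0.78, 28.0), "Ar": (0.01, 40.0)}

m_avg = sum(x * m for x, m in air.values())
print(m_avg)  # ~29 g/mol
```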
The molar volumes of all gases are the same when measured at the same temperature and pressure. However, the molar masses of different gases will vary. This means that different gases will have different densities (different masses per unit volume). If we know the molecular weight of a gas, we can calculate its density.
Example $4$
Uranium hexafluoride UF6 gas is used in the isotopic enrichment of natural uranium. Calculate its density at STP.
Solution
The molecular weight of UF6 is 352.
$\dfrac{352\; g \;mol^{–1}}{22.4\, L\, mol^{–1}} = 15.7\; g\; L^{–1} \nonumber$
Note: there is no need to look up a "formula" for this calculation; simply combine the molar mass and molar volume in such a way as to make the units come out correctly.
More importantly, if we can measure the density of an unknown gas, we have a convenient means of estimating its molecular weight. This is one of many important examples of how a macroscopic measurement (one made on bulk matter) can yield microscopic information (that is, about molecular-scale objects.)
Gas densities are now measured in industry by electro-mechanical devices such as vibrating reeds which can provide continuous, on-line records at specific locations, as within pipelines. Determination of the molecular weight of a gas from its density is known as the Dumas method, after the French chemist Jean Dumas (1800-1884) who developed it. One simply measures the weight of a known volume of gas and converts this volume to its STP equivalent, using Boyle's and Charles' laws. The weight of the gas divided by its STP volume yields the density of the gas, and the density multiplied by 22.4 L mol–1 gives the molecular weight. Pay careful attention to the examples of gas density calculations shown here and in your textbook. You will be expected to carry out calculations of this kind, converting between molecular weight and gas density.
Example $5$
Calculate the approximate molar mass of a gas whose measured density is 3.33 g/L at 30°C and 780 torr.
Solution
Find the volume that would be occupied by 1 L of the gas at STP; note that correcting to 273 K will reduce the volume, while correcting to 1 atm (760 torr) will increase it:
$V=(1.00 \mathrm{L})\left(\frac{273}{303}\right)\left(\frac{780}{760}\right)=0.924 \mathrm{L} \nonumber$
The number of moles of gas is
$n = \dfrac{0.924\, L}{22.4\, L\, mol^{–1}}= 0.0412\, mol \nonumber$
The molecular weight is therefore
$\dfrac{3.33\, g}{0.0412\, mol} = 80.7\, g\, mol^{–1} \nonumber$
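The two correction steps can be collapsed into a single relation: substituting $n = m/M$ into $PV = nRT$ gives $M = \rho RT/P$. A sketch (the function is our own):

```python
R = 0.08206  # L atm K^-1 mol^-1

def molar_mass(density_g_L, T_K, P_atm):
    """M = d*R*T/P, from PV = nRT with n = mass/M."""
    return density_g_L * R * T_K / P_atm

print(molar_mass(3.33, 303.0, 780.0 / 760.0))  # ~80.7 g/mol
```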
Density of a Gas Mixture
Gas density measurements can be a useful means of estimating the composition of a mixture of two different gases; this is widely done in industrial chemistry operations in which the compositions of gas streams must be monitored continuously.
Example $6$: Carbon Dioxide and Methane
Find the composition of a mixture of $\ce{CO2}$ (44 g/mol) and methane $\ce{CH4}$ (16 g/mol) that has a STP density of 1.214 g/L.
Solution
The density of a mixture of these two gases will be directly proportional to its composition, varying between that of pure methane and pure CO2. We begin by finding these two densities:
For CO2:
(44 g/mol) ÷ (22.4 L/mol) = 1.964 g/L
For CH4:
(16 g/mol) ÷ (22.4 L/mol) = 0.714 g/L
If x is the mole fraction of CO2 and (1–x) is the mole fraction of CH4, we can write
1.964 x + 0.714 (1–x) = 1.214
(Does this make sense? Notice that if x = 0, the density would be that of pure CH4, while if it were 1, it would be that of pure CO2.)
Expanding the above equation and solving for x yields the mole fractions of 0.40 for CO2 and 0.60 for CH4.
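The same one-unknown linear equation, solved numerically (a sketch mirroring the example):

```python
d_CO2 = 44.0 / 22.4  # g/L at STP
d_CH4 = 16.0 / 22.4  # g/L at STP
d_mix = 1.214        # observed STP density, g/L

x_CO2 = (d_mix - d_CH4) / (d_CO2 - d_CH4)  # density is linear in mole fraction
print(x_CO2, 1.0 - x_CO2)                  # ~0.40 CO2, ~0.60 CH4
```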
Expressing the Composition of a Gas Mixture
Because most of the volume occupied by a gas consists of empty space, there is nothing to prevent two or more kinds of gases from occupying the same volume. Homogeneous mixtures of this kind are generally known as solutions, but it is customary to refer to them simply as gaseous mixtures. We can specify the composition of gaseous mixtures in many different ways, but the most common ones are by volumes and by mole fractions.
Volume Fractions
From Avogadro's Law we know that "equal volumes contain equal numbers of molecules". This means that the volumes of gases, unlike those of solids and liquids, are additive. So if a partitioned container has two volumes of gas A in one section and one volume of gas B in the other (both at the same temperature and pressure), and we remove the partition, the total volume remains unchanged.
Volume fractions are often called partial volumes:
$V_i = \dfrac{v_i}{\sum v_i}$
Don't let this type of notation put you off! The summation sign Σ (Greek Sigma) simply means to add up the v's (volumes) of every gas. Thus if Gas A is the "i-th" substance as in the expression immediately above, the summation runs from i=1 through i=2. Note that we can employ partial volumes to specify the composition of a mixture even if it had never actually been made by combining the pure gases.
When we say that air, for example, is 21 percent oxygen and 78 percent nitrogen by volume, this is the same as saying that these same percentages of the molecules in air consist of O2 and N2. Similarly, in 1.0 mole of air, there is 0.21 mol of O2 and 0.78 mol of N2 (the other 0.01 mole consists of various trace gases, but is mostly argon). Note that you could never assume a similar equivalence with mixtures of liquids or solids, to which the E.V.E.N. principle does not apply.
Mole Fractions
These last two numbers (0.21 and 0.78) also express the mole fractions of oxygen and nitrogen in air. Mole fraction means exactly what it says: the fraction of the molecules that consist of a specific substance. This is expressed algebraically by
$X_i = \dfrac{n_i}{\sum_i n_i}$
so in the case of oxygen in the air, its mole fraction is
$X_{O_2} = \dfrac{n_{O_2}}{n_{O_2}+n_{N_2}+n_{Ar}}= \dfrac{0.21}{1}=0.21 \nonumber$
Example $7$
A mixture of $O_2$ and nitrous oxide, $N_2O$, is sometimes used as a mild anesthetic in dental surgery. A certain mixture of these gases has a density of 1.482 g L–1 at 25°C and 0.980 atm. What was the mole-percent of $N_2O$ in this mixture?
Solution
First, find the density the gas would have at STP:
$\left(1.482\; \mathrm{g}\; \mathrm{L}^{-1}\right) \times\left(\frac{298}{273}\right)\left(\frac{1}{0.980}\right)=1.65\; \mathrm{g}\; \mathrm{L}^{-1}\nonumber$
The molar mass of the mixture is (1.65 g L–1)(22.4 L mol–1) = 37.0 g mol–1. The molecular weights of $O_2$ and $N_2O$ are 32 and 44, respectively. 37.0 exceeds 32 by 5/12 of the difference between the molar masses of the two pure gases. Since the density of a gas mixture is directly proportional to its average molar mass, the mole fraction of the heavier gas in the mixture is also 5/12:
$\dfrac{37-32}{44-32}=\dfrac{5}{12}=0.42 \nonumber$
Example $8$
What is the mole fraction of carbon dioxide in a mixture consisting of equal masses of CO2 (MW=44) and neon (MW=20.2)?
Solution
Assume any arbitrary mass, such as 100 g, find the equivalent numbers of moles of each gas, and then substitute into the definition of mole fraction:
• nCO2 = (100 g) ÷ (44 g mol–1) = 2.3 mol
• nNe= (100 g) ÷ (20.2 g mol–1) = 4.9 mol
• XCO2 = (2.3 mol) ÷ (2.3 mol + 4.9 mol) = 0.32
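The same recipe works for any number of components; a minimal sketch (the data structure is our own):

```python
# (mass in g, molar mass in g/mol) for each gas; any equal masses will do
sample = {"CO2": (100.0, 44.0), "Ne": (100.0, 20.2)}

moles = {gas: m / M for gas, (m, M) in sample.items()}
n_total = sum(moles.values())
for gas, n in moles.items():
    print(gas, round(n / n_total, 2))  # CO2 ~0.32, Ne ~0.68
```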
Dalton's Law of Partial Pressures
The ideal gas equation of state applies to mixtures just as to pure gases. It was in fact with a gas mixture, ordinary air, that Boyle, Gay-Lussac and Charles did their early experiments. The only new concept we need in order to deal with gas mixtures is the partial pressure, a concept invented by the famous English chemist John Dalton (1766-1844). Dalton reasoned that the low density and high compressibility of gases indicate that they consist mostly of empty space; from this it follows that when two or more different gases occupy the same volume, they behave entirely independently.
The contribution that each component of a gaseous mixture makes to the total pressure of the gas is known as the partial pressure of that gas. Dalton himself stated this law in a simple and vivid way: "every gas is a vacuum to every other gas".
The usual way of stating Dalton's Law of Partial Pressures is
The total pressure of a gas is the sum of the partial pressures of its components
which is expressed algebraically as
$P_{total}=P_1+P_2+P_3 ... = \sum_i P_i$
or, equivalently
$P_{total} = \dfrac{RT}{V} \sum_i n_i$
There is also a similar relationship based on volume fractions, known as Amagat's law of partial volumes. It is exactly analogous to Dalton's law, in that it states that the total volume of a mixture is just the sum of the partial volumes of its components. But there are two important differences: Amagat's law holds only for ideal gases which must all be at the same temperature and pressure. Dalton's law has neither of these restrictions. Although Amagat's law seems intuitively obvious, it sometimes proves useful in chemical engineering applications. We will make no use of it in this course.
Example $9$
Calculate the mass of each component present in a mixture of fluorine (MW 38.0) and xenon (MW 131.3) contained in a 2.0-L flask. The partial pressure of Xe is 350 torr and the total pressure is 724 torr at 25°C.
Solution
From Dalton's law, the partial pressure of F2 is (724 – 350) = 374 torr.
The mole fractions are
$\chi_{Xe} = \dfrac{350}{724} = 0.48 \nonumber$
and
$\chi_{F_2} = \dfrac{374}{724} = 0.52 \nonumber$
The total number of moles of gas is
$n=\dfrac{P V}{R T}=\frac{(724 / 760)(2.0)}{(.082)(298)}=0.078 \mathrm{mol}\nonumber$
The mass of $Xe$ is
$(131.3\, g\, mol^{–1}) \times (0.48 \times 0.078\, mol) = 4.9\, g \nonumber$
and the mass of $F_2$ is
$(38.0\, g\, mol^{–1}) \times (0.52 \times 0.078\, mol) = 1.5\, g \nonumber$
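A numerical recap of the whole example (a sketch; it also confirms the F2 mass):

```python
R = 0.0821  # L atm K^-1 mol^-1

P_total = 724.0 / 760.0                # atm
n_total = P_total * 2.0 / (R * 298.0)  # ~0.078 mol
x_Xe = 350.0 / 724.0                   # mole fraction = pressure fraction

print(x_Xe * n_total * 131.3)          # ~4.9 g of Xe
print((1.0 - x_Xe) * n_total * 38.0)   # ~1.5 g of F2
```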
Example $10$
Three flasks having different volumes and containing different gases at various pressures are connected by stopcocks as shown. When the stopcocks are opened,
1. What will be the pressure in the system?
2. Which gas will be most abundant in the mixture?
Assume that the temperature is uniform and that the volume of the connecting tubes is negligible.
Solution
The trick here is to note that the total number of moles nT and the temperature remain unchanged, so we can make use of Boyle's law PV = constant. We will work out the details for CO2 only, denoted by subscripts a.
For CO2,
PaVa = (2.13 atm)(1.50 L) = 3.19 L-atm.
Adding the PV products for each separate container, we obtain
$\sum P_iV_i = 6.36 L-atm = n_T RT. \nonumber$
We will call this sum P1V1. After the stopcocks have been opened and the gases mix, the new conditions are denoted by P2V2.
From Boyle's law, $P_2V_2 = P_1V_1 = 6.36$ L-atm, where $V_2 = \sum V_i = 4.50$ L.
Solving for the final pressure $P_2$ we obtain
$P_2 = \dfrac{6.36\; L\, atm}{4.50\; L} = 1.41\; atm \nonumber$
The mole fraction of each gas is just its share of the total PV product. For CO2, this works out to (3.19/RT) ÷ (6.36/RT) = 0.501. Because this exceeds 0.5, we know that CO2 is the most abundant gas in the final mixture.
Application of Dalton's Law: Collecting Gases over Water
A common laboratory method of collecting the gaseous product of a chemical reaction is to conduct it into an inverted tube or bottle filled with water, the opening of which is immersed in a larger container of water. This arrangement is called a pneumatic trough, and was widely used in the early days of chemistry. As the gas enters the bottle it displaces the water and becomes trapped in the upper part.
The volume of the gas can be observed by means of a calibrated scale on the bottle, but what about its pressure? The total pressure confining the gas is just that of the atmosphere transmitting its force through the water. (An exact calculation would also have to take into account the height of the water column in the inverted tube.) But liquid water itself is always in equilibrium with its vapor, so the space in the top of the tube is a mixture of two gases: the gas being collected, and gaseous H2O. The partial pressure of H2O is known as the vapor pressure of water and it depends on the temperature. In order to determine the quantity of gas we have collected, we must use Dalton's Law to find the partial pressure of that gas.
Example $11$
Oxygen gas was collected over water as shown above. The atmospheric pressure was 754 torr, the temperature was 22°C, and the volume of the gas was 155 mL. The vapor pressure of water at 22°C is 19.8 torr. Use this information to estimate the number of moles of $O_2$ produced.
Solution
From Dalton's law, $P_{O_2} = P_{total} – P_{H_2O} = 754 – 19.8 = 734 \; torr = 0.966\; atm$.
$n=\frac{P V}{R T}=\frac{0.966 \mathrm{atm} \times(0.155 \mathrm{L})}{\left(.082 \mathrm{L} \mathrm{atm} \mathrm{mol}^{-1} \mathrm{K}^{-1}\right)(295 \mathrm{K})}=.00619 \mathrm{mol}\nonumber$
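The same two-step calculation in code (a sketch; the vapor-pressure value is the one quoted in the example):

```python
R = 0.0821  # L atm K^-1 mol^-1

P_O2 = (754.0 - 19.8) / 760.0      # Dalton: subtract the water vapor pressure
n_O2 = P_O2 * 0.155 / (R * 295.0)  # n = PV/RT
print(P_O2, n_O2)                  # ~0.966 atm, ~0.0062 mol
```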
Application of Dalton's Law: Scuba diving
Our respiratory systems are designed to maintain the proper oxygen concentration in the blood when the partial pressure of O2 is 0.21 atm, its normal sea-level value. Below the water surface, the pressure increases by 1 atm for each 10.3 m increase in depth; thus a scuba diver at 10.3 m experiences a total of 2 atm pressure pressing on the body. In order to prevent the lungs from collapsing, the air the diver breathes should also be at about the same pressure.
But at a total pressure of 2 atm, the partial pressure of $O_2$ in ordinary air would be 0.42 atm; at a depth of 100 ft (about 30 m), the $O_2$ partial pressure of 0.84 atm would be far too high for health. For this reason, the air mixture in the pressurized tanks that scuba divers wear must contain a smaller fraction of $O_2$. This can be achieved most simply by raising the nitrogen content, but high partial pressures of N2 can also be dangerous, resulting in a condition known as nitrogen narcosis. The preferred diluting agent for sustained deep diving is helium, which has very little tendency to dissolve in the blood even at high pressures.
Learning Objectives
Make sure you thoroughly understand the following essential ideas which are presented below. It is especially important that you know the principal assumptions of the kinetic-molecular theory. These can be divided into those that refer to the nature of the molecules themselves, and those that describe the nature of their motions:
• The molecules - negligible volume, absence of intermolecular attractions (think of them as very hard, "non-sticky" objects).
• Their motions - completely random in direction, in straight lines only (this is a consequence of their lack of attractions), with average kinetic energies proportional to the absolute temperature.
• The idea that random motions of individual molecules can result in non-random (directed) movement of the gas as a whole is one of the most important concepts of chemistry, exemplified here as the principle of diffusion.
• In most courses you will be expected to know and be able to use (or misuse!) Graham's law.
The properties such as temperature, pressure, and volume, together with others dependent on them (density, thermal conductivity, etc.) are known as macroscopic properties of matter; these are properties that can be observed in bulk matter, without reference to its underlying structure or molecular nature. By the late 19th century the atomic theory of matter was sufficiently well accepted that scientists began to relate these macroscopic properties to the behavior of the individual molecules, which are described by the microscopic properties of matter. The outcome of this effort was the kinetic molecular theory of gases. This theory applies strictly only to a hypothetical substance known as an ideal gas; we will see, however, that under many conditions it describes the behavior of real gases at ordinary temperatures and pressures quite accurately, and serves as the starting point for dealing with more complicated states of matter.
The basic ideas of kinetic-molecular theory
The "kinetic-molecular theory of gases" may sound rather imposing, but it is based on a series of easily-understood assumptions that, taken together, constitute a model that greatly simplifies our understanding of the gaseous state of matter. The five basic tenets of the kinetic-molecular theory are as follows:
1. A gas is composed of molecules that are separated by average distances that are much greater than the sizes of the molecules themselves. The volume occupied by the molecules of the gas is negligible compared to the volume of the gas itself.
2. The molecules of an ideal gas exert no attractive forces on each other, or on the walls of the container.
3. The molecules are in constant random motion, and as material bodies, they obey Newton's laws of motion. This means that the molecules move in straight lines until they collide with each other or with the walls of the container.
4. Collisions are perfectly elastic; when two molecules collide, they change their directions and kinetic energies, but the total kinetic energy is conserved. Collisions are not “sticky".
5. The average kinetic energy of the gas molecules is directly proportional to the absolute temperature. Notice that the term “average” is very important here; the velocities and kinetic energies of individual molecules will span a wide range of values, and some will even have zero velocity at a given instant. This implies that all molecular motion would cease if the temperature were reduced to absolute zero.
According to this model, most of the volume occupied by a gas is empty space; this is the main feature that distinguishes gases from condensed states of matter (liquids and solids) in which neighboring molecules are constantly in contact. Gas molecules are in rapid and continuous motion; at ordinary temperatures and pressures their velocities are of the order of 0.1-1 km/sec and each molecule experiences approximately $10^{10}$ collisions with other molecules every second.
The Gas Laws explained
If gases do in fact consist of widely-separated particles, then the observable properties of gases must be explainable in terms of the simple mechanics that govern the motions of the individual molecules. The kinetic molecular theory makes it easy to see why a gas should exert a pressure on the walls of a container. Any surface in contact with the gas is constantly bombarded by the molecules.
At each collision, a molecule moving with momentum mv strikes the surface. Since the collisions are elastic, the molecule bounces back with the same velocity in the opposite direction. This change in velocity ΔV is equivalent to an acceleration a; according to Newton's second law, a force f = ma is thus exerted on the surface of area A exerting a pressure P = f/A.
Kinetic Interpretation of Temperature
According to the kinetic molecular theory, the average kinetic energy of an ideal gas is directly proportional to the absolute temperature. Kinetic energy is the energy a body has by virtue of its motion:
$K.E. = \dfrac{mv^2}{2}$
As the temperature of a gas rises, the average velocity of the molecules will increase; because the kinetic energy varies as the square of the velocity, a fourfold increase in temperature is required to double the average velocity. Collisions with the walls of the container will transfer more momentum, and thus more kinetic energy, to the walls. If the walls are cooler than the gas, they will get warmer, returning less kinetic energy to the gas, and causing it to cool until thermal equilibrium is reached. Because temperature depends on the average kinetic energy, the concept of temperature only applies to a statistically meaningful sample of molecules. We will have more to say about molecular velocities and kinetic energies farther on.
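Because the average kinetic energy, and hence $v^2$, scales with the absolute temperature, the average speed itself grows only as $\sqrt{T}$; a one-line check (a sketch):

```python
import math

# Relative average molecular speed at temperature T, compared with 300 K:
for T in (300.0, 600.0, 1200.0):
    print(T, math.sqrt(T / 300.0))  # 1.00, 1.41, 2.00: 4x the T doubles v
```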
• Kinetic explanation of Boyle's law: Boyle's law is easily explained by the kinetic molecular theory. The pressure of a gas depends on the number of times per second that the molecules strike the surface of the container. If we compress the gas to a smaller volume, the same number of molecules are now acting against a smaller surface area, so the number striking per unit of area, and thus the pressure, is now greater.
• Kinetic explanation of Charles' law: Kinetic molecular theory states that an increase in temperature raises the average kinetic energy of the molecules. If the molecules are moving more rapidly but the pressure remains the same, then the molecules must stay farther apart, so that the increase in the rate at which molecules collide with the surface of the container is compensated for by a corresponding increase in the area of this surface as the gas expands.
• Kinetic explanation of Avogadro's law: If we increase the number of gas molecules in a closed container, more of them will collide with the walls per unit time. If the pressure is to remain constant, the volume must increase in proportion, so that the molecules strike the walls less frequently, and over a larger surface area.
• Kinetic explanation of Dalton's law: "Every gas is a vacuum to every other gas". This is the way Dalton stated what we now know as his law of partial pressures. It simply means that each gas present in a mixture of gases acts independently of the others. This makes sense because of one of the fundamental tenets of the kinetic-molecular theory: that gas molecules have negligible volumes. So Gas A in a mixture of A and B acts as if Gas B were not there at all. Each contributes its own pressure to the total pressure within the container, in proportion to the fraction of the molecules it represents.
Some important practical applications of KMT
The molecules of a gas are in a state of perpetual motion in which the velocity (that is, the speed and direction) of each molecule is completely random and independent of that of the other molecules. This fundamental assumption of the kinetic-molecular model helps us understand a wide range of commonly-observed phenomena.
Diffusion: random motion with direction
Diffusion refers to the transport of matter through a concentration gradient; the rule is that substances move (or tend to move) from regions of higher concentration to those of lower concentration. The diffusion of tea out of a tea bag into water, or of perfume from a person, are common examples; we would not expect to see either process happening in reverse!
When the stopcock is opened, random motions cause each gas to diffuse into the other container. After diffusion is complete (bottom), individual molecules of both kinds continue to pass between the flasks in both directions.
It might at first seem strange that the random motions of molecules can lead to a completely predictable drift in their ultimate distribution. The key to this apparent paradox is the distinction between an individual and the population. Although we can say nothing about the fate of an individual molecule, the behavior of a large collection ("population") of molecules is subject to the laws of statistics. This is exactly analogous to the manner in which insurance actuarial tables can accurately predict the average longevity of people at a given age, but provide no information on the fate of any single person.
Effusion and Graham's law
If a tiny hole is made in the wall of a vessel containing a gas, then the rate at which gas molecules leak out of the container will be proportional to the number of molecules that collide with unit area of the wall per second, and thus with the rms (root-mean-square) average velocity of the gas molecules. This process, when carried out under idealized conditions, is known as effusion.
Around 1830, the Scottish chemist Thomas Graham (1805-1869) discovered that the relative rates at which two different gases, at the same temperature and pressure, effuse through identical openings are inversely proportional to the square roots of their molar masses.
$v \propto \dfrac{1}{\sqrt{M}}$
Graham's law, as this relation is known, is a simple consequence of the square-root relation between the velocity of a body and its kinetic energy.
According to the kinetic molecular theory, the molecules of two gases at the same temperature will possess the same average kinetic energy. If $v_1$ and $v_2$ are the average velocities of the two kinds of molecules, then at any given temperature $KE_1 = KE_2$ and
$\dfrac{m_1v_1^2}{2} = \dfrac{m_2v_2^2}{2}$
or, in terms of molar masses $M$,
$\color{red} { \dfrac{v_1}{v_2} = \sqrt{\dfrac{M_2}{M_1}}}$
Thus the average velocity of the lighter molecules must be greater than that of the heavier molecules, and the ratio of these velocities will be given by the inverse ratio of the square roots of the molecular weights. Although Graham's law applies exactly only when a gas effuses into a vacuum, the law gives useful estimates of relative diffusion rates under more practical conditions, and it provides insight into a wide range of phenomena that depend on the relative average velocities of molecules of different masses.
Example $1$
The glass tube shown above has cotton plugs inserted at either end. The plug on the left is moistened with a few drops of aqueous ammonia, from which $NH_3$ gas slowly escapes. The plug on the right is similarly moistened with a strong solution of hydrochloric acid, from which gaseous $HCl$ escapes. The gases diffuse in opposite directions within the tube; at the point where they meet, they combine to form solid ammonium chloride, which appears first as a white fog and then begins to coat the inside of the tube.
The reaction is
$NH_{3(g)} + HCl_{(g)} \rightarrow NH_4Cl_{(s)}$
1. In what part of the tube (left, right, center) will the NH4Cl first be observed?
2. If the distance between the two ends of the tube is 100 cm, how many cm from the left end of the tube will the NH4Cl first form?
Solution
a) The lighter ammonia molecules will diffuse more rapidly, so the point where the two gases meet will be somewhere in the right half of the tube.
b) The ratio of the diffusion velocities of ammonia ($v_1$) and hydrogen chloride ($v_2$) can be estimated from Graham's law:
$\dfrac{v_1}{v_2} = \sqrt{\dfrac{36.5}{17}} = 1.46$
We can therefore assign relative velocities of the two gases as $v_1 = 1.46$ and $v_2 = 1$. The distance of the meeting point from the left end will then be proportional to the ratio $v_1/(v_1+v_2)$*:
$\dfrac{v_1}{v_1+v_2} \times 100\; cm = \dfrac{1.46}{1.46 + 1.00} \times 100\, cm = 59 \;cm$
*In order to see how this ratio was deduced, consider what would happen in the three special cases in which v1=0, v2=0, and v1=v2, for which the distances (from the left end) would be 0, 50, and 100 cm, respectively. It should be clear that the simpler ratio v1/v2 would lead to absurd results.
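For readers who like to verify such estimates by computation, here is a minimal sketch of the same arithmetic in Python (molar masses as used above):

```python
import math

M_NH3, M_HCl = 17.0, 36.5     # molar masses, g/mol
tube = 100.0                  # tube length, cm

v_ratio = math.sqrt(M_HCl / M_NH3)      # v(NH3)/v(HCl) from Graham's law
x = v_ratio / (v_ratio + 1.0) * tube    # distance from the left (NH3) end
print(f"NH4Cl first forms ~{x:.0f} cm from the left end")   # ~59 cm
```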
Note that the above calculation is only an estimate. Graham's law is strictly valid only under special conditions, the most important one being that no other gases are present. Contrary to what is written in some textbooks and is often taught, Graham's law does not accurately predict the relative rates of escape of the different components of a gaseous mixture into the outside air, nor does it give the rates at which two gases will diffuse through another gas such as air. See Stephen J. Hawkes, "Misuse of Graham's Laws," J. Chem. Educ. 1993, 70(10), 836-837.
Uranium enrichment
One application of this principle that was originally suggested by Graham himself but was not realized on a practical basis until a century later is the separation of isotopes. The most important example is the enrichment of uranium in the production of nuclear fission fuel.
The K-25 Gaseous Diffusion Plant was one of the major sources of enriched uranium during World War II. It was completed in 1945 and employed 12,000 workers. Owing to the secrecy of the Manhattan Project, the women who operated the system were unaware of the purpose of the plant; they were trained to simply watch the gauges and turn the dials for what they were told was a "government project".
Uranium consists mostly of U238, with only 0.7% of the fissionable isotope U235. Uranium is of course a metal, but it reacts with fluorine to form a gaseous hexafluoride, UF6. In the very successful gaseous diffusion process, the UF6 diffuses repeatedly through a porous wall. Each time, the lighter isotope passes through a bit more rapidly than the heavier one, yielding a mixture that is minutely richer in U235. The process must be repeated over a thousand times to achieve the desired degree of enrichment. The development of a large-scale diffusion plant was a key part of the U.S. development of the first atomic bomb in 1945. This process is now obsolete, having been replaced by other methods.
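The numbers behind this statement are easy to reproduce. The sketch below computes the ideal per-stage separation factor for UF6 from Graham's law, and then estimates how many such stages an idealized cascade would need to reach an illustrative 4% enrichment; real cascades are less efficient and require more.

```python
import math

M235 = 235.04 + 6 * 18.998    # molar mass of UF6 made with U-235, g/mol
M238 = 238.05 + 6 * 18.998    # molar mass of UF6 made with U-238, g/mol

alpha = math.sqrt(M238 / M235)          # ideal separation factor per stage
print(f"alpha = {alpha:.5f}")           # ~1.0043, a 0.43% gain per pass

# Ideal stage count to raise the U-235/U-238 ratio from natural abundance
# (0.72%) to 4%, assuming each stage multiplies the ratio by alpha:
r0, r1 = 0.0072 / 0.9928, 0.04 / 0.96
n = math.log(r1 / r0) / math.log(alpha)
print(f"~{n:.0f} ideal stages needed")  # several hundred at minimum
```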
Density fluctuations: Why is the sky blue?
Diffusion ensures that molecules will quickly distribute themselves throughout the volume occupied by the gas in a thoroughly uniform manner. The chances are virtually zero that, at any instant, enough extra molecules will find themselves near one side of a container to produce an observable difference in density or pressure between the two sides. This is a result of simple statistics. But statistical predictions are only valid when the sample population is large.
Consider what would happen if we consider extremely small volumes of space: cubes that are about $10^{-7}$ cm on each side, for example. Such a cell would contain only a few molecules, and at any one instant we would expect to find some containing more or fewer than others, although in time they would average out to the same value. The effect of this statistical behavior is to give rise to random fluctuations in the density of a gas over distances comparable to the dimensions of visible light waves. When light passes through a medium whose density is non-uniform, some of the light is scattered. The kind of scattering due to random density fluctuations is called Rayleigh scattering; its intensity varies as $1/\lambda^4$, so it scatters shorter wavelengths much more effectively than longer ones. The clear sky appears blue in color because the blue (shorter wavelength) component of sunlight is scattered more. The longer wavelengths remain in the path of the sunlight, available to delight us at sunrise or sunset.
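Given the $1/\lambda^4$ dependence, the blue-versus-red contrast can be estimated in a couple of lines (the wavelengths below are representative values chosen for illustration):

```python
blue, red = 450e-9, 650e-9    # representative wavelengths, m
ratio = (red / blue) ** 4     # Rayleigh scattering intensity goes as 1/lambda^4
print(f"blue light is scattered ~{ratio:.1f}x more strongly than red")  # ~4.4x
```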
What we have been discussing is a form of what are known as fluctuation phenomena: the random pressure fluctuations of a gas on the two sides of a small object do not always completely cancel when the density of molecules (and thus the pressure) is quite small.
Incandescent light bulbs
An interesting application involving several aspects of the kinetic molecular behavior of gases is the use of a gas, usually argon, to extend the lifetime of incandescent lamp bulbs. As a light bulb is used, tungsten atoms evaporate from the filament and condense on the cooler inner wall of the bulb, blackening it and reducing light output. As the filament gets thinner in certain spots, the increased electrical resistance results in a higher local power dissipation, more rapid evaporation, and eventually the filament breaks.
The pressure inside a lamp bulb must be sufficiently low for the mean free path of the gas molecules to be fairly long; otherwise heat would be conducted from the filament too rapidly, and the bulb would melt. (Thermal conduction depends on intermolecular collisions, and a longer mean free path means a lower collision frequency). A complete vacuum would minimize heat conduction, but this would result in such a long mean free path that the tungsten atoms would rapidly migrate to the walls, resulting in a very short filament life and extensive bulb blackening.
Around 1910, the General Electric Company hired Irving Langmuir as one of the first chemists to be employed as an industrial scientist in North America. Langmuir quickly saw that bulb blackening was a consequence of the long mean free path of vaporized tungsten atoms, and he showed that the addition of a small amount of argon will reduce the mean free path, increasing the probability that an outward-moving tungsten atom will collide with an argon atom. A certain proportion of these will eventually find their way back to the filament, partially reconstituting it.
Krypton would be a better choice of gas than argon, since its greater mass would be more effective in changing the direction of the rather heavy tungsten atom. Unfortunately, krypton, being a rarer gas, is around 50 times as expensive as argon, so it is used only in “premium” light bulbs. The more recently-developed halogen-cycle lamp is an interesting chemistry-based method of prolonging the life of a tungsten-filament lamp.
Viscosity of gases
Gases, like all fluids, exhibit a resistance to flow, a property known as viscosity. The basic cause of viscosity is the random nature of thermally-induced molecular motion. In order to force a fluid through a pipe or tube, an additional non-random translational motion must be superimposed on the thermal motion.
There is a slight problem, however. Molecules flowing near the center of the pipe collide mostly with molecules moving in the same direction at about the same velocity, but those that happen to find themselves near the wall will experience frequent collisions with the wall. Since the molecules in the wall of the pipe are not moving in the direction of the flow, they will tend to absorb more kinetic energy than they return, with the result that the gas molecules closest to the wall of the pipe lose some of their forward momentum. Their random thermal motion will eventually take them deeper into the stream, where they will collide with other flowing molecules and slow them down. This gives rise to a resistance to flow known as viscosity; this is the reason why long gas transmission pipelines need to have pumping stations every 100 km or so.
As you know, liquids such as syrup or honey exhibit smaller viscosities at higher temperatures as the increased thermal energy reduces the influence of intermolecular attractions, thus allowing the molecules to slip around each other more easily. Gases, however, behave in just the opposite way; gas viscosity arises from collision-induced transfer of momentum from rapidly-moving molecules to slow ones that have been released from the boundary layer. The higher the temperature, the more rapidly the molecules move and collide with each other, so the higher the viscosity.
Distribution of gas molecules in a gravitational field
Everyone knows that the air pressure decreases with altitude. This effect is easily understood qualitatively through the kinetic molecular theory. Random thermal motion tends to move gas molecules in all directions equally. In the presence of a gravitational field, however, motions in a downward direction are slightly favored. This causes the concentration, and thus the pressure, of a gas to be greater at lower elevations and to fall off continuously at higher elevations.
The pressure at any elevation in a vertical column of a fluid is due to the weight of the fluid above it. This causes the pressure to decrease exponentially with height.
The exact functional relationship between pressure and altitude is known as the barometric distribution law. It is easily derived using first-year calculus. For air at 25°C the pressure Ph at any altitude is given by
$P_h = P_o e^{-0.11h}$

in which $P_o$ is the pressure at sea level and $h$ is the altitude in kilometers.
This is a form of the very common exponential decay law which we will encounter in several different contexts in this course. An exponential decay (or growth) law describes any quantity whose rate of change is directly proportional to its current value, such as the amount of money in a compound-interest savings account or the density of a column of gas at any altitude. The most important feature of any quantity described by this law is that the fractional rate of change of the quantity in question (in this case, ΔP/P or in calculus, dP/P) is a constant. This means that the increase in altitude required to reduce the pressure by half is also a constant, about 6 km in the Earth's case.
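The following short sketch evaluates this law numerically, using the 0.11 km⁻¹ constant given above; the sample altitudes are arbitrary choices for illustration.

```python
import math

def pressure(h_km, P0=1.0, k=0.11):
    """Barometric law: pressure (atm) at altitude h_km above sea level."""
    return P0 * math.exp(-k * h_km)

for h in (0, 6, 12, 18):
    print(f"{h:2d} km: {pressure(h):.2f} atm")   # 1.00, 0.52, 0.27, 0.14
print(f"half-pressure altitude = {math.log(2) / 0.11:.1f} km")   # ~6 km
```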
Because heavier molecules will be more strongly affected by gravity, their concentrations will fall off more rapidly with elevation. For this reason the partial pressures of the various components of the atmosphere will tend to vary with altitude. The difference in pressure is also affected by the temperature; at higher temperatures there is more thermal motion, and hence a less rapid fall-off of pressure with altitude. Owing to atmospheric convection and turbulence, these effects are not observed in the lower part of the atmosphere, but in the uppermost parts of the atmosphere the heavier molecules do tend to drift downward.
The ionosphere and radio communication
At very low pressures, mean free paths are sufficiently great that collisions between molecules become rather infrequent. Under these conditions, highly reactive species such as ions, atoms, and molecular fragments that would ordinarily be destroyed on every collision, can persist for appreciable periods of time.
The most important example of this occurs at the top of the Earth's atmosphere, at an altitude of 200 km, where the pressure is about $10^{-7}$ atm. Here the mean free path will be $10^7$ times its value at 1 atm, or about 1 m. In this part of the atmosphere, known as the thermosphere, the chemistry is dominated by species such as O, $O_2^+$ and HO which are formed by the action of intense solar ultraviolet light on the normal atmospheric gases near the top of the stratosphere. The high concentrations of electrically charged species in these regions (sometimes also called the ionosphere) reflect radio waves and are responsible for around-the-world transmission of mid-frequency radio signals.
The ion density in the lower part of the ionosphere (about 80 km altitude) is so great that the radiation from broadcast-band radio stations is absorbed in this region before these waves can reach the reflective high-altitude layers. However, the pressure in this region (known as the D-layer) is great enough that the ions recombine soon after local sunset, causing the D-layer to disappear and allowing the waves to reflect off of the upper (F-layer) part of the ionosphere. This is the reason that distant broadcast stations can only be heard at night.
Learning Objectives
Make sure you thoroughly understand the following essential ideas that are presented below. It is especially important that you know the precise meanings of all the italicized terms in the context of this topic.
• You should be able to sketch the general shape of the Maxwell-Boltzmann plot showing the distribution of molecular velocities, and to show how these plots are affected by the temperature and by the molar mass.
• Although there is no need for you to be able to derive the ideal gas equation of state, you should understand that the equation PV = nRT can be derived from the principles of kinetic molecular theory, as outlined below.
• Explain the concept of the mean free path of a gas molecule (but no need to reproduce the mathematics.)
In this section, we look in more detail at some aspects of the kinetic-molecular model and how it relates to our empirical knowledge of gases. For most students, this will be the first application of algebra to the development of a chemical model; this should be educational in itself, and may help bring that subject back to life for you! As before, your emphasis should be on understanding these models and the ideas behind them; there is no need to memorize any of the formulas.
The Velocities of Gas Molecules
At temperatures above absolute zero, all molecules are in motion. In the case of a gas, this motion consists of straight-line jumps whose lengths are quite great compared to the dimensions of the molecule. Although we can never predict the velocity of a particular individual molecule, the fact that we are usually dealing with a huge number of them allows us to know what fraction of the molecules have kinetic energies (and hence velocities) that lie within any given range.
The trajectory of an individual gas molecule consists of a series of straight-line paths interrupted by collisions. What happens when two molecules collide depends on their relative kinetic energies; in general, a faster or heavier molecule will impart some of its kinetic energy to a slower or lighter one. Two molecules having identical masses and moving in opposite directions at the same speed will momentarily remain motionless after their collision.
If we could measure the instantaneous velocities of all the molecules in a sample of a gas at some fixed temperature, we would obtain a wide range of values. A few would be zero, and a few would be very high, but the majority would fall into a more or less well-defined range. We might be tempted to define an average velocity for a collection of molecules, but here we would need to be careful: molecules moving in opposite directions have velocities of opposite signs. Because the molecules in a gas are in random thermal motion, there will be just about as many molecules moving in one direction as in the opposite direction, so the velocity vectors of opposite signs would all cancel and the average velocity would come out to zero. Since this answer is not very useful, we need to do our averaging in a slightly different way.
The proper treatment is to average the squares of the velocities, and then take the square root of this value. The resulting quantity is known as the root mean square, or RMS velocity
$\nu_{rms} = \sqrt{\dfrac{\sum \nu^2}{n}}$
which we will denote simply by $\bar{v}$. The formula relating the RMS velocity to the temperature and molar mass is surprisingly simple, considering the great complexity of the events it represents:
$\bar{v}= v_{rms} = \sqrt{\dfrac{3RT}{m}}$
in which $m$ is the molar mass in kg mol$^{-1}$ and $R$ is the gas constant. (An equivalent form replaces $R$ and the molar mass with the mass of a single molecule and $k = R \div 6.02 \times 10^{23}$, the "gas constant per molecule", known as the Boltzmann constant.)
Example $1$
What is the average velocity of nitrogen molecules at 300 K?
Solution
The molar mass of N2 is 28.01 g. Substituting in the above equation and expressing R in energy units, we obtain
$v^{2}=\frac{3 \times 8.31 \mathrm{J} \mathrm{mol}^{-1} \mathrm{K}^{-1} \times 300 \mathrm{K}}{28.01 \times 10^{-3} \mathrm{kg} \mathrm{mol}^{-1}}=2.67 \times 10^{5} \mathrm{J} \mathrm{kg}^{-1} \nonumber$
Recalling the definition of the joule (1 J = 1 kg m2 s–2) and taking the square root,
$\overline{v}=\sqrt{2.67 \times 10^{5} \mathrm{J} \mathrm{kg}^{-1} \times \frac{1 \mathrm{kg} \mathrm{m}^{2} \mathrm{s}^{-2}}{1 \mathrm{J}}}=517 \mathrm{ms}^{-1} \nonumber$
or
$517 \mathrm{m} \mathrm{s}^{-1} \times \frac{1 \mathrm{km}}{10^{3} \mathrm{m}} \times \frac{3600 \mathrm{s}}{1 \mathrm{h}}=1860 \mathrm{km} \mathrm{h}^{-1} \nonumber$
Comment: this is fast! The velocity of a rifle bullet is typically 300-500 m s–1; convert to common units to see the comparison for yourself.
A simpler formula for estimating average molecular velocities is
$v=157 \sqrt{\dfrac{T}{m}}$
in which $v$ is in units of meters/sec, $T$ is the absolute temperature and $m$ the molar mass in grams.
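Both forms are easy to check numerically. The sketch below recomputes the Example 1 result and compares it with the quick estimate; only constants already given in the text are used.

```python
import math

R = 8.314    # gas constant, J / (mol K)

def v_rms(T, M):
    """rms velocity (m/s) at absolute temperature T for molar mass M in kg/mol."""
    return math.sqrt(3 * R * T / M)

print(round(v_rms(300, 28.01e-3)))           # 517 m/s, as in Example 1
print(round(157 * math.sqrt(300 / 28.01)))   # 514 m/s from the quick formula
```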
The Boltzmann Distribution
If we were to plot the number of molecules whose velocities fall within a series of narrow ranges, we would obtain a slightly asymmetric curve known as a velocity distribution. The peak of this curve would correspond to the most probable velocity. This velocity distribution curve is known as the Maxwell-Boltzmann distribution, but is frequently referred to only by Boltzmann's name. The Maxwell-Boltzmann distribution law was first worked out around 1860 by the great Scottish physicist James Clerk Maxwell (1831-1879), who is better known for discovering the laws of electromagnetic radiation. Later, the Austrian physicist Ludwig Boltzmann (1844-1906) put the relation on a sounder theoretical basis and simplified the mathematics somewhat. Boltzmann pioneered the application of statistics to the physics and thermodynamics of matter, and was an ardent supporter of the atomic theory of matter at a time when it was still not accepted by many of his contemporaries.
The derivation of the Boltzmann curve is a bit too complicated to go into here, but its physical basis is easy to understand. Consider a large population of molecules having some fixed amount of kinetic energy. As long as the temperature remains constant, this total energy will remain unchanged, but it can be distributed among the molecules in many different ways, and this distribution will change continually as the molecules collide with each other and with the walls of the container.
It turns out, however, that kinetic energy is acquired and handed around only in discrete amounts which are known as quanta. Once the molecule has a given number of kinetic energy quanta, these can be apportioned amongst the three directions of motion in many different ways, each resulting in a distinct total velocity state for the molecule. The greater the number of quanta (that is, the greater the total kinetic energy of the molecule), the greater the number of possible velocity states. If we assume that all velocity states are equally probable, then simple statistics predicts that higher velocities will be more favored simply because there are so many more of them.
Although the number of possible higher-energy states is greater, the lower-energy states are more likely to be occupied. This is because there is only so much kinetic energy available to the gas as a whole; every molecule that acquires kinetic energy in a collision leaves behind another molecule having less. This tends to even out the kinetic energies in a collection of molecules, and ensures that there are always some molecules whose instantaneous velocity is near zero. The net effect of these two opposing tendencies, one favoring high kinetic energies and the other favoring low ones, is the peaked curve seen above. Notice that because of the asymmetry of this curve, the mean (rms average) velocity is not the same as the most probable velocity, which is defined by the peak of the curve.
At higher temperatures (or with lighter molecules) the latter constraint becomes less important, and the mean velocity increases. But with a wider velocity distribution, the number of molecules having any one velocity diminishes, so the curve tends to flatten out.
Velocity Distributions Depend on Temperature and Mass
Higher temperatures allow a larger fraction of molecules to acquire greater amounts of kinetic energy, causing the Boltzmann plots to spread out.
Notice how the left ends of the plots are anchored at zero velocity (there will always be a few molecules that happen to be at rest.) As a consequence, the curves flatten out as the higher temperatures make additional higher-velocity states of motion more accessible. The area under each plot is the same for a constant number of molecules.
All gases have the same average kinetic energy ($m\bar{v^2}/2$) at the same temperature, so the fraction of molecules with higher velocities will increase as $m$, and thus the molecular weight, decreases.
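These trends are easy to explore numerically. The sketch below evaluates the Maxwell-Boltzmann speed distribution (the standard closed-form expression, which is not derived in this text) for N2 at two temperatures; doubling $T$ moves the peak to the right by $\sqrt{2}$ while lowering and broadening it.

```python
import math

R = 8.314    # J / (mol K)

def mb_density(v, T, M):
    """Maxwell-Boltzmann speed distribution f(v) for molar mass M (kg/mol)."""
    a = M / (2 * math.pi * R * T)
    return 4 * math.pi * v**2 * a**1.5 * math.exp(-M * v**2 / (2 * R * T))

for T in (300, 600):
    v_mp = math.sqrt(2 * R * T / 28.01e-3)    # most probable speed
    print(f"N2 at {T} K: peak at {v_mp:.0f} m/s, "
          f"f(v_mp) = {mb_density(v_mp, T, 28.01e-3):.2e} s/m")
```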
Boltzmann Distribution and Planetary Atmospheres
The ability of a planet to retain an atmospheric gas depends on the average velocity (and thus on the temperature and mass) of the gas molecules and on the planet's mass, which determines its gravity and thus the escape velocity. In order to retain a gas for the age of the solar system, the average velocity of the gas molecules should not exceed about one-sixth of the escape velocity. The escape velocity from the Earth is 11.2 km/s, and 1/6 of this is about 2 km/s. Examination of the above plot reveals that hydrogen molecules can easily achieve this velocity, and this is the reason that hydrogen, the most abundant element in the universe, is almost absent from Earth's atmosphere.
Although hydrogen is not a significant atmospheric component, water vapor is. A very small amount of this diffuses to the upper part of the atmosphere, where intense solar radiation breaks down the H2O into H2. Escape of this hydrogen from the upper atmosphere amounts to about 2.5 × 1010 g/year.
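A crude numerical version of this retention criterion compares rms speeds (at an assumed 300 K) with one-sixth of Earth's escape velocity:

```python
import math

R = 8.314
limit = 11.2e3 / 6    # 1/6 of Earth's escape velocity, m/s

for name, M in [("H2", 2.016e-3), ("He", 4.003e-3),
                ("N2", 28.01e-3), ("O2", 32.00e-3)]:
    v = math.sqrt(3 * R * 300 / M)    # rms speed at 300 K
    verdict = "escapes" if v > limit else "retained"
    print(f"{name}: {v:5.0f} m/s -> {verdict}")
# H2 (~1930 m/s) exceeds the ~1870 m/s threshold and is slowly lost; the
# heavier gases fall well below it. (He is borderline in practice, since the
# high-velocity tail of its distribution still leaks away over geologic time.)
```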
Derivation of the Ideal Gas Equation
The ideal gas equation of state came about by combining the empirically determined ("ABC") laws of Avogadro, Boyle, and Charles, but one of the triumphs of the kinetic molecular theory was the derivation of this equation from simple mechanics in the late nineteenth century. This is a beautiful example of how the principles of elementary mechanics can be applied to a simple model to develop a useful description of the behavior of macroscopic matter. We begin by recalling that the pressure of a gas arises from the force exerted when molecules collide with the walls of the container. This force can be found from Newton's law
$f = ma = m\dfrac{dv}{dt} \label{2.1}$
in which $v$ is the velocity component of the molecule in the direction perpendicular to the wall and $m$ is its mass.
To evaluate the derivative in Equation \ref{2.1}, which is the velocity change per unit time, consider a single molecule of a gas contained in a cubic box of length $l$. For simplicity, assume that the molecule is moving along the x-axis which is perpendicular to a pair of walls, so that it is continually bouncing back and forth between the same pair of walls. When the molecule of mass $m$ strikes the wall at velocity $+v$ (and thus with a momentum $mv$) it will rebound elastically and end up moving in the opposite direction with velocity $-v$. The total change in velocity per collision is thus $2v$ and the change in momentum is $2mv$.
The Frequency of Collisions
After the collision the molecule must travel a distance l to the opposite wall, and then back across this same distance before colliding again with the wall in question. This determines the time between successive collisions with a given wall; the number of collisions per second will be $v/2l$. The force $F$ exerted on the wall is the rate of change of the momentum, given by the product of the momentum change per collision and the collision frequency:
$F = \dfrac{d(mv_x)}{dt} = (2mv_x) \times \left( \dfrac{v_x}{2l} \right) = \dfrac{m v_x^2}{l} \label{2-2}$
Pressure is force per unit area, so the pressure $P$ exerted by the molecule on the wall of cross-section $l^2$ becomes
$P = \dfrac{mv^2}{l^3} = \dfrac{mv^2}{V} \label{2-3}$
in which $V$ is the volume of the box.
The pressure produced by N molecules
As noted near the beginning of this unit, any given molecule will make about the same number of moves in the positive and negative directions, so taking a simple average would yield zero. To avoid this embarrassment, we square the velocities before averaging them, and then take the square root of the average. This result is known as the root mean square (rms) velocity.
We have calculated the pressure due to a single molecule moving at a constant velocity in a direction perpendicular to a wall. If we now introduce more molecules, we must interpret $v^2$ as an average value which we will denote by $\bar{v^2}$. Also, since the molecules are moving randomly in all directions, only one-third of their total velocity will be directed along any one Cartesian axis, so the total pressure exerted by $N$ molecules becomes
$P=\dfrac{N}{3}\dfrac{m \bar{v^2}}{V} \label{2-4}$
The above statement that "one-third of the total velocity (of all the molecules together)..." does not mean that 1/3 of the molecules themselves are moving in each of these three directions; each individual particle is free to travel in any possible direction between collisions. However, any random trajectory can be regarded as composed of three components that correspond to these three axes.
The red arrow in the illustration depicts the path of a single molecule as it travels from the point of its last collision at the origin (lower left corner). The length of the arrow (which you may recognize as a vector) is proportional to its velocity. The three components of the molecule's velocity are indicated by the small green arrows. It should be clearly apparent that the trajectory lies mainly along the x- and z-axes. In the section that follows, Equation \ref{2-5} contains another 1/3 factor that similarly divides the kinetic energy into components along the three axes. This makes sense because kinetic energy is partly determined by velocity.
The temperature of a gas is a measure of the average translational kinetic energy of its molecules, so we begin by calculating the latter. Recalling that $m\bar{v^2}/2$ is the average translational kinetic energy $\epsilon$, we can rewrite Equation \ref{2-4} as
$PV = \dfrac{1}{3} N m \bar{v^2} = \dfrac{2}{3} N \epsilon \label{2-5}$
The 2/3 factor in the proportionality reflects the fact that the velocity component along each of the three directions contributes ½ kT to the kinetic energy of the particle. The average translational kinetic energy is directly proportional to temperature:
$\epsilon = \dfrac{3}{2} kT \label{2.6}$
in which the proportionality constant k is known as the Boltzmann constant. Substituting this into Equation \ref{2-5} yields
$PV = \left( \dfrac{2}{3}N \right) \left( \dfrac{3}{2}kT \right) =NkT \label{2-7}$
Notice that Equation \ref{2-7} looks very much like the ideal gas equation
$PV = nRT \nonumber$
but it is not quite the same; we have been using capital $N$ to denote the number of molecules, whereas $n$ stands for the number of moles. And of course, the proportionality factor is not the gas constant $R$, but rather the Boltzmann constant, 1.381 × 10⁻²³ J K⁻¹. If we multiply $k$ by Avogadro's number $N_A$, we recover the gas constant $R$:
$(1.381 \times 10^{-23}\, \mathrm{J\, K^{-1}}) \times (6.022 \times 10^{23}\, \mathrm{mol^{-1}}) = 8.314\, \mathrm{J\, K^{-1}\, mol^{-1}} \nonumber$
Hence, the Boltzmann constant $k$ is just the gas constant per molecule. So for $n$ moles of particles, Equation \ref{2-7} turns into our old friend
$P V = n R T \label{2.8}$
As noted at the start of this derivation, the ideal gas equation of state was originally obtained by combining the empirically determined laws of Boyle, Charles, and Avogadro; obtaining it instead from simple mechanics was one of the triumphs of the kinetic molecular theory. It is worth your effort to follow and understand the individual steps of the derivation (but don't bother to memorize it!).
RT has the dimensions of energy
Since the product $PV$ has the dimensions of energy, so does RT, and this quantity in fact represents the average translational kinetic energy per mole of molecular particles. The relationship between these two energy units can be obtained by recalling that 1 atm is $1.013\times 10^{5}\, N\, m^{–2}$, so that
$1\, \mathrm{L\, atm} = 1000\, \mathrm{cm}^{3}\left(\frac{1\, \mathrm{m}^{3}}{10^{6}\, \mathrm{cm}^{3}}\right) \times 1.01325 \times 10^{5}\, \mathrm{N\, m}^{-2} = 101.325\, \mathrm{J}$
The gas constant $R$ is one of the most important fundamental constants relating to the macroscopic behavior of matter. It is commonly expressed in both pressure-volume and in energy units:
R = 0.082057 L atm mol–1 K–1 = 8.314 J mol–1 K–1
That is, $R$ expresses the amount of energy per mole per kelvin. As noted above, the Boltzmann constant $k$, which appears in many expressions relating to the statistical treatment of molecules, is just
$k = R \div (6.022 \times 10^{23}\, \mathrm{mol^{-1}}) = 1.3807 \times 10^{-23}\, \mathrm{J\, K^{-1}}$, the "gas constant per molecule".
How Far does a Molecule travel between Collisions?
Molecular velocities tend to be very high by our everyday standards (typically around 500 meters per sec), but even so, gas molecules bump into each other so frequently that their paths are continually being deflected in a random manner, so that the net movement (diffusion) of a molecule from one location to another occurs rather slowly. How close can two molecules get?
The average distance a molecule moves between such collisions is called the mean free path ($\lambda$), which depends on the number of molecules per unit volume and on their size. To avoid collision, a molecule of diameter σ must trace out a path corresponding to the axis of an imaginary cylinder whose cross-section is $\pi \sigma^2$. Eventually it will encounter another molecule (extreme right in the diagram below) that has intruded into this cylinder and defines the terminus of its free motion.
The volume of the cylinder is $\pi \sigma^2 \lambda$. At each collision the molecule is diverted to a new path and traces out a new exclusion cylinder. The mean free path is the cylinder length for which, on average, exactly one other molecule will be found within this exclusion volume; if there are $n$ molecules per unit volume, this condition is $n \pi \sigma^2 \lambda = 1$. Solving for $\lambda$ and applying a correction factor $\sqrt{2}$ to take into account the relative motion of the colliding molecules (the detailed argument for this is too complicated to go into here), we obtain
$\lambda = \dfrac{1}{\sqrt{2}\, \pi n \sigma^2} \label{3.1}$
Small molecules such as He, H2 and CH4 typically have kinetic diameters of a few hundred picometers (roughly 300-400 pm). At STP the value of $n$, the number of molecules per cubic meter, is
$\dfrac{6.022 \times 10^{23}\; mol^{-1}}{22.4 \times 10^{-3}\; m^3 \; mol^{-1}} = 2.69 \times 10^{25} \; m^{-3}$
Substitution into Equation $\ref{3.1}$ yields a value of around $10^{–7}\; m$ ($100\; nm$) for the mean free path of most molecules under these conditions. Although this may seem like a very small distance, it typically amounts to several hundred molecular diameters and, more importantly, about 30 times the average distance between molecules. This explains why so many gases conform very closely to the ideal gas law at ordinary temperatures and pressures.
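Here is the same estimate as a short calculation; the molecular diameter is an assumed representative value:

```python
import math

n = 6.022e23 / 22.4e-3    # molecules per m^3 at STP, ~2.69e25
sigma = 3.0e-10           # assumed molecular diameter, m (~300 pm)

lam = 1 / (math.sqrt(2) * math.pi * n * sigma**2)
print(f"mean free path ~ {lam * 1e9:.0f} nm")       # ~90 nm, of order 10^-7 m
print(f"~{lam / sigma:.0f} molecular diameters")    # a few hundred
```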
On the other hand, at each collision the molecule can be expected to change direction. Because these changes are random, the net change in location a molecule experiences during a period of one second is typically rather small. Thus in spite of the high molecular velocities, the speed of molecular diffusion in a gas is usually quite small.
Learning Objectives
Make sure you thoroughly understand the following essential concepts that have been presented above.
• Real gases are subject to the effects of molecular volume (intermolecular repulsive force) and intermolecular attractive forces.
• The behavior of a real gas approximates that of an ideal gas as the pressure approaches zero.
• The effects of non-ideal behavior are best seen when the PV product is plotted as a function of P. You should be able to identify the regions of such a plot in which attractive and repulsive forces dominate.
• Each real gas has its own unique equation of state. Various general equations of state have been devised in which adjustable constants are used to approximate the behavior of a particular gas.
• The most well-known equation of state is that of van der Waals. Although you need not memorize this equation, you should be able to explain the significance of its terms.
The "ideal gas laws" as we know them do a remarkably good job of describing the behavior of a huge number chemically diverse substances as they exist in the gaseous state under ordinary environmental conditions, roughly around 1 atm pressure and a temperature of 300 K. But when the temperature is reduced, or the pressure is raised, the relation PV = constant that defines the ideal gas begins to break down, and its properties become unpredictable; eventually the gas condenses into a liquid. Why is this important? It is of obvious interest to a chemical engineer who needs to predict the properties of the gases involved in a chemical reaction carried out at several hundred atmospheres pressure. This is especially so when we consider that some of the basic tenets of the ideal gas model have to be abandoned in order to explain such properties as
• the average distance between collisions (the molecules really do take up space!)
• at sufficiently high pressures and low temperatures, intermolecular attractions assume control and the gas condenses to a liquid;
• the viscosity of a gas flowing through a pipe (the molecules do get temporarily "stuck" on the pipe surface, and are therefore affected by intermolecular attractive forces.)
Even so, many of the common laws such as Boyle's and Charles' continue to describe these gases quite well even under conditions where these phenomena are evident. Under ordinary environmental conditions (moderate pressures and above 0°C), the isotherms of substances we normally think of as gases don't appear to differ very greatly from the hyperbolic form
$\dfrac{PV}{RT} = \text{constant} \label{6.6.1}$
However, over a wider range of conditions, things begin to get more complicated. Thus isopentane (Figure $1$) behaves in a reasonably ideal manner above 210 K, but below this temperature the isotherms become somewhat distorted, and at 185 K and below they cease to be continuous, showing peculiar horizontal segments in which reducing the volume does not change the pressure.
Within this region, any attempt to compress the gas simply "squeezes" some of it into the liquid state whose greater density exactly compensates for the smaller volume, thus maintaining the pressure at a constant value. It turns out that real gases eventually begin to follow their own unique equations of state, and ultimately even cease to be gases. In this unit we will see why this occurs, what the consequences are, and how we might modify the ideal gas equation of state to extend its usefulness over a greater range of temperatures and pressures.
Effects of Intermolecular Forces
According to Boyle's law, the product PV is a constant at any given temperature, so a plot of PV as a function of the pressure of an ideal gas yields a horizontal straight line. This implies that any increase in the pressure of the gas is exactly counteracted by a decrease in the volume as the molecules are crowded closer together. But we know that the molecules themselves are finite objects having volumes of their own, and this must place a lower limit on the volume into which they can be squeezed. So we must reformulate the ideal gas equation of state as a relation that is true only in the limiting case of zero pressure:
$\lim_{P \rightarrow 0} PV=nRT \label{6.6.2}$
So what happens when a real gas is subjected to a very high pressure? The outcome varies with both the molar mass of the gas and its temperature, but in general we can see the effects of both repulsive and attractive intermolecular forces:
• Repulsive forces: As a gas is compressed, the individual molecules begin to get in each other's way, giving rise to a very strong repulsive force that acts to oppose any further volume decrease. We would therefore expect the PV vs P line to curve upward at high pressures, and this is in fact what is observed for all gases at sufficiently high pressures.
• Attractive forces: At very close distances, all molecules repel each other as their electron clouds come into contact. At greater distances, however, brief statistical fluctuations in the distribution of these electron clouds give rise to a universal attractive force between all molecules. The more electrons in the molecule (and thus the greater the molecular weight), the greater is this attractive force. As long as the energy of thermal motion dominates this attractive force, the substance remains in the gaseous state, but at sufficiently low temperatures the attractions dominate and the substance condenses to a liquid or solid.
The universal attractive force described above is known as the dispersion, or London force. There may also be additional (and usually stronger) attractive forces related to charge imbalance in the molecule or to hydrogen bonding. These various attractive forces are often referred to collectively as van der Waals forces. A plot of PV/RT as a function of pressure is a very sensitive indicator of deviations from ideal behavior, since such a plot is just a horizontal line for an ideal gas. Figures $2$ and $3$ demonstrate how these plots vary with the nature of the gas and with temperature, respectively.
Intermolecular attractions, which generally increase with molecular weight, cause the PV product to decrease as higher pressures bring the molecules closer together and thus within the range of these attractive forces; the effect is to cause the volume to decrease more rapidly than it otherwise would. The repulsive forces always eventually win out: as the molecules begin to intrude on each other's territory, the stronger repulsive forces cause the curve to bend upward.
The temperature makes a big difference! At higher temperatures, increased thermal motions overcome the effects of intermolecular attractions which normally dominate at lower pressures (Figure $3$). So all gases behave more ideally at higher temperatures. For any gas, there is a special temperature (the Boyle temperature) at which attractive and repulsive forces exactly balance each other at zero pressure. As you can see in this plot for methane, some of this balance does remain as the pressure is increased.
The van der Waals Equation of State
How might we modify the ideal gas equation of state to take into account the effects of intermolecular interactions? The first and most well known answer to this question was offered by the Dutch scientist J.D. van der Waals (1837-1923) in 1873. The ideal gas model assumes that the gas molecules are merely points that occupy no volume; the "V" term in the equation is the volume of the container and is independent of the nature of the gas.
van der Waals recognized that the molecules themselves take up space that subtracts from the volume of the container (Figure $4$), so that the "volume of the gas" V in the ideal gas equation should be replaced by the term ($V–b$), in which $b$ relates to the excluded volume, typically of the order of 20-100 cm3 mol–1. The excluded volume surrounding any molecule defines the closest possible approach of any two molecules during collision. Note that the excluded volume is greater than the volume of the molecule, its radius being half again as great as that of a spherical molecule.
The other effect that van der Waals needed to correct for is that of the intermolecular attractive forces. These are ignored in the ideal gas model, but in real gases they exert a small cohesive force between the molecules, thus helping to hold the gas together and reducing the pressure it exerts on the walls of the container.
Because this pressure depends on both the frequency and the intensity of collisions with the walls, the reduction in pressure is proportional to the square of the number of molecules per volume of space, and thus for a fixed number of molecules such as one mole, the reduction in pressure is inversely proportional to the square of the volume of the gas. The smaller the volume, the closer are the molecules and the greater will be the effect. The van der Waals equation therefore replaces the $P$ term in the ideal gas equation with $P + (a / V^2)$, in which the magnitude of the constant $a$ increases with the strength of the intermolecular attractive forces.
The complete van der Waals equation of state can be written as

$\left(P + \dfrac{a}{V^2}\right)(V - b) = RT$

for one mole of gas; for $n$ moles, the correction terms become $an^2/V^2$ and $nb$, giving $\left(P + \dfrac{an^2}{V^2}\right)(V - nb) = nRT$.
Although most students are not required to memorize this equation, you are expected to understand it and to explain the significance of the terms it contains. You should also understand that the van der Waals constants $a$ and $b$ must be determined empirically for every gas. This can be done by plotting the P-V behavior of the gas and adjusting the values of $a$ and $b$ until the van der Waals equation results in an identical plot. The constant $b$ is related in a simple way to the molecular radius; thus the determination of $b$ constitutes an indirect measurement of an important microscopic quantity.
Table $1$: van der Waals constants for some gases

| Substance | Molar mass (g) | a (L² atm mol⁻²) | b (L mol⁻¹) |
|---|---|---|---|
| hydrogen H2 | 2 | 0.244 | 0.0266 |
| helium He | 4 | 0.034 | 0.0237 |
| methane CH4 | 16 | 2.25 | 0.0428 |
| water H2O | 18 | 5.46 | 0.0305 |
| nitrogen N2 | 28 | 1.39 | 0.0391 |
| carbon dioxide CO2 | 44 | 3.59 | 0.0427 |
| carbon tetrachloride CCl4 | 154 | 20.4 | 0.1383 |
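As an illustration of how the two correction terms act, the sketch below compares the ideal-gas and van der Waals pressures for one mole of CO2 (constants from Table 1) confined to half a liter at 300 K; under these conditions the attractive ($a$) term dominates and lowers the pressure well below the ideal value.

```python
R = 0.082057    # L atm / (mol K)

def p_ideal(n, V, T):
    return n * R * T / V

def p_vdw(n, V, T, a, b):
    """van der Waals pressure in atm; V in L, a and b as tabulated above."""
    return n * R * T / (V - n * b) - a * n**2 / V**2

print(f"ideal: {p_ideal(1, 0.5, 300):.1f} atm")               # ~49.2 atm
print(f"vdW:   {p_vdw(1, 0.5, 300, 3.59, 0.0427):.1f} atm")   # ~39.5 atm
```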
The van der Waals equation is only one of many equations of state for real gases. More elaborate equations are required to describe the behavior of gases over wider pressure ranges. These generally take account of higher-order nonlinear attractive forces, and require the use of more empirical constants. Although we will make no use of them in this course, they are widely employed in chemical engineering work in which the behavior of gases at high pressures must be accurately predicted.
Condensation and the Critical Point
The most striking feature of real gases is that they cease to remain gases as the temperature is lowered and the pressure is increased. Figure $6$ illustrates this behavior; as the volume is decreased, the lower-temperature isotherms suddenly change into straight lines. Under these conditions, the pressure remains constant as the volume is reduced. This can only mean that the gas is "disappearing" as we squeeze the system down to a smaller volume. In its place, we obtain a new state of matter, the liquid. In the green-shaded region, two phases, liquid and gas, are simultaneously present. Finally, at very small volume all the gas has disappeared and only the liquid phase remains. At this point the isotherms bend strongly upward, reflecting our common experience that a liquid is practically incompressible.
To better understand this plot, follow one of the lower-temperature isotherms. As the gas is compressed, the pressure at first rises in much the same way as Boyle's law predicts. Beyond a certain volume, however, further compression does not cause any rise in the pressure; what happens instead is that some of the gas condenses to a liquid. Once the substance is entirely in its liquid state, the isotherm rises very steeply, corresponding to our ordinary experience that liquids have very low compressibilities. The range of volumes possible for the liquid diminishes as the critical temperature is approached.
The Critical Point
Liquid and gas can coexist only within the regions indicated by the green-shaded area in the diagram above. As the temperature and pressure rise, this region becomes more narrow, finally reaching zero width at the critical point. The values of P, T, and V at this juncture are known as the critical constants Pc , Tc, and Vc. The isotherm that passes through the critical point is called the critical isotherm. Beyond this isotherm, the gas and liquids become indistinguishable; there is only a single fluid phase, sometimes referred to as a supercritical liquid (Figure $7$).
At temperatures above 31°C (the critical temperature), CO2 acts somewhat like an ideal gas even at rather high pressures. Below 31°, an attempt to compress the gas to a smaller volume eventually causes condensation to begin. Thus at 21°C, at a pressure of about 62 atm, the volume can be reduced from 200 cm3 to about 55 cm3 without any further rise in the pressure. Instead of the gas being compressed, it is replaced with the far more compact liquid as the gas is essentially "squeezed" into its liquid phase. After all of the gas has disappeared, the pressure rises very rapidly because now all that remains is an almost incompressible liquid. Above the critical isotherm, CO2 exists only as a supercritical fluid.
What happens if you have some liquid carbon dioxide in a transparent cylinder at just under its critical pressure (about 73 atm), and you then compress it slightly? Nothing very dramatic, until you notice that the meniscus has disappeared. By successively reducing and increasing the pressure, you can "turn the meniscus on and off".
One intriguing consequence of the very limited bounds of the liquid state is that you could start with a gas at large volume and low temperature, raise the temperature, reduce the volume, and then reduce the temperature so as to arrive at the liquid region at the lower left, without ever passing through the two-phase region, and thus without undergoing condensation!
Supercritical fluids
The supercritical state of matter, as the fluid above the critical point is often called, possesses the flow properties of a gas and the solvent properties of a liquid. The density of a supercritical fluid can be changed over a wide range by adjusting the pressure; this, in turn, changes its solubility, which can thus be optimized for a particular application. Commercial laboratory devices are available for carrying out chemical reactions under supercritical conditions.
Supercritical carbon dioxide is widely used to dissolve the caffeine out of coffee beans and as a dry-cleaning solvent. Supercritical water has recently attracted interest as a medium for chemically decomposing dangerous environmental pollutants such as PCBs. Supercritical fluids are being increasingly employed as substitutes for organic solvents (so-called "green chemistry") in a range of industrial and laboratory processes. Applications that involve supercritical fluids include extractions, nanoparticle and nanostructured film formation, supercritical drying, carbon capture and storage, as well as enhanced oil recovery studies.
These pages present an overview of the condensed states of matter. Although there is more detail than can be found in standard textbooks, the level is still suitable for first-year college and advanced high school courses. These pages should also be helpful as review material for students in more advanced courses in chemistry, geology, and materials science.
• 7.1: Matter under the Microscope
Gases, liquids, and especially solids surround us and give form to our world. Chemistry at its most fundamental level is about atoms and the forces that act between them to form larger structural units. But the matter that we experience with our senses is far removed from this level. This unit will help you see how these macroscopic properties of matter depend on the microscopic particles of which it is composed.
• 7.2: Intermolecular Interactions
Liquids and solids differ from gases in that they are held together by forces that act between the individual molecular units of which they are composed. In this lesson we will take a closer look at these forces so that you can more easily understand, and in many cases predict, the diverse physical properties of the many kinds of solids and liquids we encounter in the world.
• 7.3: Hydrogen-Bonding and Water
In this section we will learn why this tiny combination of three nuclei and ten electrons possesses special properties that make it unique among the more than 15 million chemical species we presently know.
• 7.4: Liquids and their Interfaces
The molecular units of a liquid, like those of solids, are in direct contact, but never for any length of time and in the same locations. Rapid chemical change requires intimate contact between the agents undergoing reaction, but these agents, along with the reaction products, must be free to move away to allow new contacts and further reaction to take place. This is why so much of what we do with chemistry takes place in the liquid phase.
• 7.5: Changes of State
A given substance will exist in the form of a solid, liquid, or gas, depending on the temperature and pressure. In this unit, we will learn what common factors govern the preferred state of matter under a particular set of conditions, and we will examine the way in which one phase gives way to another when these conditions change.
• 7.6: Introduction to Crystals
Crystallography is of importance not only to chemists and physicists, but also to geologists, amateur minerologists and "rock-hounds". In this lesson we will see how the external shape of a crystal can reveal much about the underlying arrangement of its constituent atoms, ions, or molecules.In this lesson we will see how the external shape of a crystal can reveal much about the underlying arrangement of its constituent atoms, ions, or molecules.
• 7.7: Ionic and Ion-Derived Solids
In this section we deal mainly with a very small but imporant class of solids that are commonly regarded as composed of ions. We will see how the relative sizes of the ions determine the energetics of such compounds. And finally, we will point out that not all solids that are formally derived from ions can really be considered "ionic" at all.
• 7.8: Cubic Lattices and Close Packing
When substances form solids, they tend to pack together to form ordered arrays of atoms, ions, or molecules that we call crystals. Why does this order arise, and what kinds of arrangements are possible? We will limit our discussion to cubic crystals, which form the simplest and most symmetric of all the lattice types. Cubic lattices are also very common — they are formed by many metallic crystals, and also by most of the alkali halides, several of which we will study as examples.
• 7.9: Polymers and Plastics
Synthetic polymers, which include the large group known as plastics, came into prominence in the early twentieth century. Chemists' ability to engineer them to yield a desired set of properties (strength, stiffness, density, heat resistance, electrical conductivity) has greatly expanded the many roles they play in the modern industrial economy. This Module deals mostly with synthetic polymers, but will include a synopsis of some of the more important natural polymers.
• 7.10: Colloids and their Uses
Colloids occupy an intermediate place between [particulate] suspensions and solutions, both in terms of their observable properties and particle size. In a sense, they bridge the microscopic and the macroscopic. As such, they possess some of the properties of both, which makes colloidal matter highly adaptable to specific uses and functions. Colloid science is central to biology, food science and numerous consumer products.
07: Solids and Liquids
Learning Objectives
• State the major feature that characterizes a condensed state of matter.
• Describe some of the major observable properties that distinguish gases, liquids, and solids, and state their relative magnitudes in these three states of matter.
• Describe the dominant forces and the resulting physical properties that distinguish ionic, covalent, metallic, and molecular solids.
• Explain the difference between crystalline and amorphous solids, and cite some examples of each.
• Name some of the basic molecular units from which solids of different type can be composed.
• What is meant by an "extended" or "infinite-molecule solid"?
• Describe some of the special properties of graphite and their structural basis.
Gases, liquids, and especially solids surround us and give form to our world. Chemistry at its most fundamental level is about atoms and the forces that act between them to form larger structural units. But the matter that we experience with our senses is far removed from this level. This unit will help you see how these macroscopic properties of matter depend on the microscopic particles of which it is composed.
Solids, Liquids and Gases
What distinguishes solids, liquids, and gases (the three major states of matter) from each other? Let us begin at the microscopic level, by reviewing what we know about gases, the simplest state in which matter can exist. At ordinary pressures, the molecules of a gas are so far apart that intermolecular forces have an insignificant effect on the random thermal motions of the individual particles. As the temperature decreases and the pressure increases, intermolecular attractions become more important, and there will be an increasing tendency for molecules to form temporary clusters. These are so short-lived, however, that even under extreme conditions, gases cannot be said to possess "structure" in the usual sense.
The contrast at the microscopic level between solids, liquids and gases is most clearly seen in the simplified schematic views above. The molecular units of crystalline solids tend to be highly ordered, with each unit occupying a fixed position with respect to the others. In liquids, the molecules are able to slip around each other, introducing an element of disorder and creating some void spaces that decrease the density. Gases present a picture of almost total disorder, with practically no restrictions on where any one molecule can be.
Having lived our lives in a world composed of solids, liquids, and gases, few of us ever have any difficulty deciding into which of these categories a given sample of matter falls. Our decision is most commonly based on purely visual cues:
• A gas has no definite boundaries other than those that might be imposed by the walls of a confining vessel.
• Liquids and solids possess a clearly delineated phase boundary that gives solids their definite shapes and whose light-reflecting properties enable us to distinguish one phase from another.
• Solids can have any conceivable shape, and their surfaces are usually too irregular to show specular (mirror-like) reflection of light. Liquids, on the other hand, are mobile; except when in the form of tiny droplets, liquids have no inherent shape of their own, but assume the shape of their container and show an approximately flat upper surface.
Our experience also tells us that these categories are quite distinct; a phase, which you will recall is a region of matter having uniform intensive properties, is either a gas, a liquid, or a solid. Thus the three states of matter are not simply three points on a continuum; when an ordinary solid melts, it usually does so at a definite temperature, without apparently passing through any states that are intermediate between a solid and a liquid.
Limiting Behavior
Although these common-sense perceptions are usually correct, they are not infallible. Some solids, such as glasses and many plastics, do not have sharp melting points, but instead undergo a gradual transition from solid to liquid known as softening; and when subjected to enough pressure, solids can exhibit something of the flow properties of liquids (glacial ice, for example).
A more scientific approach would be to compare the macroscopic physical properties of the three states of matter, but even here we run into difficulty. It is true, for example, that the density of a gas is usually about a thousandth of that of the liquid or solid at the same temperature and pressure; thus one gram of water vapor at 100°C and 1 atm pressure occupies a volume of 1671 mL; when it condenses to liquid water at the same temperature, it occupies only 1.043 mL.
Table $1$: Comparison of the molar volumes of neon in its three states.

| phase | molar volume |
|---|---|
| gas | 22,400 cm3/mol total volume (42 cm3/mol excluded volume) |
| liquid | 16.8 cm3/mol |
| solid | 13.9 cm3/mol |
Table $1$ compares the molar volumes of neon in its three states. For the gaseous state, P = 1 atm and T = 0°C. The excluded volume is the volume actually taken up by the neon atoms according to the van der Waals equation of state. It is this extreme contrast with the gaseous state that leads to the appellation "condensed states of matter" for liquids and solids. However, gases at very high pressures can have densities that exceed those of some solid and liquid substances, so density alone is not a sufficiently comprehensive criterion for distinguishing between the gaseous and condensed states of matter. Similarly, the density of a solid is usually greater than that of the corresponding liquid at the same temperature and pressure, but not always: you have certainly seen ice floating on water!
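The 22,400 cm3/mol figure in the table is just the ideal-gas molar volume, which is easy to verify. The following short Python sketch (a minimal calculation, assuming ideal-gas behavior) reproduces it and shows how tiny the excluded volume is by comparison:

```python
# Minimal check of the gas-phase entry in Table 1, assuming ideal-gas behavior.
R = 82.06   # gas constant, cm^3 atm K^-1 mol^-1
T = 273.15  # 0 deg C, in kelvins
P = 1.0     # pressure, atm

V_molar = R * T / P  # ideal molar volume, cm^3/mol
print(f"ideal molar volume: {V_molar:.0f} cm^3/mol")      # ~22414, i.e. ~22,400

# Fraction of that volume actually occupied by the atoms (excluded volume):
print(f"fraction occupied: {42 / V_molar:.1%}")           # ~0.2%
```

The atoms themselves occupy only about 0.2% of the gas volume, which is why "mostly empty space" is a fair description of a gas at ordinary pressures.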
Example $1$: Density of Xenon Gas
Compare the density of gaseous xenon (molar mass 131 g) at 100 atm and 0°C with that of a hydrocarbon liquid for which $ρ = 0.104\, g/mL$ at the same temperature.
Solution
For simplicity, we will assume that xenon approximates an ideal gas under these conditions, which it really does not.
The ideal molar volume at 0°C and 1 atm is 22.4 L; at 100 atm, this would be reduced to 0.224 L or 224 mL, giving a density
$ρ = \dfrac{131\, g}{224\, mL} = 0.58\, g/mL. \nonumber$

This is more than five times the density of the hydrocarbon liquid: a sufficiently compressed gas can be denser than a liquid.
In his autobiographical Uncle Tungsten, the physician/author Oliver Sacks describes his experience with xenon-filled balloons of "astonishing density — as near to 'lead balloons' as could be [imagined]. If one twirled these xenon balloons in one's hand, then stopped, the heavy gas, by its own momentum, would continue rotating for a minute, almost as if it were a liquid."
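A short Python sketch of the same arithmetic (again assuming ideal behavior, which, as noted above, xenon only approximates at 100 atm):

```python
# Sanity check of Example 1: density of xenon gas at 100 atm and 0 deg C.
molar_mass = 131.0   # g/mol, xenon
V_1atm = 22.4        # ideal molar volume at 0 deg C and 1 atm, L/mol
P = 100.0            # atm

V = V_1atm / P * 1000             # compressed molar volume, mL (~224 mL)
density = molar_mass / V          # g/mL
print(f"density of Xe at 100 atm: {density:.2f} g/mL")   # ~0.58 g/mL
```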
Other physical properties, such as the compressibility, surface tension, and viscosity, are somewhat more useful for distinguishing between the different states of matter. Even these, however, provide no well-defined dividing lines between the various states. Rather than try to develop a strict scheme for classifying the three states of matter, it will be more useful to simply present a few generalizations.
Table $2$: Relative magnitudes of some properties of the three states of matter

| property | gas | liquid | solid |
|---|---|---|---|
| density | very small | large | large |
| thermal expansion coefficient | large (= R/P) | small | small |
| cohesiveness | nil | small | large |
| surface tension | nil | medium | very large |
| viscosity | small | medium | very large |
| kinetic energy per molecule | large | small | smaller |
| disorder | random | medium | small |
Condensed States of Matter
Some of these deal with macroscopic properties (that is, properties such as the density that relate to bulk matter), and others with microscopic properties that refer to the individual molecular units. Even the most casual inspection of the above table shows that solids and liquids possess an important commonality that distinguishes them from gases: in solids and liquids, the molecules are in contact with their neighbors. As a consequence, these condensed states of matter generally possess much higher densities than gases.
In our study of gases, we showed that the macroscopic properties of a gas (the pressure, volume, and temperature) are related through an equation of state, and that for the limiting case of an ideal gas, this equation of state can be derived from the relatively small set of assumptions of the kinetic molecular theory. To the extent that a volume of gas consists mostly of empty space, all gases have very similar properties. Equations of state work for gases because gases consist mostly of empty space, so intermolecular interactions can be largely neglected. In condensed matter, these interactions dominate, and they tend to be unique to each particular substance, so there is no such thing as a generally useful equation of state of liquids and solids.
Is there a somewhat more elaborate theory that can predict the behavior of the other two principal states of matter, liquids and solids? Very simply, the answer is "no"; despite much effort, no one has yet been able to derive a general equation of state for condensed states of matter. The best one can do is to construct models based on the imagined interplay of attractive and repulsive forces, and then test these models by computer simulation. Nevertheless, the very factors that would seem to make an equation of state for liquids and solids impossibly complicated also give rise to new effects that are easily observed, and which ultimately define the distinguishing characteristics of the gaseous, liquid, and solid states of matter. In this unit, we will try to learn something about these distinctions, and how they are affected by the chemical constitution of a substance.
Liquids
Crystalline solids and gases stand at the two extremes of the spectrum of perfect order and complete chaos. Liquids display elements of both qualities, in limited and imperfect ways. Like solids, liquids have their molecular units in direct contact, as discussed in the previous section on condensed states of matter. At the same time, liquids, like gases, are fluids, meaning that their molecular units can move more or less independently of each other. But whereas the volume of a gas depends entirely on the pressure (and thus generally on the volume of the vessel within which it is confined), the volume of a liquid is largely independent of the pressure. Here we offer just enough to help you see how liquids relate to the other major states of matter.
Solids
Of the four ancient elements of "fire, air, earth and water", it is the many forms of solids ("earths") that we encounter in daily life and which give form, color and variety to our visual world. The solid state, being the form of any substance that prevails at lower temperatures, is one in which thermal motion plays an even smaller role than in liquids. The thermal kinetic energy that the individual molecular units do have at temperatures below their melting points allows them to oscillate around a fixed center whose location is determined by the balance between local forces of attraction and repulsion due to neighboring units, but only very rarely will a molecule jump out of the fixed space allotted to it in the lattice. Thus solids, unlike liquids, exhibit long-range order, cohesiveness, and rigidity, and possess definite shapes.
Classification of solids
Most people who have lived in the world long enough to read this have already developed a rough way of categorizing solids on the basis of macroscopic properties they can easily observe; everyone knows that a piece of metal is fundamentally different from a rock or a chunk of wood. Unfortunately, nature's ingenuity is far too versatile to fit into any simple system of classifying solids, especially those composed of more than a single chemical substance.
Classification Scheme 1: According to bond type
The most commonly used classification is based on the kinds of forces that join the molecular units of a solid together. We can usually distinguish four major categories on the basis of properties such as general appearance, hardness, and melting point.
| type of solid | molecular units | dominant forces | typical properties |
|---|---|---|---|
| ionic | ions | coulombic | high-melting, hard, brittle |
| covalent | atoms of electronegative elements | chemical bonds | non-melting (decompose), extremely hard |
| metallic | atoms of electropositive elements | mobile electrons | moderate-to-high melting, deformable, conductive, metallic luster |
| molecular | molecules | van der Waals | low-to-moderate mp, low hardness |
It's important to understand that these four categories are in a sense idealizations that fail to reflect the diversity found in nature. The triangular diagram shown here illustrates this very nicely by displaying examples of binary compounds whose properties suggest that they fall somewhere other than at a vertex of the triangle.
The triangle shown above excludes what is probably the largest category: molecular solids that are bound by van der Waals forces. One way of including these is to expand the triangle to a tetrahedron (the so-called Laing tetrahedron). Although this illustrates the concept, it is visually awkward to include many examples of the intermediate cases.
Classification Scheme 2: By type of Molecular Unit
Solids, like the other states of matter, can be classified according to whether their fundamental molecular units are atoms, electrically-neutral molecules, or ions. But solids possess an additional property that gases and liquids do not: an enduring structural arrangement of their molecular units. Over-simplifying only a bit, we can draw up a rough classification of solids according to the following scheme:
| structure | atoms | molecules |
|---|---|---|
| array of discrete units | noble gas solids, metals | molecular solids |
| array of linked units | metals and covalent solids | "extended molecule" compounds |
| disordered arrangement | alternative forms of some elements (e.g., S, Se) | polymers, glasses |
Classification Scheme 3: Classification by Dominant Attractive Force
Notice how the boiling points in the following selected examples reflect the major type of attractive force that binds the molecular units together. Bear in mind, however, that more than one type of attractive force can be operative in many substances.
| substance | bp °C | molecular units | dominant attractive force | separation distance (pm) | attraction energy (kJ/mol) |
|---|---|---|---|---|---|
| sodium fluoride | 990 | Na+ F– | coulombic | 18.8 | 657 |
| sodium hydroxide | 318 | Na+ OH– | ion-dipole | 21.4 | 90.4 |
| water | 100 | H2O | dipole-dipole | 23.7 | 20.2 |
| neon | –249 | Ne | dispersion | 33.0 | 0.26 |
Crystalline Solids
In a solid comprised of identical molecular units, the most favored (lowest potential energy) locations occur at regular intervals in space. If each of these locations is actually occupied, the solid is known as a perfect crystal. What really defines a crystalline solid is that its structure is composed of repeating unit cells each containing a small number of molecular units bearing a fixed geometric relation to one another. The resulting long-range order defines a three-dimensional geometric framework known as a lattice.
Geometric theory shows that only fourteen different types of lattices are possible in three dimensions, and that just six different unit cell arrangements can generate these lattices. The regularity of the external faces of crystals, which in fact correspond to lattice planes, reflects the long-range order inherent in the underlying structure. Perfection is no more attainable in a crystal than in anything else; real crystals contain defects of various kinds, such as lattice positions that are either vacant or occupied by impurities, or by abrupt displacements or dislocations of the lattice structure. Most pure substances, including the metallic elements, form crystalline solids. But there are some important exceptions.
Metallic Solids
In metals the valence electrons are free to wander throughout the solid, instead of being localized on one atom and shared with a neighboring one. The valence electrons behave very much like a mobile fluid in which the fixed lattice of atoms is immersed. This provides the ultimate in electron sharing, and creates a very strong binding effect in solids composed of elements that have the requisite number of electrons in their valence shells. The characteristic physical properties of metals such as their ability to bend and deform without breaking, their high thermal and electrical conductivities and their metallic sheen are all due to the fluid-like behavior of the valence electrons.
Molecular solids
Recall that a "molecule" is defined as a discrete aggregate of atoms bound together sufficiently tightly (that is, by directed covalent forces) to allow it to retain its individuality when the substance is dissolved, melted, or vaporized.
The two words italicized in the preceding sentence are important; covalent bonding implies that the forces acting between atoms within the molecule are much stronger than those acting between molecules, and the directional property of covalent bonding confers on each molecule a distinctive shape which affects a number of its properties. Most compounds of carbon (and therefore most chemical substances) fall into this category.
Many simpler compounds also form molecules; H2O, NH3, CO2, and PCl5 are familiar examples. Some of the elements, such as H2, O2, O3, P4 and S8 also occur as discrete molecules. Liquids and solids that are composed of molecules are held together by van der Waals forces, and many of their properties reflect this weak binding. Thus molecular solids tend to be soft or deformable, have low melting points, and are often sufficiently volatile to evaporate (sublime) directly into the gas phase; the latter property often gives such solids a distinctive odor.
Iodine
Iodine is a good example of a volatile molecular crystal. The solid (mp 114 °C, bp 184 °C) consists of I2 molecules bound together only by dispersion forces. If you have ever worked with solid iodine in the laboratory, you will probably recall the smell and sight of its purple vapor which is easily seen in a closed container.
Because dispersion forces and the other van der Waals forces increase with the number of atoms, larger molecules are generally less volatile, and have higher melting points, than do the smaller ones. Also, as one moves down a column in the periodic table, the outer electrons are more loosely bound to the nucleus, increasing the polarizability of the atom and thus its susceptibility to van der Waals-type interactions. This effect is particularly apparent in the progression of the boiling points of the successively heavier noble gas elements.
Covalent Solids
These are a class of extended-lattice compounds (see Section 6 below) in which each atom is covalently bonded to its nearest neighbors. This means that the entire crystal is in effect one super-giant “molecule”. The extraordinarily strong binding forces that join all adjacent atoms account for the extreme hardness of such substances; these solids cannot be broken or abraded without cleaving a large number of covalent chemical bonds. Similarly, a covalent solid cannot “melt” in the usual sense, since the entire crystal is its own giant molecule. When heated to very high temperatures, these solids usually decompose into their elements.
Diamond
Diamond is the hardest material known, defining the upper end of the 1-10 scale known as Mohs Hardness. Diamond cannot be melted; above 1700°C it is converted to graphite, the more stable form of carbon.
The diamond unit cell is face-centered cubic and contains 8 carbon atoms. The four darkly shaded ones are contained within the cell and are completely bonded to other members of the cell. The other carbon atoms (6 in faces and 4 at corners) have some bonds that extend to atoms in other cells. (Two of the carbons nearest the viewer are shown as open circles in order to more clearly reveal the bonding arrangement.)
Other covalent solids
Boron nitride BN is similar to carbon in that it exists as a diamond-like cubic polymorph as well as in a hexagonal form analogous to graphite. Cubic BN is the second hardest material after diamond, and finds use in industrial abrasives and cutting tools. Recent interest in BN has centered on its carbon-like ability to form nanotubes and related nanostructures.
Silicon carbide SiC is also known as carborundum. Its structure is very much like that of diamond with every other carbon replaced by silicon. On heating at atmospheric pressure, it decomposes at 2700°C, but has never been observed to melt. Structurally, it is very complex; at least 70 crystalline forms have been identified. Its extreme hardness and ease of synthesis have led to a diversity of applications — in cutting tools and abrasives, high-temperature semiconductors, and other high-temperature applications, manufacture of specialty steels, jewelry, and many more. Silicon carbide is an extremely rare mineral on the earth, and comes mostly from meteorites which are believed to have their origins in carbonaceous stars. The first synthetic SiC was made accidentally by E.G. Acheson in 1891, who immediately recognized its industrial prospects and founded the Carborundum Co.
Tungsten carbide WC is probably the most widely-encountered covalent solid owing to its use in "carbide" cutting tools and as the material used to make the rotating balls in ball-point pens. Its high-melting (2870°C) form has a structure similar to that of diamond and is only slightly less hard. In many of its applications it is embedded in a softer matrix of cobalt or coated with titanium compounds.
Amorphous Solids
In some solids there is so little long-range order that the substance cannot be considered crystalline at all; such a solid is said to be amorphous. Amorphous solids possess short-range order but are devoid of any organized structure over longer distances; in this respect they resemble liquids. However, their rigidity and cohesiveness allow them to retain a definite shape, so for most practical purposes they can be considered to be solids.
The term glass refers generally to a solid formed from its melt that does not return to its crystalline form on cooling, but instead forms a hard, and often transparent, amorphous solid. Although some organic substances such as sugar can form glasses ("rock candy"), the term more commonly describes inorganic compounds, especially those based on silica, SiO2. Natural silica-based glasses, known as obsidian, are formed when certain volcanic magmas cool rapidly.
Ordinary glass is composed mostly of SiO2, which usually exists in nature in a crystalline form known as quartz. If quartz (in the form of sand) is melted and allowed to cool, it becomes so viscous that the molecules are unable to move to the low potential energy positions they would occupy in the crystal lattice, so that the disorder present in the liquid gets “frozen into” the solid. In a sense, glass can be regarded as a supercooled liquid. Glasses are transparent because the distances over which disorder appears are small compared to the wavelength of visible light, so there is nothing to scatter the light and produce cloudiness.
Ordinary glass is made by melting silica sand to which has been added some calcium and sodium carbonates. These additives reduce the melting point and make it more difficult for the SiO2 molecules to arrange themselves into crystalline order as the mixture cools. Glass is believed to have first been made in the Middle East at least as early as 3000 BCE. Its workability and ease of coloring has made it one of mankind's most important and versatile materials.
Types of molecular units
Molecules
Molecules, not surprisingly, are the most common building blocks of pure substances. Most of the 15-million-plus chemical substances presently known exist as distinct molecules. Chemists commonly divide molecular compounds into "small" and "large-molecule" types, the latter usually falling into the class of polymers (see below.) The dividing line between the two categories is not very well defined, and tends to be based more on the properties of the substance and how it is isolated and purified.
Atoms
We usually think of atoms as the building blocks of molecules, so the only pure substances that consist of plain atoms are those of some of the elements — mostly the metallic elements, and also the noble-gas elements. The latter do form liquids and crystalline solids, but only at very low temperatures. Although the metallic elements form crystalline solids that are essentially atomic in nature, the special properties that give rise to their "metallic" nature puts them into a category of their own. Most of the non-metallic elements exist under ordinary conditions as small molecules such as O2 or S6, or as extended structures that can have a somewhat polymeric nature. Many of these elements can form more than one kind of structure, each one stable under different ranges of temperature and pressure. Multiple structures of the same element are known as allotropes, although the more general term polymorph is now preferred.
Ions
Ions, you will recall, are atoms or molecules that have one or more electrons missing (positive ions) or in excess (negative ions), and therefore possess an electric charge. A basic law of nature, the electroneutrality principle, states that bulk matter cannot acquire more than a trifling (and chemically insignificant) net electric charge. So one important thing to know about ions is that in ordinary matter, whether in the solid, liquid, or gaseous state, any positive ions must be accompanied by a compensating number of negative ions. Ionic substances such as sodium chloride form crystalline solids that can be regarded as made of ions. These solids tend to be quite hard and have high melting points, reflecting the strong forces between oppositely-charged ions. Solid metal oxides such as CaO and MgO, which are composed of doubly-charged ions, melt only at extremely high temperatures and tend to dissociate into their elements rather than vaporize.
Polymers
Plastics and natural materials such as rubber or cellulose are composed of very large molecules called polymers; many important biomolecules are also polymeric in nature. Owing to their great length, these molecules tend to become entangled in the liquid state, and are unable to separate to form a crystal lattice on cooling. In general, it is very difficult to get such substances to form anything other than amorphous solids.
Extended solids
Many substances actually exist in their solid forms as linked assemblies of basic structural units arranged in chains or layers that extend indefinitely in one, two, or three dimensions. Thus the very simple models of chemical bonding that apply to the isolated molecules in gaseous form must be modified to account for bonding in some of these solids. The terms "one-dimensional" and "two-dimensional", commonly employed in this context, should more accurately be prefixed by "quasi-"; after all, even a single atom occupies three-dimensional space!
One-dimensional solids
Atoms of some elements such as sulfur and selenium can bond together in long chains of indefinite length, thus forming polymeric, amorphous solids. The most well known of these is the amorphous "plastic sulfur" formed when molten sulfur is cooled rapidly by pouring it into water. These are never the most common (or stable) forms of these elements, which prefer to form discrete molecules.
Rubber-like strands of plastic sulfur formed by pouring hot molten sulfur into cold water. After a few days, it will revert to ordinary crystalline sulfur. But small molecules can also form extended chains. Sulfur trioxide is a gas above room temperature, but when it freezes at 17°C the solid forms long chains in which each S atom is coordinated to four oxygen atoms.
Multi-dimensional solids
Many inorganic substances form crystalline solids which are built up from parallel chains in which the basic formula units are linked by weak bonds involving dipole-dipole and dipole-induced dipole interactions. Neighboring chains are bound mainly by dispersion forces.
Layer or sheet-like structures
Solid cadmium chloride is a good example of a layer structure. The Cd and Cl atoms occupy separate layers; each of these layers extends out in a third dimension to form a sheet. The CdCl2 crystal is built up from stacks of these layers held together by van der Waals forces.
It's worth pointing out that although salts such as CuCl2 and CdCl2 are dissociated into ions when in aqueous solution, the solids themselves should not be regarded as "ionic solids". See also the lesson on ionic solids.
Graphite
Graphite is a polymorph of carbon and its most stable form. It consists of sheets of fused benzene rings stacked in layers. The spacing between layers is sufficient to admit molecules of water vapor and other atmospheric gases which become absorbed in the interlamellar spaces and act as lubricants, allowing the layers to slip along each other. Thus graphite itself often has a flake-like character and is commonly used as a solid lubricant, although it loses this property in a vacuum.
As would be expected from its anisotropic structure, the electrical and thermal conductivities of graphite are much greater in directions parallel to the layers than across them. The melting point of 4700-5000°C makes graphite useful as a high-temperature refractory material.
Graphite is the most common form of relatively pure carbon found in nature. Its name comes from the same root as the Greek word for "write" or "draw", reflecting its use as pencil "lead" since the 16th century. (The misnomer, which survives in common use, is due to its misidentification as an ore of the metallic element of the same name at a time long before modern chemistry had developed.)
Graphene
Graphene is a two-dimensional material consisting of a single layer of graphite — essentially "chicken wire made of carbon" that was discovered in 2004. Small fragments of graphene can be obtained by several methods; one is to attach a piece of Scotch Tape™ to a piece of graphite and then carefully pull it off (a process known as exfoliation.) Fragments of graphene are probably produced whenever one writes with a pencil.
Graphene has properties that are uniquely different from all other solids. It is the strongest known material, and it exhibits extremely high electrical conductivity due to its massless electrons which are apparently able to travel at relativistic velocities through the layer.
Learning Objectives
• Name five kinds of molecular units that condensed matter can be composed of.
• Sketch out a potential energy curve, showing clearly the equilibrium separation and potential energy minimum.
• State the difference between bonded and non-bonded attractions.
• Explain the meaning and significance of the dipole moment of a molecule.
• Define induced dipole and polarizability.
• State the six kinds of intermolecular attractive forces and their relative strengths.
Liquids and solids differ from gases in that they are held together by forces that act between the individual molecular units of which they are composed. In this lesson we will take a closer look at these forces so that you can more easily understand, and in many cases predict, the diverse physical properties of the many kinds of solids and liquids we encounter in the world.
The very existence of condensed states of matter suggests that there are attractive forces acting between the basic molecular units of solids and liquids. The term molecular unit refers to the smallest discrete structural unit that makes up the liquid or solid. In most of the over 15 million chemical substances that are presently known, these structural units are actual molecules— that is, aggregates of atoms that have their own distinguishing properties, formulas, and molecular weights. But the molecular units can also be individual atoms, ions and more extended units. As with most artificial classifications, these distinctions tend to break down in extreme cases: most artificial polymers ("plastics") are composed of molecules of various sizes and shapes, some metal alloys contain identifiable molecular units, and it is not too much of a stretch to regard a diamond or a crystal of NaCl as a "molecule" in itself.
Potential Energy Curves
On the atomic or molecular scale, all particles exert both attractive and repulsive forces on each other. If the attractive forces between two or more atoms are strong enough to make them into an enduring unit with its own observable properties, we call the result a "molecule" and refer to the force as a "chemical bond".
The two diatomic molecules depicted in Figure $1$ have come into close contact with each other, but the attractive force that acts between them is not strong enough to bind them into a new molecular unit, so we call this force a non-bonding attraction. In the absence of these non-bonding attractions, all matter would exist in the gaseous state only; there would be no condensed phases.
The distinction between bonding- and non-bonding attractions can be seen by comparing the potential energy plot for a pair of hydrogen atoms with that for two argon atoms (Figure $2$). As two hydrogen atoms are brought together, the potential energy falls to a minimum and then rises rapidly as the two electron clouds begin to repel each other. The potential energy minimum defines the energy and the average length of the H–H bond — two of its unique measurable properties.
The potential energy of a pair of argon atoms also falls as they are brought together, but not enough to hold them together (the laws of quantum mechanics do not allow this noble gas element to form stable $Ar_2$ molecules). These non-bonding attractions enable argon to exist as a liquid and solid at low temperatures, but they are unable to withstand disruptions caused by thermal energy at ordinary temperatures, so we commonly know argon as a gas.
Thermal Effects
From the classical picture, at temperatures above absolute zero, all molecular-scale particles possess thermal energy that keeps them in constant motion (and from the quantum picture, motion does not cease even at absolute zero, owing to the Heisenberg uncertainty principle). The average thermal energy is given by the product of the gas constant R and the absolute temperature. At 25°C, this works out to
$RT = (8.314 \,J \,K^{–1} mol^{–1}) (298\, K) = 2,480\, J\, mol^{–1} \approx 2.5\, kJ\, mol^{–1}$
A functional chemical bond is much stronger than this (typically over 100 kJ/mol), so the effect of thermal motion is simply to cause the bond to vibrate; only at higher temperatures (where the value of RT is larger) will most bonds begin to break. Non-bonding attractive forces between pairs of atoms are generally too weak to sustain even a single vibration. In addition to unique distinguishing properties such as bond energy, bond length and stretching frequencies, covalent bonds usually have directional properties that depend on the orbital structures of the component atoms. The much-weaker non-bonding attractions possess none of these properties.
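The arithmetic behind these comparisons is easy to reproduce. The following Python sketch (a minimal calculation using the gas constant above) compares RT with the ~100 kJ/mol lower bound quoted for a functional chemical bond, at room temperature and at two hotter conditions:

```python
# Average thermal energy RT at several temperatures, compared with the
# ~100 kJ/mol lower bound for a functional chemical bond quoted in the text.
R = 8.314       # gas constant, J K^-1 mol^-1
bond = 100.0    # kJ/mol, a conservative bond energy

for T in (298, 1000, 3000):
    RT = R * T / 1000  # convert J/mol to kJ/mol
    print(f"T = {T:4d} K: RT = {RT:5.1f} kJ/mol ({RT / bond:.1%} of the bond)")
```

At room temperature RT is only a few percent of a bond energy, which is why thermal motion merely makes bonds vibrate; it takes far higher temperatures before RT becomes comparable to bond energies.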
The shape of a potential energy curve (often approximated as a "Morse" curve) shows how repulsive and attractive forces affect the potential energy in opposite ways: repulsions always raise this energy, and attractions reduce it. The curve passes through a minimum when the attractive and repulsive forces are exactly in balance. As we stated above, all particles exert both kinds of forces on one another; these forces are all basically electrical in nature and they manifest themselves in various ways and with different strengths.
The distance corresponding to the minimum potential energy is known as the equilibrium distance. This is the average distance that will be maintained by the two particles if there are no other forces acting on them, such as might arise from the presence of other particles nearby. A general empirical expression for the interaction potential energy curve between two particles can be written as
$E = Ar^{-n} + Br^{-m} \label{7.2.1}$
$A$ and $B$ are proportionality constants and $n$ and $m$ are integers. This expression is sometimes referred to as the Mie equation. The first term, which describes the repulsion, is always positive, and $n$ must be larger than $m$, reflecting the fact that repulsion always dominates at small separations. The coefficient $B$ is negative for attractive forces, but it becomes positive for electrostatic repulsion between like charges. The larger the value of one of these exponents, the closer the particles must approach before the force becomes significant. Table $1$ lists the exponents for the types of interactions we will describe in this lesson.
Table $1$: Classification of intermolecular forces

| species involved | type of force | n | m |
|---|---|---|---|
| ions | coulombic | - | 1 |
| ion and polar molecule | ion-dipole | - | 2 |
| two polar molecules | dipole-dipole | - | 3 |
| ion and nonpolar molecule | ion-induced dipole | - | 4 |
| polar and nonpolar molecules | dipole-induced dipole | - | 6 |
| nonpolar molecules | dispersion | - | 6 |
| repulsions | quantum | 9 | - |
Note: the blue-shaded interactions are known collectively as van der Waals interactions
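To see how the two terms of Equation $\ref{7.2.1}$ combine to produce a potential energy minimum, the curve can be evaluated numerically. In the Python sketch below, the constants $A$ and $B$ are arbitrary illustrative values (not data for any real substance), combined with the $n = 9$, $m = 6$ exponents from the table:

```python
# A sketch of the Mie potential (Equation 7.2.1). A and B are arbitrary
# values chosen only to make the curve's shape visible; n = 9 and m = 6
# are the exponents for quantum repulsion and dispersion from Table 1.
def mie(r, A=1.0, B=-2.0, n=9, m=6):
    """Interaction energy E = A r^-n + B r^-m (arbitrary units)."""
    return A * r**-n + B * r**-m

# Scan for the minimum: the equilibrium separation, where the attractive
# and repulsive forces exactly balance.
rs = [0.5 + 0.001 * i for i in range(2000)]
r_eq = min(rs, key=mie)
print(f"equilibrium separation ~ {r_eq:.3f}, E_min ~ {mie(r_eq):.3f}")
```

Steepening the repulsive exponent $n$ makes the inner wall of the curve rise more abruptly, which is exactly the "hard contact" behavior described in the next paragraph.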
The Universal Repulsive Force
The value of $n$ for the repulsive force in Figure $3$ is 9; this may be the highest inverse-power law to be found in nature. The magnitude of such a force is negligible until the particles are almost in direct contact, but once it kicks in, it becomes very strong; if you want to get a feel for it, try banging your head into a concrete wall. Because the repulsive force is what prevents two atoms from occupying the same space, this is just what you would expect. If the repulsive force did not always win out against all attractive forces, all matter would collapse into one huge glob! The universal repulsive force arises directly from two main aspects of quantum theory.
1. First, the Heisenberg uncertainty principle tells us that the electrons situated within the confines of an atom possess kinetic energy that would exert an outward pressure were it not for the compensating attractive force of the positively-charged nucleus. But even the very slight decrease in volume that would result from squeezing the atom into a smaller space will raise this pressure so as to effectively resist this change in volume. This is the basic reason that condensed states of matter have extremely small compressibilities.
2. Working in concert with this is the Pauli exclusion principle, which requires that each electron have a different set of quantum numbers. So as two particles begin to intrude upon each other, the volume their electrons occupy gets divided up between each spin-pair, and the ones forced into higher quantum states would normally occupy even greater volumes. The effect is again to massively raise the potential energy as the particles begin to squeeze too close together.
In a wonderful article (Science 187, 605-612, 1975), the physicist Victor Weisskopf showed how these considerations, combined with a few fundamental constants, lead to realistic estimates of such things as the hardness and compressibility of solids, the heights of mountains, the lengths of ocean waves, and the sizes of stars.
Ion-Ion Interactions
Electrostatic attraction between electrically-charged particles is the strongest of all the intermolecular forces. These Coulombic forces (as they are often called) cause opposite charges to attract and like charges to repel.
Coulombic forces are involved in all forms of chemical bonding; when they act between separate charged particles (ion-ion interactions) they are especially strong. Thus the energy required to pull a mole of Na+ and Cl– ions apart in the sodium chloride crystal is greater than that needed to break a mole of the covalent bonds in $H_2$ (Figure $1$). The effects of ion-ion attraction are seen most directly in solids such as NaCl, which consist of oppositely-charged ions arranged in two inter-penetrating crystal lattices.
According to Coulomb's Law the force between two charged particles is given by
$F= \dfrac{q_1q_2}{4\pi\epsilon_0 r^2} \label{7.2.2}$
Instead of using SI units, chemists often prefer to express atomic-scale distances in picometers and charges as multiples of the electron charge (±1, ±2, etc.). In these units, the proportionality constant $1/4\pi\epsilon_0$ works out to $2.31 \times 10^{–16}\; J\; pm$. The sign of $F$ determines whether the force will be attractive (–) or repulsive (+); notice that the latter is the case whenever the two q's have the same sign.
Equation $\ref{7.2.2}$ is an example of an inverse square law; the force falls off as the square of the distance. A similar law governs the manner in which the illumination falls off as you move away from a point light source; recall this the next time you walk away from a street light at night, and you will have some feeling for what an inverse square law means.
The stronger the attractive force acting between two particles, the greater the amount of work required to separate them. Work represents a flow of energy, so the foregoing statement is another way of saying that when two particles move in response to a force, their potential energy is lowered. This work, as you may recall if you have studied elementary mechanics, is found by integrating the negative force with respect to distance over the distance moved. Thus the energy that must be supplied in order to completely separate two oppositely-charged particles initially at a distance r0 is given by
$w= - \int _{r_o} ^{\infty} \dfrac{q_1q_2}{4\pi\epsilon_0 r^2}dr =- \dfrac{q_1q_2}{4\pi\epsilon_0 r_o} \label{7.2.3}$
Example $1$
When sodium chloride is melted, some of the ion pairs vaporize and form neutral NaCl molecules. How much energy would be released when one mole of Na+ and Cl– ions are brought together in this way?
Solution
The energy released will be the same as the work required to separate the ions to an infinite distance:

\begin{align*} E &= \dfrac{(2.31 \times 10^{–16}\; J\; pm) (+1) (–1)}{276\; pm} \\[4pt] &= –8.37 \times 10^{–19}\; J \end{align*}

Multiplying by the Avogadro number gives about $–504\; kJ\; mol^{–1}$ for one mole of ion pairs; the negative sign indicates that this energy is released.
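The same arithmetic, extended to a mole of ion pairs, can be written as a short Python sketch (the 276 pm separation and the constant are those used above):

```python
# Reproducing Example 1: Coulombic energy of an Na+ Cl- ion pair at 276 pm.
k = 2.31e-16          # J pm: the constant 1/(4 pi eps0) in electron-charge units
q1, q2 = +1, -1       # ionic charges
r0 = 276              # pm, Na-Cl separation used in the example

E_pair = k * q1 * q2 / r0            # J per ion pair, ~ -8.37e-19 J
E_mol = E_pair * 6.022e23 / 1000     # kJ per mole of ion pairs
print(f"E per ion pair: {E_pair:.3e} J")
print(f"E per mole:     {E_mol:.0f} kJ/mol")   # ~ -504 kJ/mol
```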
The ion-ion interaction is the simplest of the electrostatic interactions; other, higher-order interactions exist, as discussed below.
Dipoles
According to Coulomb's law (Equation $\ref{7.2.2}$), the electrostatic force between an ion and an uncharged particle ($q = 0$) should be zero. Bear in mind, however, that this formula assumes that the two particles are point charges having zero radii. A real particle such as an atom or a molecule occupies a certain volume of space. Even if the electric charges of the protons and electrons cancel out (as they will in any neutral atom or molecule), it is possible that the spatial distribution of the electron cloud representing the most loosely-bound [valence] electrons might be asymmetrical, giving rise to an electric dipole moment. There are two kinds of dipole moments:
• Permanent electric dipole moments can arise when bonding occurs between elements of differing electronegativities.
• Induced (temporary) dipole moments are created when an external electric field distorts the electron cloud of a neutral molecule.
An electric dipole refers to a separation of electric charge. An idealized electric dipole consists of two point charges of magnitude +q and –q separated by a distance r. Even though the overall system is electrically neutral, the charge separation gives rise to an electrostatic effect whose strength is expressed by the electric dipole moment given by
$\mu = q \times r \label{7.2.4}$
Dipole moments possess both magnitude and direction, and are thus vectorial quantities; they are conventionally represented by arrows whose heads are at the negative end.
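To get a feel for the units: molecular dipole moments are customarily expressed in debyes (1 D = 3.336 × 10–30 C m). The Python sketch below uses an illustrative separation of 100 pm (an assumed, round atomic-scale distance, not a measured value for any molecule) to show what a full elementary charge separation would amount to:

```python
# Dipole moment mu = q * r for a full elementary charge separated by 100 pm.
# The 100 pm separation is an illustrative assumption, not measured data.
e = 1.602e-19        # C, elementary charge
r = 100e-12          # m, an illustrative 100 pm separation
DEBYE = 3.336e-30    # C m per debye

mu = e * r           # dipole moment in C m
print(f"mu = {mu:.2e} C m = {mu / DEBYE:.1f} D")   # ~4.8 D
```

Measured molecular moments are much smaller than this benchmark (that of water, for example, is about 1.85 D), reflecting the fact that bond dipoles correspond to only partial charge separations.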
Permanent dipole moments
These are commonly referred to simply as "dipole moments". The most well-known molecule having a dipole moment is ordinary water. The charge imbalance arises because oxygen, with its nuclear charge of 8, pulls the electron cloud that comprises each O–H bond toward itself. These two "bond moments" add vectorially to produce the permanent dipole moment denoted by the red arrow. Note the use of the δ (Greek delta) symbol to denote the positive and negative ends of the dipoles.
When an electric dipole is subjected to an external electric field, it will tend to orient itself so as to minimize the potential energy; that is, its negative end will tend to point toward the higher (more positive) electric potential. In liquids, thermal motions will act to disrupt this ordering, so the overall effect depends on the temperature. In condensed phases the local fields due to nearby ions or dipoles in a substance play an important role in determining the physical properties of the substance, and it is in this context that dipolar interactions are of interest to us here. We will discuss each kind of interaction in order of decreasing strength.
Induced dipoles
Even if a molecule is electrically neutral and possesses no permanent dipole moment, it can still be affected by an external electric field. Because all atoms and molecules are composed of charged particles (nuclei and electrons), the electric field of a nearby ion will cause the centers of positive and negative charges to shift in opposite directions. This effect, which is called polarization, results in the creation of a temporary, or induced dipole moment. The induced dipole then interacts with the species that produced it, resulting in a net attraction between the two particles.
The larger an atom or ion, the more loosely held are its outer electrons, and the more readily will the electron cloud be distorted by an external field. A quantity known as the polarizability expresses the magnitude of the temporary dipole that can be induced in it by a nearby charge.
Ion-Dipole interactions
A dipole that is close to a positive or negative ion will orient itself so that the end whose partial charge is opposite to the ion charge will point toward the ion. This kind of interaction is very important in aqueous solutions of ionic substances; H2O is a highly polar molecule, so that in a solution of sodium chloride, for example, the Na+ ions will be enveloped by a shell of water molecules with their oxygen-ends pointing toward these ions, while H2O molecules surrounding the Cl– ions will have their hydrogen ends directed inward. As a consequence of ion-dipole interactions, all ionic species in aqueous solution are hydrated; this is what is denoted by the suffix in formulas such as K+(aq), etc.
The strength of ion-dipole attraction depends on the magnitude of the dipole moment and on the charge density of the ion. This latter quantity is just the charge of the ion divided by its volume. Owing to their smaller sizes, positive ions tend to have larger charge densities than negative ions, and they should be more strongly hydrated in aqueous solution. The hydrogen ion, being nothing more than a bare proton of extremely small volume, has the highest charge density of any ion; it is for this reason that it exists entirely in its hydrated form H3O+ in water.
Dipole-dipole interactions
As two dipoles approach each other, they will tend to orient themselves so that their oppositely-charged ends are adjacent. Two such arrangements are possible: the dipoles can be side by side but pointing in opposite directions, or they can be end to end. It can be shown that the end-to-end arrangement gives a lower potential energy.
Dipole-dipole attraction is weaker than ion-dipole attraction, but it can still have significant effects if the dipole moments are large. The most important example of dipole-dipole attraction is hydrogen bonding.
Ion-induced dipole Interactions
The most significant induced dipole effects result from nearby ions, particularly cations (positive ions). Nearby ions can distort the electron clouds even in polar molecules, thus temporarily changing their dipole moments. The larger ions (especially negative ones such as sulfate, SO42–, and perchlorate, ClO4–) are highly polarizable, and the dipole moments induced in them by a cation can play a dominant role in compound formation.
Dipole-induced dipole interactions
A permanent dipole can induce a temporary one in a species that is normally nonpolar, and thus produce a net attractive force between the two particles (Figure $9$). This attraction is usually rather weak, but in a few cases it can lead to the formation of loosely-bound compounds. This effect explains the otherwise surprising observation that a wide variety of neutral molecules such as hydrocarbons, and even some of the noble gas elements, form stable hydrate compounds with water.
Dispersion (London) Forces
The fact that noble gas elements and completely non-polar molecules such as H2 and N2 can be condensed to liquids or solids tells us that there must be yet another source of attraction between particles that does not depend on the existence of permanent dipole moments in either particle (Figure $10$). To understand the origin of this effect, it is necessary to realize that when we say a molecule is “nonpolar”, we really mean that the time-averaged dipole moment is zero. This is the same kind of averaging we do when we draw a picture of an orbital, which represents all the locations in space in which an electron can be found with a certain minimum probability. On a very short time scale, however, the electron must be increasingly localized; not even quantum mechanics allows it to be in more than one place at any given instant. As a consequence, there is no guarantee that the distribution of negative charge around the center of an atom will be perfectly symmetrical at every instant; every atom therefore has a weak, fluctuating dipole moment that is continually disappearing and reappearing in another direction.
Dispersion or London forces can be considered to be "spontaneous dipole - induced dipole" interactions.
Although these extremely short-lived fluctuations quickly average out to zero, they can still induce new dipoles in a neighboring atom or molecule, which helps sustain the original dipole and gives rise to a weak attractive force known as the dispersion or London force. Although dispersion forces are the weakest of all the intermolecular attractions, they are universally present. Their strength depends to a large measure on the number of electrons in a molecule. This can clearly be seen by looking at the noble gas elements in Table $2$, whose ability to condense to liquids and freeze to solids is entirely dependent on dispersion forces.
Table $2$: Intermolecular interactions in the noble gases

| element | He | Ne | Ar | Kr | Xe |
|---|---|---|---|---|---|
| atomic number | 2 | 10 | 18 | 36 | 54 |
| boiling point, K | 4.2 | 27 | 87 | 120 | 165 |
| critical temperature, K | 5 | 44 | 151 | 209.5 | 290 |
| heat of vaporization, kJ mol–1 | 0.08 | 1.76 | 6.51 | 9.08 | 12.6 |
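The trend is easy to tabulate. This minimal Python sketch simply lists the electron counts and boiling points from Table $2$ so the correlation can be seen at a glance:

```python
# Boiling point tracks the number of electrons in the noble gases,
# as expected if dispersion forces dominate. Data from Table 2 above.
noble_gases = {"He": (2, 4.2), "Ne": (10, 27), "Ar": (18, 87),
               "Kr": (36, 120), "Xe": (54, 165)}

for symbol, (Z, bp) in noble_gases.items():
    print(f"{symbol}: {Z:2d} electrons, bp = {bp:5.1f} K")
```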
It is important to note that dispersion forces are additive; if two elongated molecules find themselves side by side, dispersion force attractions will exist all along the regions where the two molecules are close. This can produce quite strong attractions between large polymeric molecules even in the absence of any stronger attractive forces.
"van der Waals" forces is a catch all Term
Although nonpolar molecules are by no means uncommon, many kinds of molecules possess permanent dipole moments, so liquids and solids composed of these species will be held together by a combination of dipole-dipole, dipole-induced dipole, and dispersion forces. These weaker forces (that is, those other than Coulombic attractions) are known collectively as van der Waals forces. These include attraction and repulsions between atoms, molecules, and surfaces, as well as other intermolecular forces. The term includes:
• force between two permanent dipoles and higher order moments like quadrupole
• force between a permanent dipole and a corresponding induced dipole
• force between two instantaneously induced dipoles (dispersion forces)
Table $3$ shows some estimates of the contributions of the various types of van der Waals forces that act between several different types of molecules. Note particularly how important dispersion forces are in all of these examples, and how this, in turn, depends on the polarizability.
Table $3$: Estimated contributions of van der Waals forces in several substances

| substance | boiling point °C | dipole moment (D) | polarizability | % dipole-induced dipole | % dipole-dipole | % dispersion |
|---|---|---|---|---|---|---|
| Ar | –186 | 0 | 1.6 | 0 | 0 | 100 |
| CO | –190 | 0.1 | 2.0 | 0 | 0 | 100 |
| HCl | –84 | 1.0 | 2.6 | 4.2 | 14.4 | 81.4 |
| HBr | –67 | 0.8 | 3.6 | 2.2 | 3.3 | 94.5 |
| HI | –35 | 0.4 | 5.4 | 0.4 | 0.1 | 99.5 |
| NH3 | –33 | 1.5 | 2.6 | 5.4 | 44.6 | 50.0 |
| H2O | 100 | 1.8 | 1.5 | 4.0 | 77.0 | 19.0 |
Learning Objectives
• Identify three special properties of water that make it unusual for a molecule of its size, and explain how these result from hydrogen bonding.
• Explain what is meant by hydrogen bonding and the molecular structural features that bring it about.
• Describe the "structure", such as it is, of liquid water.
• Sketch out structural examples of hydrogen bonding in three small molecules other than H2O.
• Describe the roles of hydrogen bonding in proteins and in DNA.
Most students of chemistry quickly learn to relate the structure of a molecule to its general properties. Thus we generally expect small molecules to form gases or liquids, and large ones to exist as solids under ordinary conditions. And then we come to H2O, and are shocked to find that many of the predictions are way off, and that water (and by implication, life itself) should not even exist on our planet! In this section we will learn why this tiny combination of three nuclei and ten electrons possesses special properties that make it unique among the more than 15 million chemical species we presently know.
In water, each hydrogen nucleus is covalently bound to the central oxygen atom by a pair of electrons that are shared between them. In H2O, only two of the six outer-shell electrons of oxygen are used for this purpose, leaving four electrons which are organized into two non-bonding pairs. The four electron pairs surrounding the oxygen tend to arrange themselves as far from each other as possible in order to minimize repulsions between these clouds of negative charge. This would ordinarily result in a tetrahedral geometry in which the angle between electron pairs (and therefore the H-O-H bond angle) is 109.5°. However, because the two non-bonding pairs remain closer to the oxygen atom, these exert a stronger repulsion against the two covalent bonding pairs, effectively pushing the two hydrogen atoms closer together. The result is a distorted tetrahedral arrangement in which the H—O—H angle is 104.5°.
Water's large dipole moment leads to hydrogen bonding
The H2O molecule is electrically neutral, but the positive and negative charges are not distributed uniformly. This is illustrated by the gradation in color in the schematic diagram here. The electronic (negative) charge is concentrated at the oxygen end of the molecule, owing partly to the nonbonding electrons (solid blue circles), and to oxygen's high nuclear charge which exerts stronger attractions on the electrons. This charge displacement constitutes an electric dipole, represented by the arrow at the bottom; you can think of this dipole as the electrical "image" of a water molecule.
Opposite charges attract, so it is not surprising that the negative end of one water molecule will tend to orient itself so as to be close to the positive end of another molecule that happens to be nearby. The strength of this dipole-dipole attraction is less than that of a normal chemical bond, and so it is completely overwhelmed by ordinary thermal motions in the gas phase. However, when the H2O molecules are crowded together in the liquid, these attractive forces exert a very noticeable effect, which we call (somewhat misleadingly) hydrogen bonding. And at temperatures low enough to turn off the disruptive effects of thermal motions, water freezes into ice in which the hydrogen bonds form a rigid and stable network.
Notice that the hydrogen bond (shown by the dashed green line) is somewhat longer than the covalent O—H bond. It is also much weaker, about 23 kJ mol⁻¹ compared to the O–H covalent bond strength of 492 kJ mol⁻¹.
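To get a feel for why ordinary thermal motions can disrupt hydrogen bonds but leave covalent bonds untouched, a crude Boltzmann-factor estimate is instructive. The short Python sketch below uses the two bond energies quoted above; treating the fraction of sufficiently energetic encounters as a single exponential is a rough simplification made here for illustration, not something developed in the text.

```python
import math

R = 8.314e-3  # gas constant, kJ mol^-1 K^-1

def fraction_exceeding(e_kj_per_mol, t_kelvin):
    """Boltzmann-factor estimate of the fraction of encounters with energy > E."""
    return math.exp(-e_kj_per_mol / (R * t_kelvin))

# Hydrogen bond (~23 kJ/mol) vs covalent O-H bond (~492 kJ/mol) at 298 K
print(f"H-bond:   {fraction_exceeding(23, 298):.1e}")   # ~1e-4: broken constantly
print(f"O-H bond: {fraction_exceeding(492, 298):.1e}")  # astronomically small
```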
One well-known compilation lists "forty-one anomalies of water", some of them rather esoteric.
Water has long been known to exhibit many physical properties that distinguish it from other small molecules of comparable mass. Although chemists refer to these as the "anomalous" properties of water, they are by no means mysterious; all are entirely predictable consequences of the way the size and nuclear charge of the oxygen atom conspire to distort the electronic charge clouds of the atoms of other elements when these are chemically bonded to the oxygen.
Boiling point
The most apparent peculiarity of water is its very high boiling point for such a light molecule. Liquid methane CH4 (molecular weight 16) boils at –161°C. As you can see from this diagram, extrapolation of the boiling points of the various Group 16 hydrogen compounds to H2O suggests that this substance should be a gas under normal conditions.
Surface Tension
Compared to most other liquids, water also has a high surface tension. Have you ever watched an insect walk across the surface of a pond? The water strider takes advantage of the fact that the water surface acts like an elastic film that resists deformation when a small weight is placed on it. (If you are careful, you can also "float" a small paper clip or steel staple on the surface of water in a cup.) This is all due to the surface tension of the water. A molecule within the bulk of a liquid experiences attractions to neighboring molecules in all directions, but since these average out to zero, there is no net force on the molecule. For a molecule that finds itself at the surface, the situation is quite different; it experiences forces only sideways and downward, and this is what creates the stretched-membrane effect.
The distinction between molecules located at the surface and those deep inside is especially prominent in H2O, owing to the strong hydrogen-bonding forces. The difference between the forces experienced by a molecule at the surface and one in the bulk liquid gives rise to the liquid's surface tension. This drawing highlights two H2O molecules, one at the surface, and the other in the bulk of the liquid. The surface molecule is attracted to its neighbors below and to either side, but there are no attractions pointing in the 180° solid angle above the surface. As a consequence, a molecule at the surface will tend to be drawn into the bulk of the liquid. But since there must always be some surface, the overall effect is to minimize the surface area of a liquid.
The geometric shape that has the smallest ratio of surface area to volume is the sphere, so very small quantities of liquids tend to form spherical drops. As the drops get bigger, their weight deforms them into the typical tear shape.
Ice floats on water
The most energetically favorable configuration of H2O molecules is one in which each molecule is hydrogen-bonded to four neighboring molecules. Owing to the thermal motions described above, this ideal is never achieved in the liquid, but when water freezes to ice, the molecules settle into exactly this kind of an arrangement in the ice crystal. This arrangement requires that the molecules be somewhat farther apart than would otherwise be the case; as a consequence, ice, in which hydrogen bonding is at its maximum, has a more open structure, and thus a lower density than water.
Here are three-dimensional views of a typical local structure of water (left) and ice (right.) Notice the greater openness of the ice structure which is necessary to ensure the strongest degree of hydrogen bonding in a uniform, extended crystal lattice. The more crowded and jumbled arrangement in liquid water can be sustained only by the greater amount of thermal energy available above the freezing point.
When ice melts, the more vigorous thermal motion disrupts much of the hydrogen-bonded structure, allowing the molecules to pack more closely. Water is thus one of the very few substances whose solid form has a lower density than the liquid at the freezing point. Localized clusters of hydrogen bonds still remain, however; these are continually breaking and reforming as the thermal motions jiggle and shove the individual molecules. As the temperature of the water is raised above freezing, the extent and lifetimes of these clusters diminish, so the density of the water increases.
At higher temperatures, another effect, common to all substances, begins to dominate: as the temperature increases, so does the amplitude of thermal motions. This more vigorous jostling causes the average distance between the molecules to increase, reducing the density of the liquid; this is ordinary thermal expansion.
Because the two competing effects work in opposite directions (the collapse of hydrogen-bonded clusters increases the density as the temperature rises above the melting point, while ordinary thermal expansion decreases it), there must be some temperature at which the density of water passes through a maximum. This temperature is 4°C; this is the temperature of the water you will find at the bottom of an ice-covered lake in which this most dense of all water has displaced the colder water and pushed it nearer to the surface.
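A minimal numerical illustration of this maximum is sketched below. The density values are approximate handbook figures supplied here for illustration; they are not taken from this text.

```python
# Approximate densities of liquid water (g/cm^3) vs temperature (°C);
# representative handbook values, quoted only to illustrate the maximum.
density = {0: 0.99984, 2: 0.99994, 4: 0.99997, 6: 0.99994,
           10: 0.99970, 20: 0.99821, 30: 0.99565}

t_max = max(density, key=density.get)
print(f"Density passes through a maximum near {t_max} °C "
      f"({density[t_max]} g/cm^3)")
```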
Structure of Liquid Water
The nature of liquid water and how the H2O molecules within it are organized and interact are questions that have attracted the interest of chemists for many years. There is probably no liquid that has received more intensive study, and there is now a huge literature on this subject. The following facts are well established:
• H2O molecules attract each other through the special type of dipole-dipole interaction known as hydrogen bonding
• a hydrogen-bonded cluster in which four H2Os are located at the corners of an imaginary tetrahedron is an especially favorable (low-potential energy) configuration, but...
• the molecules undergo rapid thermal motions on a time scale of picoseconds (10⁻¹² s), so the lifetime of any specific clustered configuration will be fleetingly brief.
A variety of techniques including infrared absorption, neutron scattering, and nuclear magnetic resonance have been used to probe the microscopic structure of water. The information garnered from these experiments and from theoretical calculations has led to the development of around twenty "models" that attempt to explain the structure and behavior of water. More recently, computer simulations of various kinds have been employed to explore how well these models are able to predict the observed physical properties of water.
This work has led to a gradual refinement of our views about the structure of liquid water, but it has not produced any definitive answer. There are several reasons for this, but the principal one is that the very concept of "structure" (and of water "clusters") depends on both the time frame and volume under consideration. Thus, questions of the following kinds are still open:
• How do you distinguish the members of a "cluster" from adjacent molecules that are not in that cluster?
• Since individual hydrogen bonds are continually breaking and re-forming on a picosecond time scale, do water clusters have any meaningful existence over longer periods of time? In other words, clusters are transient, whereas "structure" implies a molecular arrangement that is more enduring. Can we then legitimately use the term "clusters" in describing the structure of water?
• The possible locations of neighboring molecules around a given H2O are limited by energetic and geometric considerations, thus giving rise to a certain amount of "structure" within any small volume element. It is not clear, however, to what extent these structures interact as the size of the volume element is enlarged. And as mentioned above, to what extent are these structures maintained for periods longer than a few picoseconds?
In the 1950's it was assumed that liquid water consists of a mixture of hydrogen-bonded clusters (H2O)n in which n can have a variety of values, but little evidence for the existence of such aggregates was ever found. The present view, supported by computer-modeling and spectroscopy, is that on a very short time scale, water is more like a "gel" consisting of a single, huge hydrogen-bonded cluster. On a 10⁻¹²–10⁻⁹ s time scale, rotations and other thermal motions cause individual hydrogen bonds to break and re-form in new configurations, inducing ever-changing local discontinuities whose extent and influence depend on the temperature and pressure.
Ice
Ice, like all solids, has a well-defined structure; each water molecule is surrounded by four neighboring H2Os. Two of these are hydrogen-bonded to the oxygen atom on the central H2O molecule, and each of the two hydrogen atoms is similarly bonded to another neighboring H2O.
Ice forms crystals having a hexagonal lattice structure, which in their full development would tend to form hexagonal prisms very similar to those sometimes seen in quartz. This does occasionally happen, and anyone who has done much winter mountaineering has likely seen needle-shaped prisms of ice crystals floating in the air. Under most conditions, however, the snowflake crystals we see are flattened into the beautiful fractal-like hexagonal structures that are commonly observed.
Snowflakes
The H2O molecules that make up the top and bottom plane faces of the prism are packed very closely and linked (through hydrogen bonding) to the molecules inside. In contrast to this, the molecules that make up the sides of the prism, and especially those at the hexagonal corners, are much more exposed, so that atmospheric H2O molecules that come into contact with most places on the crystal surface attach very loosely and migrate along it until they are able to form hydrogen-bonded attachments to these corners, thus becoming part of the solid and extending the structure along these six directions. This process perpetuates itself as the new extensions themselves acquire a hexagonal structure.
Why is ice slippery?
At temperatures as low as 200 K, the surface of ice is highly disordered and water-like. As the temperature approaches the freezing point, this region of disorder extends farther down from the surface and acts as a lubricant.
The illustration is taken from an article in the April 7, 2008 issue of C&EN honoring the physical chemist Gabor Somorjai who pioneered modern methods of studying surfaces.
"Pure" water
To a chemist, the term "pure" has meaning only in the context of a particular application or process. The distilled or de-ionized water we use in the laboratory contains dissolved atmospheric gases and occasionally some silica, but their small amounts and relative inertness make these impurities insignificant for most purposes. When water of the highest obtainable purity is required for certain types of exacting measurements, it is commonly filtered, de-ionized, and triple-vacuum distilled. But even this "chemically pure" water is a mixture of isotopic species: there are two stable isotopes of both hydrogen (¹H and ²H, the latter often denoted by D) and oxygen (¹⁶O and ¹⁸O) which give rise to combinations such as H2¹⁸O and HD¹⁶O, all of which are readily identifiable in the infrared spectra of water vapor. And to top this off, the two hydrogen atoms in water contain protons whose magnetic moments can be parallel or antiparallel, giving rise to ortho- and para-water, respectively. The two forms are normally present in an o/p ratio of 3:1.
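If you want to enumerate these isotopic combinations yourself, a few lines of Python will do it. The sketch below considers only the isotopes named above, ignoring the rarer ¹⁷O and radioactive tritium.

```python
from itertools import combinations_with_replacement, product

H_isotopes = ["1H", "2H(D)"]   # the two stable hydrogen isotopes
O_isotopes = ["16O", "18O"]    # the two oxygen isotopes mentioned above

# A water molecule carries two (unordered) hydrogens and one oxygen.
for (h1, h2), o in product(combinations_with_replacement(H_isotopes, 2),
                           O_isotopes):
    print(f"{h1}-{o}-{h2}")
# 3 hydrogen pairings x 2 oxygens = 6 distinct isotopologues
```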
The amount of the rare isotopes of oxygen and hydrogen in water varies enough from place to place that it is now possible to determine the age and source of a particular water sample with some precision. These differences are reflected in the H and O isotopic profiles of organisms. Thus the isotopic analysis of human hair can be a useful tool for crime investigations and anthropology research.
More about hydrogen bonding
Hydrogen bonds form when the electron cloud of a hydrogen atom that is attached to one of the more electronegative atoms is distorted by that atom, leaving a partial positive charge on the hydrogen. Owing to the very small size of the hydrogen atom, the density of this partial charge is large enough to allow it to interact with the lone-pair electrons on a nearby electronegative atom. Although hydrogen bonding is commonly described as a form of dipole-dipole attraction, it is now clear that it involves a certain measure of electron-sharing (between the external non-bonding electrons and the hydrogen) as well, so these bonds possess some covalent character.
Hydrogen bonds are longer than ordinary covalent bonds, and they are also weaker. The experimental evidence for hydrogen bonding usually comes from X-ray diffraction studies on solids that reveal shorter-than-normal distances between hydrogen and other atoms.
Hydrogen bonding in small molecules
The following examples show something of the wide scope of hydrogen bonding in molecules.
Ammonia (mp –78, bp –33°C) is hydrogen-bonded in the liquid and solid states.
Hydrogen bonding is responsible for ammonia's remarkably high solubility in water.
Many organic (carboxylic) acids form hydrogen-bonded dimers in the solid state.
Here the hydrogen bond acceptor is the π electron cloud of a benzene ring. This type of interaction is important in maintaining the shape of proteins.
Hydrogen fluoride (mp –84, bp 19.5°C) is another common substance that is strongly hydrogen-bonded in its condensed phases.
The bifluoride ion (for which no proper Lewis structure can be written) can be regarded as a complex ion held together by the strongest hydrogen bond known: about 155 kJ mol⁻¹.
"As slow as molasses in the winter!" Multiple hydroxyl groups provide lots of opportunities for hydrogen bonding and lead to the high viscosities of substances such as glycerine and sugar syrups.
Hydrogen bonding in biopolymers
Hydrogen bonding plays an essential role in natural polymers of biological origin in several ways:
• Hydrogen bonding between adjacent polymer chains (intermolecular bonding);
• Hydrogen bonding between different parts of the same chain (intramolecular bonding);
• Hydrogen bonding of water molecules to –OH groups on the polymer chain ("bound water") that helps maintain the shape of the polymer.
The examples that follow are representative of several types of biopolymers.
Cellulose
Cellulose is a linear polymer of glucose (see above), containing 300 to over 10,000 units, depending on the source. As the principal structural component of plants (along with lignin in trees), cellulose is the most abundant organic substance on the earth. The role of hydrogen bonding is to cross-link individual molecules to build up sheets as shown here. These sheets then stack up in a staggered array held together by van der Waals forces. Further hydrogen-bonding of adjacent stacks bundles them together into a stronger and more rigid structure.
Proteins
These polymers made from amino acids R—CH(NH2)COOH depend on intramolecular hydrogen bonding to maintain their shape (secondary and tertiary structure) which is essential for their important function as biological catalysts (enzymes). Hydrogen-bonded water molecules embedded in the protein are also important for their structural integrity.
The principal hydrogen bonding in proteins is between the -N—H groups of the "amino" parts with the -C=O groups of the "acid" parts. These interactions give rise to the two major types of the secondary structure which refers to the arrangement of the amino acid polymer chain:
[Figures: the alpha helix and the beta-sheet]
Although carbon is not usually considered particularly electronegative, C—H----X hydrogen bonds are also now known to be significant in proteins.
DNA (Deoxyribonucleic acid)
Who you are is totally dependent on hydrogen bonds! DNA, as you probably know, is the most famous of the biopolymers owing to its central role in defining the structure and function of all living organisms. Each strand of DNA is built from a sequence of four different nucleotide monomers consisting of a deoxyribose sugar, phosphate groups, and a nitrogenous base conventionally identified by the letters A, T, C, and G. DNA itself consists of two of these polynucleotide chains that are coiled around a common axis in a configuration something like the protein alpha helix depicted above. The sugar-and-phosphate backbones are on the outside so that the nucleotide bases are on the inside and facing each other. The two strands are held together by hydrogen bonds that link a nitrogen atom of a nucleotide in one chain with a nitrogen or oxygen on the nucleotide that is across from it on the other chain.
Efficient hydrogen bonding within this configuration can only occur between the pairs A-T and C-G, so these two complementary pairs constitute the "alphabet" that encodes the genetic information that gets transcribed whenever new protein molecules are built. Water molecules, hydrogen-bonded to the outer parts of the DNA helix, help stabilize it.
Learning Objectives
• Liquids are both fluids and condensed phases. Explain what this tells us about liquids, and what other states of matter fit into each of these categories.
• Define viscosity, and comment on the molecular properties that correlate with viscosity.
• Define surface tension and explain its cause.
• State the major factors that determine the extent to which a liquid will wet a solid surface.
• Explain what a surfactant is, and how it reduces the surface tension of water and aids in cleaning.
• Explain the origins of capillary rise and indicate the major factors that affect it.
• Describe the structure of a soap bubble, and comment on the role of the "soap" molecules in stabilizing it.
• Comment on the applicability of the term "structure" when describing a pure liquid phase.
The molecular units of a liquid, like those of solids, are in direct contact, but never for any length of time and in the same locations. Whereas the molecules or ions of a solid maintain the same average positions, those of liquids are continually jumping and sliding to new ones, giving liquids something of the mobility of gases. From the standpoint of chemistry, this represents the best of two worlds; rapid chemical change requires intimate contact between the agents undergoing reaction, but these agents, along with the reaction products, must be free to move away to allow new contacts and further reaction to take place. This is why so much of what we do with chemistry takes place in the liquid phase.
Liquids occupy a rather peculiar place in the trinity of solid, liquid and gas. A liquid is the preferred state of a substance at temperatures intermediate between the realms of the solid and the gas. However, if you look at the melting and boiling points of a variety of substances (Figure $1$), you will notice that the temperature range within which many liquids can exist tends to be rather small. In this, and in a number of other ways, the liquid state appears to be somewhat tenuous and insecure, as if it had no clear right to exist at all, and only does so as an oversight of Nature. Certainly the liquid state is the most complicated of the three states of matter to analyze and to understand. But just as people whose personalities are more complicated and enigmatic are often the most interesting ones to know, it is these same features that make the liquid state of matter the most fascinating to study.
Anyone can usually tell if a substance is a liquid simply by looking at it. What special physical properties do liquids possess that make them so easy to recognize? One obvious property is their mobility, which refers to their ability to move around, to change their shape to conform to that of a container, to flow in response to a pressure gradient, and to be displaced by other objects. But these properties are shared by gases, the other member of the two fluid states of matter. The real giveaway is that a liquid occupies a fixed volume, with the consequence that a liquid possesses a definite surface. Gases, of course, do not; the volume and shape of a gas are simply those of the container in which it is confined. The higher density of a liquid also plays a role here; it is only because of the large density difference between a liquid and the space above it that we can see the surface at all. (What we are really seeing are the effects of reflection and refraction that occur when light passes across the boundary between two phases differing in density, or more precisely, in their refractive indexes.)
Viscosity: Resistance to Flow
The term viscosity is a measure of resistance to flow. It can be measured by observing the time required for a given volume of liquid to flow through the narrow part of a viscometer tube. The viscosity of a substance is related to the strength of the forces acting between its molecular units. In the case of water, these forces are primarily due to hydrogen bonding. Liquids such as syrups and honey are much more viscous because the sugars they contain are studded with hydroxyl groups (–OH) which can form multiple hydrogen bonds with water and with each other, producing a sticky disordered network.
Table $1$: Specific viscosity (i.e., relative to water) of some liquids at 20°C.
substance / viscosity (relative to water)
water H(OH) 1.00
diethyl ether (CH3-CH2)2O 0.23
benzene C6H6 0.65
glycerin C3H5(OH)3 280
mercury 1.5
motor oil, SAE30 200
honey ~10,000
molasses ~5000
pancake syrup ~3000
Even in the absence of hydrogen bonding, dispersion forces are universally present (as in mercury). Because these forces are additive, they can be very significant in long carbon-chain molecules such as those found in oils used in cooking and for lubrication. Most "straight-chain" molecules are really bent into complex shapes, and dispersion forces tend to preserve their spaghetti-like entanglements with their neighbors.
The temperature dependence of the viscosity of liquids is well known to anyone who has tried to pour cold syrup on a pancake. Because the forces that give rise to viscosity are weak, they are easily overcome by thermal motions, so it is no surprise that viscosity decreases as the temperature rises.
Table $2$: Viscosity of Water as a Function of Temperature
T/°C 0 10 20 40 60 80 100
viscosity/cP 1.8 1.3 1.0 0.65 0.47 0.36 0.28
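The roughly exponential decrease of viscosity with temperature can be captured by an Andrade-type fit, ln η = ln A + B/T. The Python sketch below fits the table above by least squares and interpolates; the Andrade functional form is a standard empirical assumption brought in for illustration, not something derived in this text.

```python
import math

# Viscosity of water (cP) vs temperature (°C), from Table 2 above
data = {0: 1.8, 10: 1.3, 20: 1.0, 40: 0.65, 60: 0.47, 80: 0.36, 100: 0.28}

# Andrade-type fit: ln(eta) = ln(A) + B/T, with T in kelvin.
# A simple least-squares line through the transformed points:
xs = [1.0 / (t + 273.15) for t in data]           # 1/T
ys = [math.log(eta) for eta in data.values()]     # ln(eta)
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
B = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
     / sum((x - xbar) ** 2 for x in xs))
lnA = ybar - B * xbar

def eta(t_celsius):
    """Estimated viscosity of water, in centipoise."""
    return math.exp(lnA + B / (t_celsius + 273.15))

print(f"eta(30 °C) ≈ {eta(30):.2f} cP")  # between the 20° and 40° table entries
```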
Motor Oil
Automotive lubricating oils can be too viscous at low temperatures (making it harder for your car to operate on a cold day), while losing so much viscosity at engine operating temperatures that their lubricating properties become impaired. These engine oils are sold in a wide range of viscosities; the higher-viscosity oils are used in warmer weather and the lower-viscosity oils in colder weather. The idea is to achieve a fairly constant viscosity that is ideal for the particular application. By blending in certain ingredients, lubricant manufacturers are able to formulate “multigrade” oils whose viscosities are less sensitive to temperatures, thus making a single product useful over a much wider temperature range.
The next time you pour a viscous liquid over a surface, notice how different parts of the liquid move at different rates and sometimes in different directions. To flow freely, the particles making up a fluid must be able to move independently. Intermolecular attractive forces work against this, making it difficult for one molecule to pull away from its neighbors and force its way in between new neighbors.
The pressure drop that is observed when a liquid flows through a pipe is a direct consequence of viscosity. Those molecules that happen to find themselves near the inner walls of a tube tend to spend much of their time attached to the walls by intermolecular forces, and thus move forward very slowly. Movement of the next layer of molecules is impeded as they slip and slide over the slow-movers; this process continues across successive layers of molecules as we move toward the center of the tube, where the velocity is greatest. This effect is called viscous drag, and is directly responsible for the pressure drop that can be quite noticeable when you are taking a shower bath and someone else in the house suddenly turns on the water in the kitchen.
Liquids and gases are both fluids and exhibit resistance to flow through a confined space. However, it is interesting (and not often appreciated) that their viscosities have entirely different origins, and that they vary with temperature in opposite ways. Why should the viscosity of a gas increase with temperature?
Surface Tension
A molecule within the bulk of a liquid experiences attractions to neighboring molecules in all directions, but since these average out to zero, there is no net force on the molecule because it is, on the average, as energetically comfortable in one location within the liquid as in another. Liquids ordinarily do have surfaces, however, and a molecule that finds itself in such a location is attracted to its neighbors below and to either side, but there is no attraction operating in the 180° solid angle above the surface. As a consequence, a molecule at the surface will tend to be drawn into the bulk of the liquid. Conversely, work must be done in order to move a molecule within a liquid to its surface.
Clearly there must always be some molecules at the surface, but the smaller the surface area, the lower the potential energy. Thus intermolecular attractive forces act to minimize the surface area of a liquid. The geometric shape that has the smallest ratio of surface area to volume is the sphere, so very small quantities of liquids tend to form spherical drops. As the drops get bigger, their weight deforms them into the typical tear shape.
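The claim about the sphere is easy to check numerically; the short sketch below compares the surface areas of a sphere and a cube enclosing the same volume.

```python
import math

V = 1.0  # an arbitrary fixed volume, cm^3

r = (3 * V / (4 * math.pi)) ** (1 / 3)   # radius of a sphere of volume V
sphere_area = 4 * math.pi * r ** 2
cube_area = 6 * V ** (2 / 3)             # a cube of the same volume

print(f"sphere: {sphere_area:.2f} cm^2   cube: {cube_area:.2f} cm^2")
# ~4.84 cm^2 vs 6.00 cm^2: the sphere encloses the volume with less surface.
```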
Think of a bubble as a hollow drop. Surface tension acts to minimize the surface, and thus the radius of the spherical shell of liquid, but this is opposed by the pressure of vapor trapped within the bubble.
The imbalance of forces near the upper surface of a liquid has the effect of an elastic film stretched across the surface. You have probably seen water striders and other insects take advantage of this when they walk across a pond. Similarly, you can carefully "float" a light object such as a steel paperclip on the surface of water in a cup.
Surface tension is defined as the amount of work that must be done in order to create unit area of surface. The SI units are J m⁻² (or N m⁻¹), but values are more commonly expressed in mN m⁻¹ or in cgs units of dyn cm⁻¹ or erg cm⁻². Table $3$ compares the surface tensions of several liquids at room temperature. Note especially that:
• hydrocarbons and non-polar liquids such as ether have rather low values
• one of the main functions of soaps and other surfactants is to reduce the surface tension of water
• mercury has the highest surface tension of any liquid at room temperature. It is so high that mercury does not flow in the ordinary way, but breaks into small droplets that roll independently.
Table $3$: Surface Tensions of select liquids
substance
surface tension (dyne/cm)
water H(OH) 72.7
diethyl ether (CH3-CH2)2O 17.0
benzene C6H6 28.9
glycerin C3H5(OH)3 63
mercury (15°C) 487
n-octane 21.8
sodium chloride solution (6M in water) 82.5
sucrose solution (85% in water) 76.4
sodium oleate (soap) solution in water 25
Surface tension and viscosity are not directly related, as you can verify by noting the disparate values of these two quantities for mercury. Viscosity depends on intermolecular forces within the liquid, whereas surface tension arises from the difference in the magnitudes of these forces within the liquid and at the surface. Surface tension is also affected by the electrostatic charge of a body. This is most dramatically illustrated by the famous "mercury beating heart" demonstration.
Surface tension always decreases with temperature as thermal motions reduce the effect of intermolecular attractions (Table $4$). This is one reason why washing with warm water is more effective; the lower surface tension allows water to more readily penetrate a fabric.
Table $4$: Surface Tension of water as a Function of Temperature
°C dynes/cm
0 75.9
20 72.7
50 67.9
100 58.9
"Tears" in a wine glass: effects of a surface tension gradient
Why do "tears" form inside a wine glass? You have undoubtedly noticed this; pour some wine into a glass, and after a few minutes, droplets of clear liquid can be seen forming on the inside walls of the glass about a centimeter above the level of the wine. This happens even when the wine and the glass are at room temperature, so it has nothing to do with condensation.
The explanation involves Raoult's law, hydrogen bonding, adsorption, and surface tension, so this phenomenon makes a good review of much you have learned about liquids and solutions. The tendency of a surface tension gradient to draw water into the region of higher surface tension is known as the Marangoni effect.
First, remember that both water and alcohol are hydrogen-bonding liquids; as such, they are both strongly attracted to the oxygen atoms and -OH groups on the surface of the glass. This causes the liquid film to creep up the walls of the glass. Alcohol, the more volatile of the two liquids, vaporizes more readily, causing the upper (and thinnest) part of the liquid film to become enriched in water. Because of its stronger hydrogen bonding, water has a larger surface tension than alcohol, so as the alcohol evaporates, the surface tension of the upper part of the liquid film increases. This causes that part of the film to draw up more liquid and assume a spherical shape, which gets distorted by gravity into a "tear" that eventually grows so large that gravity wins out over adsorption, and the drop falls back into the liquid, soon to be replaced by another.
Interfacial effects in liquids
The surface tension discussed immediately above is an attribute of a liquid in contact with a gas (ordinarily the air or vapor) or a vacuum. But if you think about it, the molecules in the part of a liquid that is in contact with any other phase (liquid or solid) will experience a different balance of forces than the molecules within the bulk of the liquid. Thus surface tension is a special case of the more general interfacial tension which is defined by the work associated with moving a molecule from within the bulk liquid to the interface with any other phase.
Wetting
Take a plastic mixing bowl from your kitchen, and splash some water around in it. You will probably observe that the water does not cover the inside surface uniformly, but remains dispersed into drops. The same effect is seen on a dirty windshield; running the wipers simply breaks hundreds of drops into thousands. By contrast, water poured over a clean glass surface will wet it, leaving a uniform film.
When a molecule of a liquid is in contact with another phase, its behavior depends on the relative attractive strengths of its neighbors on the two sides of the phase boundary. If the molecule is more strongly attracted to its own kind, then interfacial tension will act to minimize the area of contact by increasing the curvature of the surface. This is what happens at the interface between water and a hydrophobic surface such as a plastic mixing bowl or a windshield coated with oily material. A liquid will wet a surface if the angle at which it makes contact with the surface is less than 90°. The value of this contact angle can be predicted from the properties of the liquid and solid separately.
A clean glass surface, by contrast, has –OH groups sticking out of it which readily attach to water molecules through hydrogen bonding; the lowest potential energy now occurs when the contact area between the glass and water is maximized. This causes the water to spread out evenly over the surface, or to wet it.
Surfactants
The surface tension of water can be reduced to about one-third of its normal value by adding some soap or synthetic detergent. These substances, known collectively as surfactants, are generally hydrocarbon molecules having an ionic group on one end. The ionic group, being highly polar, is strongly attracted to water molecules; we say it is hydrophilic. The hydrocarbon (hydrophobic) portion is just the opposite; inserting it into water would break up the local hydrogen-bonding forces and is therefore energetically unfavorable. What happens, then, is that the surfactant molecules migrate to the surface with their hydrophobic ends sticking out, effectively creating a new surface. Because hydrocarbons interact only through very weak dispersion forces, this new surface has a greatly reduced surface tension.
Washing
How do soaps and detergents help get things clean? There are two main mechanisms. First, by reducing water's surface tension, the water can more readily penetrate fabrics (see the illustration under "Water repellency" below.) Secondly, much of what we call "dirt" consists of non-water soluble oils and greasy materials which the hydrophobic ends of surfactant molecules can penetrate. When they do so in sufficient numbers and with their polar ends sticking out, the resulting aggregate can hydrogen-bond to water and becomes "solubilized".
Washing is usually more effective in warm water; higher temperatures reduce the surface tension of the water and make it easier for the surfactant molecules to penetrate the material to be removed.
Can magnets reduce the surface tension of water?
The answer is no, but claims that they can are widely circulated in promotions of dubious products such as "magnetic laundry disks" which are supposed to reduce the need for detergents.
Water repellency
In Gore-Tex, one of the more successful water-proof fabrics, the fibers are made non-wettable by coating them with a Teflon-like fluoropolymer.
Water is quite strongly attracted to many natural fibers such as cotton and linen through hydrogen-bonding to their cellulosic hydroxyl groups. A droplet that falls on such a material will flatten out and be drawn through the fabric. One way to prevent this is to coat the fibers with a polymeric material that is not readily wetted. The water tends to curve away from the fibers so as to minimize the area of contact, so the droplets are supported on the gridwork of the fabric but tend not to fall through.
Capillary rise
If the walls of a narrow tube can be efficiently wetted by a liquid, then the liquid will be drawn up into the tube by capillary action. This effect is only noticeable in narrow containers (such as burettes) and especially in small-diameter capillary tubes. The smaller the diameter of the tube, the higher will be the capillary rise. A clean glass surface is highly attractive to most molecules, so most liquids display a concave meniscus in a glass tube.
To help you understand capillary rise, the above diagram shows a glass tube of small cross-section inserted into an open container of water. The attraction of the water to the inner wall of the tube pulls the edges of the water up, creating a curved meniscus whose surface area is smaller than the cross-section area of the tube. The surface tension of the water acts against this enlargement of its surface by attempting to reduce the curvature, stretching the surface into a flatter shape by pulling the liquid farther up into the tube. This process continues until the weight of the liquid column becomes equal to the surface tension force, and the system reaches mechanical equilibrium.
Capillary rise results from a combination of two effects: the tendency of the liquid to wet (bind to) the surface of the tube (measured by the value of the contact angle), and the action of the liquid's surface tension to minimize its surface area.
The height of the capillary rise is given by the following relation (which you need not memorize!):

$h = \dfrac{2\gamma \cos \theta}{\rho g r}$

in which
h = elevation of the liquid (m)
γ = surface tension (N/m)
θ = contact angle (radians)
ρ = density of liquid (kg/m3)
g = acceleration of gravity (m s⁻²)
r = radius of tube (m)
The contact angle between water and ordinary soda-lime glass is essentially zero; since the cosine of 0 radians is unity, its capillary rise is especially noticeable. In general, water can be drawn very effectively into narrow openings such as the channels between fibers in a fabric and into porous materials such as soils.
Note that if θ is greater than 90° (π/2 radians), the capillary "rise" will be negative — meaning that the molecules of the liquid are more strongly attracted to each other than to the surface. This is readily seen with mercury in a glass container, in which the meniscus is upwardly convex instead of concave.
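Plugging numbers into the formula above shows how strong the effect is in narrow tubes. In the Python sketch below, the surface tensions come from Table $3$; the 0.25 mm tube radius and the roughly 140° contact angle for mercury on glass are illustrative assumptions.

```python
import math

def capillary_rise(gamma, theta_deg, rho, r, g=9.81):
    """Jurin's law: h = 2*gamma*cos(theta) / (rho*g*r), all in SI units."""
    return 2 * gamma * math.cos(math.radians(theta_deg)) / (rho * g * r)

# Water in a clean glass tube of 0.25 mm radius (contact angle ~0)
h_w = capillary_rise(gamma=0.0727, theta_deg=0, rho=1000.0, r=0.25e-3)
print(f"water:   h ≈ {h_w * 100:+.1f} cm")   # about +5.9 cm

# Mercury in the same tube (contact angle ~140°, so cos(theta) < 0)
h_hg = capillary_rise(gamma=0.487, theta_deg=140, rho=13500.0, r=0.25e-3)
print(f"mercury: h ≈ {h_hg * 100:+.1f} cm")  # negative: capillary depression
```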
Capillary action and trees
Capillary rise is the principal mechanism by which water is able to reach the highest parts of trees. Water strongly bonds to the narrow (25 μm) cellulose channels in the xylem. (Osmotic pressure and "suction" produced by loss of water vapor through the leaves also contribute to this effect, and are the main drivers of water flow in smaller plants.)
Bubbles
Bubbles can be thought of as "negative drops" — spherical spaces within a liquid containing a gas, often just the vapor of the liquid. Bubbles within pure liquids such as water (which we see when water boils) are inherently unstable because the liquid's surface tension causes them to collapse. But in the presence of a surfactant, bubbles can be stabilized and given an independent if evanescent existence.
The pressure of the gas inside a bubble Pin must be sufficient to oppose the pressure outside of it (Pout, the atmospheric pressure plus the hydrostatic pressure of any other fluid in which the bubble is immersed). But the force caused by the surface tension γ of the liquid boundary also tends to collapse the bubble, so Pin must be greater than Pout by the amount of this force, which for a thin two-surface film such as a soap bubble is given by 4γ/r (for a bubble within the body of a liquid, which has only a single surface, the factor is 2 rather than 4):
$P_{in}=P_{out} + \dfrac{4\gamma}{r}$
The most important feature of this relationship (known as Laplace's law) is that the pressure required to maintain the bubble is inversely proportional to its radius. This means that the smallest bubbles have the greatest internal gas pressures! This might seem counterintuitive, but if you are an experienced soap-bubble blower, or have blown up a rubber balloon (in which the elastic of the rubber has an effect similar to the surface tension in a liquid), you will have noticed that you need to puff harder to begin the expansion.
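A quick calculation with this relation makes the inverse dependence on radius concrete. The sketch below uses the soap-solution surface tension from Table $3$; the radii are arbitrary.

```python
def excess_pressure(gamma, r):
    """Laplace's law for a two-surface film (soap bubble): dP = 4*gamma/r."""
    return 4 * gamma / r

gamma_soap = 0.025  # N/m, roughly the soap-solution value in Table 3
for r_mm in (0.1, 1.0, 10.0):
    dp = excess_pressure(gamma_soap, r_mm * 1e-3)
    print(f"r = {r_mm:5.1f} mm  ->  dP ≈ {dp:7.1f} Pa")
# The smallest bubble needs the largest internal pressure.
```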
Soap bubbles
All of us at one time or another have enjoyed the fascination of creating soap bubbles and admiring their intense and varied colors as they drift around in the air, seemingly aloof from the constraints that govern the behavior of ordinary objects — but only for a while! Their life eventually comes to an abrupt end as they fall to the ground or pop in mid-flight.
The walls of these bubbles consist of a thin layer of water molecules sandwiched between two layers of surfactant molecules. Their spherical shape is of course the result of water's surface tension. Although the surfactant (soap) initially reduces the surface tension, expansion of the bubble spreads the water into a thinner layer and spreads the surfactant molecules over a wider area, decreasing their concentration. This, in turn, allows the water molecules to interact more strongly, increasing the film's surface tension and stabilizing the bubble as it expands.
The bright colors we see in bubbles arise from interference between light waves reflected from the inner and outer surfaces of the film, indicating that the thickness of the water layer is comparable to the wavelengths of visible light (around 400–700 nm).
Once the bubble is released, it can endure until it strikes a solid surface or collapses owing to loss of the water layer by evaporation. The latter process can be slowed by adding a bit of glycerine to the liquid. A variety of recipes and commercial "bubble-making solutions" are available; some of the latter employ special liquid polymers which slow evaporation and greatly extend the bubble lifetimes. Bubbles blown at very low temperatures can be frozen, but these eventually collapse as the gas diffuses out.
Bubbles, surface tension, and breathing
The sites of gas exchange with the blood in mammalian lungs are tiny sacs known as alveoli. In humans there are about 150 million of these, having a total surface area about the size of a tennis court. Each alveolus is about 0.25 mm in diameter, and its inner surface is coated with a film of water, whose high surface tension not only resists inflation, but would ordinarily cause the thin-walled alveoli to collapse. In order to counteract this effect, special cells in the alveolar wall secrete a phospholipid pulmonary surfactant that reduces the surface tension of the water film to about 35% of its normal value. But there is another problem: the alveoli can be regarded physically as a huge collection of interconnected bubbles of varying sizes. As noted above, the surface tension of a surfactant-stabilized bubble increases with its size. So by making it easier for the smaller alveoli to expand while inhibiting the expansion of the larger ones, the surfactant helps to equalize the volume changes of all the alveoli as one inhales and exhales.
Pulmonary surfactant is produced only in the later stages of fetal development, so premature infants often do not have enough and are subject to respiratory distress syndrome which can be fatal.
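To see how much the surfactant helps, we can estimate the excess pressure needed to keep an alveolus inflated, treating it as a bubble lined by a single liquid surface (so the factor is 2γ/r rather than 4γ/r). The 0.25 mm diameter and the 35% reduction come from the text; modeling an alveolus as a simple spherical bubble is, of course, a rough assumption.

```python
gamma_water = 0.0727             # N/m, approximate value for water
gamma_surf = 0.35 * gamma_water  # "about 35% of its normal value"
r = 0.25e-3 / 2                  # alveolar radius (m), from 0.25 mm diameter

for label, g in (("pure water film", gamma_water),
                 ("with surfactant", gamma_surf)):
    print(f"{label}: dP = 2*gamma/r ≈ {2 * g / r:5.0f} Pa")
# The surfactant cuts the inflation pressure by about two-thirds.
```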
Structure of liquids
You can think of a simple liquid such as argon or methane as a collection of loosely-packed marbles that can assume various shapes. Although the overall arrangement of the individual molecular units is entirely random, there is a certain amount of short-range order: the presence of one molecule at a given spot means that the neighboring molecules must be at least as far away as the sum of the two radii, and this in turn affects the possible locations of more distant concentric shells of molecules.
An important consequence of the disordered arrangement of molecules in a liquid is the presence of void spaces. These, together with the increased kinetic energy of colliding molecules which helps push them apart, are responsible for the approximately 15-percent decrease in density that is observed when solids based on simple spherical molecules such as Ne and Hg melt into liquids. These void spaces are believed to be the key to the flow properties of liquids; the more “holes” there are in the liquid, the more easily the molecules can slip and slide over one another.
As the temperature rises, thermal motions of the molecules increase and the local structure begins to deteriorate, as shown in the plots below.
This plot shows the relative probability of finding a mercury atom at a given distance from another atom located at distance 0. You can see that as thermal motions increase, the probabilities even out at greater distances. It is very difficult to design experiments that yield the kind of information required to define the microscopic arrangement of molecules in the liquid state.
Many of our current ideas on the subject come from computer simulations based on hypothetical models. In a typical experiment, the paths of about 1000 molecules in a volume of space are calculated. The molecules are initially given random kinetic energies whose distribution is consistent with the Boltzmann distribution for a given temperature. The trajectories of all the molecules are followed as they change with time due to collisions and other interactions; these interactions must be calculated according to an assumed potential energy-vs.-distance function that is part of the particular model being investigated.
These computer experiments suggest that whatever structure simple liquids do possess is determined mainly by the repulsive forces between the molecules; the attractive forces act in a rather nondirectional, general way to hold the liquid together. It is also found that if spherical molecules are packed together as closely as geometry allows (in which each molecule would be in contact with twelve nearest neighbors), the collection will have a long-range order characteristic of a solid until the density is decreased by about ten percent, at which point the molecules can slide around and move past one another, thus preserving only short-range order. In recent years, experimental studies based on ultra-short laser flashes have revealed that local structures in liquids have extremely short lifetimes, of the order of picoseconds to nanoseconds.
It has long been suspected that the region of a liquid that bounds a solid surface is more ordered than within the bulk liquid. This has been confirmed for the case of water in contact with silicon, in which the water molecules adjacent to the surface arrange themselves into layers, similar to what is found in liquid crystals.
Learning Objectives
Make sure you thoroughly understand the following essential ideas which have been presented above. It is especially important that you know the precise meanings of all the terms in the context of this topic.
• Describe what is meant by the escaping tendency of molecules from a solid, liquid, or gas. What experimentally-observable quantity serves as its measure?
• The terms "vapor pressure" and "pressure of the vapor above a solid or liquid" are easily confused. Explain the difference between them, and state under what conditions they will have identical values.
• Define relative humidity and calculate its value, given the partial pressure of water vapor and a suitable vapor pressure table or plot for water.
• Explain the difference between evaporation and boiling, and why liquids may not begin to boil until the temperature exceeds the boiling point.
• Given a phase diagram of a pure substance, label all of the lines and the regions they enclose, identify the normal melting and boiling points, the triple point and the critical point, and state the physical significance of the latter two.
• Conversely, sketch out a properly-labeled phase diagram for a pure substance, given the parameters mentioned above, along with information about the relative densities of the solid and liquid phases.
A given substance will exist in the form of a solid, liquid, or gas, depending on the temperature and pressure. In this unit, we will learn what common factors govern the preferred state of matter under a particular set of conditions, and we will examine the way in which one phase gives way to another when these conditions change.
Phase Stability
Earlier in the morning, the droplets of water in Figure $1$ were tiny crystals of ice, but even though the air temperature is still around 0°C and will remain so all day, the sun's warmth has rendered them into liquid form, bound up by surface tension into reflective spheres. By late afternoon, most of the drops will be gone, their H2O molecules now dispersed as a tenuous atmospheric gas.
Solid, liquid, and gas — these are the basic three states, or phases, in which the majority of small-molecule substances can exist. At most combinations of pressure and temperature, only one of these phases will be favored; this is the phase that is most thermodynamically stable under these conditions. A proper explanation of why most substances have well-defined melting and boiling points needs to invoke some principles of thermodynamics and quantum mechanics. A full treatment would go beyond the scope of this lesson, but the following greatly over-simplified account should convince you that it is something more than black magic.
All atoms and molecules at temperatures above absolute zero possess thermal energy that keeps them in constant states of motion. A fundamental law of nature mandates that this energy tends to spread out and be shared as widely as possible. Within a single molecular unit, this spreading and sharing can occur by dispersing the energy into the many allowed states of motion (translation, vibration, rotation) of the molecules of the substance itself. There are a huge number of such states, and they are quantized, meaning that they all require different amounts of thermal energy to come into action. Temperature is a measure of the intensity of thermal energy, so the higher the temperature, the greater will be the number of states that can be active, and the more extensively will the energy be dispersed among these allowed states.
In solids, the molecular units are bound into fixed locations, so the kinds of motion (and thus the number of states) that can be thermally activated is relatively small. Because the molecules of solids possess the lowest potential energies, solids are the most stable states at low temperatures. At the other extreme are gas molecules which are not only free to vibrate and rotate, but are in constant translational motion. The corresponding number of quantum states is hugely greater for gases, providing a nearly-endless opportunity to spread energy. But this can only happen if the temperature is high enough to populate this new multitude of states. Once it does, the gaseous state wins out by a landslide.
Escaping Tendency and Vapor Pressure
Escaping tendency is more formally known as free energy. Bear in mind also that changes in state always involve changes in enthalpy and internal energy. In much the same way that tea spreads out from a tea bag into the larger space of the water in which it is immersed, molecules that are confined within a phase (liquid, solid, or gas) will tend to spread themselves (and the thermal energy they carry with them) as widely as possible. This fundamental law of nature is manifested in what we will call the escaping tendency of the molecules from the phase. The escaping tendency is a quantity of fundamental importance in understanding all chemical equilibria and transformations. We need not define the term in a formal way at this point. What is important for now is how we can observe and compare escaping tendencies.
Think first of a gas: what property of the gas constitutes the best measure of its tendency to escape from a container? It does not require much reflection to conclude that the greater the pressure of the gas, the more frequently will its molecules collide with the walls of the container and possibly find their way through an opening to the outside. Thus the pressure confining a gas is a direct measure of the tendency of molecules to escape from a gaseous phase.
What about liquids and solids? Although we think of the molecules of condensed phases as permanently confined within them, these molecules still possess some thermal energy, and there is always a chance that one that is near the surface will occasionally fly loose and escape into the space outside the solid or liquid. We can observe the tendency of molecules to escape into the gas phase from a solid or liquid by placing the substance in a closed, evacuated container connected to a manometer for measuring gas pressure (Figure $2$).
If we do this for water (Figure $3$), the partial pressure of water Pw in the vapor space will initially be zero (1). Gradually, Pw will rise as molecules escape from the substance and enter the vapor phase. But at the same time, some of the vapor molecules will "escape" back into the liquid phase (2). But because this latter process is less favorable (at the particular temperature represented here), Pw continues to rise. Eventually a balance is reached between the two processes (3), and Pw eventually stabilizes at a fixed value Pvap that depends on the substance and on the temperature and is known as the equilibrium vapor pressure, or simply as the “vapor pressure” of the liquid or solid. The vapor pressure is a direct measure of the escaping tendency of molecules from a condensed state of matter.
Note carefully that if the container is left open to the air, it is unlikely that many of the molecules in the vapor phase will return to the liquid phase. They will simply escape from the entire system and the partial pressure of water vapor Pw will never reach Pvap; the liquid will simply evaporate without any kind of equilibrium ever being achieved.
The escaping tendency of molecules from a phase always increases with the temperature; therefore the vapor pressure of a liquid or solid will be greater at higher temperatures. As Figure $4$ shows, the variation of the vapor pressure with the temperature is not linear.
It's important that you be able to interpret vapor pressure plots such as the three shown here. Take special note of how boiling points can be found from these plots. You will recall that the normal boiling point is the temperature at which the liquid is in equilibrium with its vapor at a partial pressure of 1 atm (760 torr). Thus the intercepts of each curve with the blue dashed 760-torr line indicate the normal boiling points of each liquid. Similarly, you can easily estimate the boiling points these liquids would have in Denver, Colorado where the atmospheric pressure is 630 torr by simply constructing a horizontal line corresponding to this pressure.
Definition: The Normal Boiling Point
The normal boiling point is the temperature at which the liquid is in equilibrium with its vapor at a partial pressure of 1 atm. This is when the vapor pressure is at atmospheric pressure.
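For back-of-the-envelope work, the vapor pressure curve of water between about 1 and 100°C is often represented by the empirical Antoine equation. The sketch below uses one common parameterization (the constants are an outside assumption brought in for illustration, not part of this text); it reproduces the 20-torr value quoted below for 22°C and the Denver boiling point estimated above.

```python
import math

# One common Antoine parameterization for water (P in torr, T in °C,
# valid roughly 1-100 °C): log10(P) = A - B/(C + T)
A, B, C = 8.07131, 1730.63, 233.426

def p_vap(t_celsius):
    """Saturation vapor pressure of water, torr."""
    return 10 ** (A - B / (C + t_celsius))

def boiling_point(p_torr):
    """Temperature (°C) at which the vapor pressure equals p_torr."""
    return B / (A - math.log10(p_torr)) - C

print(f"P_vap(22 °C)   ≈ {p_vap(22):.1f} torr")        # ≈ 20 torr
print(f"bp at 760 torr ≈ {boiling_point(760):.1f} °C")  # ≈ 100 °C
print(f"bp at 630 torr ≈ {boiling_point(630):.1f} °C")  # Denver: ≈ 95 °C
```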
Vapor pressure of water
The great importance of H2O in our world merits a more detailed look at its vapor pressure properties, because the vapor pressure of water varies greatly over the range of temperatures in which the liquid can exist (Figure $5$).
The larger plot in Figure $5$ covers the lowest temperatures, while the inset shows the complete range of pressure values. Note particularly that
• The normal boiling point is the temperature at which the vapor pressure is the same as that of the standard atmosphere, 760 torr.
• The boiling point at any other pressure can be found by dropping a vertical line from the curve to the temperature axis.
• As seen on the inset plot, the vapor pressure curve of water ends at the critical point.
Relative humidity
The vapor pressure of water at 22°C is about 20 torr, or around 0.026 atm (2.7 kPa). This is the partial pressure of H2O that will be found in the vapor space within a closed container of water at this temperature; the air in this space is said to be saturated with water vapor. Humid air is sometimes described as "heavy", but this is misleading; the average molar mass of dry air is 29, but that of water is only 18, so humid air is actually less dense. The feeling of "heaviness" probably relates to the reduced ability of perspiration to evaporate in humid air. In ordinary air, the partial pressure of water vapor is normally less than its saturation or equilibrium value. The ratio of the partial pressure of H2O in the air to its (equilibrium) vapor pressure at any given temperature is known as the relative humidity. Water enters the atmosphere through evaporation from the ocean and other bodies of water, and from water-saturated soils. The resulting vapor tends to get dissipated and diluted by atmospheric circulation, so the relative humidity rarely reaches 100 percent. When it does and the weather is warm, we are very uncomfortable because vaporization of water from the skin is inhibited; if the air is already saturated with water, then there is no place for our perspiration to go, other than to drip down our face.
Why the indoor air is so dry in the winter?
Because the vapor pressure increases with temperature, a parcel of air containing a fixed partial pressure of water vapor will have a larger relative humidity at low temperatures than at high temperatures. Thus when cold air enters a heated house, its water content remains unchanged but the relative humidity drops. In climates with cold winters, this promotes increased moisture loss from house plants and from mucous membranes, leading to wilting of the former and irritation of the latter.
Example $1$
The vapor pressure of water is 3.9 torr at –2°C and 20 torr at 22°C. What will be the relative humidity inside a house maintained at 22°C when the outside air temperature is –2°C and the relative humidity is 70%?
Solution
At 70 percent relative humidity, the partial pressure of the –2° air is (0.7 × 3.9 torr) = 2.7 torr. When this air enters the house, its relative humidity will be (2.7 torr)/(20 torr) = 0.14 or 14%.
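This calculation translates directly into code. The sketch below hard-codes only the vapor-pressure values quoted in this section.

```python
# Saturation vapor pressures of water (torr) quoted in this section
P_SAT = {-2: 3.9, 22: 20.0, 25: 23.8}

def relative_humidity(p_partial, t_celsius):
    """RH (%) = partial pressure of H2O / saturation vapor pressure."""
    return 100 * p_partial / P_SAT[t_celsius]

# Example 1: outdoor air at -2 °C and 70% RH is warmed indoors to 22 °C.
p_w = 0.70 * P_SAT[-2]   # the partial pressure is unchanged by warming
print(f"indoor RH ≈ {relative_humidity(p_w, 22):.0f}%")   # ≈ 14%
```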
In the evening, especially on clear nights, solid objects (even spider webs!) lose heat to the sky more rapidly than does the air. It is often important to know what temperature such objects must drop to so that atmospheric moisture will condense out on them (Figure $1$). The dew point is the temperature at which the relative humidity is 100 percent — that is, the temperature at which the vapor pressure of water becomes equal to its partial pressure at a given [higher] temperature and relative humidity. For water to condense directly out of the atmosphere as rain, the air must be at or below the dew point, but this is not of itself the only requirement for the formation of rain, as we will see shortly.
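Finding a dew point is the inverse problem: solve for the temperature at which the saturation vapor pressure equals the actual partial pressure of water in the air. The sketch below inverts the Antoine fit introduced earlier (same assumed constants); the 50-percent-humidity case is an arbitrary example.

```python
import math

# Inverting the Antoine fit used above: the dew point is where the
# saturation vapor pressure equals the actual partial pressure of H2O.
A, B, C = 8.07131, 1730.63, 233.426

def dew_point(p_partial_torr):
    return B / (A - math.log10(p_partial_torr)) - C

# Air at 22 °C and 50% RH: p_w = 0.5 * 20 torr = 10 torr
print(f"dew point ≈ {dew_point(10):.1f} °C")   # ≈ 11 °C
```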
Vapor pressures of Solid Hydrates
Many solid salts incorporate water molecules into their crystal lattices; the resulting compounds are known as hydrates. These solid hydrates possess definite vapor pressures that correspond to an equilibrium between the hydrated and anhydrous compounds and water vapor. For example, strontium chloride hexahydrate:
$\ce{SrCl2 \cdot 6H2O(s) \rightleftharpoons SrCl2(s) + 6H2O(g)} \label{1}$
The vapor pressure of this hydrate is 8.4 torr at 25°C. Only at this unique partial pressure of water vapor can the two solids coexist at 25°C. If the partial pressure of water in the air is greater than 8.4 torr, a sample of anhydrous SrCl2 will absorb moisture from the air and change into the hydrate. In fact, when fully hydrated, water accounts for about 40% of the hydrate's mass.
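The 40% figure follows directly from the molar masses (about 158.5 g/mol for SrCl2 and 18.02 g/mol for H2O):

$\dfrac{6 \times 18.02}{158.5 + (6 \times 18.02)} = \dfrac{108.1}{266.6} \approx 0.41$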
Example $2$
What will be the relative humidity of air in an enclosed vessel containing solid SrCl2·6H2O at 25°C?
Solution
What fraction of the vapor pressure of water at this temperature (23.8 torr) is the vapor pressure of the hydrate (8.4 torr)? Expressed in percent, this is the relative humidity: (8.4 torr)/(23.8 torr) = 0.35, or 35%.
If the partial pressure of H2O in the air is less than the vapor pressure of the hydrate, the latter will tend to lose moisture and revert back to its anhydrous form. This process is sometimes accompanied by a breakup of the crystal into a powdery form, an effect known as efflorescence.
Condensation and boiling: Nucleation
Evaporation and boiling of a liquid, and condensation of a gas (vapor) are such ordinary parts of our daily life that we hardly give them a thought. Every time we boil water to make a pot of tea and see the cloud of steam above the teapot, we are observing this most common of all phase changes. How can we understand these changes in terms of vapor pressure?
Figure $6$ plots the vapor pressure as a function of temperature; the curve can represent water or any other liquid. When we say this is a vapor-pressure plot, we mean that each point on the curve represents a combination of temperature and vapor pressure at which the liquid (green) and the vapor (blue) can coexist. Thus at the normal boiling point, defined as the temperature at which the vapor pressure is 1 atm, the state of the system corresponds to the point labeled 3.
Suppose that we select an arbitrary point 1 at a temperature and pressure at which only the gaseous state is stable. We then decrease the temperature so as to move the state point toward point 2 in the liquid region. When the state point falls on the vapor pressure line, the two phases can coexist and we would expect some liquid to condense. Once the state point moves to the left of the vapor pressure line, the substance will be entirely in the liquid phase. This is supposedly what happens when "steam" (actually tiny water drops) forms above a pot of boiling water.
The reverse process should work the same way: starting with a temperature in the liquid region, nothing happens until we reach the vapor pressure line, at which point the liquid begins to change into vapor. At higher temperatures, only the vapor remains. This is the theory, but it is not complete. The fact is that a vapor will generally not condense to a liquid at the boiling point (also called the condensation point or dew point), and a liquid will generally not boil at its boiling point.
Bubbles and Drops
The reason for the discrepancy is that the vapor pressure, as we normally use the term and as it is depicted by the liquid-vapor line on the phase diagram, refers to the partial pressure of vapor in equilibrium with a liquid whose surface is reasonably flat, as it would be in a partially filled container. In a drop of liquid or in a bubble of vapor within a liquid, the surface of the liquid is not flat, but curved. For drops or bubbles that are of reasonable size, this does not make much difference, but these drops and bubbles must grow from smaller ones, and these from tinier ones still. Eventually, one gets down to the primordial drops and bubbles having only a few molecular dimensions, and it is here that we run into a problem: this is the problem of nucleation — the formation and growth of the first tiny drop (in the vapor) or of a bubble (in a liquid).
The vapor pressure of a liquid is determined by the attractive forces that act over a 180° solid angle at the surface of a liquid. In a very small drop, the liquid surface is curved in such a way that each molecule experiences fewer nearest-neighbor attractions than is the case for the bulk liquid. The outermost molecules of the liquid are bound to the droplet less tightly, and the drop has a larger vapor pressure than does the bulk liquid. If the vapor pressure of the drop is greater than the partial pressure of vapor in the gas phase, the drop will evaporate. Thus it is highly unlikely that a droplet will ever form within a vapor as it is cooled.
A bubble, like a drop, must start small and grow larger, but there is a difficulty here that is similar to the one with drops. A bubble is a hole in a liquid; molecules at the liquid boundary are curved inward, so that they experience nearest-neighbor attractions over a solid angle greater than 180°. As a consequence, the vapor pressure of the liquid facing into a bubble is always less than that of the bulk liquid at the same temperature. When the bulk liquid is at its boiling point (that is, when its vapor pressure is 1 atm), the pressure of the vapor within the bubble will be less than 1 atm, so the bubble will tend to collapse. Also, since the bubble is formed within the liquid, the hydrostatic pressure of the overlying liquid will add to this effect. For both of these reasons, a liquid will not boil until the temperature is raised slightly above the boiling point, a phenomenon known as superheating. Once boiling begins, it will continue at the liquid's proper boiling point.
These plots show how, in the case of water, the vapor pressure of a very small bubble or drop varies with its radius of curvature; the quantity being plotted is the ratio of the actual vapor pressure P to Po, the vapor pressure of a flat surface.
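Although this text does not derive it, the standard quantitative statement of this curvature effect is the Kelvin equation:

$\ln \dfrac{P}{P_o} = \dfrac{2\gamma V_m}{rRT}$

where $\gamma$ is the surface tension, $V_m$ the molar volume of the liquid, and $r$ the radius of curvature, which is positive for a drop (raising the vapor pressure above $P_o$) and negative for the concave surface facing into a bubble (lowering it).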
Condensation of Liquids
If the tiniest of drops are destined to self-destruct, why do vapors ever condense (e.g., why does it rain)?
• If you cool a vapor in a container, condensation takes place not within the vapor itself, but on the inner surface of the container. What happens here is that intermolecular attractions between the solid surface will cause vapor molecules to adsorb to the surface and stabilize the incipient drop until it grows to a size at which it can be self-sustaining. This is the origin of the condensation on the outside of a cool drink, or of the dew that appears on the grass.
• In the case of the cloud of steam you see over the boiling water, the first few droplets form on tiny dust particles in the air — the ones you can see by scattered light when a sunbeam shines through a darkened room.
Clouds and precipitation

In the region of the atmosphere where rain forms there are large numbers of solid particles, mostly of microscopic size. Some of these are particles of salt produced by evaporation of spray from the ocean surface. Many condensation nuclei are of biological origin; these include bacteria, spores, and particles of ammonium sulfate. There is volcanic and meteor dust, and of course there is dust and smoke due to the activities of humans. These particles tend to adsorb water vapor, and some may even dissolve to form a droplet of concentrated solution. In either case, the vapor pressure of the water is reduced below its equilibrium value, thus stabilizing the aggregate until it can grow to self-sustaining size and become fog, rain, or snow.
This, by the way, is why fog is an irritant to the nose and throat; each fog droplet carries within it a particle of dust or (in air polluted by the burning of sulfur-containing fossil fuels) a droplet of sulfuric acid, which it efficiently deposits on your sensitive mucous membranes. If you own a car that is left outside on a foggy night, you may have noticed how dirty the windshield is in the morning.
Superheating and boiling of liquids
What is the difference between the evaporation and boiling of a liquid? When a liquid evaporates at a temperature below its boiling point, the molecules that enter the vapor phase do so directly from the surface. When a liquid boils, bubbles of vapor form in the interior of the liquid, and are propelled to the surface by their lower density (buoyancy). As they rise, the diminishing hydrostatic pressure causes the bubbles to expand, reducing their density (and increasing their buoyancy) even more.
But as we explained above, getting that first bubble to form and survive is often sufficiently difficult that liquids commonly superheat before they begin to boil. If you have had experience in an organic chemistry laboratory, you probably know this as “bumping”, and have been taught to take precautions against it. In large quantities, superheated liquids can be very dangerous, because the introduction of an impurity (such as release of an air bubble from the container surface) or even a mechanical disturbance can trigger nucleation and cause boiling to occur suddenly and almost explosively (Video $1$).
Exploding water in the microwave
Many people have been seriously burned after attempting to boil water in a microwave oven, or after having added powdered material such as instant coffee to such water. When water is heated on a stove, the bottom of the container superheats only the thin layer of water immediately in contact with it, producing localized "microexplosions" that you can hear just before regular smooth boiling begins; these bubbles quickly disperse and serve as nucleation centers for regular boiling. In a microwave oven, however, the energy is absorbed by the water itself, so that the entire bulk of the water can become superheated. If this happens, the slightest disturbance can produce an explosive flash into vapor.
Sublimation
Some solids have such high vapor pressures that heating leads to a substantial amount of direct vaporization even before the melting point is reached. This is the case for solid iodine, for example: I2 melts at 115°C and boils at 183°C, but is easily sublimed at temperatures around 100°C. Even ice has a measurable vapor pressure near its freezing point, as evidenced by the tendency of snow to evaporate in cold dry weather. There are other solids whose vapor pressure overtakes that of the liquid before melting can occur. Such substances sublime without melting; a common example is solid carbon dioxide ("Dry Ice") at 1 atm (see the CO2 phase diagram below).
Phase Diagrams
The range of temperatures and pressures at which a given phase of a substance is stable (that is, at which its molecules have the lowest escaping tendency) is an important property of any substance. Because both temperature and pressure are factors, it is customary to plot the regions of stability of the various phases in P–T coordinates, as in this generic phase diagram for a hypothetical substance.
Because pressures and temperatures can vary over very wide ranges, it is common practice to draw phase diagrams with non-linear or distorted coordinates. This enables us to express a lot of information in a compact way and to visualize changes that could not be represented on a linearly-scaled plot. It is important that you be able to interpret a phase diagram, or alternatively, construct a rough one when given the appropriate data. Take special note of the following points:
1. The three colored regions on the diagram are the ranges of pressure and temperature at which the corresponding phase is the only stable one.
2. The three lines that bound these regions define all values of (P,T) at which two phases can coexist (i.e., be in equilibrium). Notice that one of these lines is the vapor pressure curve of the liquid as described above. The "sublimation curve" is just a vapor pressure curve of the solid. The slope of the line depends on the difference in density of the two phases.
3. In order to depict the important features of a phase diagram over the very wide range of pressures and temperatures they encompass, the axes are not usually drawn to scale, and are usually highly distorted. This is the reason that the "melting curve" looks like a straight line in most of these diagrams.
4. Where the three named curves intersect, all three phases can coexist. This condition can only occur at a unique value of (P,T ) known as the triple point. Since all three phases are in equilibrium at the triple point, their vapor pressures will be identical at this temperature.
5. The line that separates the liquid and vapor regions ends at the critical point. At temperatures and pressures greater than the critical temperature and pressure, no separate liquid phase exists. We refer to this state simply as a fluid, although the term supercritical fluid is also commonly used.
The best way of making sure you understand a phase diagram is to imagine that you are starting at a certain temperature and pressure, and then change just one of these parameters, keeping the other constant. You will be traversing a horizontal or vertical path on the phase diagram, and there will be a change in state every time your path crosses a line. Of special importance is the horizontal path (shown by the blue line on the diagram above) corresponding to a pressure of 1 atmosphere; this line defines the normal melting and boiling temperatures of a substance.
Notice the following features for the phase diagram of water (Figure $11$):
• The slope of the line 2 separating the solid and liquid regions is negative; this reflects the unusual property that the density of the liquid is greater than that of the solid, and it means that the melting point of ice decreases as the pressure increases. Thus if ice at 0°C is subjected to a high pressure, it will find itself above its melting point and it will melt. (Contrary to what is sometimes said, however, this is not the reason that ice melts under the pressure of ice skates or skis, providing a lubricating film which makes these modes of transportation so enjoyable. The melting in these cases arises from frictional heating).
• The dashed line 1 is the extension of the liquid vapor pressure line below the freezing point. This represents the vapor pressure of supercooled water — a metastable state of water which can temporarily exist down to about –20°C. (If you live in a region subject to "freezing rain", you will have encountered supercooled water!)
• 3 The triple point (TP) of water is just 0.01° above the normal freezing point; only at this temperature and pressure can all three phases of water coexist indefinitely.
• 4 Above the critical point (CP) temperature of 374°C, no separate liquid phase of water exists.
Dry ice, solid carbon dioxide, is widely used as a refrigerant, and the phase diagram in Figure $12$ shows why it is "dry". The triple point pressure is 5.11 atm, so below this pressure liquid CO2 cannot exist; the solid can only sublime directly to vapor. Gaseous carbon dioxide at a partial pressure of 1 atm is in equilibrium with the solid at 195 K (–78°C; point 1 on the diagram); this is the normal sublimation temperature of carbon dioxide. The surface temperature of dry ice will be slightly less than this, since the partial pressure of CO2 in contact with the solid will usually be less than 1 atm. Notice also that the critical temperature of CO2 is only 31°C. This means that on a very warm day, the CO2 in a fire extinguisher will be entirely vaporized; the vessel must therefore be strong enough to withstand a pressure of 73 atm.
This view of the carbon dioxide phase diagram employs a logarithmic pressure scale and thus encompasses a much wider range of pressures, revealing the upper boundary of the fluid phase (liquid and supercritical). Supercritical carbon dioxide (CO2 above its critical temperature) possesses the solvent properties of a liquid and the penetrating properties of a gas; one major use is to remove caffeine from coffee beans.
Elemental iodine, I2, forms dark gray crystals that have an almost metallic appearance. It is often used in chemistry classes as an example of a solid that is easily sublimed; if you have seen such a demonstration or experimented with it in the lab, its phase diagram might be of interest.
The most notable feature of iodine's phase behavior is the small difference (less than a degree) between the temperatures of its triple point 1 and melting point 2. Contrary to the impression many people have, there is nothing really special about iodine's tendency to sublime, which is shared by many molecular crystals including ice and naphthalene ("moth balls"). The vapor pressure of iodine at room temperature is really quite small — only about 0.3 torr (40 Pa). The fact that solid iodine has a strong odor and is surrounded by a purple vapor in a closed container is mainly a consequence of its strong ability to absorb green light (leaving blue and red, which combine to make purple) and the high sensitivity of our noses to its vapor.
Phase diagram of Sulfur
Sulfur exhibits a very complicated phase behavior that has puzzled chemists for over a century; what you see here is the greatly simplified phase diagram shown in most textbooks. The difficulty arises from the tendency of S8 molecules to break up into chains (especially in the liquid above 159°C) or to rearrange into rings of various sizes (S6 to S20). Even the vapor can contain a mixture of species S2 through S10.
The phase diagram of sulfur contains a new feature: there are two solid phases, rhombic and monoclinic. The names refer to the crystal structures in which the S8 molecules arrange themselves. This gives rise to three triple points, indicated by the numbers on the diagram.
Exercise $1$
From the phase diagram in Figure $14$, identify one combination of three phases of sulfur ($S_8$) that can never coexist. (Hint: there are several correct answers.)
When rhombic sulfur (the stable low-temperature phase) is heated slowly, it changes to the monoclinic form at 114°C, which then melts at 119°. But if the rhombic form is heated rapidly, the molecules do not have time to rearrange themselves, so the rhombic arrangement persists as a metastable phase until it melts at 119–120°. Formation of more than one solid phase is not uncommon — in fact, if one explores the very high pressures (see below), it seems to be the rule.
Extremes of Pressure and Temperature
We tend to think of the properties of substances as they exist under the conditions we encounter in everyday life, forgetting that most of the matter that makes up our world is situated inside the Earth, where pressures are orders of magnitude higher (Figure $15$). Geochemists and planetary scientists need to know about the phase behavior of substances at high temperatures and pressures to develop useful models to test their theories about the structure and evolution of the Earth and of the solar system.
What ranges of temperatures and pressures are likely to be of interest — and more importantly, are experimentally accessible?
Figure $16$ shows several scales (all of which, please note, are logarithmic) that cover respectively the temperature range for the universe; the low temperatures of importance to chemistry (note the green line indicating the temperatures at which liquid water can exist); the higher temperatures, showing the melting and boiling points of several elements for reference. The highest temperatures that can be produced in the laboratory are achieved (but only for very short time intervals) by light pulses from laser or synchrotron radiation.
The study of low temperatures is limited by the laws of physics that prohibit reaching absolute zero. But the fact that there is no limit to how close one can approach 0 K has encouraged a great deal of creative experimentation.
The study of matter at high pressures is not an easy task. The general techniques were pioneered between 1908 and 1960 by P.W. Bridgman of Harvard University, whose work won him the 1946 Nobel Prize in physics. The more recent development of the diamond anvil cell has greatly extended the range of pressures attainable and the kinds of observations that can be made. Shock-wave techniques have made possible the production of short-lived pressures in the TPa range.
High pressure laboratory studies have revealed that many molecular substances such as hydrogen and water change to solid phases having melting points well above room temperature at very high pressures; there is a solid form of ice that remains frozen even at 100°C. At still higher pressures, many of these substances become metals. It is believed that much of the inner portion of the largest planets consists of metallic hydrogen — and, in fact, that all substances can become metallic at sufficiently high pressures.
Figure $19$: Phase diagram of carbon: diamond and graphite
Graphite is the stable form of solid carbon at low pressures; diamond is only stable above about $10^4$ atm. But once it is in this form, the rate at which diamond converts back to graphite is immeasurably slow under ordinary environmental conditions; there is simply not enough thermal energy available to break all of those carbon-carbon bonds. So the diamonds we admire in jewelry and pay dearly for are said to be metastable.
Synthetic Diamonds
To transform graphite into diamond at a reasonable rate, a pressure of 200,000 atm and a temperature of about 4000 K would be required. Since no apparatus can survive these conditions, the process, known as high-pressure high-temperature synthesis (HPHT), is carried out commercially at 70,000 atm and 2300 K in a solution of molten nickel, which also acts as a catalyst. Traces of Ni in the finished product serve to distinguish synthetic diamonds from natural ones. However, most synthetic diamonds are too small (only a few millimeters) and too flawed for gem quality, and are used mainly to fabricate industrial grinding and cutting tools.
Figure $20$: Phase diagram of carbon: diamond and graphite
More recently, thin diamond films have become important for engineering applications and semiconductor fabrication. These are most commonly made by condensation of gaseous carbon onto a suitable substrate (chemical vapor deposition, CVD). The conditions under which synthetic diamonds are made are depicted on the above phase diagram from Bristol University.
Video $2$: Chemist Roy Gat explains how he uses phase diagrams to synthesize diamonds at low pressures and temperatures.
Helium Phase weirdness
Helium is unique in that quantum phenomena, which normally apply only to tiny objects such as atoms and electrons, extend to and dominate its macroscopic properties. A glance at the phase diagram of $\ce{^4He}$ reveals some of this quantum weirdness. The main points to notice are:
• Helium can be frozen only at high pressures;
• The solid and gas cannot coexist (be in equilibrium) under any conditions;
• There are two liquid phases, helium I (an ordinary liquid) and helium II (a highly unusual ordered liquid);
• The λ (lambda) line represents the (P,T) values at which the two phases can coexist;
• Helium-II behaves as a superfluid — essentially a quantum liquid.
Why can liquid helium not freeze at low pressures? The low mass of the He atoms and their close confinement in the solid would give them a very high zero-point energy (the Heisenberg uncertainty principle in action!), allowing them to vibrate with such amplitude that they overcome the dispersion forces that would otherwise hold the solid together; the atoms stay too separated to form a solid. Only by applying a high pressure (25 atm) can this effect be overcome.
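A rough order-of-magnitude argument (our own illustration, not from the text) comes straight from the uncertainty principle: confining an atom of mass $m$ to a region of size $\Delta x$ forces on it a zero-point energy of roughly

$E_{zp} \sim \dfrac{\hbar^2}{2m(\Delta x)^2}$

Helium's small $m$, and the small $\Delta x$ demanded by a close-packed solid, make this energy large compared with helium's very weak dispersion attractions, so external pressure is needed to hold the solid together.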
Helium-II, quantum liquids and superfluidity
We usually need quantum theory only to describe the properties of tiny objects such as electrons, but with liquid He-II, it extends to the macroscopic scale of the bulk liquid. $\ce{^4He}$ atoms (99.99+ percent of natural helium) are bosons, which means that at low temperatures, they can all occupy the same quantum state (all other normal atoms are fermions and are subject to the Pauli exclusion principle).
All together, now
Objects that occupy the same quantum state all possess the same momentum. Thus when one atom moves, they all move together. In a sense, this means that the entire bulk of liquid He-II acts as a single entity. This property, known as superfluidity, gives rise to a number of remarkable effects, most notably:
• The liquid can flow through a narrow channel without friction, up to a critical velocity that depends on the ratio of flow rate to channel width;
• When placed in a container, the liquid forms a film that climbs up the walls and down the outside, seeking the same level outside the container as inside if this is possible;
• A small molecule dissolved in helium-II behaves the same as it would in a vacuum.
In liquid helium-II, only about 10% of the atoms are in such a state, but it is enough to give the liquid some of the weird properties of a quantum liquid.
$\ce{^3He}$ exhibits similar properties, but owing to its low natural abundance, it was not extensively studied until the 1940s, when large amounts became available as a byproduct of nuclear weapons manufacture. Its superfluidity, which sets in only at temperatures of a few millikelvin, was finally observed in 1972 and recognized with the 1996 Nobel Prize in Physics. Although $\ce{^3He}$ atoms are fermions, only those that pair up (and thus assume the properties of bosons) give rise to superfluidity.
Water at high pressures
Water, like most other substances, exhibits many solid forms at higher pressures (Figure $22$). So far, fifteen distinct ice phases have been identified; these are designated by Roman numerals ice-I through ice-XV. Ice-I can exist in two modifications: the crystal lattice of ice-Ic is cubic, while that of ice-Ih is hexagonal. The latter corresponds to the ordinary ice that we all know. It is interesting to note that several high-pressure phases of ice can exist at temperatures in excess of 100°C.
Learning Objectives
• Identify the three kinds of rotational symmetry axes of a cube.
• State what is meant by a crystal's habit, and identify some factors that might affect it.
• Explain why the angles between adjacent faces (of even a broken crystal) tend to have the same small set of values.
• What is a unit cell, and how does it relate to a crystal lattice?
• What are Bravais lattices, and why are they important?
• Find the Miller index of a line or plane in a unit cell, or sketch the line or plane having a given Miller index.
The delicately faceted surfaces of large crystals that occur in nature have always been a source of fascination and delight. In some ways they seem to represent a degree of perfection that is not apparent in other forms of matter. But in the realm of pure solid substances, crystals are the rule rather than the exception, although this may not be apparent unless they are observed under a hand-lens or a microscope. It is remarkable that the visual examination of crystals was able to establish a fairly mature science of crystallography (applied mainly to the study of minerals) by the end of the 19th Century, even before the atomic theory of matter had been universally accepted. Today this aspect of crystallography is of importance not only to chemists and physicists, but also to geologists, amateur mineralogists and "rock-hounds" who maintain some of the best Web resources on crystals. In this lesson we will see how the external shape of a crystal can reveal much about the underlying arrangement of its constituent atoms, ions, or molecules.
The first thing we notice about a crystal is the presence of planes — called faces — which constitute the external boundaries of the solid. Of course, any solid, including non-crystalline glass, can be carved, molded or machined to display planar faces; examples of these can be found in any "dollar store" display of costume jewelry. What distinguishes and defines a true crystal is that these faces develop spontaneously and naturally as the solid forms from a melt or from solution. The multiple faces invariably display certain geometrical relationships to one another, resulting in a symmetry that attracts our attention and delights the eye.
Symmetry elements
One of the most apparent elements of this geometrical regularity are the sets of parallel faces that many crystals display. Nowhere is this more apparent than in the cubes that develop when sodium chloride crystallizes from solution. We usually think of a cubic shape in terms of the equality of its edge lengths and the 90° angles between its sides, but there is a more fundamental way of classifying shapes that chemists find very useful. This is to look at what geometric transformations (such as rotations around an axis) we can perform that leave the appearance unchanged.
Cubic symmetry
For example, you can rotate a cube 90° around an axis perpendicular to any pair of its six faces without making any apparent change to it. We say that the cube possesses three mutually perpendicular four-fold rotational axes, abbreviated C4 axes. But if you think about it, a cube can also be rotated around an axis that extends between opposite corners; in this case, it takes three 120° rotations to go through a complete circle, so these axes (also four in number) are three-fold or C3 axes. And finally, there are two-fold (C2) axes that pass diagonally through the centers of the six pairs of opposite edges.
In addition, there are imaginary symmetry planes that mirror the portions of the cube that lie on either side of them. Three of these are parallel to the three major axes of the crystal, and an additional six pass diagonally through opposite edges.
All told, there are 13 rotational axes and 9 mirror planes (only a few of which are shown above) that define cubic symmetry. Why is this important? Although anyone can recognize a cube when they see one, it turns out that many crystals, both natural and synthetic, are for one reason or another unable to develop all of their faces equally. Thus a crystal that forms on the bottom of a container will be unable to grow any faces that project downward for the simple reason that there is no supply of ions or molecules from that direction. The same effect occurs when a mineral crystal tries to grow in contact with other solids. Finally, the presence of certain impurities that selectively adsorb to one or more faces can block the addition of more material to them, thus either completely inhibiting their formation or forcing them to grow at slower rates. These alternative shapes that can develop from a single basic crystal type are known as habits.
Crystal habits
Sodium chloride grown from pure aqueous solution forms simple cubes, but the addition of various impurities can result in habits that can be regarded as cubes that have been truncated along planes normal to some of the symmetry axes. (The same effects can sometimes be seen as a crystal slowly dissolves and material is released more rapidly from some directions than others.)
In this example, the perfect cube 1 develops triangular faces at the corners 2. If these enlarge beyond their maximum size 3, the triangular faces meet in a new set of edges that are hexagonal 4. Eventually we are left with the eight faces of what is obviously a regular octahedron 5. One might think that these five shapes bear no relationship to one another, but in fact they all possess the same set of symmetry elements as the simple cube; they are thus various habits of the same underlying cubic structure and belong to the cubic crystal system described further below.
The Law of Constant Angles
Even though a given crystal may be distorted or broken, the angles between corresponding faces remain the same. Thus you can crush a crystal underfoot or break it up with a hammer, but you will always find that the fragments possess a limited set of interfacial angles.
This fundamental law, discovered by Nicolas Steno in 1669, was a key development in crystallography. About 100 years later, a protractor-like device (the contact goniometer) was invented to enable more accurate measurements than the rather crude ones that had formerly been traced out on paper.
Cleavage Planes
When a crystal is broken by applying a force in certain directions (as opposed to being pulverized by a hammer) it will often be seen to break cleanly into two pieces along what are known as cleavage planes. The new faces thus formed always correspond to the symmetry planes associated with a particular crystal type, and of course make constant angles with any other faces that may be present.
Scientific crystallography began with an accident
Cleavage planes were first described in the late 17th century, but nothing much was thought about their significance until about a hundred years later, when the Abbé Haüy accidentally dropped a friend's sample of calcite and noticed how cleanly it broke. Further experimentation showed that other calcite crystals, even ones of different initial shapes (habits), displayed similar rhombohedral shapes upon cleavage, and that these in turn produced similar shapes when they were cleaved. This led Haüy to suggest that continued cleavages would ultimately lead to the smallest possible unit, which would be the fundamental building block of the crystal. (Remember that the atomic theory of matter had not developed at this time.)
Haüy's elaborately drawn figures (published in 1784) showed how external faces of a crystal could be produced by stacking the units in various ways. For example, by omitting rows from a cubic stack of primal cubelets, one could arrive at the various stages between the cube and the octahedra for sodium chloride that we saw earlier on this page.
The modern interpretation of these observations replaces Haüy's primal shapes with atoms or molecules, or more generally with points in space that define the possible locations of atoms or molecules. It is easy to see how plane faces can develop along some directions and not others if one assumes that the new faces must follow a linear sequence of points.
The five two-dimensional lattice types are:

• Square lattice: x = y, 90° angles
• Rectangular lattice: x ≠ y, 90° angles
• Parallelogram lattice: x ≠ y, angles < 90°
• Rhombic (centered-rectangular) lattice: x = y, angles neither 60° nor 90°
• Hexagonal lattice: the unit cell is a rhombus with x = y and 60° angles
Although everyone has seen and admired the huge variety of patterns on printed fabrics or wallpapers, few are aware that these are all based on one of five types of two-dimensional "unit cells" that form the basis for these infinitely-extendable patterns. One of the most remarkable uses of this principle is in the work of the Dutch artist Maurits Escher (1898-1972).
Shown below are two-dimensional views of the unit cells for two very common types of crystal lattices, one having cubic symmetry and the other being hexagonal. Although we could use a hexagon for the second of these lattices, the rhombus is preferred because it is simpler.
Notice that in both of these lattices, the corners of the unit cells are centered on a lattice point. This means that an atom or molecule located on this point in a real crystal lattice is shared with its neighboring cells. As is shown more clearly here for a two-dimensional square-packed lattice, a single unit cell can claim "ownership" of only one-quarter of each molecule, and thus "contains" 4 × ¼ = 1 molecule.
The unit cell of the graphite form of carbon is also a rhombus, in keeping with the hexagonal symmetry of this arrangement. Notice that to generate this structure from the unit cell, we need to shift the cell in both the x- and y-directions in order to leave empty spaces at the correct spots. We could alternatively use regular hexagons as the unit cells, but the x+y shifts would still be required, so the simpler rhombus is usually preferred.
This image nicely illustrates the relations between the unit cell, the lattice structure, and the actual packing of atoms in a typical crystal.
Crystal systems and Bravais lattices
We saw above that five basic cell shapes can reproduce any design motif in two dimensions. If we go to the three-dimensional world of crystals, there are just seven possible basic lattice types, known as crystal systems, that can produce an infinite lattice by successive translations in three-dimensional space so that each lattice point has an identical environment. Each system is defined by the relations between the axis lengths and angles of its unit cell. For example, if the three edge lengths are identical and all corner angles are 90°, a crystal belongs to the cubic system.
The simplest possible cube is defined by the eight lattice points at its corner, but variants are also possible in which additional lattice points exist in the faces ("face-centered cubic") or in the center of the cube ("body-centered cubic"). If variants of this kind are taken into account, the total number of possible lattices is fourteen; these are known as the fourteen Bravais lattices.
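The fractional "ownership" idea introduced above extends directly to three dimensions: a corner site is shared among 8 cells, an edge site among 4, and a face site among 2, while a body site belongs entirely to one cell. Here is a minimal Python sketch of this bookkeeping (the function and its arguments are our own illustration):

```python
def atoms_per_cell(corner=0, edge=0, face=0, body=0):
    """Number of atoms owned by one unit cell, given how many lattice
    sites of each type the cell touches. Corner sites are shared by 8
    cells, edge sites by 4, and face sites by 2; body sites are unshared."""
    return corner / 8 + edge / 4 + face / 2 + body

print(atoms_per_cell(corner=8))           # primitive (P) cubic cell: 1 atom
print(atoms_per_cell(corner=8, body=1))   # body-centered (I) cell:   2 atoms
print(atoms_per_cell(corner=8, face=6))   # face-centered (F) cell:   4 atoms
```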
The seven crystal systems and their Bravais lattices (P = primitive, I = body-centered, F = face-centered):

• Cubic: a = b = c; α = β = γ = 90°. The F cell corresponds to closest cubic packing, a very common and important structure.

• Tetragonal: a = b ≠ c; α = β = γ = 90°. A cube that has been extended in one direction, creating a unique c-axis. An F cell would simply be a network of joined I cells.

• Orthorhombic: a ≠ b ≠ c; α = β = γ = 90°. Three unequal axes at right angles. The "C" form has atoms in the two faces that cut the c-axis.

• Hexagonal: a = b ≠ c; α = β = 90°, γ = 120°. Just as in the 2-dimensional examples given above, the unit cell of the hexagonal lattice has a rhombic cross-section; the entire hexagonal unit is built from three of these rhombic prisms.

• Trigonal (rhombohedral): a = b = c; α = β = γ ≠ 90°. Think of this as a cube that has been skewed or distorted to one side so that opposite faces remain parallel to each other. This can also be regarded as a special case of the hexagonal system, and is often classified as such by U.S. mineralogists who recognize only six crystal systems. The rhombohedral form of the hexagonal system is difficult to visualize.

• Monoclinic: a ≠ b ≠ c; α = γ = 90°, β > 90°. Two 90° angles, one > 90°, with all sides of different lengths. A C cell (also seen in the orthorhombic class) has additional points in the center of each end. Monoclinic I and F cells can be constructed from C cells.

• Triclinic: a ≠ b ≠ c; α ≠ β ≠ γ ≠ 90°. This is the most generalized of the crystal systems, with all lengths and angles unequal, and no right angles.
Notes on the above diagrams:
• The labels a,b,c along the unit cell axes represent the dimensions of the unit cell. Visual examination of a crystal does not allow us to determine their actual values, but merely to know whether any two (or all three) are the same.
• When a = b, both axes may be given "a" labels, since neither is unique.
• The angles α, β and γ are those between the b-c, a-c, and a-b axes, respectively. Similarly in the cube, all axes are a axes.
Lines and planes in unit cells: the Miller index
In any kind of repeating pattern, it is useful to have a convenient way of specifying the orientation of elements relative to the unit cell. This is done by assigning to each such element a set of integer numbers known as its Miller index.
Indexing lines in two-dimensions
To understand indexing, it will be easier to begin with a unit cell plane that we are viewing from above, along the [invisible] z-axis. The drawing shows such a plane with three lines crossing it at various slopes. The index of each line is found by first determining the points where it intersects the x and y axes as fractions of the unit cell parameters a and b. Thus in the above example:
• Line 1 starts at the origin and extends to the lower right-hand corner, which corresponds to intersections of the x and y axes at one unit cell length each — that is, at 1×a and 1×b. We can abbreviate this set of intersections as [1,1].
• Line 2 intersects the x axis at one-half the unit cell distance a, or at a/2. This line is parallel to the y axis, so it never intersects it; this is mathematically the same as saying that it intersects it at infinity, or in terms of unit cell increments, at ∞×b. We can describe this line in terms of the unit cell intercepts by the pair of values [½,∞].
• Line 3 starts at the upper right corner of the cell, which corresponds to the coordinates (0,1), but this is equivalent to the origin (0,0) of the neighboring unit cell on the right. In terms of our coordinate system (which repeats for each unit cell), this line extends in the negative-y direction and intersects that axis at –b/2. The x-intercept is a. These intercepts correspond to [1,–½].
The Miller indices of the lines are given by the reciprocals of these values:
Line 1: [1,1] → (11); Line 2: [½,∞] → (20); Line 3: [1,–½] → ($1\bar{2}$)

Miller indices are written in parentheses with no spaces between the numbers. Negative values are indicated by an overbar, as in $\bar{2}$.
Indexing planes in three-dimensions
We proceed in exactly the same way, except that we now have 3-digit Miller indices corresponding to the axes a, b and c.
It is important to note that multiple parallel planes that repeat at the same interval have identical Miller indices. This simply reflects the fact that we can repeat the coordinate axes at any regular interval.
Identifying crystal faces
We mentioned previously that the plane faces of crystals are their most important visually-distinctive property, so it is important to have a convenient way of referring to any given face. First, we define a set of reference directions (x,y,z) which are known as the crystallographic axes. In most cases these axes correspond to directions that are fairly apparent on visual examination of one or more crystals of a given kind. They are parallel to actual or possible edges of the crystal, and they are not necessarily orthogonal. We now know, as Haüy first suggested, that these directions correspond to rows of lattice points in the underlying structure of the crystal.
We also define three lattice parameters (a,b,c) which mark out the boundaries of the unit cell along the crystallographic axes. The index of a particular face is determined by the fractional values of (a,b,c) at which the face intersects the axes (x,y,z). Study the examples shown below for three different habits of a cubic lattice.
Below is a more complicated example of one particular habit of an orthorhombic crystal. The figure at the right shows how the (113) face is indexed.
In this case, the plane at the top of the crystal is extended downward to the (x,y) plane. This extended plane cuts the (x,y,z) axes at (2a, 2b, 2/3c). The corresponding inverses would be (½,½,3/2). In order to make them into proper Miller indices (which should always be integers) we multiply everything by 2, yielding (113).
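The recipe (take reciprocals of the intercepts, then clear fractions to the smallest integers) is mechanical enough to express in a few lines of code. Below is a minimal Python sketch; the function name and interface are our own illustration:

```python
from fractions import Fraction
from math import inf, lcm

def miller(*intercepts):
    """Convert axis intercepts (in units of the cell parameters a, b, c)
    into a Miller index. Use math.inf for an axis the plane never crosses."""
    # Reciprocal of each intercept; a plane parallel to an axis
    # (infinite intercept) contributes a zero.
    recips = [Fraction(0) if x == inf else 1 / Fraction(x) for x in intercepts]
    # Clear fractions by multiplying through by the least common
    # multiple of the denominators.
    m = lcm(*(r.denominator for r in recips))
    return tuple(int(r * m) for r in recips)

print(miller(2, 2, Fraction(2, 3)))    # the (113) face worked out above
print(miller(Fraction(1, 2), inf))     # line 2 of the 2-D example: (2, 0)
print(miller(1, Fraction(-1, 2)))      # line 3: (1, -2), written (1 2-bar)
```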
Why do Miller indices use mostly small numbers?
It is remarkable that the faces that bound real crystals generally have small Miller indices. The low values for the indices suggest that a given lattice plane has a high density of lattice points per unit area, a logical consequence of each molecule being surrounded and held by its closely-packed neighbors. In the 2-dimensional projection below, compare the facial lattice-point density in the (11) plane with that of the (31) plane.
Crystals with a single long unit-cell axis tend to form planes with the long axis normal to the plane, so that the major faces of the crystal are planes containing the short-axis translations. Similarly, crystals with a single, short unit-cell axis tend to be needles. The main faces, on the sides of the needles, contain the short lattice translation — a high density of lattice points. In general, it is found that crystals have linear dimensions that mirror the reciprocals of the lattice parameters.
Factors affecting crystal growth habits
Faces having a lower density of lattice points (as in the (31) face shown above) can acquire new layers more rapidly, and thus grow more rapidly than faces having a high lattice-point density. The faces that can potentially develop in a crystal are determined entirely by the symmetry properties of the underlying lattice. But the faces that actually develop under specific conditions — and thus the overall shape of the crystal — is determined by the relative rates of growth of the various faces. The slower the growth rate, the larger the face.
This relation can be understood by noting that faces that grow normal to shorter unit cell axes (as in the needle-shaped crystal shown above) present a larger density of lattice points to the surface (that is, more points per unit surface area.) This means that more time is required for diffusion of enough new particles to build out a new layer on such a surface.
An interesting experiment is to grind a large crystal of salt into a spherical shape and immerse it in a saturated solution of sodium chloride. At first, the most disturbed and exposed parts on the surface dissolve, revealing a large variety of underlying plane faces. As growth resumes, the smaller of these are rapidly replaced by larger faces. Eventually, the fast-growing faces eliminate themselves and the high-lattice point density faces that correspond to the sides of the cube win out.
In addition to these structural effects, the conditions under which a crystal is grown can affect its habit. Temperature, degree of supersaturation, nature of the solvent all have their effects, and these may affect the growth of different faces in different ways. The presence of impurities in the solution can radically alter the habit of a crystal, as seen in the following table for the growth of sodium chloride:
Table: Impurity Effect on the Growth of Sodium Chloride Crystals
no impurity Fe(CN)64– formamide Pb2+, Cd2+ polyvinyl alcohol
cubes dendrites large crystals needles
These effects presumably come about because these substances preferentially adsorb to certain faces, impeding their growth. | textbooks/chem/General_Chemistry/Chem1_(Lower)/07%3A_Solids_and_Liquids/7.06%3A_Introduction_to_Crystals.txt |
Learning Objectives
Make sure you thoroughly understand the following essential ideas which have been presented above.
• What is an ionic solid, what are its typical physical properties, and what kinds of elements does it contain?
• Define the lattice energy of an ionic solid in terms of the energetic properties of its component elements.
• Make a rough sketch that describes the structure of solid sodium chloride.
• Describe the role that the relative ionic radii play in contributing to the stability of ionic solids.
• Give examples of some solids that can form when ionic solutions are evaporated, but which do not fall into the category of "ionic" solids.
In this section we deal mainly with a very small but important class of solids that are commonly regarded as composed of ions. We will see how the relative sizes of the ions determine the energetics of such compounds. And finally, we will point out that not all solids that are formally derived from ions can really be considered "ionic" at all.
Ionic Solids
The idealized ionic solid consists of two interpenetrating lattices of oppositely-charged point charges that are held in place by a balance of coulombic forces. But because real ions occupy space, no such "perfect" ionic solid exists in nature. Nevertheless, this model serves as a useful starting point for understanding the structure and properties of a small group of compounds between elements having large differences in electronegativity.
Chemists usually apply the term "ionic solid" to binary compounds of the metallic elements of Groups 1-2 with one of the halogen elements or oxygen. As can be seen from the diagram, the differences in electronegativity between the elements of Groups 1-2 and those of Group 17 (as well as oxygen in Group 16) are sufficiently great that the binding in these solids is usually dominated by Coulombic forces and the crystals can be regarded as built up by aggregation of oppositely-charged ions.
Sodium Chloride (rock-salt) Structure
The most well known ionic solid is sodium chloride, also known by its geological names as rock-salt or halite. We can look at this compound in both structural and energetic terms.
Structurally, each ion in sodium chloride is surrounded and held in tension by six neighboring ions of opposite charge. The resulting crystal lattice is of a type known as simple cubic, meaning that the lattice points are equally spaced in all three dimensions and all cell angles are 90°.
In Figure $2$, we have drawn two imaginary octahedra centered on ions of different kinds and extending partially into regions outside of the diagram. (We could equally well have drawn them at any of the lattice points, but show only two in order to reduce clutter.) Our object in doing this is to show that each ion is surrounded by six other ions of opposite charge; this is known as (6,6) coordination. Another way of stating this is that each ion resides in an octahedral hole within the cubic lattice.
How can one sodium ion surrounded by six chloride ions (or vice versa) be consistent with the simplest formula NaCl? The answer is that each of those six chloride ions also sits at the center of its own octahedron defined by another six sodium ions. You might think that this corresponds to Na6Cl6, but note that the central sodium ion shown in the diagram can claim only a one-sixth share of each of its chloride ion neighbors, so the formula NaCl is not just the simplest formula, but correctly reflects the 1:1 stoichiometry of the compound. But of course, as in all ionic structures, there are no distinguishable "molecular" units that correspond to the NaCl simplest formula. Bear in mind that the large amount of empty space in diagrams depicting a crystal lattice structure can be misleading: the ions are really in direct contact with each other to the extent that this is geometrically possible.
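The same site-counting bookkeeping can be carried out on the conventional cubic cell of NaCl (a standard result, though the cell itself is not drawn here): chloride ions occupy the 8 corners and 6 face centers, while sodium ions occupy the 12 edge midpoints and the body center, giving four formula units of NaCl per cell:

$\underbrace{8 \times \tfrac{1}{8} + 6 \times \tfrac{1}{2}}_{4\ \ce{Cl^-}} \qquad \underbrace{12 \times \tfrac{1}{4} + 1}_{4\ \ce{Na^+}}$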
Sodium Chloride Energetics
Sodium chloride, like virtually all salts, is a more energetically favored configuration of sodium and chlorine than are these elements themselves; in other words, the reaction
$Na_{(s)} + ½Cl_{2(g)} \rightarrow NaCl_{(s)}$
is accompanied by a release of energy in the form of heat. How much heat, and why? To help us understand, we can imagine the formation of one mole of sodium chloride from its elements proceeding in these hypothetical steps in which we show the energies explicitly:
Step 1: Atomization of sodium (breaking one mole of metallic sodium into isolated sodium atoms)
$\ce{ Na(s) + 108 kJ → Na(g)} \label{Step1}$
Step 2: Same thing with chlorine. This requires more energy because it involves breaking a covalent bond.
$\ce{ ½Cl2(g) + 127\, kJ → Cl(g)} \label{Step2}$
Step 3: We strip an electron from one mole of sodium atoms (this costs a lot of energy!)
$\ce{ Na(g) + 496\, kJ → Na^{+}(g) + e^{–}} \label{Step3}$
Step 4: Feeding these electrons to the chlorine atoms gives most of this energy back.
$\ce{ Cl(g) + e^{–} → Cl^{–}(g) + 348\, kJ}\label{Step4}$
Step 5: Finally, we bring one mole of the ions together to make the crystal lattice — with a huge release of energy.
$\ce{ Na^{+}(g) + Cl^{–}(g) → NaCl(s) + 787\, kJ} \label{Step5}$
If we add all of these equations together, we get
$\ce{Na(s) + 1/2Cl2(g) → NaCl(s)} + 404\; kJ$
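Adding up the energy terms confirms the net release: the two energy-producing steps supply (348 + 787) = 1135 kJ, while the three energy-consuming steps require (108 + 127 + 496) = 731 kJ, leaving

$1135\ \text{kJ} - 731\ \text{kJ} = 404\ \text{kJ}$

released per mole of NaCl formed.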
In other words, the formation of solid sodium chloride from its elements is highly exothermic. As this energy is released in the form of heat, it spreads out into the environment and will remain unavailable to push the reaction in reverse. We express this by saying that "sodium chloride is more stable than its elements".
Looking at the equations above, you can see that Equation \ref{Step5} constitutes the big payoff in energy. The 787 kJ/mol noted there is known as the NaCl lattice energy. Its large magnitude should be no surprise, given the strength of the coulombic force between ions of opposite charge.
It turns out that it is the lattice energy that renders the gift of stability to all ionic solids. Note that this lattice energy, while due principally to coulombic attraction between each ion and its six nearest neighbors, is really the sum of all the interactions within the crystal. Lattice energies cannot be measured directly, but they can be estimated fairly well from the energies of the other processes described in the steps immediately above.
How Geometry and Periodic Properties Interact
The most energetically stable arrangement of solids made up of identical molecular units (as in the noble gas elements and pure metals) are generally those in which there is a minimum of empty space; these are known as close-packed structures, and there are several kinds. In the case of ionic solids of even the simplest 1:1 stoichiometry, the positive and negative ions usually differ so much in size that packing is often much less efficient. This may cause the solid to assume lattice geometries that differ from the one illustrated above for sodium chloride.
By way of illustration, consider the structure of cesium chloride (the spelling cæsium is also used), CsCl. The radius of the Cs+ ion is 168 pm, compared to 98 pm for Na+, so it cannot possibly fit into the octahedral hole of a simple cubic lattice of chloride ions. The CsCl lattice therefore assumes a different arrangement.
Figure $3$ focuses on two of these cubic lattice elements whose tops and bottoms are shaded for clarity. It should be easy to see that each cesium ion now has eight nearest-neighbor chloride ions. Each chloride ion is also surrounded by eight cesium ions, so all the lattice points are still geometrically equivalent. We therefore describe this structure as having (8,8) coordination.
The two kinds of lattice arrangements exemplified by NaCl ("rock salt") and CsCl are found in a large number of other 1:1 ionic solids, and these names are used generically to describe the structures of these other compounds. There are of course many other fundamental lattice arrangements (not all of them cubic), but the two we have described here are sufficient to illustrate the point that the radius ratio (the ratio of the radii of the positive to the negative ion) plays an important role in the structures of simple ionic solids.
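For reference (these classical "radius-ratio rules" follow from simple geometry and are not derived in this text), octahedral (6,6) coordination is geometrically favorable roughly when

$0.414 < \dfrac{r_+}{r_-} < 0.732$

while ratios above 0.732 favor the (8,8) cubic arrangement. With $r_{\ce{Cl^-}} \approx 181$ pm, the ratio is about 0.54 for NaCl and about 0.93 for CsCl, consistent with the structures described above.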
The Alkali Halides
The interaction of the many atomic properties that influence ionic binding is nicely illustrated by looking at a series of alkali halides, especially those involving extreme differences in atomic radii. We will start with the one you already know very well.
Sodium chloride - NaCl ("rock-salt") mp/bp 801/1413 °C; coordination (6,6)
Lithium Fluoride - LiF - mp/bp 846/1676 °C, rock-salt lattice structure (6,6).
Tiny-tiny makes strong-strong! This is the most "ionic" of the alkali halides, with the largest lattice energy and highest melting and boiling points. The small size of these ions (and consequent high charge densities) together with the large electronegativity difference between the two elements places a lot of electronic charge between the atoms.
Even in this highly ionic solid, the electron that is "lost" by the lithium atom turns out to be closer to the Li nucleus than when it resides in the 2s shell of the neutral atom.
Cesium Fluoride, CsF - mp/bp 703/1231 °C, (8,8) coordination.
With five shells of electrons shielding its nucleus, the Cs+ ion with its low charge density resembles a big puff-ball which can be distorted by the highly polarizing fluoride ion. The resulting ion-induced dipoles account for much of the lattice energy here.
The reverse of this would be a tiny metal ion trying to hold onto relatively huge iodide ions, as in lithium iodide.
Lithium iodide, LiI - mp/bp 745/1410 °C.
Negative ions can make even bigger puff-balls. The tiny lithium ion can't get very close to any of the iodides to generate a very strong coulombic binding, but does polarize them to create an ion-induced dipole component. It does not help that the negative ions are in contact with each other. The structural geometry is the same (6,6) coordination as NaCl.
Cesium iodide, CsI - mp/bp 626/1280 °C.
Even with the (8,8) coordination afforded by the CsCl structure, this is a pretty sorry combination owing to the low charge densities. The weakness of coulombic- compared to van der Waals interactions makes this the least-"ionic" of all the alkali halide solids.
Conclusion: Many of the alkali halide solids are not all that "ionic" in the sense that coulombic forces are the predominant actors; in many, such as the CsI illustrated above, ion-induced dipole forces are more important.
Some Properties of Ionic Solids
As noted above, ionic solids are generally hard and brittle. Both of these properties reflect the strength of the coulombic force. Hardness measures resistance to deformation. Because the ions are tightly bound to their oppositely-charged neighbors, a mechanical force exerted on one part of the solid is resisted by the electrostatic forces operating over an extended volume of the crystal.
But by applying sufficient force, one layer of ions can be made to slip over another; this is the origin of brittleness. The slippage quickly propagates along a plane of the crystal (more readily in some directions than in others), bringing ions of like charge into proximity, weakening the attraction between the layers, and leading to physical cleavage. Because the "ions" in ionic solids lack mobility, the solids themselves are electrical insulators; they conduct only when melted or dissolved, which frees the ions to move.
Not all ion-derived solids are "ionic".
Even within the alkali halides, the role of coulombic attraction diminishes as the ions become larger and more polarizable or differ greatly in radii. This is especially true of the anions, which tend to be larger and whose electron clouds are more easily distorted. In solids composed of polyatomic ions such as (NH4)2SO4, Sr(ClO4)2, and (NH4)2CO3, ion-dipole and ion-induced dipole forces may actually be stronger than the coulombic force. Higher ionic charges strengthen the coulombic attraction, especially if the ions are relatively small. This is especially evident in the extremely high melting points of the Group 2 and higher oxides:
MgO (magnesia): 2830 °C
CaO (lime): 2610 °C
SrO (strontia): 2430 °C
Al2O3 (alumina): 2050 °C
ZrO2 (zirconia): 2715 °C
These substances are known as refractories, meaning that they retain their essential properties at high temperatures. Magnesia, for example, is used to insulate electric heating elements and, in the form of fire bricks, to line high-temperature furnaces. No boiling points have been observed for these compounds; on further heating, they simply dissociate into their elements. Their crystal structures can be very complex, and some (notably Al2O3) can have several solid forms. Even in the most highly ionic solids there is some electron sharing, so the idea of a “pure” ionic bond is an abstraction.
Many solids that are formally derived from ions cannot really be said to form "ionic" solids at all. For example, anhydrous copper(II) chloride consists of chains of copper atoms, each surrounded by four chlorine atoms in a square arrangement. Neighboring chains are offset so as to create an octahedral coordination of each copper atom. Similar structures are commonly encountered for other salts of transition metals. Similarly, most oxides and sulfides of metals beyond Group 2 tend to have structures dominated by interactions other than simple ion-ion attraction.
Aluminum Halides
The trihalides of aluminum offer another example of the dangers of assuming ionic character in solids that are formally derived from ions. Aqueous solutions of what we assume to be AlF3, AlCl3, AlBr3, and AlI3 all exhibit the normal properties of ionic solutions (they are electrically conductive, for example), but the solids are quite different: the melting point of AlF3 is 1290°C, suggesting that it is indeed ionic. But AlCl3 melts at 192°C — hardly consistent with ionic bonding — and the other two halides are also rather low-melting. Structural studies show that when AlCl3 vaporizes or dissolves in a non-polar solvent, it forms a dimer, Al2Cl6. The two other halides exist only as dimers in all states.
The structural formula of the Al2Cl6 molecule shows that the aluminum atoms are bonded to four chlorines, two of which are shared between the two metal atoms. The arrows represent coordinate covalent bonds, in which both bonding electrons come from the same atom (chlorine in this case).
As shown at the right above, the aluminum atoms can be considered to be located at the centers of two tetrahedra that possess one edge in common.
Learning Objectives
Make sure you thoroughly understand the following essential ideas:
• The difference between square and hexagonal packing in two dimensions.
• The definition and significance of the unit cell.
• Sketch the three Bravais lattices of the cubic system, and calculate the number of atoms contained in each of these unit cells.
• Show how alternative ways of stacking three close-packed layers can lead to the hexagonal or cubic close packed structures.
• Explain the origin and significance of octahedral and tetrahedral holes in stacked close-packed layers, and show how they can arise.
Close-Packing of Identical Spheres
Crystals are of course three-dimensional objects, but we will begin by exploring the properties of arrays in two-dimensional space. This will make it easier to develop some of the basic ideas without the added complication of getting you to visualize in 3-D — something that often requires a bit of practice. Suppose you have a dozen or so marbles. How can you arrange them in a single compact layer on a table top? Obviously, they must be in contact with each other in order to minimize the area they cover. It turns out that there are two efficient ways of achieving this:
The essential difference here is that any marble within the interior of the square-packed array is in contact with four other marbles, while this number rises to six in the hexagonal-packed arrangement. It should also be apparent that the latter scheme covers a smaller area (contains less empty space) and is therefore a more efficient packing arrangement. If you are good at geometry, you can show that square packing covers 78 percent of the area, while hexagonal packing yields 91 percent coverage.
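If you would rather check the arithmetic than do the geometry, both coverage figures follow from the area of one circle divided by the area of its unit cell. A short Python sketch:

```python
# Packing fractions of the two 2-D arrangements described above.
import math

# Square packing: one circle of radius r per 2r x 2r square cell.
square = math.pi / 4

# Hexagonal packing: one circle per rhombic cell of side 2r and 60-degree
# angle, whose area is (2r)**2 * sin(60 deg) = 2*sqrt(3)*r**2.
hexagonal = math.pi / (2 * math.sqrt(3))

print(f"square packing:    {square:.1%}")     # 78.5%
print(f"hexagonal packing: {hexagonal:.1%}")  # 90.7%
```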
If we go from the world of marbles to that of atoms, which kind of packing would the atoms of a given element prefer?
If the atoms are identical and are bound together mainly by dispersion forces which are completely non-directional, they will favor a structure in which as many atoms can be in direct contact as possible. This will, of course, be the hexagonal arrangement.
Directed chemical bonds between atoms have a major effect on the packing. The version of hexagonal packing shown at the right occurs in the form of carbon known as graphite, which forms two-dimensional sheets. Each carbon atom within a sheet is bonded to three other carbon atoms. The result is just the basic hexagonal structure with some atoms missing.
The coordination number of 3 reflects the sp2-hybridization of carbon in graphite, resulting in plane-trigonal bonding and thus the sheet structure. Adjacent sheets are bound by weak dispersion forces, allowing the sheets to slip over one another and giving rise to the lubricating and flaking properties of graphite.
Lattices
The underlying order of a crystalline solid can be represented by an array of regularly spaced points that indicate the locations of the crystal's basic structural units. This array is called a crystal lattice. Crystal lattices can be thought of as being built up from repeating units containing just a few atoms. These repeating units act much as a rubber stamp: press it on the paper, move ("translate") it by an amount equal to the lattice spacing, and stamp the paper again.
The gray circles represent a square array of lattice points.
The orange square is the simplest unit cell that can be used to define the 2-dimensional lattice.
Building out the lattice by moving ("translating") the unit cell in a series of steps generates the entire array.
Although real crystals do not actually grow in this manner, this process is conceptually important because it allows us to classify a lattice type in terms of the simple repeating unit that is used to "build" it. We call this shape the unit cell. Any number of primitive shapes can be used to define the unit cell of a given crystal lattice. The one that is actually used is largely a matter of convenience, and it may contain a lattice point in its center, as you see in two of the unit cells shown here. In general, the best unit cell is the simplest one that is capable of building out the lattice.
Shown above are unit cells for the close-packed square and hexagonal lattices we discussed near the start of this lesson. Although we could use a hexagon for the second of these lattices, the rhombus is preferred because it is simpler.
Notice that in both of these lattices, the corners of the unit cells are centered on a lattice point. This means that an atom or molecule located on this point in a real crystal lattice is shared with its neighboring cells. As is shown more clearly here for a two-dimensional square-packed lattice, a single unit cell can claim "ownership" of only one-quarter of each molecule, and thus "contains" 4 × ¼ = 1 molecule.
The unit cell of the graphite form of carbon is also a rhombus, in keeping with the hexagonal symmetry of this arrangement. Notice that to generate this structure from the unit cell, we need to shift the cell in both the x- and y- directions in order to leave empty spaces at the correct spots. We could alternatively use regular hexagons as the unit cells, but the x+y shifts would still be required, so the simpler rhombus is usually preferred. As you will see in the next section, the empty spaces within these unit cells play an important role when we move from two- to three-dimensional lattices.
Cubic crystals
In order to keep this lesson within reasonable bounds, we are limiting it mostly to crystals belonging to the so-called cubic system. In doing so, we can develop the major concepts that are useful for understanding more complicated structures (as if there are not enough complications in cubics alone!) But in addition, it happens that cubic crystals are very commonly encountered; most metallic elements have cubic structures, and so does ordinary salt, sodium chloride.
We usually think of a cubic shape in terms of the equality of its edge lengths and the 90° angles between its sides, but there is another way of classifying shapes that chemists find very useful. This is to look at what geometric transformations (such as rotations around an axis) we can perform that leave the appearance unchanged. For example, you can rotate a cube 90° around an axis perpendicular to any pair of its six faces without making any apparent change to it. We say that the cube possesses three mutually perpendicular four-fold rotational axes, abbreviated C4 axes. But if you think about it, a cube can also be rotated around the axes that extend between opposite corners; in this case, it takes three 120° rotations to go through a complete circle, so these axes (also four in number) are three-fold or C3 axes.
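These rotational symmetries are easy to verify concretely. In the Python sketch below (added as an illustration), the cube's eight vertices are taken as the points (±1, ±1, ±1); a 90° turn about the z axis sends (x, y, z) to (−y, x, z), and a 120° turn about the body diagonal through (1, 1, 1) cyclically permutes the coordinates. Both operations map the vertex set onto itself, exactly as the text claims.

```python
# Checking the C4 and C3 rotational axes of a cube centered at the origin.
from itertools import product

vertices = set(product((-1, 1), repeat=3))   # the 8 corners (+/-1, +/-1, +/-1)

def c4(v):
    """90-degree rotation about the z (face) axis: (x, y, z) -> (-y, x, z)."""
    x, y, z = v
    return (-y, x, z)

def c3(v):
    """120-degree rotation about the (1,1,1) body diagonal: cycle the axes."""
    x, y, z = v
    return (z, x, y)

print({c4(v) for v in vertices} == vertices)  # True: the cube looks unchanged
print({c3(v) for v in vertices} == vertices)  # True
```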
Cubic crystals belong to one of the seven crystal systems whose lattice points can be extended indefinitely to fill three-dimensional space and which can be constructed by successive translations (movements) of a primitive unit cell in three dimensions. As we will see below, the cubic system, as well as some of the others, can have variants in which additional lattice points can be placed at the center of the unit or at the center of each face.
The three types of cubic lattices
The three Bravais lattices which form the cubic crystal system are shown here.
Structural examples of all three are known, with body- and face-centered (BCC and FCC) being much more common; most metallic elements crystallize in one of these latter forms. But although the simple cubic structure is uncommon by itself, it turns out that many BCC and FCC structures composed of ions can be regarded as interpenetrating combinations of two simple cubic lattices, one made up of positive ions and the other of negative ions. Notice that only the FCC structure, which we will describe below, is a close-packed lattice within the cubic system.
Close-packed lattices in three dimensions
Close-packed lattices allow the maximum amount of interaction between atoms. If these interactions are mainly attractive, then close-packing usually leads to more energetically stable structures. These lattice geometries are widely seen in metallic, atomic, and simple ionic crystals.
As we pointed out above, hexagonal packing of a single layer is more efficient than square-packing, so this is where we begin. Imagine that we start with the single layer of green atoms shown below. We will call this the A layer. If we place a second layer of atoms (orange) on top of the A-layer, we would expect the atoms of the new layer to nestle in the hollows in the first layer. But if all the atoms are identical, only some of these void spaces will be accessible.
In the diagram on the left, notice that there are two classes of void spaces between the A atoms; one set (colored blue) has a vertex pointing up, while the other set (not colored) has down-pointing vertices. Each void space constitutes a depression in which atoms of a second layer (the B-layer) can nest. The two sets of void spaces are completely equivalent, but only one of these sets can be occupied by a second layer of atoms whose size is similar to those in the bottom layer. In the illustration on the right above we have arbitrarily placed the B-layer atoms in the blue voids, but could just as well have selected the white ones.
Two choices for the third layer lead to two different close-packed lattice types
Now consider what happens when we lay down a third layer of atoms. These will fit into the void spaces within the B-layer. As before, there are two sets of these positions, but unlike the case described above, they are not equivalent.
The atoms in the third layer are represented by open blue circles in order to avoid obscuring the layers underneath. In the illustration on the left, this third layer is placed on the B-layer at locations that are directly above the atoms of the A-layer, so our third layer is just another A layer. If we add still more layers, the vertical sequence A-B-A-B-A-B-A... repeats indefinitely.
In the diagram on the right above, the blue atoms have been placed above the white (unoccupied) void spaces in layer A. Because this third layer is displaced horizontally (in our view) from layer A, we will call it layer C. As we add more layers of atoms, the sequence of layers is A-B-C-A-B-C-A-B-C..., so we call it ABC packing.
For the purposes of clarity, only three atoms of the A and C layers are shown in these diagrams. But in reality, each layer consists of an extended hexagonal array; the two layers are simply displaced from one another.
These two diagrams, showing exploded views of the vertical stacking, further illustrate the rather small fundamental difference between these two arrangements — but, as you will see below, they have widely divergent structural consequences. Note the opposite orientations of the A and C layers.
The hexagonal close-packed structure
The HCP stacking shown on the left just above takes us out of the cubic crystal system into the hexagonal system, so we will not say much more about it here except to point out that each atom has 12 nearest neighbors: six in its own layer, and three in each of the layers above and below it.
The cubic close-packed structure
Below we reproduce the FCC structure that was shown above.
You will notice that the B-layer atoms form a hexagon, but this is a cubic structure. How can this be? The answer is that the stacking direction of the close-packed layers is inclined with respect to the faces of the cube; it coincides with one of the three-fold axes that pass through opposite corners. It requires a bit of study to see the relationship, and we have provided two views to help you. The one on the left shows the cube in the normal isometric projection; the one on the right looks down upon the top of the cube at a slightly inclined angle.
Both the CCP and HCP structures fill 74 percent of the available space when the atoms have the same size. You should see that the two shaded planes cutting along diagonals within the interior of the cube contain atoms of different colors, meaning that they belong to different layers of the CCP stack. Each plane contains three atoms from the B layer and three from the C layer, consistent with the three-fold (C3) symmetry that a cubic lattice must possess along its body diagonals.
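The 74 percent figure, and the corresponding values for the other two cubic lattices, can be checked with a few lines of Python; the key input in each case is where the spheres touch, which fixes the cell edge in units of the sphere radius.

```python
# Packing fractions of the three cubic lattices.
import math

def packing_fraction(atoms_per_cell, edge_in_radii):
    """Fraction of a cubic cell filled by spheres of unit radius."""
    return atoms_per_cell * (4 / 3) * math.pi / edge_in_radii**3

# Where the spheres touch fixes the edge length a (in sphere radii r):
#   simple cubic: contact along the edge,          a = 2r
#   BCC:          contact along the body diagonal, a*sqrt(3) = 4r
#   FCC (CCP):    contact along a face diagonal,   a*sqrt(2) = 4r
print(f"simple cubic: {packing_fraction(1, 2):.1%}")               # 52.4%
print(f"BCC:          {packing_fraction(2, 4/math.sqrt(3)):.1%}")  # 68.0%
print(f"FCC (CCP):    {packing_fraction(4, 4/math.sqrt(2)):.1%}")  # 74.0%
```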
The FCC unit cell
The figure below shows the face-centered cubic unit cell of a cubic close-packed lattice.
How many atoms are contained in a unit cell? Each corner atom is shared with eight adjacent unit cells and so a single unit cell can claim only 1/8 of each of the eight corner atoms. Similarly, each of the six atoms centered on a face is only half-owned by the cell. The grand total is then (8 × 1/8) + (6 × ½) = 4 atoms per unit cell.
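The same bookkeeping applies to any cubic cell, and it is compact enough to express as a one-line rule: corner atoms count 1/8, edge atoms 1/4, face atoms 1/2, and interior atoms count fully. A minimal Python sketch:

```python
# Atoms "owned" by a cubic unit cell, from the sharing rules described above.
def atoms_per_cell(corners=0, edges=0, faces=0, body=0):
    return corners / 8 + edges / 4 + faces / 2 + body

print(atoms_per_cell(corners=8))           # simple cubic: 1.0
print(atoms_per_cell(corners=8, body=1))   # body-centered: 2.0
print(atoms_per_cell(corners=8, faces=6))  # face-centered: 4.0
```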
Interstitial Void Spaces
The atoms in each layer of these close-packing stacks sit in depressions in the layer below. As we explained above, these void spaces are not completely filled. (It is geometrically impossible for more than two identical spheres to be in contact at a single point.) We will see later that these interstitial void spaces can sometimes accommodate additional (but generally smaller) atoms or ions.
If we look down on top of two layers of close-packed spheres, we can pick out two classes of void spaces which we call tetrahedral and octahedral holes.
Tetrahedral holes
If we direct our attention to a region in the above diagram where a single atom is in contact with the three atoms in the layer directly below it, the void space is known as a tetrahedral hole. A similar space will be found between this single atom and the three atoms (not shown) that would lie on top of it in an extended lattice. Any interstitial atom that might occupy this site will interact with the four atoms surrounding it, so this is also called a four-coordinate interstitial space.
Don't be misled by this name; the boundaries of the void space are spherical sections, not tetrahedra. The tetrahedron is just an imaginary construction whose four corners point to the centers of the four atoms that are in contact.
Octahedral holes
Similarly, when two sets of three trigonally-oriented spheres are in close-packed contact, they will be oriented 60° apart and the centers of the spheres will define the six corners of an imaginary octahedron centered in the void space between the two layers, so we call these octahedral holes or six-coordinate interstitial sites. Octahedral sites are larger than tetrahedral sites.
An octahedron has six corners and eight faces. We usually draw octahedra as a double square pyramid standing on one corner (left), but in order to visualize the octahedral shape in a close-packed lattice, it is better to think of the octahedron as lying on one of its faces (right).
Each sphere in a close-packed lattice is associated with one octahedral site and two tetrahedral sites. This can be seen in this diagram that shows the central atom in the B layer in alignment with the hollows in the C and A layers above and below.
The face-centered cubic unit cell contains a single octahedral hole within itself, but octahedral holes shared with adjacent cells exist at the centers of each edge. Each of these twelve edge-located sites is shared with four adjacent cells, and thus contributes (12 × ¼) = 3 sites to the cell. Added to the single hole contained in the middle of the cell, this makes a total of 4 octahedral sites per unit cell, the same as the number of atoms in the cell that we calculated above.
Common cubic close-packed structures
It can be shown from elementary trigonometry that an atom will fit exactly into an octahedral site if its radius is 0.414 as great as that of the host atoms. The corresponding figure for the smaller tetrahedral holes is 0.225.
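Where do these two numbers come from? Here is a sketch of the geometry. In an octahedral site, four of the six host spheres (radius $R$) in mutual contact form a square of edge $2R$ around the hole, so the distance from the center of the hole to the center of each of these spheres is half the square's diagonal:

\[ R + r = \sqrt{2}\,R \quad\Rightarrow\quad \frac{r}{R} = \sqrt{2} - 1 \approx 0.414 \]

For a tetrahedral site, the four host spheres lie at the corners of a regular tetrahedron of edge $2R$, and the center-to-corner distance of a regular tetrahedron of edge $a$ is $a\sqrt{6}/4$:

\[ R + r = \frac{\sqrt{6}}{2}\,R \quad\Rightarrow\quad \frac{r}{R} = \frac{\sqrt{6}}{2} - 1 \approx 0.225 \]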
Many pure metals and compounds form face-centered cubic (cubic close- packed) structures. The existence of tetrahedral and octahedral holes in these lattices presents an opportunity for "foreign" atoms to occupy some or all of these interstitial sites. In order to retain close-packing, the interstitial atoms must be small enough to fit into these holes without disrupting the host CCP lattice. When these atoms are too large, which is commonly the case in ionic compounds, the atoms in the interstitial sites will push the host atoms apart so that the face-centered cubic lattice is somewhat opened up and loses its close-packing character.
The rock-salt structure
Alkali halides that crystallize with the "rock-salt" structure exemplified by sodium chloride can be regarded either as a FCC structure of one kind of ion in which the octahedral holes are occupied by ions of opposite charge, or as two interpenetrating FCC lattices made up of the two kinds of ions. The two shaded octahedra illustrate the identical coordination of the two kinds of ions; each atom or ion of a given kind is surrounded by six of the opposite kind, resulting in a coordination expressed as (6:6).
How many NaCl units are contained in the unit cell? If we ignore the atoms that were placed outside the cell in order to construct the octahedra, you should be able to count fourteen "orange" atoms and thirteen "blue" ones. But many of these are shared with adjacent unit cells.
An atom at the corner of the cube is shared by eight adjacent cubes, and thus makes a 1/8 contribution to any one cell. Similarly, the center of an edge is common to four other cells, and an atom centered in a face is shared with two cells. Taking all this into consideration, you should be able to confirm the following tally showing that there are four AB units in a unit cell of this kind.
Orange: 8 at corners (8 × 1/8 = 1) and 6 at face centers (6 × ½ = 3), total = 4
Blue: 12 at edge centers (12 × ¼ = 3) and 1 at body center (= 1), total = 4
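As a quick cross-check of this tally, the same sharing rules used earlier for single atoms can be applied to each ion sublattice separately (a Python sketch):

```python
# Ion tally for the rock-salt unit cell, using the standard sharing rules.
def owned(corners=0, edges=0, faces=0, body=0):
    return corners / 8 + edges / 4 + faces / 2 + body

orange = owned(corners=8, faces=6)  # one kind of ion: corners + face centers
blue = owned(edges=12, body=1)      # the other kind: edge centers + body center
print(orange, blue)                 # 4.0 4.0 -> four NaCl units per cell
```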
If we take into consideration the actual sizes of the ions (Na+ = 116 pm, Cl– = 167 pm), it is apparent that neither ion will fit into the octahedral holes of a CCP lattice composed of the other ion, so the actual structure of NaCl is somewhat expanded beyond the close-packed model.
The space-filling model on the right depicts a face-centered cubic unit cell of chloride ions (purple), with the sodium ions (green) occupying the octahedral sites.
The zinc-blende structure: using some tetrahedral holes
Since there are two tetrahedral sites for every atom in a close-packed lattice, we can have binary compounds of 1:1 or 1:2 stoichiometry depending on whether half or all of the tetrahedral holes are occupied. Zinc-blende is the mineralogical name for zinc sulfide, ZnS. An impure form known as sphalerite is the major ore from which zinc is obtained.
This structure consists essentially of a FCC (CCP) lattice of sulfur atoms (orange) (equivalent to the lattice of chloride ions in NaCl) in which zinc ions (green) occupy half of the tetrahedral sites. As with any FCC lattice, there are four atoms of sulfur per unit cell, and the four zinc atoms are totally contained in the unit cell. Each atom in this structure has four nearest neighbors, and is thus tetrahedrally coordinated.
It is interesting to note that if all the atoms are replaced with carbon, this would correspond to the diamond structure.
The fluorite structure: all tetrahedral sites occupied
Fluorite, CaF2, having twice as many fluoride ions as calcium ions, makes use of all eight tetrahedral holes in the CCP lattice of calcium ions (orange) depicted here. To help you understand this structure, we have shown some of the fluoride sites in the next cell on the right; you can see that the calcium ion at A is surrounded by eight fluoride ions, and this is of course the case for all of the calcium sites. Since each fluoride ion has four nearest-neighbor calcium ions, the coordination in this structure is described as (8:4).
Although the radii of the two ions (F– = 117 pm, Ca2+ = 126 pm) do not allow true close packing, they are similar enough that one could just as well describe the structure as a simple cubic lattice of fluoride ions with calcium ions occupying half of the cubic interstices.
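The link between hole-filling and stoichiometry is just counting: a CCP host cell contains 4 host atoms and 8 tetrahedral holes, so the guest-to-host ratio follows directly from the fraction of holes occupied. A small Python sketch covering the two cases discussed above:

```python
# Stoichiometry from tetrahedral-hole filling in a CCP host lattice.
from fractions import Fraction

HOSTS_PER_CELL = 4      # atoms in a face-centered cubic cell
T_HOLES_PER_CELL = 8    # two tetrahedral holes per host atom

def guest_to_host(fraction_filled):
    return Fraction(fraction_filled) * T_HOLES_PER_CELL / HOSTS_PER_CELL

print(guest_to_host(Fraction(1, 2)))  # 1 -> 1:1, as in zinc blende (ZnS)
print(guest_to_host(1))               # 2 -> 2:1, as in fluorite (CaF2)
```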
Simple- and body-centered cubic structures
In Section 4 we saw that the only cubic lattice that can allow close packing is the face-centered cubic structure. The simplest of the three cubic lattice types, the simple cubic lattice, lacks the hexagonally-arranged layers that are required for close packing. But as shown in this exploded view, the void space between the two square-packed layers of this cell constitutes an eight-coordinate (cubic) hole that can accommodate another atom, yielding a packing arrangement that in favorable cases can approximate true close-packing. Each second-layer B atom (blue) resides within the unit cell defined by the A layers above and below it.
The A and B atoms can be of the same kind or they can be different. If they are the same, we have a body-centered cubic lattice. If they are different, and especially if they are oppositely-charged ions (as in the CsCl structure), there are size restrictions: if the B atom is too large to fit into the interstitial space, or if it is so small that the A layers (which all carry the same electric charge) come into contact without sufficient A-B coulombic attractions, this structural arrangement may not be stable.
The cesium chloride structure
CsCl is the common model for the BCC structure. As with so many other structures involving two different atoms or ions, we can regard the same basic structure in different ways. Thus if we look beyond a single unit cell, we see that CsCl can be represented as two interpenetrating simple cubic lattices in which each atom occupies a cubic hole within the cells of the other lattice.
Learning Objectives
Make sure you thoroughly understand the following essential ideas which have been presented above. It is especially important that you know the precise meanings of all the bold terms in the context of this topic.
• Aside from their high molar masses, how do synthetic polymers differ from ordinary molecular solids?
• Polymers can be classified according to their chemical composition, their physical properties, and their general application. For each of these three categories, name two examples that might be considered when adapting a polymer to a particular end-use.
• Describe the difference between a thermoplastic and a thermoset, and comment on the molecular basis for their different properties, including crystallinity.
• Describe the two general methods of polymer synthesis.
• Name two kinds each of commercially-important synthetic thermoplastics and thermosets, and specify some of their principal uses.
• Name two kinds of commercially-important natural polymers.
• Describe some of the concerns and sources of small-molecule release from polymers.
• What are some of the problems connected with recycling or re-use of polymeric materials?
Plastics and natural materials such as rubber or cellulose are composed of very large molecules called polymers. Polymers are constructed from relatively small molecular fragments known as monomers that are joined together. Wool, cotton, silk, wood and leather are examples of natural polymers that have been known and used since ancient times. This group includes biopolymers such as proteins and carbohydrates that are constituents of all living organisms.
Synthetic polymers, which include the large group known as plastics, came into prominence in the early twentieth century. Chemists' ability to engineer them to yield a desired set of properties (strength, stiffness, density, heat resistance, electrical conductivity) has greatly expanded the many roles they play in the modern industrial economy. This Module deals mostly with synthetic polymers, but will include a synopsis of some of the more important natural polymers. It will close with a summary of some of the very significant environmental problems created by the wide use of plastics.
Polymers and "pure substances"
Let's begin by looking at an artificial polymer that is known to everyone in the form of flexible, transparent plastic bags: polyethylene. It is also the simplest polymer, consisting of random-length (but generally very long) chains made up of two-carbon units.
You will notice some "fuzziness" in the way that the polyethylene structures are represented above. The squiggly lines at the ends of the long structure indicate that the same pattern extends indefinitely. The more compact notation on the right shows the minimal repeating unit enclosed in brackets overprinted with a dash; this means the same thing and is the preferred way of depicting polymer structures.
In most areas of chemistry, a "pure substance" has a definite structure, molar mass, and properties. It turns out, however, that few polymeric substances are uniform in this way. This is especially the case with synthetic polymers, whose molecular weights cover a range of values, as may the sequence, orientation, and connectivity of the individual monomers. So most synthetic polymers are really mixtures rather than pure substances in the ordinary chemical sense of the term. Their molecular weights are typically distributed over a wide range.
Figure \(1\): Polymers
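Polymer chemists summarize such a distribution with two averages: the number-average molar mass $M_n$ (total mass divided by the number of chains) and the weight-average $M_w$ (which weights each chain by its mass). Their ratio measures how broad the distribution is, and equals 1 only for a perfectly uniform sample. The Python sketch below computes both for a hypothetical three-component sample; the chain counts are invented purely for illustration.

```python
# Number- and weight-average molar masses of a hypothetical polymer sample.
# Each entry is (molar mass in g/mol, number of chains with that mass).
sample = [(20_000, 30), (50_000, 50), (100_000, 20)]

n_chains = sum(n for _, n in sample)
total_mass = sum(m * n for m, n in sample)

Mn = total_mass / n_chains                          # number average
Mw = sum(n * m**2 for m, n in sample) / total_mass  # weight average

print(f"Mn = {Mn:,.0f} g/mol")            # 51,000
print(f"Mw = {Mw:,.0f} g/mol")            # ~66,000
print(f"dispersity Mw/Mn = {Mw/Mn:.2f}")  # ~1.30
```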
Don't be misled by chemical formulas that depict polymers such as polyethylene as reasonably straight chains of substituted carbon atoms. Free rotation around C—C bonds allows long polymer molecules to curl up and tangle very much like spaghetti (Figure \(2\)). Thus polymers generally form amorphous solids. There are, however, ways in which certain polymers can be partially oriented.
Classification of polymers
Polymers can be classified in ways that reflect their chemical makeup, or perhaps more importantly, their properties and applications. Many of these factors are strongly interdependent, and most are discussed in much more detail in subsequent sections of this page.
Chemistry
• Nature of the monomeric units
• Average chain length and molecular weight
• Homopolymers (one kind of monomeric unit) or copolymers;
• Chain topology: how the monomeric units are connected
• Presence or absence of cross-branching
• Method of polymerization
Properties
• Density
• Thermal properties — can they soften or melt when heated?
• Degree of crystallinity
• Physical properties such as hardness, strength, machineability.
• Solubility, permeability to gases
Applications
• molded and formed objects ("plastics")
• sheets and films
• elastomers (i.e., elastic polymers such as rubber)
• adhesives
• coatings, paints, inks
• fibers and yarns
Physical properties of polymers
The physical properties of a polymer such as its strength and flexibility depend on:
• chain length - in general, the longer the chains the stronger the polymer;
• side groups - polar side groups (including those that lead to hydrogen bonding) give stronger attraction between polymer chains, making the polymer stronger;
• branching - straight, unbranched chains can pack together more closely than highly branched chains, giving polymers that have higher density, are more crystalline and therefore stronger;
• cross-linking - if polymer chains are linked together extensively by covalent bonds, the polymer is harder and more difficult to melt.
Amorphous and crystalline polymers
The spaghetti-like entanglements of polymer molecules tend to produce amorphous solids, but it often happens that some parts can become sufficiently aligned to produce a region exhibiting crystal-like order, so it is not uncommon for some polymeric solids to consist of a random mixture of amorphous and crystalline regions. As might be expected, shorter and less-branched polymer chains can more easily organize themselves into ordered layers than can long chains. Hydrogen-bonding between adjacent chains also helps, and is very important in fiber-forming polymers, both synthetic (Nylon 6.6) and natural (cotton cellulose).
Pure crystalline solids have definite melting points, but polymers, if they melt at all, exhibit a more complex behavior. At low temperatures, the tangled polymer chains tend to behave as rigid glasses. For example, the natural polymer that we call rubber becomes hard and brittle when cooled to liquid nitrogen temperature. Many synthetic polymers remain in this state to well above room temperature.
The melting of a crystalline compound corresponds to a sudden loss of long-range order; this is the fundamental reason that such solids exhibit definite melting points, and it is why there is no intermediate form between the liquid and the solid states. In amorphous solids there is no long-range order, so there is no melting point in the usual sense. Such solids simply become less and less viscous as the temperature is raised.
In some polymers (known as thermoplastics) there is a fairly definite softening point that is observed when the thermal kinetic energy becomes high enough to allow internal rotation to occur within the bonds and to allow the individual molecules to slide independently of their neighbors, thus rendering them more flexible and deformable. This defines the glass transition temperature, Tg.
Depending on the degree of crystallinity, there will be a higher temperature, the melting point Tm, at which the crystalline regions come apart and the material becomes a viscous liquid. Such liquids can easily be injected into molds to manufacture objects of various shapes, or extruded into sheets or fibers. Other polymers (generally those that are highly cross-linked) do not melt at all; these are known as thermosets. If they are to be made into molded objects, the polymerization reaction must take place within the molds — a far more complicated process. About 20% of the commercially-produced polymers are thermosets; the remainder are thermoplastics.
2 Thermoplastic polymer structures
Homopolymers and heteropolymers
A polymer that is composed of identical monomeric units (as is polyethylene) is called a homopolymer. Heteropolymers are built up from more than one type of monomer; artificial heteropolymers are more commonly known as copolymers.
Copolymerization is an invaluable tool for "tuning" polymers so that they have the right combination of properties for an application. For example, homopolymeric polystyrene is a rigid and very brittle transparent thermoplastic with a glass transition temperature of 97°C. Copolymerizing it with acrylonitrile yields an alternating "SAN" copolymer in which Tg is raised to 107°C, making it useable for transparent drink containers.
Chain topology
Polymers may also be classified as straight-chained or branched, leading to forms such as these:
The monomers can be joined end-to-end, and they can also be cross-linked to provide a harder material:
If the cross-links are fairly long and flexible, adjacent chains can move with respect to each other, producing an elastic polymer or elastomer
Chain configuration and tacticity
In a linear polymer such as polyethylene, rotations around carbon-carbon single bonds can allow the chains to bend or curl up in various ways, resulting in the spaghetti-like mixture of these different conformations we alluded to above. But if one of the hydrogen atoms is replaced by some other entity such as a methyl group, the relative orientations of the individual monomer units that make up a linear section of any carbon chain becomes an important characteristic of the polymer.
Cis-trans isomerism occurs because rotation around carbon-carbon double bonds is not possible — unlike the case for single bonds. Any pair of unlike substituents attached to the two carbons is permanently locked into being on the same side (cis) or opposite sides (trans) of the double bond.
If the carbon chain contains double bonds, then cis-trans isomerism becomes possible, giving rise to two different possible configurations (known as diastereomers) at each unit of the chain. This seemingly small variable can profoundly affect the nature of the polymer. For example, the latex in natural rubber is made mostly of cis-polyisoprene, whereas the trans isomer (known as gutta percha latex) has very different (and generally inferior) properties.
Chirality
The tetrahedral nature of carbon bonding has an important consequence that is not revealed by simple two-dimensional structural formulas: atoms attached to the carbon can be on one side or on the other, and these will not be geometrically equivalent if all four of the groups attached to a single carbon atom are different. Such carbons (and the groups attached to them) are said to be chiral, and can exist in two different three-dimensional forms known as enantiomers.
For an individual carbon atom in a polymer chain, two of its attached groups will ordinarily be the chain segments on either side of the carbon. If the two remaining groups are different (say one hydrogen and the other methyl), then the above conditions are satisfied and this part of the chain can give rise to two enantiomeric forms.
A chain in which each repeating unit carries two different substituents (represented by orange and green circles in the original diagram) will have multiple chiral centers, giving rise to a huge number of possible enantiomers. In practice, it is usually sufficient to classify chiral polymers into three classes of stereoregularity, usually referred to as tacticity: isotactic (all substituents on the same side of the chain), syndiotactic (substituents alternating regularly from one side to the other), and atactic (substituents distributed at random).
The tacticity of a polymer chain can have a major influence on its properties. Atactic polymers, for example, being more disordered, cannot crystallize.
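As a toy illustration of the three classes, the configuration at each stereocenter along a chain can be encoded as a letter and the sequence classified (a Python sketch; the R/S labels here are purely illustrative):

```python
# A toy classifier for the three tacticity classes described above.
def tacticity(centers: str) -> str:
    if len(set(centers)) == 1:
        return "isotactic"      # all substituents on the same side
    if all(a != b for a, b in zip(centers, centers[1:])):
        return "syndiotactic"   # strictly alternating sides
    return "atactic"            # random; such chains cannot crystallize

print(tacticity("RRRRRR"))  # isotactic
print(tacticity("RSRSRS"))  # syndiotactic
print(tacticity("RRSRSS"))  # atactic
```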
One of the major breakthroughs in polymer chemistry occurred in the early 1950s when the German chemist Karl Ziegler discovered a group of catalysts that could efficiently polymerize ethylene. At about the same time, the Italian chemist Giulio Natta made the first isotactic (and crystalline) polypropylene. The Ziegler-Natta catalysts revolutionized polymer chemistry by making it possible to control the stereoregularity of these giant molecules. The two shared the 1963 Nobel Prize in Chemistry.
3 How polymers are made
Polymers are made by joining small molecules into large ones. But most of these monomeric molecules are perfectly stable as they are, so chemists have devised two general methods to make them react with each other, building up the backbone chain as the reaction proceeds.
Condensation-elimination polymerization
This method (also known as step-growth) requires that the monomers possess two or more kinds of functional groups that are able to react with each other in such a way that parts of these groups combine to form a small molecule (often H2O) which is eliminated from the two pieces. The now-empty bonding positions on the two monomers can then join together.
This occurs, for example, in the synthesis of the Nylon family of polymers in which the eliminated H2O molecule comes from the hydroxyl group of the acid and one of the amino hydrogens:
Note that the monomeric units that make up the polymer are not identical with the starting components.
Addition polymerization
Addition or chain-growth polymerization involves the rearrangement of bonds within the monomer in such a way that the monomers link up directly with each other:
In order to make this happen, a chemically active molecule (called an initiator) is needed to start what is known as a chain reaction. The manufacture of polyethylene is a very common example of such a process. It employs a free-radical initiator that donates its unpaired electron to the monomer, making the latter highly reactive and able to form a bond with another monomer at this site.
In theory, only a single chain-initiation process needs to take place, and the chain-propagation step then repeats itself indefinitely, but in practice multiple initiation steps are required, and eventually two radicals react (chain termination) to bring the polymerization to a halt.
As with all polymerizations, chains having a range of molecular weights are produced, and this range can be altered by controlling the pressure and temperature of the process.
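The origin of this spread can be mimicked with a deliberately simple model: once initiated, a chain adds another monomer with probability p or terminates with probability 1 − p. This idealized picture (it ignores chain transfer and most other real-world complications) produces the classical "most probable" distribution of chain lengths, with a mean degree of polymerization of 1/(1 − p). A Python sketch:

```python
# A minimal chain-growth simulation illustrating the spread of chain lengths.
import random

random.seed(1)

def grow_chain(p=0.999):
    """Length of one chain: propagate with probability p, else terminate."""
    length = 1
    while random.random() < p:
        length += 1
    return length

lengths = [grow_chain() for _ in range(10_000)]
mean = sum(lengths) / len(lengths)
print(f"mean degree of polymerization: {mean:.0f} (theory: 1/(1-p) = 1000)")
print(f"shortest: {min(lengths)}, longest: {max(lengths)}")
```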
4 Gallery of common synthetic polymers
Thermoplastics
Note: the left panels below show the polymer name and synonyms, structural formula, glass transition temperature, melting point/decomposition temperature, and (where applicable) the resin identification symbol used to facilitate recycling.
Polycarbonate (Lexan®)
Tg = 145°C, Tm = 225°C.
This polymer was discovered independently in Germany and the U.S. in 1953. Lexan is exceptionally hard and strong; we see it most commonly in the form of compact disks. It was once widely used in water bottles, but concerns about leaching of the unreacted monomer (bisphenol-A) have largely suppressed this market.
Polyethylene terephthalate (PET, Mylar)
Tg = 76°C, Tm = 250°C.
Thin and very strong films of this material are made by drawing out the molten polymer in both directions, thus orienting the molecules into a highly crystalline state that becomes "locked-in" on cooling. Its many applications include food packaging (in foil-laminated drink containers and microwaveable frozen-food containers), overhead-projector film, weather balloons, and as aluminum-coated reflective material in spacecraft and other applications.
Nylon (a polyamide)
Tg = 50°C, Tm = 255°C.
Nylon has a fascinating history, both scientific and cultural. It was invented by DuPont chemist Wallace Carothers (1896-1937). The common form Nylon 6.6 has six carbon atoms in both parts of its chain; there are several other kinds. Notice that the two copolymer sub-units are held together by peptide bonds, the same kinds that join amino acids into proteins.
Nylon 6.6 has good abrasion resistance and is self-lubricating, which makes it a good engineering material. It is also widely used as a fiber in carpeting, clothing, and tire cord.
For an interesting account of the development of Nylon, see Enough for One Lifetime: Wallace Carothers, Inventor of Nylon by Ann Gaines (1971)
Polyacrylonitrile (Orlon, Acrilan, "acrylic" fiber)
Tg = 85°C, Tm = 318°C.
Used in the form of fibers in rugs, blankets, and clothing, especially cashmere-like sweaters. The fabric is very soft, but tends to "pill" — i.e., produce fuzz-like blobs. Owing to its low glass transition temperature, it requires careful treatment in cleaning and ironing.
Polyethylene
Tg = –78°C, Tm = 100°C.
LDPE
HDPE
Control of polymerization by means of catalysts and additives has led to a large variety of materials based on polyethylene that exhibit differences in densities, degrees of chain branching and crystallinity, and cross-linking. Some major types are low-density (LDPE), linear low-density (LLDPE), and high-density (HDPE).
LDPE was the first commercial form (1933) and is used mostly for ordinary "plastic bags", but also for food containers and in six-pack soda can rings. Its low density is due to long-chain branching that inhibits close packing. LLDPE has less branching; its greater toughness allows its use in those annoyingly-thin plastic bags often found in food markets.
A "very low density" form (VLDPE) with extensive short-chain branching is now used for plastic stretch wrap (replacing the original component of Saran Wrap) and in flexible tubing.
HDPE has mostly straight chains and is therefore stronger. It is widely used in milk jugs and similar containers, garbage containers, and as an "engineering plastic" for machine parts.
Polymethylmethacrylate (Plexiglass, Lucite, Perspex)
Tg = 114°C, Tm = 130-140°C.
This clear, colorless polymer is widely used in place of glass, where its greater impact resistance, lighter weight, and machineability are advantages. It is normally copolymerized with other substances to improve its properties. Aircraft windows, plastic signs, and lighting panels are very common applications. Its compatibility with human tissues has led to various medical applications, such as replacement lenses for cataract patients.
Polypropylene
Tg = –10°C, Tm = 173°C.
PP
Polypropylene is used alone or as a copolymer, usually with ethylene. These polymers have an exceptionally wide range of uses — rope, binder covers, plastic bottles, staple yarns, non-woven fabrics, electric kettles. When uncolored, it is translucent but not transparent. Its resistance to fatigue makes it useful for food containers and their lids, and flip-top lids on bottled products such as ketchup.
Polystyrene
Tg = 95°C, Tm = 240°C.
PS
Polystyrene is transparent but rather brittle, and yellows under uv light.
Widely used for inexpensive packaging materials and "take-out trays", foam "packaging peanuts", CD cases, foam-walled drink cups, and other thin-walled and moldable parts.
Polyvinyl acetate
Tg = 30°C
PVA is too soft and low-melting to be used by itself; it is commonly employed as a water-based emulsion in paints, wood glue and other adhesives.
Polyvinyl chloride ("vinyl", "PVC")
Tg = 85°C, Tm = 240°C.
PVC
This is one of the world's most widely used polymers. By itself it is quite rigid and used in construction materials such as pipes, house siding, flooring. Addition of plasticizers make it soft and flexible for use in upholstery, electrical insulation, shower curtains and waterproof fabrics. There is some effort being made to phase out this polymer owing to environmental concerns (see below).
Synthetic rubbers
Neoprene (polychloroprene)
Tg = –70°C
Polybutadiene
Tg < –90°C
Neoprene, invented in 1930, was the first mass-produced synthetic rubber. It is used for such things as roofing membranes and wet suits.
Polybutadiene substitutes a hydrogen for the chlorine; it is the major component (usually admixed with other rubbers) of tires. Synthetic rubbers played a crucial role in World War II.
SBS (styrene-butadiene-styrene) rubber is a block copolymer whose special durability makes it valued for tire treads.
Polytetrafluoroethylene (Teflon, PTFE)
Decomposes above 350°C.
This highly-crystalline fluorocarbon is exceptionally inert to chemicals and solvents. Water and oils do not wet it, which accounts for its use in cooking ware and other anti-stick applications, including personal care products.
These properties — non-adhesion to other materials, non-wettability, and a very low coefficient of friction ("slipperiness") — have their origin in the highly electronegative nature of fluorine, whose atoms partly shield the carbon chain. Fluorine's outer electrons are so strongly attracted to its nucleus that they are less available to participate in London (dispersion force) interactions.
Polyaramid (Kevlar)
Sublimation temperature 450°C.
Kevlar is known for its ability to be spun into fibers that have five times the tensile strength of steel. It was first used in the 1970s to replace steel tire cords. Bullet-proof vests are one of its more colorful uses, but other applications include boat hulls, drum heads, sports equipment, and as a replacement for asbestos in brake pads. It is often combined with carbon or glass fibers in composite materials.
The high tensile strength is due in part to the extensive hydrogen bonding between adjacent chains.
Kevlar also has the distinction of having been invented by a woman chemist, Stephanie Kwolek.
Thermosets
The thermoplastic materials described above are chains based on relatively simple monomeric units having varying degrees of polymerization, branching, bending, cross-linking and crystallinity, but with each molecular chain being a discrete unit. In thermosets, the concept of an individual molecular unit is largely lost; the material becomes more like a gigantic extended molecule of its own — hence the lack of anything like a glass transition temperature or a melting point.
These properties have their origins in the nature of the monomers used to produce them. The most important feature is the presence of multiple reactive sites that are able to form what amount to cross-links at every center. The phenolic resins, typified by the reaction of phenol with formaldehyde, illustrate the multiplicity of linkages that can be built.
Phenolic resins
These are made by condensing one or more types of phenols (hydroxy-substituted benzene rings) with formaldehyde, as illustrated above. This was the first commercialized synthetic molding plastic. It was developed in 1907-1909 by the Belgian chemist Leo Baekeland, hence the common name bakelite. The brown material (usually bulked up with wood powder) was valued for its electrical insulating properties (light fixtures, outlets and other wiring devices) as well as for consumer items prior to the mid-century. Since that time, more recently developed polymers have largely displaced these uses. Phenolics are still extensively used as adhesives in plywood manufacture, and for making paints and varnishes.
Urea resins
Condensation of formaldehyde with urea yields lighter-colored and less expensive materials than phenolics. The major use of urea-formaldehyde resins is in bonding wood particles into particle board. Other uses are as baked-on enamel coatings for kitchen appliances and to coat cotton and rayon fibers to impart wrinkle-, water-, and stain-resistance to the finished fabrics.
Melamine resins
Melamine, with even more amino (–NH2) groups than urea, reacts with formaldehyde to form colorless solids that are harder than urea resins. They are most widely encountered in dinnerware (plastic plates, cups and serving bowls) and in plastic laminates such as Formica.
Alkyd-polyester resins
An ester is the product of the reaction of an organic acid with an alcohol, so polyesters result when multifunctional acids such as phthalic acid react with polyhydric alcohols such as glycerol. The term alkyd derives from the two words alcohol and acid.
Alkyd resins were first made by Berzelius in 1847, and they were first commercialized as Glyptal (glycerine + phthalic acid) varnishes for the paint industry in 1902.
The later development of other polyesters greatly expanded their uses into a wide variety of fibers and molded products, ranging from clothing fabrics and pillow fillings to glass-reinforced plastics.
Epoxy resins
This large and industrially-important group of resins typically starts by condensing bisphenol-A with epichlorohydrin in the presence of a catalyst. (The epi- prefix refers to the epoxide group, in which an oxygen atom bridges two carbons.) These resins are usually combined with others to produce the desired properties. Epoxies are especially valued as glues and adhesives, as their setting does not depend on evaporation and the setting time can be varied over a wide range. In the two-part resins commonly sold for home use, the unpolymerized mixture and the hardener catalyst are packaged separately for mixing just prior to use. In some formulations the polymerization is initiated by heat ("heat curing"). Epoxy dental fillings are cured by irradiation with uv light.
Polyurethanes
Organic isocyanates R–NCO react with multifunctional alcohols to form polymeric carbamates, commonly referred to as polyurethanes. Their major use is in plastic foams for thermal insulation and upholstery, but they have a very large number of other applications, including paints and varnishes and the plastic wheels used in fork-lift trucks, shopping carts, and skateboards.
Silicones
Polysiloxanes (–Si–O–Si–) are the most important of the small class of inorganic polymers. The commercial silicone polymers usually contain attached organic side groups that aid cross-linking. Silicones can be made in a wide variety of forms; those having lower molecular weights are liquids, while the more highly polymerized materials are rubbery solids. These polymers have a similarly wide variety of applications: lubricants, caulking materials and sealants, medical implants, non-stick cookware coatings, hair-conditioners and other personal-care products.
Natural Polymers
Polymers derived from plants have been essential components of human existence for thousands of years. In this survey we will look at only those that have major industrial uses, so we will not be discussing the very important biopolymers proteins and nucleic acids.
Polysaccharides
Polysaccharides are polymers of sugars; they play essential roles in energy storage, signaling, and as structural components in all living organisms. The only ones we will be concerned with here are those composed of glucose, the most important of the six-carbon hexoses. Glucose serves as the primary fuel of most organisms.
Glucose, however, is highly soluble and cannot be easily stored, so organisms make polymeric forms of glucose to set aside as reserve storage, from which glucose molecules can be withdrawn as needed.
Glycogen
In humans and higher animals, the reserve storage polymer is glycogen. It consists of roughly 60,000 glucose units in a highly branched configuration. Glycogen is made mostly in the liver under the influence of the hormone insulin which triggers a process in which digested glucose is polymerized and stored mostly in that organ. A few hours after a meal, the glucose content of the blood begins to fall, and glycogen begins to be broken down in order to maintain the body's required glucose level.
Starch
In plants, these glucose-polymer reserves are known as starch. Starch granules are stored in seeds or tubers to provide glucose for the energy needs of newly-germinated plants, and in the twigs of deciduous plants to tide them over during the winter when photosynthesis (the process in which glucose is synthesized from CO2 and H2O) does not take place. The starches in food grains such as rice and wheat, and in tubers such as potatoes, are a major nutritional source for humans.
Plant starches are mixtures of two principal forms, amylose and amylopectin. Amylose is a largely-unbranched polymer of 500 to 20,000 glucose molecules that curls up into a helical form that is stabilized by internal hydrogen bonding. Amylopectin is a much larger polymer having up to two million glucose residues arranged into branches of 20 to 30 units.
Cellulose and its derivatives
Cellulose is the most abundant organic compound on the earth. Extensive hydrogen bonding between the chains causes native cellulose to be about 70% crystalline. It also raises the melting point (>280°C) to above its combustion temperature. The structures of starch and cellulose appear to be very similar; in the latter, every other glucose molecule is "upside-down". But the consequences of this are far-reaching; starch can dissolve in water and can be digested by higher animals including humans, whereas cellulose is insoluble and undigestible. Cellulose serves as the principal structural component of green plants and (along with lignin) in wood.
Cotton is one of the purest forms of cellulose and has been cultivated since ancient times. Its ability to absorb water (which increases its strength) makes cotton fabrics especially useful for clothing in very warm climates.
Cotton also serves (along with treated wood pulp) as the source for the industrial production of cellulose-derived materials, which were the first "plastic" materials of commercial importance.
• Nitrocellulose was developed in the latter part of the 19th Century. It is prepared by treating cotton with nitric acid, which reacts with the hydroxyl groups in the cellulose chain. It was first used to make molded objects, and it became the first material used by Eastman Kodak as a photographic film base. Its extreme flammability posed considerable danger in movie theaters, and its spontaneous slow decomposition over time had seriously degraded many early films before they were transferred to more stable media. Nitrocellulose was also used as an explosive and propellant, for which applications it is known as guncotton.
• Cellulose acetate was developed in the early 1900s and became the first artificial fiber that was woven into fabrics prized for their lustrous appearance and wearing comfort. Kodak developed it as a "safety film" base in the 1930s to replace nitrocellulose, but it did not come into wide use for this purpose until 1948. A few years later, it became the base material for magnetic recording tape.
• Viscose is the general term for "regenerated" forms of cellulose made from solutions of the polymer in certain strong solvents. When extruded into a thin film it becomes cellophane, which has been used as a food wrapping since 1912 and is the base for transparent adhesive tapes such as Scotch Tape. Viscose solutions extruded through a spinneret produce fibers known as rayon. Rayon was the first "artificial silk" and has been used for tire cord, apparel, and carpets. It was popular for women's stockings before Nylon became available for this purpose.
Rubber
A variety of plants produce a sap consisting of a colloidal dispersion of cis-polyisoprene. This milky fluid is especially abundant in the rubber tree (Hevea), from which it drips when the bark is wounded. After collection, the latex is coagulated to obtain the solid rubber. Natural rubber is thermoplastic, with a glass transition temperature of –70°C.
cis-polyisoprene
Raw natural rubber tends to be sticky when warm and brittle when cold, so it was little more than a novelty material when first introduced to Europe around 1770. It did not become generally useful until the mid-nineteenth century when Charles Goodyear found that heating it with sulfur — a process he called vulcanization — could greatly improve its properties.
Why does a rubber band heat up when it is stretched, and why does it spontaneously snap back? It all has to do with entropy.
Vulcanization creates disulfide cross-links that prevent the polyisoprene chains from sliding over each other. The degree of cross-linking can be controlled to produce a rubber having the desired elasticity and hardness. More recently, other kinds of chemical treatment (such as epoxidation) have been developed to produce rubbers for special purposes.
"Better things for better living... through chemistry" is a famous commercial slogan that captured the attitude of the public around 1940, when synthetic polymers were beginning to make a major impact on people's lives. What was not realized at the time, however, were some of the problems these materials would create as their uses multiplied and the world became more wary of "chemicals". (DuPont dropped the "through chemistry" part in 1982.)
Small-molecule release
Many kinds of polymers contain small molecules — either unreacted monomers, or substances specifically added (plasticizers, uv absorbers, flame retardants, etc.) to modify their properties. Many of these smaller molecules are able to diffuse through the material and be released into any liquid or air in contact with the plastic — and eventually into the aquatic environment. Those that are used for building materials (in mobile homes, for example) can build up in closed environments and contribute to indoor air pollution.
Residual monomer
Formation of long polymer chains is a complicated and somewhat random process that is never perfectly stoichiometric. It is therefore not uncommon for some unreacted monomer to remain in the finished product. Some of these monomers, such as formaldehyde, styrene (from polystyrene, including polystyrene foam food take-out containers), vinyl chloride, and bisphenol-A (from polycarbonates), are known or suspected carcinogens. Although there is little evidence that the small quantities that diffuse into the air or leach out into fluids pose a quantifiable health risk, people are understandably reluctant to tolerate these exposures, and public policy is gradually beginning to regulate them.
Perfluorooctanoic acid (PFOA), a surfactant long used as a processing aid in the manufacture of Teflon, was the subject of a 2004 lawsuit against a DuPont factory that contaminated groundwater. Small amounts of PFOA have been detected in gaseous emissions from hot fluorocarbon products.
Plasticizers
These substances are compounded into certain types of plastics to render them more flexible by lowering the glass transition temperature. They accomplish this by taking up space between the polymer chains and acting as lubricants to enable the chains to more readily slip over each other. Many (but not all) are small enough to be diffusible and a potential source of health problems.
Polyvinyl chloride polymers are one of the most widely-plasticized types, and the odors often associated with flexible vinyl materials such as garden hoses, waterbeds, cheap shower curtains, raincoats and upholstery are testament to their ability to migrate into the environment.
The well-known "new car smell" is largely due to plasticizer release from upholstery and internal trim.
There is now an active movement to develop non-diffusible and "green" plasticizers that do not present these dangers.
Endocrine disrupters
Plastics-related compounds are not the only kind of endocrine disrupters found in the environment. Others include pesticide and fungicide residues, and industrial chemicals such as polychlorinated biphenyls (PCBs).
To complicate matters even further, many of these small molecules have been found to be physiologically active owing to their ability to mimic the action of hormones or other signaling molecules, probably by fitting into and binding with the specialized receptor sites present in many tissues. The evidence that many of these chemicals are able to act in this way at the cellular level is fairly clear, but there is still some dispute whether many of these pose actual health risks to adult humans at the relatively low concentrations in which they commonly occur in the environment.
There is, however, some concern about the effects of these substances on non-adults and especially on fetuses, given that endocrine hormones are intimately connected with sexual differentiation and with neurological development, which continues up through the late teens.
Decomposition products
Most commonly-used polymers are not readily biodegradable, particularly under the anaerobic conditions of most landfills. What decomposition products do form can combine with rainwater to create leachates that contaminate nearby streams and groundwater supplies. Partial photodecomposition, initiated by exposure to sunlight, is a more likely long-term fate for exposed plastics, resulting in tiny broken-up fragments. Many of these materials are less dense than seawater, and once they enter the oceans through coastal sewage outfalls or from marine vessel wastes, they tend to remain there indefinitely.
Open burning of polymeric materials containing chlorine (polyvinyl chloride, for example) is known to release compounds such as dioxins that persist in the environment. Incineration under the right conditions can effectively eliminate this hazard.
Disposed products containing fluorocarbons (Teflon-coated ware, some personal-care, waterproofing and anti-stick materials) break down into perfluorooctane sulfonate which has been shown to damage aquatic animals.
Hazards to animals
There are two general types of hazards that polymers can introduce into the aquatic environment. One of these relates to the release of small molecules that act as hormone disrupters as described above. It is well established that small aquatic animals such as fish are being seriously affected by such substances in many rivers and estuarine systems, but the sources and identities of these molecules have not been fully established. One confounding factor is the release of sewage water containing human birth-control drugs (which have a feminizing effect on sexual development) into many waterways.
The other hazard relates to pieces of plastic waste that aquatic animals mistake for food or become entangled in.
This plastic bag (probably mistaken for a jellyfish, a principal food of some sea turtles) cannot be regurgitated and leads to intestinal blockage and a slow death.
Remains of an albatross that mistook bits of plastic junk for food
These dangers occur throughout the ocean, but are greatly accentuated in regions known as gyres. These are regions of the ocean in which a combination of ocean currents drives permanent vortices that tend to collect and concentrate floating materials. The most notorious of these are the Great Pacific Gyres that have accumulated astounding quantities of plastic waste.
Recycling
The huge quantity (one estimate is 10⁸ metric tons per year) of plastic materials produced for consumer and industrial use has created a gigantic problem of what to do with plastic waste, which is difficult to incinerate safely and which, being largely non-biodegradable, threatens to overwhelm the capacity of landfills. An additional consideration is that de novo production of most of the major polymers consumes non-renewable hydrocarbon resources.
Plastic water bottles present a special recycling problem because of their widespread use in away-from-home locations.
Plastics recycling has become a major industry, greatly aided by enlightened trash management policies in the major developed nations. However, it is plagued with some special problems of its own:
• Recycling is only profitable when there is a market for the regenerated material. Such markets vary with the economic cycle (they practically disappeared during the recession that commenced in 2008.)
• The energy-related costs of collecting and transporting plastic waste, and especially of processing it for re-use, are frequently the deciding factor in assessing the practicability of recycling.
• Collection of plastic wastes from diverse sources and locations and their transport to processing centers consumes energy and presents numerous operational problems.
• Most recycling processes are optimized for particular classes of polymers. The diversity of plastic types necessitates their separation into different waste streams — usually requiring manual (i.e., low-cost) labor. This in turn encourages shipment of these wastes to low-wage countries, thus reducing the availability of recycled materials in the countries in which the plastics originated.
Some of the major recycling processes include
• Thermal decomposition processes can accommodate mixed kinds of plastics and render them into fuel oil, but the large inputs of energy they require have been a problem.
• A very small number of condensation polymers can be depolymerized so that the monomers can be recovered and re-used.
• Thermopolymers can be melted and pelletized, but those of widely differing types must be treated separately to avoid incompatibility problems.
• Thermosets are usually shredded and used as filler material in recycled thermopolymers.
In order to facilitate efficient recycling, a set of seven resin identification codes has been established (the seventh is "other").
These codes are stamped on the bottoms of many containers of widely-distributed products. Not all categories are accepted by all local recycling authorities, so residents need to be informed about which kinds should be placed in recycling containers and which should be combined with ordinary trash.
Tire recycling
The large number of rubber tires that are disposed of, together with the increasing reluctance of landfills to accept them, has stimulated considerable innovation in the re-use of this material, especially in the construction industry.
Learning Objectives
• Summarize the principal distinguishing properties of solutions, colloidal dispersions, and suspensions.
• For the various dispersion types (emulsion, gel, sol, foam, etc.), name the type (gas, liquid, or solid) of both the dispersed phase and the dispersion medium.
• Describe the origins of Brownian motion and how it can be observed.
• Describe the electric double layer that surrounds many colloidal particles.
• Explain the mechanisms responsible for the stability of lyophilic and lyophobic colloidal dispersions.
• Define: surfactant, detergent, emulsifier, micelle.
• Give some examples of how colloidal dispersions can be made.
• Explain why freezing or addition of an electrolyte can result in the coagulation of an emulsion.
• Describe some of the colloid-related principles involved in food chemistry, such as the stabilization of milk and mayonnaise, the preparation of butter, and the various ways of cooking eggs.
• Describe the role of colloids in wastewater treatment.
Sand, salt, and chalk dust are made up of chunks of solid particles, each containing huge numbers of molecules. You can usually see the individual particles directly, although the smallest ones might require some magnification. At the opposite end of the size scale, we have individual molecules which dissolve in liquids to form homogeneous solutions. There is, however, a vast but largely hidden world in between: particles so tiny that they cannot be resolved by an optical microscope, or molecules so large that they begin to constitute a phase of their own when they are suspended in a liquid. This is the world of colloids which we will survey in this lesson. As you will see, we encounter colloids in the food we eat, the consumer products we buy... and we ourselves are built largely of colloidal materials.
Introducing Colloids
Colloids occupy an intermediate place between [particulate] suspensions and solutions, both in terms of their observable properties and particle size. In a sense, they bridge the microscopic and the macroscopic. As such, they possess some of the properties of both, which makes colloidal matter highly adaptable to specific uses and functions. Colloid science is central to biology, food science and numerous consumer products.
• Solutions are homogeneous mixtures whose component particles are individual molecules whose smallest dimension is generally less than 1 nm. Within this size range, thermal motions maintain homogeneity by overcoming the effects of gravitational attraction.
• Colloidal dispersions appear to be homogeneous, and the colloidal particles they contain are small enough (generally between 1-1000 nm) to exhibit Brownian motion, cannot be separated by filtration, and do not readily settle out. But these dispersions are inherently unstable and under certain circumstances, most colloidal dispersions can be "broken" and will "flocculate" or settle out.
• Suspensions are heterogeneous mixtures in which the suspended particles are sufficiently large (> 1000 nm in their smallest dimension) to settle out under the influence of gravity or centrifugal force. The particles that form suspensions are sometimes classified into various size ranges.
Colloidal particles need not fall within the indicated size range in all three dimensions; thus fibrous colloids such as many biopolymers may be very extended along one direction.
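A minimal sketch in Python of the size classification just described, assuming sharp cutoffs at 1 nm and 1000 nm (the real boundaries are fuzzy, as noted above):

```python
# Classify a dispersed particle by its smallest dimension, using the
# nominal 1 nm and 1000 nm cutoffs quoted in the text. These cutoffs are
# idealized; real systems grade continuously from one regime to the next.

def classify_particle(smallest_dimension_nm: float) -> str:
    """Classify a dispersed particle by its smallest dimension (in nm)."""
    if smallest_dimension_nm < 1:
        return "solution (individual molecules or ions)"
    if smallest_dimension_nm <= 1000:
        return "colloidal dispersion (Brownian motion; passes most filters)"
    return "suspension (settles under gravity)"

for size_nm in (0.3, 5, 250, 5000):
    print(f"{size_nm:>6} nm -> {classify_particle(size_nm)}")
```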
The nature of colloidal particles
To begin, you need to recall two important definitions:
• a phase is defined as a region of matter in which the composition and physical properties are uniform. Thus ice and liquid water, although two forms of the single substance H2O, constitute two separate phases within a heterogeneous mixture.
• A solution is a homogeneous mixture of two or more substances consisting of a single phase. (Think of sugar dissolved in water).
But imagine that you are able to shrink your view of a solution of sugar in water down to the sub-microscopic level at which individual molecules can be resolved: you would see some regions of space occupied by H2O molecules, others by sugar molecules, and likely still others in which sugar and H2O molecules are momentarily linked together by hydrogen bonding— not to mention the void spaces that are continually appearing and disappearing between molecules as they are jumbled about by thermal motions. As with so many simple definitions, the concept of homogeneity (and thus of a solution) breaks down as we move from the macro-scale into the molecular scale. And it is the region in between these two extremes that constitutes the realm of the colloid.
Smaller is bigger
What makes colloidal particles so special is not so much their sizes as it is the manner in which their surface areas increase as their sizes decrease. If we take a sample of matter and cut it up into smaller and smaller chunks, the total surface area will increase very rapidly. Although mass is conserved, surface area is not; as a solid is sliced up into smaller bits, more surfaces are created. These new surfaces are smaller, but there are many more of them; the ratio of surface area to mass can become extremely large.
Example $1$
1. Consider a cube of material having a length of exactly 1 cm. What will be the surface area of this cube?
2. Now let us cut this cube into smaller cubes by making 10 slices in each direction. How many smaller cubes will this make, and what will be the total surface area?
Solution
1. A cube possesses six square faces, so the total surface area is 6 × (1 cm)² = 6 cm².
2. Each new cube has a face length of 0.10 cm, and thus a surface area of 6 × (0.1 cm)² = 0.06 cm². But there are 10³ of these smaller cubes, so the total surface area is now 60 cm² -- quite a bit larger than it was originally!
The total surface area is inversely proportional to the face length, so as we make our slices still smaller, it grows rapidly. In practical situations with real colloids, surface areas can reach hectares (or acres) per mole!
| number of slices per cube face | length of each face (cm) | surface area per face | number of cubes | total surface area |
|---|---|---|---|---|
| 0 | 1 | 1 cm² | 1 | 6 cm² |
| 10 | 0.1 | 0.01 cm² | 1000 | 60 cm² |
| 100 | 0.01 | 10⁻⁴ cm² | 10⁶ | 600 cm² |
| 1000 | 10⁻³ | 10⁻⁶ cm² | 10⁹ | 0.6 m² |
| n | 1/n | n⁻² cm² | n³ | 6n cm² |
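The table's arithmetic is easy to verify computationally; this short Python sketch recomputes the total surface area for any number of slices n:

```python
# Slicing a 1-cm cube into n pieces along each edge gives n**3 small cubes,
# and the total surface area grows as 6n cm^2 -- inversely proportional to
# the face length, exactly as the last row of the table states.

def total_surface_area_cm2(n: int) -> float:
    """Total surface area (cm^2) of a 1-cm cube cut into n slices per edge."""
    face_length = 1.0 / n                 # cm
    area_per_cube = 6 * face_length ** 2  # six square faces per small cube
    return area_per_cube * n ** 3

for n in (1, 10, 100, 1000):
    print(f"n = {n:>4}: total area = {total_surface_area_cm2(n):8.1f} cm^2")
```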
Why do we focus so much attention on surface area? The general answer is that surfaces (or more generally, interfaces between phases) possess physical and chemical properties of their own. In particular,
• Surfaces can exert van der Waals attractive forces on other molecules near them, and thus loosely bind other particles by adsorption
• Interfaces between different phases usually give rise to imbalances in electrical charge which can cause them to interact with nearby ions.
• The surfaces of many solids present "broken bonds" which are chemically active.
In normal "bulk" matter, these properties are mostly hidden from us owing to the small amount of surface area in relation to the quantity of matter. But as the particle size diminishes, surface phenomena begin to dominate their properties. The small sizes of colloidal solids allows the properties of their surfaces to dominate their behavior.
Colloidal Dispersions
Colloidal matter commonly exists in the form of colloidal-sized phases of solids, liquids, or gases that are uniformly dispersed in a separate medium (sometimes called the dispersion medium) which may itself be a solid, liquid, or gas. Colloids are often classified and given special names according to the particular kinds of phases involved.
| dispersed phase | medium | dispersion type | examples |
|---|---|---|---|
| gas | liquid | foam | whipped cream |
| gas | solid | solid foam | pumice¹, aerogels² |
| liquid | gas | liquid aerosol | fog, clouds |
| liquid | liquid | emulsion | milk³, mayonnaise, salad dressing |
| liquid | solid | gel | Jell-O, lubricating greases, opal⁴ |
| solid | gas | solid aerosol | smoke |
| solid | liquid | sol | paints, some inks, blood |
| solid | solid | solid sol | bone, colored glass, many alloys |
Notes on this table:
1. Pumice is a volcanic rock formed by the rapid depressurization and cooling of molten lava. The sudden release of pressure as the lava is ejected from the volcano allows dissolved gases to expand, producing tiny bubbles that get frozen into the matrix. Pumice is distinguished from other rocks by its very low density.
2. Aerogels are manufactured rigid solids made by removing the liquid from gels, leaving a solid, porous matrix that can have remarkable and useful physical properties. Aerogels based on silica, carbon, alumina and other substances are available.
3. Milk is basically an emulsion of butterfat droplets dispersed in an aqueous solution of carbohydrates.
4. Opal consists of droplets of liquid water dispersed in a silica (SiO2) matrix.
Large molecules can behave as colloids
Very large polymeric molecules such as proteins, starches and other biological polymers, as well as many natural polymers, exhibit colloidal behavior. There is no clear point at which a molecule becomes sufficiently large to behave as a colloidal particle.
Macroscopic or microscopic?
Colloidal dispersions behave very much like solutions in that they appear to be homogeneous on a macroscopic scale. They are often said to be microheterogeneous. The most important feature that distinguishes them from other particulate matter is that:
Colloids dispersed in liquids or gases are sufficiently small that they do not settle out under the influence of gravity. This, together with their small sizes, which allows them to pass through most filters, makes it difficult to separate colloidal matter from the phase in which it is dispersed.
Optical properties of colloidal dispersions
Colloidal dispersions are distinguished from true solutions by their light-scattering properties. The nature of this scattering depends on the ratio of the particle size to the wavelength of the light. A collimated beam of light passing through a solution composed of ordinary molecules tends to retain its shape; when such a beam is directed through a colloidal dispersion, it spreads out.
John Tyndall discovered this effect in 1869. Tyndall scattering (as it is commonly known) scatters all wavelengths roughly equally. This is in contrast to Rayleigh scattering, which scatters shorter wavelengths more, bringing us blue skies and red sunsets. Tyndall scattering can be seen even in dispersions that are transparent. As the density of particles (or the particle size) increases, the light scattering may become great enough to produce a "cloudy" effect, as in a smoke-filled room. This is the reason that milk, fog, and clouds themselves appear to be white. The individual water droplets in clouds (or the butterfat droplets in milk) are actually transparent, but the intense light scattering disperses the light in all directions, preventing us from seeing through them.
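The wavelength dependence of Rayleigh scattering is strong: the scattered intensity varies as $1/\lambda^4$. A quick Python sketch makes the point; the two wavelengths are just representative values, not taken from the text:

```python
# Rayleigh scattering intensity varies as 1/wavelength**4. Compare how much
# more strongly blue light (~450 nm) is scattered than red light (~650 nm).

blue_nm, red_nm = 450.0, 650.0
ratio = (red_nm / blue_nm) ** 4
print(f"Blue light is scattered roughly {ratio:.1f} times more than red.")
# -> about 4.4, which is why scattered skylight looks blue
```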
The Ultramicroscope
Colloidal particles are, like molecules, too small to be visible through an ordinary optical microscope. However, if one looks in a direction perpendicular to the light beam, a colloidal particle will "appear" over a dark background as a tiny speck due to the Tyndall scattering. A microscope specially designed for this application is known as an ultramicroscope. Bear in mind that the ultramicroscope (invented in Austria in 1902) does not really allow us to "see" the particle; the scattered light merely indicates where it is at any given instant.
Brownian motion
If you observe a single colloidal particle through the ultramicroscope, you will notice that it is continually jumping around in an irregular manner. These movements are known as Brownian motion. Scottish botanist Robert Brown discovered this effect in 1827 when observing pollen particles floating in water through a microscope. (Pollen particles are larger than colloids, but they are still small enough to exhibit some Brownian motion.)
It is worth noting that Albert Einstein's analysis of Brownian motion in 1905 constituted the first quantitative proof of the molecular theory of matter. Brownian motion arises from collisions of the liquid molecules with the solid particle. For large particles, the millions of collisions from different directions cancel out, so they remain stationary. The smaller the particle, the smaller the number of surrounding molecules able to collide with it, and the more likely it is that random fluctuations will occur in the number of collisions from different sides. Simple statistics predicts that every once in a while, the imbalance in collisions from different directions will become great enough to give the particle a real kick!
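A toy simulation conveys the idea; the following Python sketch (step sizes and counts are arbitrary illustration values) accumulates random molecular "kicks" in two dimensions:

```python
# A toy two-dimensional random walk illustrating Brownian motion: each step
# represents the net "kick" from a momentary imbalance of molecular
# collisions on opposite sides of the particle.

import random

def brownian_path(steps: int, kick: float = 1.0) -> tuple[float, float]:
    """Return the final (x, y) displacement after `steps` random kicks."""
    x = y = 0.0
    for _ in range(steps):
        x += random.uniform(-kick, kick)
        y += random.uniform(-kick, kick)
    return x, y

random.seed(1)
x, y = brownian_path(10_000)
print(f"Net displacement after 10,000 kicks: ({x:.1f}, {y:.1f})")
# The mean-squared displacement grows linearly with the number of steps --
# the statistical signature that Einstein analyzed.
```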
Electrical Properties of Colloids
In general, differences in electric potential exist between all phase boundaries. If you have studied electrochemistry, you will know that two dissimilar metals in contact exhibit a "contact potential", and that similar potential differences exist between a metal and a solution in which it is immersed. But this principle extends well beyond ordinary electrochemistry; there are small potential differences even at the water-glass interface in a drinking glass, and the water-air interface above it.
Colloids are no exception to this rule; there is always a difference in electric potential between the colloid "phase" and that of the surrounding liquid. Even if the liquid consists of pure water, the polar H2O molecules at the colloid's surface are likely to be predominantly oriented with either their oxygen (negative) or hydrogen (positive) ends facing the interface, depending on the electrical properties of the colloid particle itself.
Interfacial electrical potential differences can have a variety of origins:
• Particles composed of ionic or ionizable substances usually have surface charges due to adsorption of an ion (usually an anion) from the solution, or to selective loss of one kind of ion from the crystal surface. For example, Ag+ ions on the surface of a silver iodide crystal go into solution more readily than the I– ions, leaving a negatively-charged surface.
• The charges of amphiprotic groups such as those on the surfaces of metal oxides and hydroxides will vary with the pH of the aqueous medium. Thus a particle of a metal oxide M–O will become positive in acidic solution due to formation of M–OH+, while that of a sparingly soluble hydroxide M–OH will become negative at high pH as it changes to M–O–. Colloidal-sized protein molecules can behave in a similar manner owing to the behavior of amphiprotic carboxylate, amino and sulfhydryl groups.
• Non-ionic particles or droplets such as oils or latex will tend to selectively adsorb positive or negative ions present in solution, thus "coating themselves" with electrical charge.
• In clays and other complex structures, isomorphous replacement of one ion by another having a different charge will leave a net electric charge on the particle. Thus particles of kaolinite clay become negatively charged due to replacement of some of the Si4+ ions by Al3+.
Charged colloidal particles will attract an excess of oppositely-charged counter-ions to their vicinity from the bulk solution, forming a localized "cloud" of compensating charge around each particle. The entire assembly is called an electric double layer. Electric double layers of one kind or another exist at all phase boundaries, but those associated with colloids are especially important.
Stability of colloidal dispersions
What keeps the colloidal particles suspended in the dispersion medium? How can we force the particles to settle out? These are very important practical matters:
• Colloidal products such as paints and many foods (e.g., milk) must remain in dispersed form if they are to be useful;
• Other dispersions, often those formed as by-products of operations such as mining, water treatment, paper manufacture, or combustion, are environmental nuisances. The only practical way of disposing of them is to separate the colloidal material from the much greater volume of the dispersion medium (most commonly water). Simple evaporation of the water is usually not a practical option; it is generally too slow, or too expensive if forced by heating.
You will recall that weak attractive forces act between matter of all kinds. These are known generally as van der Waals and dispersion forces, and they only "take hold" at very close distances. Countering these is the universal repulsive force that acts at even shorter distances, but is far stronger; it is the basic reason why two atoms cannot occupy the same space. For very small atomic and molecular sized particles, another thing that keeps them apart is thermal motion. Thus when two molecules in a gas collide, they do so with more than enough kinetic energy to overcome the weak attractive forces between them. As the temperature of the gas is reduced, so is the collisional energy; below its boiling point, the attractive forces dominate and the gas condenses into a liquid.
Electrical forces help keep colloids dispersed
When particles of colloidal dimension suspended in a liquid collide with each other, they do so with much smaller kinetic energies than is the case for gases, so in the absence of any compensating repulsion forces, we might expect van der Waals or dispersion attractions to win out. This would quickly result in the growth of aggregates sufficiently large to exceed colloidal size and to fall to the bottom of the container. This process is called coagulation.
So how do stable dispersions such as sols manage to survive? In the preceding section, we saw that each particle with its double layer is more or less electrically neutral. However, when two particles approach each other, each one "sees" mainly the outer part of the double layer of the other. These will always have the same charge sign (which depends on the type of colloid and the nature of the medium), so there will be an electrostatic repulsive force that opposes the dispersion force attractions.
Electrostatic (coulombic) forces have a strong advantage in this respect because they act over much greater distances than do van der Waals forces. But as we will see further on, electrostatic repulsion can lose its effectiveness if the ionic concentration of the medium is too great, or if the medium freezes. Under these conditions, there are other mechanisms that can stabilize colloidal dispersions.
Interactions with the solvent
Colloids can be divided into two general classes according to how the particles interact with the dispersion medium (often referred to as the "solvent").
Lyophilic colloids
In one class of colloids, called lyophilic ("solvent loving") colloids, the particles contain chemical groups that interact strongly with the solvent, creating a sheath of solvent molecules that physically prevent the particles from coming together. Ordinary gelatine is a common example of a lyophilic colloid. It is in fact hydrophilic, since it forms strong hydrogen bonds with water. When you mix Jell-O or tapioca powder to make a gelatine dessert, the material takes up water and forms a stable colloidal gel. Lyophilic (hydrophilic) colloids are very common in biological systems and in foods.
Lyophobic colloids
Most of the colloids in manufactured products exhibit very little attraction to water: think of oil emulsions or glacially-produced rock dust in river water. These colloids are said to be lyophobic. Lyophobic colloids are all inherently unstable; they will eventually coagulate. However, "eventually" can be a very long time (the settling time for some clay colloids in the ocean is 200-600 years!).
For systems in which coagulation proceeds too rapidly, the process can be slowed down by adding a stabilizer. Stabilizers can act by coating the particles with a protective layer such as a polymer as described immediately below, or by providing an ion that is selectively adsorbed by the particle, thereby surrounding it with a charged sheath that will repel similar particles it collides with. Dispersions of these colloids are stabilized by electrostatic repulsion between the electric double layers surrounding the particles which we discussed in the preceding section.
Stabilization by cloaking
"Stabilization by stealth" has unwittingly been employed since ancient times through the use of natural gums to stabilize pigment particles in inks, paints, and pottery glazes. These gums are also widely used to stabilize foods and personal care products. A lyophobic colloid can be made to masquerade as lyophilic by coating it with something that itself possesses suitable lyophilic properties.
Steric stabilization
Alternatively, attaching long-chain molecules to a colloid of any type can surround the particles with a protective shield that physically prevents the particles from approaching close enough to join together. This method usually employs synthetic polymers and is often referred to as steric stabilization.
Synthetic polymers, which can be tailor-made for specific applications, are now widely employed for both purposes. The polymer can be attached to the central particle either by simple adsorption or by chemical bond formation.
Surfactants and micelle formation
Surfactants and detergents are basically the same thing. Surfactants that serve as cleaning agents are commonly called detergents (from L. detergere "to wipe away, cleanse"). Surfactants are molecules consisting of a hydrophilic "head" connected to a hydrophobic chain. Because such molecules can interact with both "oil" and water phases, they are often said to be amphiphilic. Typical of these is the well-known cleaning detergent sodium dodecyl sulfate ("sodium lauryl sulfate"), CH3(CH2)11OSO3– Na+.
Amphiphiles possess the very important property of being able to span an oil-water interface. By doing so, they can stabilize emulsions of both the water-in-oil and oil-in-water types. Such molecules are essential components of the lipid bilayers that surround the cells and cellular organelles of living organisms.
Emulsions are inherently unstable; left alone, they tend to separate into "oil" and "water" phases. Think of a simple salad dressing made by shaking vegetable oil and vinegar. When a detergent-like molecule is employed to stabilize an emulsion, it is often referred to as an emulsifier. The resulting structure is known as a micelle.
Emulsifiers are essential components of many foods. They are widely employed in pharmaceuticals, consumer goods such as lotions and other personal care products, paints and printing inks, and numerous industrial processes.
How detergents remove "dirt"
The "dirt" we are trying to remove consists of oily or greasy materials whose hydrophobic nature makes them resistant to the action of pure water. If the water contains amphiphilic molecules such as soaps or cleaning detergents that can embed their hydrophobic ends in the particles, the latter will present a hydrophilic interface to the water and will thus become "solubilized".
Soaps and detergents can also disrupt the cell membranes of many types of bacteria, for which they serve as disinfectants. However, they are generally less effective against non-enveloped viruses, which do not possess a lipid membrane that can be disrupted.
Bile: your body's own detergent
Oils and fats are important components of our diets, but being insoluble in water, they are unable to mix intimately with the aqueous fluid in the digestive tract in which the digestive enzymes are dissolved. In order to enable the lipase enzymes (produced by the pancreas) to break down these lipids into their component fatty acids, our livers produce a mixture of surfactants known as bile. The great surface area of the micelles in the resulting emulsion enables efficient contact between the lipase enzymes and the lipid materials.
The liver of the average adult produces about 500 mL of bile per day. Most of this is stored in the gall bladder, where it is concentrated five-fold by removal of water. As partially-digested material exits the stomach, the gall bladder squeezes bile into the top of the small intestine (the duodenum).
In addition to its action as a detergent (which also aids in the destruction of bacteria that may have survived the high acidity of the gastric fluid), the alkaline nature of the bile salts neutralizes the acidity of the stomach exudate. The bile itself consists of salts of a variety of bile acids, all of which are derived from cholesterol. The cholesterol-like part of the structure is hydrophobic, while the charged end of the salt is hydrophilic.
Microemulsions
Ordinary emulsions are inherently unstable; they do not form spontaneously, and once formed, the drop sizes are sufficiently large to scatter light, producing a milky appearance. As time passes, the average drop size tends to increase, eventually resulting in gravitational separation of the phases.
Microemulsions, in contrast, are thermodynamically stable and can form spontaneously. The drop radii are at the very low end of the colloidal scale, often 100 nm or smaller. This is too small to appreciably scatter visible light, so microemulsions appear visually to be homogenous systems.
Microemulsions require the presence of one or more surfactants which increase the flexibility and stability of the boundary regions. This allows them to form smaller micelles than surface-tension forces would ordinarily allow; in some cases they can form sponge-like bicontinuous mixtures in which "oil" and "water" phases extend throughout the mixture, affording more contact area between the phases.
The uses of microemulsions are quite wide-ranging, with drug delivery, polymer synthesis, enzyme-assisted synthesis, coatings, and enhanced oil recovery being especially prominent.
Making and breaking colloidal dispersions
Particles of colloidal size can be made in two general ways:
• Start with larger particles and break them down into smaller ones (Dispersion).
• Build up molecular-sized particles (atoms, ions, or small molecules) into aggregates within the colloidal size range. (Condensation)
Dispersion processes all require an input of energy as new surfaces are created. For solid particles, this is usually accomplished by some kind of grinding process such as in a ball- or roller-mill. Solids and liquids can also be broken into colloidal dimensions by injecting them into the narrow space between a rapidly revolving shaft and its enclosure, thus subjecting them to a strong shearing force that tends to pull the two sides of a particle in opposite directions.
The application of ultrasound (at about 20 kHz) to a mixture of two immiscible liquids can create liquid-in-liquid dispersions; the process is comparable to what we do when we shake a vinegar-and-oil salad dressing in order to create a more uniform distribution of the two liquids.
Condensation
Numerous methods exist for building colloidal particles from sub-colloidal entities.
Dissolution followed by precipitation
This method is useful for dispersing hydrophobic organic substances in water. For example, a sample of paraffin wax is dissolved in ethanol, and the resulting solution is carefully added to a container of boiling water.
Formation of precipitates under controlled conditions
The trick here is to prevent the initial colloidal particles of the newly-formed compound from coalescing into an ordinary precipitate, as will ordinarily occur when solutions of two dissolved salts are combined directly. An alternative that is sometimes useful is to form the sol by a chemical process that proceeds more slowly than direct precipitation:
• Sulfur sols are readily formed by oxidation of thiosulfate ions in acidic solution: $S_2O_3^{2–} + H_2O \rightarrow S + SO_4^{2–} + 2 H^+ + 2 e^–$
• Sols of oxides or hydrous oxides of transition metals can often be formed by boiling a soluble salt in water under slightly acidic conditions to prevent formation of insoluble hydroxides: $2 Fe^{3+} + 3 H_2O \rightarrow Fe_2O_3 + 6 H^+$ or $Fe^{3+} + 2 H_2O \rightarrow FeO(OH) + 3 H^+$
• Addition of a dispersant (usually a surfactant) can sometimes prevent colloidal particles from precipitating. Thus barium sulfate sols can be prepared from barium thiocyanate and (NH4)2SO4 in the presence of potassium citrate.
• Ionic solids can often selectively adsorb cations or anions from solutions containing the same kinds of ions that are present in the crystal lattice, thus coating the particles with protective electric charges. This is probably what happens in the example of Fe3+-ion hydrolysis given above.
• Similarly, if a solution of AgNO3 is added to a dilute solution of potassium iodide, the AgI will form as a negatively-charged sol, (AgI)·I–. But if the AgI is precipitated by adding KI to a solution of AgNO3, the excess Ag+ will adsorb to the new particles, giving a positively-charged sol of (AgI)·Ag+.
How Dispersions are Broken
That oil-in-vinegar salad dressing you served at dinner the other day has now mostly separated into two layers, with unsightly globs of one phase floating in the other. This is surface chemistry in action! Emulsions are fundamentally unstable because molecules near surfaces (i.e., interfaces between phases) are no longer surrounded by their own kind on all sides. This imbalance of intermolecular attractions exacts an energetic cost that must eventually be repaid through processes that reduce the interfacial area.
The consequent breakup of the emulsion can proceed through various stages:
• Coalescence - smaller drops join together to form larger ones;
• Flocculation - the small drops stick together without fully coalescing;
• Creaming - Most oils have lower densities than water, so the drops float to the surface, but may not completely coalesce;
• Breaking - the ultimate thermodynamic fate and end result of the preceding steps.
The time required for these processes to take place is highly variable, and can be extended by the presence of stabilizer substances. Thus milk, an emulsion of butterfat in water, is stabilized by some of its natural components.
Coagulation and flocculation
The processes described above that allow colloids to remain suspended sometimes fail when conditions change, or equally troublesome, they work entirely too well and make it impossible to separate the colloidal particles from the medium; this is an especially serious problem in wastewater settling basins associated with sewage treatment and operations such as mining and the pulp-and-paper industries.
Coagulation is the general term that refers to the "breaking" of dispersions so that the colloidal particles can be collected, usually by settling out. The term flocculation is often used as a synonym for coagulation, but it is more properly reserved for a special method of effecting coagulation which is described further on. Most coagulation processes act by disrupting the outer (diffuse) part of the electric double layer that gives rise to the electrostatic repulsion between particles.
"Do not freeze"
Have you ever encountered milk that had previously been frozen? Not likely something you would want to drink! You will see "Do not freeze" labels on many foodstuffs and on colloidal consumer products such as latex house paint. Freezing disrupts the double layer by causing the ions within it to combine into neutral species so that the particles can now approach closely enough for attractive forces to take over, and once they do so, they never let go: coagulation is definitely an irreversible process!
Addition of an electrolyte
Coagulation of water-suspended dispersions can be brought about by raising the ionic concentration of the medium. The added ions will migrate to the oppositely-charged regions of the double layer, thus neutralizing its charges; this effectively reduces the thickness of the double layer, eventually allowing the attractive forces to prevail.
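The effectiveness of this charge screening can be made semi-quantitative. A standard textbook approximation (not a formula from this text) gives the "thickness" of the diffuse double layer, the Debye length, for a 1:1 electrolyte in water at 25°C as 0.304/√I nanometers, where I is the molar ionic strength. A short Python sketch:

```python
# Approximate Debye length for a 1:1 electrolyte in water at 25 deg C,
# using the common approximation 0.304 / sqrt(I) nm (I in mol/L).
# This is a standard result from double-layer theory, quoted here as an
# illustration rather than taken from the text itself.

import math

def debye_length_nm(ionic_strength_M: float) -> float:
    """Approximate Debye length (nm), 1:1 electrolyte in water at 25 C."""
    return 0.304 / math.sqrt(ionic_strength_M)

for I in (1e-4, 1e-2, 0.6):  # 0.6 M is roughly seawater
    print(f"I = {I:g} M -> double layer ~ {debye_length_nm(I):.2f} nm")
# Low-I river water -> thick double layers and stable clay colloids;
# high-I seawater -> thin double layers, so coagulation sets in.
```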
Rivers carry millions of tons of colloidal clay into the oceans. If you fly over the mouth of a river such as the Mississippi, you can sometimes see the difference in color as the clay colloids coagulate due to the action of the salt water.
The coagulated clay accumulates as sediments which eventually form a geographical feature called a river delta.
Gels
A liquid phase dispersed in a solid medium is known as a gel, but this formal definition does not always convey the full sense of the nature of the "solid". The latter may start out as a powdery or granulated material such as natural gelatin or a hydrophilic polymer, but once the gel has formed, the "solid" part is less a "phase" than a cross-linked network that extends throughout the volume of the liquid, whose quantity largely defines the volume of the entire gel.
Hydrogels can contain up to 90% water by weight
Most of the gels we commonly encounter have water as the liquid phase, and thus are called hydrogels; ordinary gelatin desserts are well known examples.
The "solid" components of hydrogels are usually polymeric materials that have an abundance of hydrophilic groups such as hydroxyl (–OH) that readily hydrogen-bond to water and also to each other, creating an irregular, flexible, and greatly-extendable network. These polymers are sometimes synthesized for this purpose, but are more commonly formed by processing natural materials, including natural polymers such as cellulose.
• Gelatine is a protein-like material made by breaking down the connective tissues of animal skins, organs, and bones. The many polar groups on the resulting protein fragments bind them together, along with water molecules, to form a gel.
• A number of so-called super-absorbent polymers derived from cellulose, polyvinyl alcohol and other materials can absorb huge quantities of water, and have found uses for products such as disposable diapers, environmental spill control, water retention media for plants, surgical pads and wound dressings, and protective inner coatings and water-blockers in fiber optics and electrical cables.
Gels are essential components of a huge variety of consumer products ranging from thickening agents in foods and personal care products to cushioning agents in running shoes.
Gels can be fragile!
You may have noticed that a newly-opened container of yogurt or sour cream appears to be smooth and firm, but once some of the material has been spooned out, little puddles of liquid appear in the hollowed-out depressions.
As the spoon is plunged into the material, it pulls the nearby layers of the gel along with it, creating a shearing action that breaks it apart, releasing the liquid. Anyone who has attacked an egg yolk with a cook's whisk, written with a ball-point pen, or spread latex paint on a wall has made use of this phenomenon which is known as shear thinning.
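Shear thinning is often described by the power-law (Ostwald-de Waele) model, in which the apparent viscosity is $K\dot{\gamma}^{\,n-1}$ with $n < 1$. The constants in the Python sketch below are illustrative only, not measured values for yogurt or paint:

```python
# Power-law model of a shear-thinning fluid: apparent viscosity falls as
# the shear rate rises (n < 1). K and n here are arbitrary demo values.

def apparent_viscosity(shear_rate: float, K: float = 10.0, n: float = 0.4) -> float:
    """Apparent viscosity (Pa*s) of a power-law fluid at a shear rate (1/s)."""
    return K * shear_rate ** (n - 1)

for rate in (0.1, 1.0, 100.0):  # at rest -> gently stirred -> sheared by a spoon
    print(f"shear rate {rate:>6.1f} 1/s -> viscosity {apparent_viscosity(rate):7.2f} Pa*s")
# The viscosity drops by nearly two orders of magnitude across this range,
# which is why the gel "breaks" and releases liquid where the spoon passes.
```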
Our bodies are mostly gels
The interior (the cytoplasm) of each cell in the soft tissues of our bodies consists of a variety of inclusions (organelles) suspended in a gel-like liquid phase called the cytosol. Dissolved in the cytosol are a variety of ions and molecules varying from the small to the large; among the latter, proteins and carbohydrates make up the "solid" portion of the gel structure.
Embedded within the cytosol is the filament-like cytoskeleton which controls the overall shape of the cell and holds the organelles in place.
(In free-living cells such as the amoeba, changes in the cytoskeleton enable the organism to alter its shape and move around to engulf food particles.)
Be thankful for the gels in your body; without them, you would be little more than a bag of gunge-filled liquid, likely to end up as a puddle on the floor!
The individual cells are bound into tissues by the extracellular matrix (ECM) which — on a much larger scale, holds us together and confers an overall structure and shape to the body. The ECM is made of a variety of structural fibers (collagens, elastins) embedded in a gel-like matrix.
Applications of colloids
Thickening agents
The usefulness of many industrial and consumer products is strongly dependent on their viscosity and flow properties. Toothpastes, lotions, lubricants, and coatings are common examples. Most of the additives that confer desirable flow properties on these products are colloidal in nature; in many cases, they also provide stabilization and prevent phase separation. Since ancient times, various natural gums have been employed for such purposes, and many remain in use today.
More recently, manufactured materials whose properties can be tailored for specific applications have become widely available. Examples are colloidal microcrystalline cellulose, carboxymethyl cellulose, and fumed silica.
Fumed silica is a fine (5-50 nm), powdery form of SiO2 of exceptionally low bulk density (as little as 0.002 g cm⁻³); the total surface area of one kg can be as great as 60 hectares (148 acres). It is made by spraying SiCl4 (a liquid) into a flame. It is used as a filler, for viscosity and flow control, as a gelling agent, and as an additive for strengthening concrete.
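As a rough check on that surface-area figure: for monodisperse spheres the specific surface area is 3/(ρr). The Python sketch below assumes solid SiO2 spheres of density 2.2 g/cm³; real fumed silica is a branched aggregate, so this is only an order-of-magnitude estimate:

```python
# Specific surface area of uniform solid spheres: A/m = 3 / (density * radius).
# Assumed values: solid silica density ~2200 kg/m^3, 10-nm particle diameter.

def specific_area_m2_per_kg(radius_m: float, density_kg_m3: float = 2200.0) -> float:
    """Surface area per kilogram for uniform solid spheres."""
    return 3.0 / (density_kg_m3 * radius_m)

area = specific_area_m2_per_kg(5e-9)  # 10-nm-diameter particles
print(f"~{area:,.0f} m^2/kg, i.e. about {area / 1e4:.0f} hectares per kg")
# -> roughly 27 hectares per kilogram, the same order of magnitude as the
#    "60 hectares" quoted in the text for the finest grades.
```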
Food colloids
Most of the foods we eat are largely colloidal in nature. The function of food colloids generally has less to do with nutritional value than appearance, texture, and "mouth feel". The latter two terms relate to the flow properties of the material, such as spreadability and the ability to "melt" (transform from gel to liquid emulsion) on contact with the warmth of the mouth.
Dairy products
Milk is basically an emulsion of lipid oils ("butterfat") dispersed in water and stabilized by phospholipids and proteins. Most of the protein content of milk consists of a group known as caseins which aggregate into a complex micellar structure which is bound together by calcium phosphate units.
[Image: a dairy homogenizer]
The stabilizers present in fresh milk will maintain its uniformity for 12-24 hours, but after this time the butterfat globules begin to coalesce and float to the top ("creaming"). In order to retard this process, most milk sold after the early 1940's undergoes homogenization in which the oil particles are forced through a narrow space under high pressure. This breaks up the oil droplets into much smaller ones which remain suspended for the useful shelf life of the milk.
Before homogenization became common, milk bottles often had enlarged tops to make it easier to skim off the cream that would separate out.
The structures of cream, yogurt and ice cream are dominated by the casein aggregates mentioned above.
Ice cream is a complex mixture of several colloid types:
• an emulsion (of butterfat globules in a highly viscous aqueous phase);
• a semisolid foam consisting of small (100 μm) air bubbles which are beaten into the mixture as it is frozen. Without these bubbles, the frozen mixture would be too hard to conveniently eat;
• a gel in which a network of tiny (50 μm) ice crystals is dispersed in a semi-glassy aqueous phase containing sugars and dissolved macromolecules.
Whereas milk is an oil (butterfat)-in-water dispersion, butter and margarine have a "reversed" (water-in-oil) arrangement. This transformation is accomplished by subjecting the butterfat droplets in cream to violent agitation (churning) which forces the droplets to coalesce into a semisolid mass within which remnants of the water phase are embedded. The greater part of this phase ends up as the by-product buttermilk.
Eggs: colloids for breakfast, lunch, and dessert
A detailed study of eggs and their many roles in cooking can amount to a mini-course in colloid chemistry in itself. There is something almost magical in the way that the clear, viscous "white" of the egg can be transformed into a white, opaque semi-solid by brief heating, or rendered into more intricate forms by poaching, frying, scrambling, or baking into custards, soufflés, and meringues, not to mention tasty omelettes, quiches, and more exotic delights such as the eggah (Arabic) and kuku (Persian) dishes of the Middle-East.
The raw eggwhite is basically a colloidal sol of long-chain protein molecules, all curled up into compact folded forms due to hydrogen bonding between different parts of the same molecule. Upon heating, these bonds are broken, allowing the proteins to unfold. The denuded chains can now tangle and bind to each other, transforming the sol into a cross-linked hydrogel, now so dense that scattered light changes its appearance to opaque white.
What happens next depends very much on the skill of the cook. The idea is to drive out enough of the water entrapped within the gel network to achieve the desired density while retaining enough gel structure to prevent it from forming a rubbery mass, as usually happens with hard-boiled eggs. This is especially important when the egg structure is to be incorporated into other food components as in baked dishes.
The key to all this is temperature control; the eggwhite proteins begin to coagulate at 65°C and if yolk proteins are present, the mixture is nicely set at about 73°; by 80° the principal (albumin) protein has set, and much above this the gel network will collapse into an overcooked mass. The temperature limit required to avoid this disaster can be raised by adding milk or sugar; the water part of the milk dilutes the proteins, while sugar molecules hydrogen-bond to them, forming a protective shield that keeps the protein strands separated. This is essential when baking custards, but incorporating a bit of cream into scrambled eggs can similarly help them retain their softness.
Whipped cream and meringues
The other colloidal personalities eggs can display are liquid and solid foams. Instead of applying heat to unfold the proteins, we "beat" them; the shearing force of a whisk or egg-beater helps pull them apart, and the air bubbles that get entrapped in the mixture attract the hydrophobic parts of the unfolded proteins and help hold them in place. Sugar will stabilize the foam by raising its viscosity, but will interfere with protein unfolding if added before the foam is fully formed. Sugar also binds the residual water during cooking, retarding its evaporation until the proteins that were not unfolded by beating have been thermally coagulated.
Paints and inks
Paints have been used since ancient times for both protective and decorative purposes. They consist basically of pigment particles dispersed in a vehicle — a liquid capable of forming a stable solid film as the paint "dries".
The earliest protective coatings were made by dissolving plant-derived natural polymers (resins) in an oil such as that of linseed. The double bonds in these oils tend to oxidize when exposed to air, causing the oil to polymerize into an impervious film. The colloidal pigments were stabilized with naturally-occurring surfactants such as polysaccharide gums.
Present-day paints are highly-engineered products specialized for particular industrial or architectural coatings and for marine or domestic use. For environmental reasons, water-based ("latex") vehicles are now preferred.
Inks
The most critical properties of inks relate to their drying and surface properties; they must be able to flow properly and attach to the surface without penetrating it — the latter is especially critical when printing on a porous material such as paper.
Many inks consist of organic dyes dissolved in a water-based solvent, and are not colloidal at all. The ink used in printing newspapers employs colloidal carbon black dispersed in an oil vehicle. The pressure applied by the printing press forces the vehicle into the pores of the paper, leaving most of the pigment particles on the surface.
The inks employed in ball-point pens are gels, formulated in such a way that the ink will only flow over the ball and onto the paper when the shearing action of the ball (which rotates as it moves across the paper) "breaks" the gel into a liquid; the resulting liquid coats the ball and is transferred to the paper. As in conventional printing, the pigment particles remain on the paper surface, while the liquid is pressed into the pores and gradually evaporates.
Water and wastewater treatment
[Image: water samples with turbidities of 5, 50, and 500 units. (WikiMedia)]
Water, whether intended specifically for drinking, or wastewaters such as sewage or from industrial operations such as from pulp-and-paper manufacture (most of which are likely to end up being re-used elsewhere) usually contains colloidal matter that cannot be removed by ordinary sand filters, as evidenced by its turbidity. Even "pristine" surface waters often contain suspended soil sediments that can harbor infectious organisms and may provide them with partial protection from standard disinfection treatments.
The usual method of removing turbidity is to add a flocculating agent (flocculant). These are most often metallic salts that can form gel-like hydroxide precipitates, often with the aid of added calcium hydroxide (slaked lime) if the pH of the water must be raised. The sulfates of aluminum (alum) and of iron(III) have long been widely employed for this purpose; synthetic polymers tailored specifically for these applications have more recently come into use.
The flocculant salts neutralize the surface charges of the colloids, thus enabling them to coagulate; these are engulfed and trapped by fragments of gelatinous precipitate, which are drawn together into larger aggregates by gentle agitation until they become sufficiently large to form flocs which can be separated by settling or filtration.
Soil colloids
The four major components of soils are mineral sediments, organic matter, water, and air. The water is primarily adsorbed to the mineral and organic materials, but may also share pore spaces with air; pore spaces constitute about half the bulk volume of a typical soil.
The principal colloidal components of soils are mineral sediments in the form of clays, and the humic materials in the organic matter. In addition to influencing the consistency of soil by binding water molecules, soil colloids play an essential role in storing and exchanging the mineral ions required by plants.
Most soil colloids are negatively charged, and therefore attract cations such as Ca2+, Mg2+, and K+ into the outer parts of their double layers. Because these ions are loosely bound, they constitute a source from which plant roots can draw these essential nutrients. Conversely, they can serve as a sink for these same ions when they are released after the plant dies.
Clays
These are layered structures based on alumino-silicates or hydrous oxides, mostly of iron or aluminum. Each layer is built of two or three sheets of extended silica or alumina structures linked together by shared oxygen atoms. These layers generally have an overall negative charge owing to the occasional replacement of a Si4+ ion by one of Al3+.
Adjacent layers are separated by a region of adsorbed cations (to neutralize the negative charges) and water molecules, and thus are held together relatively loosely. It is these interlayer regions that enable clays to work their magic by exchanging ions with both the soil water and the roots of plants.
Humic substances
The principal organic components of soil are complex substances of indeterminate structure that present –OH and –COOH groups which become increasingly dissociated as the pH increases. This allows them to bind and exchange cations in much the same way as described above.
Solutions are homogeneous (single-phase) mixtures of two or more components. For convenience, we often refer to the majority component as the solvent; minority components are solutes; there is really no fundamental distinction between them. Solutions play a very important role in Chemistry because they allow intimate and varied encounters between molecules of different kinds, a condition that is essential for rapid chemical reactions to occur.
• 8.1: Solutions and their Concentrations
Concentration is a general term that expresses the quantity of solute contained in a given amount of solution. Various ways of expressing concentration are in use; the choice is usually a matter of convenience in a particular application. You should become familiar with all of them.
• 8.2: Thermodynamics of Solutions
The two fundamental processes that must occur whenever a solute dissolves in a solvent, and the effects of the absorption or release of energy on the extent of these processes.
• 8.3: Colligative Properties- Raoult's Law
The reduction in the vapor pressure of a solution is directly proportional to the mole fraction of the [non-volatile] solute in the liquid; equivalently, the vapor pressure that remains is proportional to the mole fraction of the solvent, as given by Raoult's law.
• 8.4: Colligative Properties- Boiling Point Elevation and Freezing Point Depression
The temperature at which the vapor pressure of a solution is 1 atm will be higher than the normal boiling point by an amount known as the boiling point elevation.
• 8.5: Colligative Properties - Osmotic Pressure
Osmosis is the process in which a liquid passes through a membrane whose pores permit the passage of solvent molecules but are too small for the larger solute molecules to pass through.
• 8.6: Reverse Osmosis
Applying a hydrostatic pressure greater than this to the high-solute side of an osmotic cell will force water to flow back into the fresh-water side. This process, known as reverse osmosis, is now the major technology employed to desalinate ocean water and to reclaim "used" water from power plants, runoff, and even from sewage. It is also widely used to deionize ordinary water and to purify it for industrial uses (especially beverage and food manufacture) and drinking purposes.
• 8.7: Colligative Properties and Entropy
All four colligative properties result from “dilution” of the solvent by the added solute. More specifically, these all result from the effect of dilution of the solvent on its entropy, and thus in the increase in the density of energy states of the system in the solution compared to that in the pure liquid.
• 8.8: Ideal vs. Real Solutions
One might expect the vapor pressure of a solution of ethanol and water to be directly proportional to the sum of the values predicted by Raoult's law for the two liquids individually, but in general, this does not happen. The reason for this can be understood if you recall that Raoult's law reflects a single effect: the smaller proportion of vaporizable molecules (and thus their reduced escaping tendency) when the liquid is diluted by an otherwise "inert" (non-volatile) substance.
• 8.9: Distillation
Distillation is a process whereby a mixture of liquids having different vapor pressures is separated into its components. Since distillation depends on the different vapor pressures of the components to be separated, let's first consider the vapor pressure vs. composition plots for a hypothetical mixture at some arbitrary temperature at which both liquid and gas phases can exist, depending on the total pressure.
• 8.10: Ions and Electrolytes
Electrolytic solutions are those that are capable of conducting an electric current. A substance that, when added to water, renders it conductive, is known as an electrolyte. A common example of an electrolyte is ordinary salt, sodium chloride. Solid NaCl and pure water are both non-conductive, but a solution of salt in water is readily conductive. A solution of sugar in water, by contrast, is incapable of conducting a current; sugar is therefore a non-electrolyte.
08: Solutions
Learning Objectives
Make sure you thoroughly understand the following essential ideas:
• Describe the major reasons that solutions are so important in the practical aspects of chemistry.
• Explain why expressing a concentration as "x-percent" can be ambiguous.
• Explain why the molarity of a solution will vary with its temperature, whereas molality and mole fraction do not.
• Given the necessary data, convert (in either direction) between any two concentration units, e.g. molarity - mole fraction.
• Show how one can prepare a given volume of a solution of a certain molarity, molality, or percent concentration from a solution that is more concentrated (expressed in the same units.)
• Calculate the concentration of a solution prepared by mixing given volumes of two solutions whose concentrations are expressed in the same units.
Solutions are homogeneous (single-phase) mixtures of two or more components. For convenience, we often refer to the majority component as the solvent; minority components are solutes; there is really no fundamental distinction between them. Solutions play a very important role in Chemistry because they allow intimate and varied encounters between molecules of different kinds, a condition that is essential for rapid chemical reactions to occur. Several more explicit reasons can be cited for devoting a significant amount of effort to the subject of solutions:
• For the reason stated above, most chemical reactions that are carried out in the laboratory and in industry, and that occur in living organisms, take place in solution.
• Solutions are so common; very few pure substances are found in nature.
• Solutions provide a convenient and accurate means of introducing known small amounts of a substance to a reaction system. Advantage is taken of this in the process of titration, for example.
• The physical properties of solutions are sensitively influenced by the balance between the intermolecular forces of like and unlike (solvent and solute) molecules. The physical properties of solutions thus serve as useful experimental probes of these intermolecular forces.
We usually think of a solution as a liquid made by dissolving a gas, a solid, or another liquid (the solute) in a liquid solvent. Actually, solutions can exist as gases and solids as well.
Solid solutions are very common; most natural minerals and many metallic alloys are solid solutions.
Still, it is liquid solutions that we most frequently encounter and must deal with. Experience has taught us that sugar and salt dissolve readily in water, but that “oil and water don’t mix”. Actually, this is not strictly correct, since all substances have at least a slight tendency to dissolve in each other. This raises two important and related questions: why do solutions tend to form in the first place, and what factors limit their mutual solubilities?
Understanding Concentrations
Concentration is a general term that expresses the quantity of solute contained in a given amount of solution. Various ways of expressing concentration are in use; the choice is usually a matter of convenience in a particular application. You should become familiar with all of them.
Parts-per concentration
In the consumer and industrial world, the most common method of expressing the concentration is based on the quantity of solute in a fixed quantity of solution. The “quantities” referred to here can be expressed in weight, in volume, or both (i.e., the weight of solute in a given volume of solution.) In order to distinguish among these possibilities, the abbreviations (w/w), (v/v) and (w/v) are used.
In most applied fields of Chemistry, (w/w) measure is often used, and is commonly expressed as weight-percent concentration, or simply "percent concentration". For example, a solution made by dissolving 10 g of salt in 190 g of water contains 1 part of salt per 20 parts of solution by weight.
"Cent" is the Latin-derived prefix relating to the number 100 (L. centum), as in century or centennial. It also denotes 1/100th (from L. centesimus) as in centimeter and the monetary unit cent. It is usually more convenient to express such concentrations as "parts per 100", which we all know as "percent". So the solution described above is a "5% (w/w) solution" of NaCl in water. In clinical chemistry, (w/v) is commonly used, with weight expressed in grams and volume in mL (Example $1$).
Example $1$
The normal saline solution used in medicine for nasal irrigation, wound cleaning and intravenous drips is a 0.91% (w/v) solution of sodium chloride in water. How would you prepare 1.5 L of this solution?
Solution
The solution will contain 0.91 g of NaCl per 100 mL of solution, or 9.1 g per liter. Thus you would dissolve (1.5 × 9.1 g) = 13.6 g of NaCl in enough water to make 1.5 L of solution.
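Calculations of this kind are easy to script. Here is a minimal Python sketch of the arithmetic in this example (the function name is ours, not part of any standard library):

```python
def grams_solute_wv(percent_wv, volume_mL):
    """Mass of solute (g) for a (w/v) solution: percent_wv is g of solute per 100 mL of solution."""
    return percent_wv / 100.0 * volume_mL

# Example 1: 1.5 L of 0.91% (w/v) normal saline
print(grams_solute_wv(0.91, 1500))  # 13.65 g, i.e. ~13.6 g of NaCl made up to 1.5 L
```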
Percent means parts per 100; we can also use parts per thousand (ppt) for expressing concentrations in grams of solute per kilogram of solution. For more dilute solutions, parts per million (ppm) and parts per billion (ppb) are used. These terms are widely employed to express the amounts of trace pollutants in the environment.
Example $2$
Describe how you would prepare 30 g of a 20 percent (w/w) solution of KCl in water.
Solution
The weight of potassium chloride required is 20% of the total weight of the solution, or 0.2 × (30 g) = 6.0 g of KCl. The remainder of the solution (30 – 6 = 24) g consists of water. Thus you would dissolve 6.0 g of KCl in 24 g of water.
Weight/volume and volume/volume basis
It is sometimes convenient to base concentration on a fixed volume, either of the solution itself, or of the solvent alone. In most instances, a 5% by volume solution of a solid will mean 5 g of the solute dissolved in 100 mL of the solvent.
Example $3$
Fish, like all animals, need a supply of oxygen, which they obtain from oxygen dissolved in the water. The minimum oxygen concentration needed to support most fish is around 5 ppm (w/v). How many moles of O2 per liter of water does this correspond to?
Solution
5 ppm (w/v) means 5 grams of oxygen in one million mL (1000 L) of water, or 5 mg per liter. This is equivalent to (0.005 g) / (32.0 g mol–1) = 1.6 × 10–4 mol per liter.
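The same conversion in a short Python sketch (our own helper name; it simply restates the arithmetic above):

```python
def ppm_wv_to_molarity(ppm, molar_mass_g_per_mol):
    """Convert ppm (w/v), i.e. mg of solute per liter of solution, to mol/L."""
    grams_per_liter = ppm / 1000.0  # mg/L -> g/L
    return grams_per_liter / molar_mass_g_per_mol

# Example 3: 5 ppm of dissolved O2 (32.0 g/mol)
print(ppm_wv_to_molarity(5, 32.0))  # ~1.6e-4 mol/L
```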
If the solute is itself a liquid, volume/volume measure usually refers to the volume of solute contained in a fixed volume of solution (not solvent). The latter distinction is important because volumes of mixed substances are not strictly additive. These kinds of concentration measures are mostly used in commercial and industrial applications. The "proof" of an alcoholic beverage is the (v/v)-percent, multiplied by two; thus a 100-proof vodka has the same alcohol concentration as a solution made by adding sufficient water to 50 mL of alcohol to give 100 mL of solution.
Molarity: mole/volume basis
This is the method most used by chemists to express concentration, and it is the one most important for you to master. Molar concentration (molarity) is the number of moles of solute per liter of solution.
The important point to remember is that the volume of the solution is different from the volume of the solvent; the latter quantity can be found from the molarity only if the densities of both the solution and of the pure solvent are known. Similarly, calculation of the weight-percentage concentration from the molarity requires density information; you are expected to be able to carry out these kinds of calculations, which are covered in most texts.
Example $4$
How would you make 120 mL of a 0.10 M solution of potassium hydroxide in water?
Solution
The amount of KOH required is
(0.120 L) × (0.10 mol L–1) = 0.012 mol.
The molar mass of KOH is 56.1 g, so the weight of KOH required is
$(0.012\; mol) \times (56.1\; g \;mol^{-1}) = 0.67\; g$
We would dissolve this weight of KOH in a volume of water that is less than 120 mL, and then add sufficient water to bring the volume of the solution up to 120 mL.
Note: if we had simply added the KOH to 120 mL of water, the molarity of the resulting solution would not be the same. This is because volumes of different substances are not strictly additive when they are mixed. Without actually measuring the volume of the resulting solution, its molarity would not be known.
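A minimal sketch of this recipe in Python (the names are ours):

```python
def grams_to_weigh(molarity, final_volume_L, molar_mass):
    """Mass of solute to dissolve and then dilute *to* the final volume."""
    return molarity * final_volume_L * molar_mass

# Example 4: 120 mL of 0.10 M KOH (molar mass 56.1 g/mol)
print(grams_to_weigh(0.10, 0.120, 56.1))  # ~0.67 g
```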
Example $5$
Calculate the molarity of a 60-% (w/w) solution of ethanol (C2H5OH) in water whose density is 0.8937 g mL–1.
Solution
One liter of this solution has a mass of 893.7 g, of which
$0.60 \times (893.7\; g) = 536.2\; g$
consists of ethanol. The molecular weight of C2H5OH is 46.0, so the number of moles of ethanol present in one liter (that is, the molarity) will be
$\dfrac{\dfrac{536.2\;g}{46.0\;g\;mol^{-1}}}{1 L} =11.6\; mol\,L^{-1}$
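Because this weight-percent-to-molarity conversion recurs constantly, here is one way it might be coded (a sketch of the Example 5 data; the function name is ours):

```python
def molarity_from_weight_percent(weight_percent, density_g_per_mL, molar_mass):
    """Molarity of a solution from its weight-percent composition and density."""
    grams_solute_per_liter = (weight_percent / 100.0) * density_g_per_mL * 1000.0
    return grams_solute_per_liter / molar_mass

# Example 5: 60% (w/w) ethanol (46.0 g/mol), solution density 0.8937 g/mL
print(molarity_from_weight_percent(60.0, 0.8937, 46.0))  # ~11.7 mol/L (11.6 as rounded above)
```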
Normality and Equivalents
Normality is a now-obsolete concentration measure based on the number of equivalents per liter of solution. Although the latter term is now also officially obsolete, it still finds some use in clinical- and environmental chemistry and in electrochemistry. Both terms are widely encountered in pre-1970 textbooks and articles.
The equivalent weight of an acid is its molecular weight divided by the number of titratable hydrogens it carries. Thus for sulfuric acid H2SO4, one mole has a mass of 98 g, but because both hydrogens can be neutralized by strong base, its equivalent weight is 98/2 = 49 g. A solution of 49 g of H2SO4 per liter of water is 0.5 molar, but also "1 normal" (1N = 1 eq/L). Such a solution is "equivalent" to a 1M solution of HCl in the sense that each can be neutralized by 1 mol of strong base.
A 1 M solution of FeCl3 is said to be "3 normal" (3 N) because it dissociates into three moles per liter of chloride ions.
Although molar concentration is widely employed, it suffers from one serious defect: since volumes are temperature-dependent (substances expand on heating), so are molarities; a 0.100 M solution at 0° C will have a smaller concentration at 50° C. For this reason, molarity is not the preferred concentration measure in applications where physical properties of solutions and the effect of temperature on these properties is of importance.
Mole fraction: mole/mole basis
This is the most fundamental of all methods of concentration measure, since it makes no assumptions at all about volumes. The mole fraction of substance i in a mixture is defined as
$X_i= \dfrac{n_i}{\sum_j n_j}$
in which nj is the number of moles of substance j, and the summation is over all substances in the solution. Mole fractions run from zero (substance not present) to unity (the pure substance). The sum of all mole fractions in a solution is, by definition, unity:
$\sum_i X_i=1$
Example $6$
What fraction of the molecules in a 60-% (w/w) solution of ethanol in water consist of H2O?
Solution
From the previous problem, we know that one liter of this solution contains 536.2 g (11.6 mol) of C2H5OH. The number of moles of H2O is
( (893.7 – 536.2) g) / (18.0 g mol–1) = 19.9 mol.
The mole fraction of water is thus
$\dfrac{19.9}{19.9+11.6} = 0.63$
Thus 63% of the molecules in this solution consist of water, and 37% are ethanol.
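The definition of mole fraction translates directly into code; a sketch (the helper is ours):

```python
def mole_fractions(moles):
    """Mole fraction of each component; the values sum to 1 by construction."""
    total = sum(moles.values())
    return {name: n / total for name, n in moles.items()}

# Example 6: 11.6 mol of ethanol and 19.9 mol of water
print(mole_fractions({"C2H5OH": 11.6, "H2O": 19.9}))  # H2O: ~0.63
```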
In the case of ionic solutions, each kind of ion acts as a separate component.
Example $7$
Find the mole fraction of water in a solution prepared by dissolving 4.5 g of CaBr2 in 84.0 mL of water.
Solution
The molar mass of CaBr2 is 200 g, and 84.0 mL of H2O has a mass of very close to 84.0 g at its assumed density of 1.00 g mL–1. Thus the number of moles of CaBr2 in the solution is
$\dfrac{4.50\; g}{200\; g/mol} = 0.0225 \;mol$
Because this salt is completely dissociated in solution, the solution will contain 0.0225 mol of Ca2+ and (2 × 0.0225) = 0.0450 mol of Br–. The number of moles of water is
(84 g) / (18 g mol–1) = 4.67 mol.
The mole fraction of water is then
$\dfrac{4.67\; \cancel{mol}}{(0.0225 + 0.0450 + 4.67)\; \cancel{mol}} = \dfrac{4.67}{4.74} = 0.986$
Thus H2O constitutes about 99 out of every 100 particles in the solution.
Molality: mole/weight basis
A 1-molal solution contains one mole of solute per 1 kg of solvent. Molality is a hybrid concentration unit, retaining the convenience of mole measure for the solute, but expressing it in relation to a temperature-independent mass rather than a volume. Molality, like mole fraction, is used in applications dealing with certain physical properties of solutions; we will see some of these in the next lesson.
Example $8$
Calculate the molality of a 60-% (w/w) solution of ethanol in water.
Solution
From the above problems, we know that one liter of this solution contains 11.6 mol of ethanol in
(893.7 – 536.2) = 357.5 g
of water. The molarity of ethanol in the solution is therefore
(11.6 mol) / (0.3575 kg) = 32.4 mol kg–1.
Conversion between Concentration Measures
Anyone doing practical chemistry must be able to convert one kind of concentration measure into another. The important point to remember is that any conversion involving molarity requires a knowledge of the density of the solution.
Example $9$
A solution prepared by dissolving 66.0 g of urea (NH2)2CO in 950 g of water had a density of 1.018 g mL–1. Express the concentration of urea in
1. weight-percent
2. mole fraction
3. molarity
4. molality
Solution
a) The weight-percent of solute is (100%) × (66.0 g) / (66.0 + 950) g = 6.5%
The molar mass of urea is 60, so the number of moles is
(66 g) /(60 g mol–1) = 1.1 mol.
The number of moles of H2O is
(950 g) / (18 g mol–1) = 52.8 mol.
b) Mole fraction of urea:
(1.1 mol) / (1.1 + 52.8 mol) = 0.020
c) molarity of urea: the volume of 1 L of solution is
(66 + 950) g / (1.018 g mL–1) = 998 mL.
The number of moles of urea (from a) is 1.1 mol.
Its molarity is then
(1.1 mol) / (0.998 L) = 1.1 mol L–1.
d) The molality of urea is (1.1 mol) / (0.950 kg) = 1.16 mol kg–1. (Note that molality is based on the mass of the solvent alone, not of the whole solution.)
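All four measures can be computed in one pass; a Python sketch of Example 9 (function and key names are ours). Note that only the molarity requires the measured density of the solution:

```python
WATER_MOLAR_MASS = 18.0  # g/mol

def concentration_report(mass_solute_g, molar_mass_solute, mass_water_g, density_g_per_mL):
    """Weight-percent, mole fraction, molarity and molality of an aqueous solution."""
    n_solute = mass_solute_g / molar_mass_solute
    n_water = mass_water_g / WATER_MOLAR_MASS
    mass_solution_g = mass_solute_g + mass_water_g
    volume_L = mass_solution_g / density_g_per_mL / 1000.0
    return {
        "weight_percent": 100.0 * mass_solute_g / mass_solution_g,
        "mole_fraction": n_solute / (n_solute + n_water),
        "molarity": n_solute / volume_L,
        "molality": n_solute / (mass_water_g / 1000.0),  # per kg of solvent, not of solution
    }

# Example 9: 66.0 g of urea (60 g/mol) in 950 g of water, density 1.018 g/mL
print(concentration_report(66.0, 60.0, 950.0, 1.018))
```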
Example $10$
Ordinary dry air contains 21% (v/v) oxygen. About how many moles of O2 can be inhaled into the lungs of a typical adult woman with a lung capacity of 4.0 L?
Solution
The number of molecules (and thus the number of moles) in a gas is directly proportional to its volume (Avogadro's law), so the mole fraction of O2 is 0.21. The molar volume of a gas at 25° C is
(298/273) × 22.4 L mol–1 = 24.4 L mol–1
so the moles of O2 in 4 L of air will be
(4.0 L / 24.4 L mol–1) × 0.21 = 0.034 mol O2.
Dilution calculations
These kinds of calculations arise frequently in both laboratory and practical applications. If you have a thorough understanding of concentration definitions, they are easily tackled. The most important things to bear in mind are
• Concentration is inversely proportional to volume;
• Molarity is expressed in mol L–1, so it is usually more convenient to express volumes in liters rather than in mL;
• Use the principles of unit cancellation to determine what to divide by what.
Example $11$
Commercial hydrochloric acid is available as a 10.17 molar solution. How would you use this to prepare 500 mL of a 4.00 molar solution?
Solution
The desired solution requires (0.500 L) × (4.00 mol L–1) = 2.00 mol of HCl. This quantity of HCl is contained in (2.00 mol) / (10.17 mol L–1) = 0.197 L of the concentrated acid. So one would measure out 197 mL of the concentrated acid and then add water to bring the total volume to 500 mL.
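In code, the dilution recipe is one line of algebra (C1V1 = C2V2; the function name is ours):

```python
def stock_volume_needed(c_stock, c_target, v_target_L):
    """Volume of stock solution to withdraw so that C1*V1 = C2*V2."""
    return c_target * v_target_L / c_stock

# Example 11: 500 mL of 4.00 M HCl from a 10.17 M stock solution
print(stock_volume_needed(10.17, 4.00, 0.500))  # ~0.197 L, then dilute to 500 mL
```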
Example $12$
Calculate the molarity of the solution produced by adding 120 mL of 6.0 M HCl to 150 mL of 0.15 M HCl. What important assumption must be made here?
Solution
The assumption, of course, is that the density of HCl within this concentration range is constant, meaning that their volumes will be additive.
Moles of HCl in first solution:
(0.120 L) × (6.0 mol L–1) = 0.72 mol HCl
Moles of HCl in second solution:
(0.150 L) × (0.15 mol L–1) = 0.02 mol HCl
Molarity of mixture:
(0.72 + 0.02) mol / (0.120 + 0.150) L = 2.7 mol L–1.
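A sketch of the mixing calculation, under the same additive-volume assumption as the example (names are ours):

```python
def molarity_of_mixture(portions):
    """Molarity after mixing (molarity, volume_L) portions of the same solute."""
    total_moles = sum(c * v for c, v in portions)
    total_volume_L = sum(v for _, v in portions)
    return total_moles / total_volume_L

# Example 12: 120 mL of 6.0 M HCl mixed with 150 mL of 0.15 M HCl
print(molarity_of_mixture([(6.0, 0.120), (0.15, 0.150)]))  # 2.75; ~2.7 with the rounding above
```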
Learning Objectives
Make sure you thoroughly understand the following essential ideas:
• Describe the two fundamental processes that must occur whenever a solute dissolves in a solvent, and discuss the effects of the absorption or release of energy on the extent of these processes.
• Another factor entering into the process of solution formation is the increase (or occasionally, the decrease) in the entropy — that is, the degree to which thermal energy is dispersed or "diluted". Explain this in your own terms.
• Explain how the adage "like dissolves like" reflects the effects mentioned above. What is the principal physical property of a molecule that defines this "likeness"?
• What do we mean when we describe a liquid such as water as "associated"? Explain how this relates to the solubility of solutes in such liquids.
You may recall that in the earlier unit on phase equilibria, we pointed out that aggregations of molecules that are more disordered tend to be the ones that are favored at higher temperature, whereas those that possess the lowest potential energy are favored at lower temperatures. This is a general principle that applies throughout the world of matter; the stable form at any given temperature will always be that which leads to the best balance between low potential energy and high molecular disorder. To see how these considerations are applied to solutions, think about the individual steps that must be carried out when a solute is dissolved in a solvent:
1. If the solute is a solid or liquid, it must first be dispersed — that is, its molecular units must be pulled apart. This requires energy, and so this step always works against solution formation.
2. The solute must then be introduced into the solvent. Whether this is energetically favorable or unfavorable depends on the nature of the solute and solvent. If the solute is A and the solvent is B, then what is important is the strength of the attractive forces between A-A and B-B molecules, compared to those between A-B pairs; if the latter are greater, then the potential energy will be lower when the substances are mixed and solution formation will be favored.
If step 2 releases more energy than is consumed in step 1, this will favor solution formation, and we can generally expect the solute to be soluble in the solvent. Even if the dissolution process is slightly endothermic, there is a third important factor, the entropy increase, that will very often favor the dissolved state.
Entropy of Solution
As anyone who has shuffled a deck of cards knows, disordered arrangements of objects are statistically more favored simply because there are more ways in which they can be realized. And as the number of objects increases, the more does statistics govern their most likely arrangements. The numbers of objects (molecules) we deal with in Chemistry is so huge that their tendency to become as spread out as possible becomes overwhelming. However, in doing so, the thermal energy they carry with them is also spread and dispersed, so the availability of this energy, as measured by the temperature, is also of importance. Chemists use the term "entropy" to denote this aspect of molecular randomness.
Readers of this section who have had some exposure to thermodynamics will know that solubility, like all equilibria, is governed by the Gibbs free energy change for the process, which incorporates the entropy change at a fundamental level. A proper understanding of these considerations requires some familiarity with thermodynamics, which most students do not encounter until well into their second semester of Chemistry. If you are not there yet, do not despair; you are hereby granted temporary permission to think of molecular "disorder" and entropy simply in terms of "spread-outedness".
Thus in the very common case in which a small quantity of solid or liquid dissolves in a much larger volume of solvent, the solute becomes more spread out in space, and the number of equivalent ways in which the solute can be distributed within this volume is greatly increased. This is the same as saying that the entropy of the solute increases.
If the energetics of dissolution are favorable, this increase in entropy means that the conditions for solubility will always be met. Even if the energetics are slightly endothermic, the entropy effect can still allow the solution to form, although perhaps limiting the maximum concentration that can be achieved. In such a case, we may describe the solute as being slightly soluble in a certain solvent. What this means is that a greater volume of solvent will be required to completely dissolve a given mass of solute.
Enthalpy of Solution
Polar molecules are those in which electric charge is distributed asymmetrically. The most familiar example is ordinary water, in which the highly electronegative oxygen atom pulls part of the electric charge cloud associated with each O–H bond closer to itself. Although the H2O molecule is electrically neutral overall, this charge imbalance gives rise to a permanent electric dipole moment.
Chemists use the term "Associated" liquids to refer to liquids in which the effects of hydrogen bonding dominate the local structure. Water is the most important of these, but ammonia NH3 and hydrogen cyanide HCN are other common examples.
Thus liquid water consists of an extended network of H2O molecules linked together by dipole-dipole attractions that we call hydrogen bonds. Because these are much weaker than ordinary chemical bonds, they are continually being disrupted by thermal forces. As a result, the extended structure is highly disordered (in contrast to that of solid ice) and continually changing.
When a solute molecule is introduced into an associated liquid, a certain amount of energy must be expended in order to break the local hydrogen-bond structure and make space for the new molecule. If the solute is itself an ion or a polar molecule, new ion-dipole or dipole-dipole attractions come into play. In favorable cases these may release sufficient potential energy to largely compensate for the energy required to incorporate the solute into the structure.
An extreme example of this occurs when ammonia dissolves in water. Each NH3 molecule can form three hydrogen bonds, so the resulting solution is even more hydrogen-bonded than is pure water — accounting for the considerable amount of heat released in the process and the extraordinarily large solubility of ammonia in water.
Nonpolar solutes are Sparingly Soluble in Water: The Hydrophobic effect
When a nonpolar solute such as oxygen or hexane is introduced into an associated liquid, we might expect that the energy required to break the hydrogen bonds to make space for the new molecule is not compensated by the formation of new attractive interactions, suggesting that the process will be energetically unfavorable. We can therefore predict that solutes of these kinds will be only sparingly soluble in water, and this is indeed the case.
It turns out, however, that this is not an entirely correct explanation for the small solubility of nonpolar solutes in water. It is now known that the H2O molecules that surround a nonpolar intruder and find themselves unable to form energy-lowering polar or hydrogen-bonded interactions with it will rearrange themselves into a configuration that maximizes the hydrogen bonding between the water molecules themselves. In doing so, they create a cage-like shell around the solute molecule. In terms of the energetics of the process, these new H2O-H2O interactions largely compensate for the lack of solute-H2O interactions.
However, this shell of highly organized water molecules exacts its own toll on the solubility by reducing the entropy of the system. Dissolution of a solute normally increases the entropy by spreading the solute molecules (and the thermal energy they contain) through the larger volume of the solvent. But in this case, the H2O molecules within the highly structured shell surrounding the solute molecule are themselves constrained to this location, and their number is sufficiently great to reduce the entropy by far more than the dissolved solute increases it.
The small solubility of a nonpolar solute in an associated liquid such as water thus results more from the negative entropy change than from energetic considerations; this phenomenon is known as the hydrophobic effect. Its implications extend far beyond the topic of solubility: it governs the way that proteins fold, the formation of soap bubbles, and the assembly of cell membranes. In the next section, we will explore the ways in which these energy-and-entropy considerations come together in various kinds of solutions.
8.02: Thermodynamics of Solutions
Learning Objectives
Make sure you thoroughly understand the following essential ideas:
• Why are gases the only state of matter that never fail to form solutions?
Mixtures of gases are really solutions, but we tend not to think of them this way because they mix together freely and with no limits to their compositions; we say that gases are miscible in all proportions.
To the extent that gases behave ideally (because they consist mostly of empty space), their mixing does not involve energy changes at all; the mixing of gases is driven entirely by the increase in entropy (S) as each kind of molecule occupies and shares the space and kinetic energy of the other. Your nose can be a remarkably sensitive instrument for detecting components of gaseous solutions, even at the parts-per-million level. The olfactory experiences resulting from cooking cabbage, eating asparagus, and bodily emanations that are not mentionable in polite society are well known.
Can solids or liquids "dissolve" in a gaseous solvent? In a very narrow sense they can, but only to a very small extent. Dissolution of a condensed phase of matter into a gas is formally equivalent to evaporation (of a liquid) or sublimation (of a solid), so the process really amounts to the mixing of gases.
The energy required to remove molecules from their neighbors in a liquid or solid and into the gaseous phase is generally too great to be compensated by the greater entropy they enjoy in the larger volume of the mixture, so solids tend to have relatively low vapor pressures. The same is true of liquids at temperatures well below their boiling points. These two cases of gaseous solutions can be summarized as follows:
solvent: gas; solute → | gas | liquid or solid
energy to disperse solute | nil | large
energy to introduce into gas | nil | nil
increase in entropy | large | large
miscibility | complete | very limited
Learning Objectives
Make sure you thoroughly understand the following essential ideas:
• Explain why, in contrast, gases tend to be only slightly soluble in liquids and solids. And why are some combinations, such as the dissolution of ammonia or hydrogen chloride in water, significant exceptions to this rule?
• State Henry's law, and explain how it relates the solubility of a gas to its partial pressure.
• Why do fish in rivers and streams sometimes become asphyxiated (oxygen-starved) in hot weather?
Solutions of Gases in Liquids
Gases dissolve in liquids, but usually only to a small extent. When a gas dissolves in a liquid, the ability of the gas molecules to move freely throughout the volume of the solvent is greatly restricted. If this latter volume is small, as is often the case, the gas is effectively being compressed. Both of these effects amount to a decrease in the entropy of the gas that is not usually compensated by the entropy increase due to mixing of the two kinds of molecules. Such processes greatly restrict the solubility of gases in liquids.
solvent: liquid; solute → | gas
energy to disperse solute | nil
energy to introduce into solvent | medium to large
increase in entropy | negative
miscibility | usually very limited
Solubility of gases in water
One important consequence of the entropy decrease when a gas dissolves in a liquid is that the solubility of a gas decreases at higher temperatures; this is in contrast to most other situations, where a rise in temperature usually leads to increased solubility. Bringing a liquid to its boiling point will completely remove a gaseous solute. Some typical gas solubilities, expressed in the number of moles of gas at 1 atm pressure that will dissolve in a liter of water at 25° C, are given below:
solute | formula | solubility, mol L–1 atm–1
ammonia | NH3 | 57
carbon dioxide | CO2 | 0.0308
methane | CH4 | 0.00129
nitrogen | N2 | 0.000661
oxygen | O2 | 0.00126
sulfur dioxide | SO2 | 1.25
As we indicated above, the only gases that are readily soluble in water are those whose polar character allows them to interact strongly with it.
Ammonia is remarkably soluble in water
Inspection of the above table reveals that ammonia is a champion in this regard. At 0° C, one liter of water will dissolve about 90 g (5.3 mol) of ammonia. The reaction of ammonia with water according to
$\ce{NH_3 + H_2O → NH_4^{+} + OH^{–}}$
makes no significant contribution to its solubility; the equilibrium lies heavily on the left side (as evidenced by the strong odor of ammonia solutions). Only about four out of every 1000 NH3 molecules are in the form of ammonium ions at equilibrium. This is truly impressive when one calculates that this quantity of NH3 would occupy (5.3 mol) × (22.4 L mol–1) = 119 L at STP. Thus one volume of water will dissolve over 100 volumes of this gas. It is even more impressive when you realize that in order to compress 119 L of an ideal gas into a volume of 1 L, a pressure of 119 atm would need to be applied! This, together with the observation that dissolution of ammonia is accompanied by the liberation of a considerable amount of heat, tells us that the high solubility of ammonia is due to the formation of more hydrogen bonds (to H2O) than are broken within the water structure in order to accommodate the NH3 molecule.
If we actually compress 90 g of pure NH3 gas to 1 L, it will liquefy, and the vapor pressure of the liquid would be about 9 atm. In other words, the escaping tendency of NH3 molecules from H2O is only about 1/9th of what it is from liquid NH3. One way of interpreting this is that the strong intermolecular (dipole-dipole) attractions between NH3 and the solvent H2O give rise to a force that has the effect of a negative pressure of 9 atm.
The Ammonia Fountain
This classic experiment nicely illustrates the high solubility of gaseous ammonia in water. A flask fitted with a tube as shown is filled with ammonia gas and inverted so that the open end of tube is submerged in a container of water. A small amount of water is pushed up into the flask to get the process started. As the gas dissolves in the water, its pressure is reduced, creating a partial vacuum that draws additional water into the flask. The effect can be made more dramatic by adding an indicator dye such as phenolphthalein to the water, which turns pink as the water emerges from the "fountain" and becomes alkaline.
In old textbooks, ammonia's extraordinarily high solublility in water was incorrectly attributed to the formation of the non-existent compound "ammonium hydroxide" NH4OH. Although this formula is still occasionally seen, the name ammonium hydroxide is now used as a synonym for "aqueous ammonia" whose formula is simply NH3(aq).
As can also be seen in the above table, the gases CO2 and SO2 also exhibit relatively high solubilities in water. The main product in each case is a loosely-bound hydrate of the gas, denoted by CO2(aq) or SO2(aq). A very small fraction of the hydrate CO2·H2O then reacts to form carbonic acid H2CO3.
Solubility of gases decreases with Temperature
Recall that entropy is a measure of the ability of thermal energy to spread and be shared and exchanged by molecules in the system. Higher temperature exerts a kind of multiplying effect on a positive entropy change by increasing the amount of thermal energy available for sharing. Have you ever noticed the tiny bubbles that form near the bottom of a container of water when it is placed on a hot stove? These bubbles contain air that was previously dissolved in the water, but reaches its solubility limit as the water is warmed. You can completely rid a liquid of any dissolved gases (including unwanted ones such as Cl2 or H2S) by boiling it in an open container.
This is quite different from the behavior of most (but not all) solutions of solid or liquid solutes in liquid solvents. The reason for this behavior is the very large entropy increase that gases undergo when they are released from the confines of a condensed phase.
Solubility of Oxygen in water
Fresh water at sea level dissolves 14.6 mg of oxygen per liter at 0°C and 8.2 mg/L at 25°C. These saturation levels ensure that fish and other gilled aquatic animals are able to extract sufficient oxygen to meet their respiratory needs. But in actual aquatic environments, the presence of decaying organic matter or nitrogenous runoff can reduce these levels far below saturation. The health and survival of these organisms is severely curtailed when oxygen concentrations fall to around 5 mg/L.
The temperature dependence of the solubility of oxygen in water is an important consideration for the well-being of aquatic life; thermal pollution of natural waters (due to the influx of cooling water from power plants) has been known to reduce the dissolved oxygen concentration to levels low enough to kill fish. The advent of summer temperatures in a river can have the same effect if the oxygen concentration has already been partially depleted by reaction with organic pollutants.
Solubility of gases increases with pressure: Henry's Law
The pressure of a gas is a measure of its "escaping tendency" from a phase. So it stands to reason that raising the pressure of a gas in contact with a solvent will cause a larger fraction of it to "escape" into the solvent phase. The direct-proportionality of gas solubility to pressure was discovered by William Henry (1775-1836) and is known as Henry's Law. It is usually written as
$P = k_H C \label{7b.2.1}$
with
• $P$ is the partial pressure of the gas above the liquid,
• $C$ is the concentration of gas dissolved in the liquid, and
• $k_H$ is the Henry's law constant, which can be expressed in various units, and in some instances is defined in different ways, so be very careful to note these units when using published values.
For the table below, kH is given as
$k_H = \dfrac{\text{partial pressure of gas in atm}}{\text{concentration in liquid} \; mol \;L^{–1}}$
Table: Henry's law constants in water at 25° C, L atm mol–1
gas | He | N2 | O2 | CO2 | CH4 | NH3
kH | 2703 | 1639 | 769 | 29.4 | 775 | 0.018
Example $1$
Some vendors of bottled waters sell pressurized "oxygenated water" that is (falsely) purported to enhance health and athletic performance by supplying more oxygen to the body.
1. How many moles of O2 will be in equilibrium with one liter of water at 25° C when the partial pressure of O2 above the water is 2.0 atm?
2. How many mL of air (21% O2 v/v) must you inhale in order to introduce an equivalent quantity of O2 into the lungs (where it might actually do some good?)
Solution:
1. Solving Henry's law for the concentration, we get
$C = \dfrac{P}{k_H} = \dfrac{2.0\; atm}{769\; L\; atm \;mol^{–1}} = 0.0026\; mol\; L^{–1}$
2. At 25° C, 0.0026 mol of O2 occupies (22.4 L mol–1) × (0.0026 mol) × (298/273) = 0.063 L. The equivalent volume of air would be (0.063 L) / (0.21) = 0.30 L. Given that the average tidal volume of the human lung is around 400 mL, this means that taking one extra breath would take in more O2 than is present in 1 L of "oxygenated water".
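Henry's-law estimates like these are easily scripted; a sketch using constants from the table above (the dictionary and function names are ours):

```python
K_H_L_ATM_PER_MOL = {"He": 2703, "N2": 1639, "O2": 769, "CO2": 29.4}

def henry_concentration(gas, partial_pressure_atm):
    """C = P / kH: moles of gas dissolved per liter of water at 25 C."""
    return partial_pressure_atm / K_H_L_ATM_PER_MOL[gas]

print(henry_concentration("O2", 2.0))   # the bottled-water example: ~0.0026 mol/L
print(henry_concentration("O2", 0.21))  # under ordinary air: ~2.7e-4 mol/L
```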
Carbonated beverages: the history of "Fizz-ics"
Artificially carbonated water was first prepared by Joseph Priestley (who later discovered oxygen) in 1767 and was commercialized in 1783 by Johann Jacob Schweppe, a German-born Swiss jeweler. Naturally-carbonated spring waters have long been reputed to have curative values, and these became popular tourist destinations in the 19th century. The term "seltzer water" derives from one such spring in Niederselters, Germany. Of course, carbonation produced by fermentation has been known since ancient times. The tingling sensation that carbonated beverages produce in the mouth comes from the carbonic acid produced when bubbles of carbon dioxide come into contact with the mucous membranes of the mouth and tongue:
$CO_2 + H_2O \rightarrow H_2CO_3$
Whereas all gases will mix to form solutions regardless of the proportions, liquids are much more fussy. Some liquids, such as ethyl alcohol and water, are miscible in all proportions. Others, like the proverbial oil and water, are not; each liquid has only a limited solubility in the other, and once either of these limits is exceeded, the mixture separates into two phases.
solvent: liquid; solute → | liquid
energy to disperse solute | varies
energy to introduce into solvent | varies
increase in entropy | moderate
miscibility | "like dissolves like"
The reason for this variability is apparent from the table. Mixing of two liquids can be exothermic, endothermic, or without thermal effect, depending on the particular substances. Whatever the case, the energy factors are not usually very large, but neither is the increase in randomness; the two factors are frequently sufficiently balanced to produce limited miscibility.
The range of possibilities is shown here in terms of the mole fractions X of two liquids A and B. If A and B are only slightly miscible, they separate into two layers according to their relative densities. Note that when one takes into account trace levels, no two liquids are totally immiscible.
Like Dissolves Like
A useful general rule is that liquids are completely miscible when their intermolecular forces are very similar in nature; “like dissolves like”. Thus water is miscible with other liquids that can engage in hydrogen bonding, whereas a hydrocarbon liquid in which London or dispersion forces are the only significant intermolecular effect will only be completely miscible with similar kinds of liquids.
Substances such as the alcohols, CH3(CH2)nOH, which are hydrogen-bonding (and thus hydrophilic) at one end and hydrophobic at the other, tend to be at least partially miscible with both kinds of solvents. If n is large, the hydrocarbon properties dominate and the alcohol has only a limited solubility in water. Very small values of n allow the –OH group to dominate, so miscibility in water increases and becomes unlimited in ethanol (n = 1) and methanol (n = 0), but miscibility with hydrocarbons decreases owing to the energy required to break alcohol-alcohol hydrogen bonds when the non polar liquid is added.
These considerations have become quite important in the development of alternative automotive fuels based on mixing these alcohols with gasoline. At ordinary temperatures the increased entropy of the mixture is great enough that the unfavorable energy factor is entirely overcome, and the mixture is completely miscible. At low temperatures, the entropy factor becomes less predominant, and the fuel mixture may separate into two phases, presenting severe problems to the fuel filter and carburetor.
8.2.2D: Solutions of Solid Solutes in Liquid Solvents
Learning Objectives
Make sure you thoroughly understand the following essential ideas:
• A key to understanding the solubility of ionic solids in water are the concepts of lattice energy and hydration energy. Explain the meaning of these terms, and sketch out a diagram that shows how these are related to the "heat of solution".
Molecular Solids in Liquid Solvents
The stronger intermolecular forces in solids require more input of energy to disperse the molecular units into a liquid solution, but there is also a considerable increase in entropy that can more than compensate if the intermolecular forces are not too strong, and if the solvent has no strong hydrogen bonds that must be broken in order to introduce the solute into the liquid.
solvent → | non polar liquid | polar liquid
energy to disperse solute | moderate | moderate
energy to introduce into solvent | small | moderate
increase in entropy | moderate | moderate
miscibility | moderate | small
For example, at 25° C and 1 atm pressure, 20 g of iodine crystals will dissolve in 100 mL of ethyl alcohol, but the same quantity of water will dissolve only 0.30 g of iodine. As the molecular weight of the solid increases, the intermolecular forces holding the solid together also increase, and solubilities tend to fall off; thus the solid linear hydrocarbons CH3(CH2)nCH3 (n > 20) show diminishing solubilities in hydrocarbon liquids.
Ionic Solids in Liquid Solvents
Since the Coulombic forces that bind ions and highly polar molecules into solids are quite strong, we might expect these solids to be insoluble in just about any solvent. Ionic solids are insoluble in most non-aqueous solvents, but the high solubility of some (including NaCl) in water suggests the need for some further explanation.
solvent → | non polar | polar (water)
energy to disperse solute | large | large (endothermic)
energy to introduce into liquid | small | highly negative (exothermic)
increase in entropy | moderate | moderate to slightly negative
miscibility | very small | small to large
The key factor here turns out to be the interaction of the ions with the solvent. The electrically-charged ions exert a strong coulombic attraction on the end of the water molecule that has the opposite partial charge.
As a consequence, ions in solution are always hydrated; that is, they are quite tightly bound to water molecules through ion-dipole interaction. The number of water molecules contained in the primary hydration shell varies with the radius and charge of the ion.
Figure $1$: Hydration shells around some ions in a sodium chloride solution. The average time an ion spends in a shell is about 2-4 nanoseconds. But this is about two orders of magnitude longer than the lifetime of an individual $H_2O–H_2O$ hydrogen bond.
Lattice and Hydration Energies
The dissolution of an ionic solid $MX$ in water can be thought of as a sequence of two (hypothetical) steps:
$MX(s) \rightarrow M^+(g) + X^–(g)$
$M^+(g) + X^–(g) + H_2O(l) \rightarrow M^+(aq) + X^–(aq)$
The enthalpy difference of the first step is the lattice energy and is always positive; the enthalpy difference of the second step is the hydration energy and is always negative.
• The first reaction is always endothermic; it takes a lot of work to break up an ionic crystal lattice (Table $1$).
• The hydration step is always exothermic as H2O molecules are attracted into the electrostatic field of the ion (Table $2$).
• The heat (enthalpy) of solution is the sum of the lattice and hydration energies, and can have either sign.
Table $1$: Hydration Energies (kJ mol–1)
H+(g) | –1075 | F–(g) | –503
Li+(g) | –515 | Cl–(g) | –369
Na+(g) | –405 | Br–(g) | –336
K+(g) | –321 | I–(g) | –296
Mg2+(g) | –1922 | OH–(g) | –460
Ca2+(g) | –1592 | NO3– | –328
Sr2+(g) | –1445 | SO42– | –1145
Single-ion hydration energies (Table $1$) cannot be observed directly, but are obtained from the differences in hydration energies of salts having the given ion in common. When you encounter tables such as the above in which numeric values are related to different elements, you should always stop and see if you can make sense of any obvious trends. In this case, the things to look for are the size and charge of the ions as they would affect the electrostatic interaction between two ions or between an ion and a [polar] water molecule.
Table $2$: Lattice Energies (kJ mol–1)
| F– | Cl– | Br– | I–
Li+ | +1031 | +848 | +803 | +759
Na+ | +918 | +780 | +742 | +705
K+ | +817 | +711 | +679 | +651
Mg2+ | +2957 | +2526 | +2440 | +2327
Ca2+ | +2630 | +2258 | +2176 | +2074
Sr2+ | +2492 | +2156 | +2075 | +1963
Lattice energies are not measured directly, but are estimates based on electrostatic calculations which are reliable only for simple salts. Enthalpies of solution are observable either directly or (for sparingly soluble salts) indirectly. Hydration energies are not measurable; they are estimated as the sum of the other two quantities. It follows that any uncertainty in the lattice energies is reflected in those of the hydration energies. For this reason, tabulated values of the latter will vary depending on the source.
Example $1$: Calcium Chloride
When calcium chloride, CaCl2, is dissolved in water, will the temperature immediately after mixing rise or fall?
Solution:
Estimate the heat of solution of CaCl2.
• lattice energy of solid CaCl2: +2258 kJ mol–1
• hydration energy of the three gaseous ions: (–1592 – 369 – 369) = –2330 kJ mol–1
• heat of solution:
(2258 – 2330) kJ mol–1 = –72 kJ mol–1
Since the process is exothermic, this heat will be released to warm the solution.
As often happens for a quantity that is the sum of two large terms having opposite signs, the overall dissolution process can come out as either endothermic or exothermic, and examples of both kinds are common.
Table $3$: Energy terms associated with the dissolution of some salts (kJ mol–1)
substance → | LiF | NaI | KBr | CsI | LiCl | NaCl | KCl | AgCl
lattice energy | 1021 | 682 | 669 | 586 | 846 | 778 | 707 | 910
hydration energy | 1017 | 686 | 649 | 552 | 884 | 774 | 690 | 844
enthalpy of solution | +3 | –4 | +20 | +34 | –38 | +4 | +17 | +66
Two common examples illustrate the contrast between exothermic and endothermic heats of solution of ionic solids: the dissolution of calcium chloride, which releases enough heat to serve as the basis of "hot packs", and that of ammonium nitrate, whose strongly endothermic dissolution is exploited in instant "cold packs".
Hydration Entropy can make a Difference!
The balance between the lattice energy and hydration energy is a major factor in determining the solubility of an ionic crystal in water, but there is another factor to consider as well. We generally assume that there is a rather large increase in the entropy when a solid is dispersed into the liquid phase. However, in the case of ionic solids, each ion ends up surrounded by a shell of oriented water molecules. These water molecules, being constrained within the hydration shell, are unable to participate in the spreading of thermal energy throughout the solution, and reduce the entropy. In some cases this effect predominates so that dissolution of the salt leads to a net decrease in entropy. Recall that any process in which the entropy diminishes becomes less probable as the temperature increases; this explains why the solubilities of some salts decrease with temperature.
Learning Objectives
Make sure you thoroughly understand the following essential ideas:
• State Raoult's law in your own words, and explain why it makes sense.
• What do we mean by the escaping tendency of a molecule from a phase? How might we be able to observe or measure it?
• Explain why boiling point elevation follows naturally from Raoult's law.
• Explaining freezing point depression is admittedly a bit more difficult, but you should nevertheless be able to explain how the application of salt on an ice-covered road can cause the ice to melt.
The tendency of molecules to escape from a liquid phase into the gas phase depends in part on how much of an increase in entropy can be achieved in doing so. Evaporation of solvent molecules from the liquid always leads to a large increase in entropy because of the greater volume occupied by the molecules in the gaseous state. But if the liquid solvent is initially “diluted“ with solute, its entropy is already larger to start with, so the amount by which it can increase on entering the gas phase will be less. There will accordingly be less tendency for the solvent molecules to enter the gas phase, and so the vapor pressure of the solution diminishes as the concentration of solute increases and that of solvent decreases.
The number 55.5 mol L–1 (= 1000 g L–1 ÷ 18 g mol–1) is a useful one to remember if you are dealing a lot with aqueous solutions; this represents the concentration of water in pure water. (Strictly speaking, this is the molal concentration of H2O; it is only the molar concentration at temperatures around 4° C, where the density of water is closest to 1.000 g cm–3.)
The first diagram in Figure $1$ represents pure water whose concentration in the liquid is 55.5 M. A tiny fraction of the H2O molecules will escape into the vapor space, and if the top of the container is closed, the pressure of water vapor builds up until equilibrium is achieved. Once this happens, water molecules continue to pass between the liquid and vapor in both directions, but at equal rates, so the partial pressure of H2O in the vapor remains constant at a value known as the vapor pressure of water at the particular temperature.
In Figure $1$, we have replaced a fraction of the water molecules with a substance that has zero or negligible vapor pressure — a nonvolatile solute such as salt or sugar. This has the effect of diluting the water, reducing its escaping tendency and thus its vapor pressure.
What's important to remember is that the reduction in the vapor pressure of a solution of this kind is directly proportional to the mole fraction of the [non-volatile] solute in the liquid; equivalently, the vapor pressure that remains is proportional to the mole fraction of the solvent. The reduced vapor pressure is given by Raoult's law (1886):
$P = \chi_{solvent} P^o$
in which $P^o$ is the vapor pressure of the pure solvent.
From the definition of mole fraction, you should understand that in a two-component solution (i.e., a solvent and a single solute),
$\chi_{solvent} = 1 - \chi_{solute}.$
Example $1$
Estimate the vapor pressure of a 40% (w/w) solution of ordinary cane sugar (C12H22O11, 342 g mol–1) in water. The vapor pressure of pure water at this particular temperature is 26.0 torr.
Solution
100 g of solution contains (40 g) ÷ (342 g mol–1) = 0.12 mol of sugar and (60 g) ÷ (18 g mol–1) = 3.3 mol of water. The mole fraction of water in the solution is
$\dfrac{3.3}{3.3 + 0.12} = 0.96$
and its vapor pressure will be 0.96 × 26.0 torr = 25.1 torr.
Example $2$
The vapor pressure of water at 10° C is 9.2 torr. Estimate the vapor pressure at this temperature of a solution prepared by dissolving 1 mole of CaCl2 in 1 L of water.
Solution
Each mole of CaCl2 dissociates into one mole of Ca2+ and two moles of Cl–, giving a total of three moles of solute particles. The mole fraction of water in the solution will be
$\dfrac{55.5}{3 + 55.5} = 0.95$
The vapor pressure will be 0.95 × 9.2 torr = 8.7 torr.
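Both examples follow the same pattern, which a short sketch makes explicit (for ionic solutes each dissolved ion counts as a particle; the function name is ours):

```python
def raoult_pressure(p_pure, n_solvent, n_solute_particles):
    """Vapor pressure of the solution: P = X(solvent) * P(pure solvent)."""
    x_solvent = n_solvent / (n_solvent + n_solute_particles)
    return x_solvent * p_pure

# Example 2: 1 mol CaCl2 -> 3 mol of ions, in 55.5 mol (1 L) of water at 10 C
print(raoult_pressure(9.2, 55.5, 3.0))  # ~8.7 torr
```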
Since the sum of all mole fractions in a mixture must be unity, it follows that the more moles of solute, the smaller will be the mole fraction of the solvent. Also, if the solute is a salt that dissociates into ions, then the proportion of solvent molecules will be even smaller.
Learning Objectives
Make sure you thoroughly understand the following essential ideas:
• Explain why boiling point elevation follows naturally from Raoult's law.
• Explaining freezing point depression is admittedly a bit more difficult, but you should nevertheless be able to explain how the application of salt on an ice-covered road can cause the ice to melt.
The colligative properties really depend on the escaping tendency of solvent molecules from the liquid phase. You will recall that the vapor pressure is a direct measure of escaping tendency, so we can use these terms more or less interchangeably.
Boiling Point Elevation
If addition of a nonvolatile solute lowers the vapor pressure of the solution via Raoult's law, then it follows that the temperature must be raised to restore the vapor pressure to the value corresponding to the pure solvent. In particular, the temperature at which the vapor pressure is 1 atm will be higher than the normal boiling point by an amount known as the boiling point elevation. The exact relation between the boiling point of the solution and the mole fraction of the solvent is rather complicated, but for dilute solutions the elevation of the boiling point is directly proportional to the molal concentration of the solute:
$\Delta T_b = K_b \dfrac{\text{moles of solute}}{\text{kg of solvent}}$
Bear in mind that the proportionality constant Kb is a property of the solvent because this is the only component that contributes to the vapor pressure in the model we are considering in this section.
Table $1$: Boiling point elevation constants
solvent normal bp, °C Kb, K mol–1 kg
water 100 0.514
ethanol 79 1.19
acetic acid 118 2.93
carbon tetrachloride 76.5 5.03
Example $1$
Sucrose (C12H22O11, 342 g mol–1), like many sugars, is highly soluble in water; almost 2000 g will dissolve in 1 L of water, giving rise to what amounts to pancake syrup. Estimate the boiling point of such a sugar solution.
Solution
moles of sucrose:
$\dfrac{2000\, g}{342\, g\, mol^{–1}} = 5.8\; mol$
mass of water: assume 1000 g (we must know the density of the solution to find its exact value)
The molality of the solution is (5.8 mol) ÷ (1.0 kg) = 5.8 m.
Using the value of Kb from the table, the boiling point will be raised by (0.514 K mol–1 kg) × (5.8 mol kg–1) = 3.0 K, so the boiling point will be 103° C.
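As a quick check of this result, here is a minimal sketch (ours, not the text's) applying ΔTb = Kb × m with the Kb value from Table 1:

```python
Kb_water = 0.514                       # K mol–1 kg, from Table 1
molality = (2000 / 342) / 1.0          # mol sucrose per kg of water
delta_Tb = Kb_water * molality         # ≈ 3.0 K
print(f"boiling point ≈ {100 + delta_Tb:.0f} °C")    # ≈ 103 °C
```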
Freezing Point Depression
The freezing point of a substance is the temperature at which the solid and liquid forms can coexist indefinitely — that is, they are in equilibrium. Under these conditions molecules pass between the two phases at equal rates because their escaping tendencies from the two phases are identical. Suppose that a liquid solvent and its solid (water and ice, for example) are in equilibrium (as in the figure below), and we add a non-volatile solute (such as salt, sugar, or automotive antifreeze liquid) to the water. This will have the effect of reducing the mole fraction of H2O molecules in the liquid phase, and thus reduce the tendency of these molecules to escape from it, not only into the vapor phase (as we saw above), but also into the solid (ice) phase. This will have no effect on the rate at which H2O molecules escape from the ice into the water phase, so the system will no longer be in equilibrium and the ice will begin to melt.
If we wish to keep the solid from melting, the escaping tendency of molecules from the solid must be reduced. This can be accomplished by reducing the temperature; this lowers the escaping tendency of molecules from both phases, but it affects those in the solid more than those in the liquid, so we eventually reach the new, lower freezing point where the two quantities are again in exact balance and both phases can coexist.
If you prefer to think in terms of vapor pressures, you can use the same argument if you bear in mind that the vapor pressures of the solid and liquid must be the same at the freezing point. Dilution of the liquid (the solvent) by the nonvolatile solute reduces the vapor pressure of the solvent according to Raoult’s law, thus reducing the temperature at which the vapor pressures of the liquid and frozen forms of the solution will be equal. As with boiling point elevation, in dilute solutions there is a simple linear relation between the freezing point depression and the molality of the solute:
$\Delta T_f = K_f \dfrac{\text{moles of solute}}{\text{kg of solvent}}$
Note that Kf values are all negative!
Table $2$: Freezing point depression constants
Solvent Normal Freezing Point (°C) Kf (K mol–1 kg)
water 0.0 –1.86
acetic acid 16.7 –3.90
benzene 5.5 –5.10
camphor 180 –40.0
cyclohexane 6.5 –20.2
phenol 40 –7.3
Salting Roads
The use of salt to de-ice roads is a common application of this principle. The solution formed when some of the salt dissolves in the moist ice reduces the freezing point of the ice. If the freezing point falls below the ambient temperature, the ice melts. In very cold weather, the ambient temperature may be below the freezing point of even the saturated salt solution, and the salt will have no effect. The effectiveness of a de-icing salt depends on the number of particles it releases on dissociation and on its solubility in water:
name Formula lowest practical T, °C
ammonium sulfate (NH4)2SO4 –7
calcium chloride CaCl2 –29
potassium chloride KCl –15
sodium chloride NaCl –9
urea (NH2)2CO –7
Automotive radiator antifreezes are mostly based on ethylene glycol, (CH2OH)2. Owing to the strong hydrogen-bonding properties of this double alcohol, this substance is miscible with water in all proportions, and contributes only a very small vapor pressure of its own. Besides lowering the freezing point, antifreeze also raises the boiling point, increasing the operating range of the cooling system. The pure glycol freezes at –12.9°C and boils at 197°C, allowing water-glycol mixtures to be tailored to a wide range of conditions.
Example $2$
Estimate the freezing point of an antifreeze mixture made up by combining one volume of ethylene glycol (MW = 62, density 1.11 g cm–3) with two volumes of water.
Solution
Assume that we use 1 L of glycol and 2 L of water (the actual volumes do not matter as long as their ratios are as given.) The mass of the glycol will be 1.11 kg and that of the water will be 2.0 kg, so the total mass of the solution is 3.11 kg. We then have:
• number of moles of glycol: (1110 g) ÷ (62 g mol–1) = 17.9 mol
• molality of glycol: (17.9 mol) ÷ (2.00 kg) = 8.95 mol kg–1
• freezing point depression: ΔTF = (–1.86 K mol–1 kg) × (8.95 mol kg–1) = –16.6 K, so the solution will freeze at about –17°C.
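The same steps in code form, using the (negative) Kf for water from Table 2 (a sketch of ours, not part of the original example):

```python
Kf_water = -1.86                 # K mol–1 kg, from Table 2
n_glycol = 1110 / 62             # ≈ 17.9 mol in 1 L of glycol
molality = n_glycol / 2.00       # 2 L of water ≈ 2.00 kg
delta_Tf = Kf_water * molality
print(f"ΔTf ≈ {delta_Tf:.1f} K")     # ≈ –16.6 K, i.e. freezing near –17 °C
```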
Any ionic species formed by dissociation will also contribute to the freezing point depression. This can serve as a useful means of determining the fraction of a solute that is dissociated.
Example $3$
An aqueous solution of nitrous acid (HNO2, MW = 47) freezes at –0.198 °C. If the solution was prepared by adding 0.100 mole of the acid to 1000 g of water, what percentage of the HNO2 is dissociated in the solution?
Solution
The nominal molality of the solution is (0.100 mol) ÷ (1.00 kg) = 0.100 mol kg–1.
But the effective molality according to the observed ΔTF value is given by
ΔTF ÷ KF = (–0.198 K) ÷ (–1.86 K mol–1 kg) = 0.106 mol kg–1; this is the total molality of all species present after the dissociation reaction HNO2 → H+ + NO2– has occurred. If we let x = [H+] = [NO2–], then by stoichiometry [HNO2] = 0.100 – x, and the total molality is (0.100 – x) + 2x = 0.100 + x = 0.106, so x ≈ 0.006. The fraction of HNO2 that is dissociated is 0.006 ÷ 0.100 = 0.06, corresponding to about 6% dissociation of the acid.
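Because the bookkeeping here is easy to get wrong, a short numerical check may help; this sketch (ours) simply retraces the steps above:

```python
Kf = -1.86                   # K mol–1 kg (water)
dTf = -0.198                 # observed freezing point depression, K
m_eff = dTf / Kf             # effective molality ≈ 0.106 mol kg–1
x = m_eff - 0.100            # extra particles created by HNO2 → H+ + NO2–
print(f"fraction dissociated ≈ {x / 0.100:.0%}")      # ≈ 6%
```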
Another Perspective of Freezing Point Depression and Boiling Point Elevation
A simple phase diagram can provide more insight into these phenomena. You may already be familiar with the phase diagram for water below.
The one shown below expands on this by plotting lines for both pure water and for its "diluted" state produced by the introduction of a non-volatile solute.
The normal boiling point of the pure solvent is indicated by the point at which its vapor pressure curve intersects the 1-atm line — that is, where the escaping tendency of solvent molecules from the liquid is equivalent to 1 atmosphere pressure. Addition of a non-volatile solute reduces the vapor pressures to the values given by the blue line. This shifts the boiling point to the right, corresponding to the increase in temperature ΔTb required to raise the escaping tendency of the H2O molecules back up to 1 atm.
To understand freezing point depression, notice that the blue vapor pressure line of the solution intersects the curved black vapor pressure line of the solid (ice) at a new triple point, at which all three phases (ice, water vapor, and liquid water) are in equilibrium and thus exhibit equal escaping tendencies. This point is by definition the origin of the freezing (solid-liquid) line, which therefore intersects the 1-atm line at a temperature lower than the normal freezing point by an amount ΔTf.
Note that the above analysis assumes that the solute is soluble only in the liquid solvent, but not in its solid form. This is generally more or less true. For example, when arctic ice forms from seawater, the salts get mostly "squeezed" out. This has the interesting effect of making the water that remains more saline, and hence more dense, causing it to sink to the bottom part of the ocean where it gets taken up by the south-flowing deep current.
A Thermodynamics Perspective on Freezing and Boiling
Those readers who have some knowledge of thermodynamics will recognize that what we have been referring to as "escaping" tendency is really a manifestation of the Gibbs Energy. This schematic plot shows how the G's for the solid, liquid, and gas phases of a typical substance vary with the temperature.
The rule is that the phase with the most negative free energy rules.
The phase that is most stable (and which therefore is the only one that exists) is always the one having the most negative free energy (indicated here by the thicker portions of the plotted lines.) The melting and boiling points correspond to the respective temperatures where the solid and liquid and liquid and vapor have identical free energies.
As we saw above, adding a solute to the liquid dilutes it, making its free energy more negative, with the result that the freezing and boiling points are shifted to the left and right, respectively.
The relationships shown in these plots depend on the differing slopes of the lines representing the free energies of the phases as the temperature changes. These slopes are proportional to the entropy of each phase. Because gases have the highest entropies, the slope of the "gaseous solvent" line is much greater than that of the others. Note that this plot is not to scale.
Learning Objectives
• Define a semipermeable membrane in the context of osmotic flow.
• Explain, in simple terms, what fundamental process "drives" osmotic flow.
• What is osmotic pressure, and how is it measured?
• Osmotic pressure can be a useful means of estimating the molecular weight of a substance, particularly if its molecular weight is quite large. Explain in your own words how this works.
• What is reverse osmosis, and what is its principal application?
• Explain the role of osmotic pressure in food preservation, and give an example.
• Describe the role osmosis plays in the rise of water in plants (where is the semipermeable membrane?), and why it cannot be the only cause in very tall trees.
Osmosis is the process in which a liquid passes through a membrane whose pores permit the passage of solvent molecules but are too small for the larger solute molecules to pass through.
Semipermeable Membranes and Osmotic flow
Figure $1$ shows a simple osmotic cell. Both compartments contain water, but the one on the right also contains a solute whose molecules (represented by green circles) are too large to pass through the membrane. Many artificial and natural substances are capable of acting as semi-permeable membranes. The walls of most plant and animal cells fall into this category.
If the cell is set up so that the liquid level is initially the same in both compartments, you will soon notice that the liquid rises in the left compartment and falls in the right side, indicating that water molecules from the right compartment are migrating through the semipermeable membrane and into the left compartment. This migration of the solvent is known as osmotic flow, or simply osmosis.
What is the force that drives the molecules through the membrane? This is a misleading question, because there is no real “force” in the physical sense other than the thermal energies all molecules possess. Osmosis is a consequence of simple statistics: the randomly directed motions of a collection of molecules will cause more to leave a region of high concentration than return to it, because the escaping tendency of a substance from a phase increases with its concentration in the phase.
Diffusion and Osmotic Flow
Suppose you drop a lump of sugar into a cup of tea, without stirring. Initially there will be a very high concentration of dissolved sugar at the bottom of the cup, and a very low concentration near the top. Since the molecules are in random motion, there will be more sugar molecules moving from the high concentration region to the low concentration region than in the opposite direction. The motion of a substance from a region of high concentration to one of low concentration is known as diffusion. Diffusion is a consequence of a concentration gradient (which is a measure of the difference in escaping tendency of the substance in different regions of the solution).
There is really no special force on the individual molecules; diffusion is purely a consequence of statistics. Osmotic flow is simply diffusion of a solvent through a membrane impermeable to solute molecules. Now take two solutions of differing solvent concentration, and separate them by a semipermeable membrane (Figure $2$). Being semipermeable, the membrane is essentially invisible to the solvent molecules, so they diffuse from the high concentration region to the low concentration region just as before. This flow of solvent constitutes osmotic flow, or osmosis.
Figure $2$: Osmotic flow. (a) Two sugar-water solutions of different concentrations, separated by a semipermeable membrane that passes water but not sugar. Osmotic flow proceeds toward the side on which the water is less concentrated. (b) The fluid level rises until the back pressure ρgh equals the relative osmotic pressure; then, the net transfer of water is zero. (CC-BY; OpenStax).
Figure $2$ shows water molecules (blue) passing freely in both directions through the semipermeable membrane, while the larger solute molecules remain trapped in the left compartment, diluting the water and reducing its escaping tendency from this cell, compared to the water in the right side. This results in a net osmotic flow of water from the right side which continues until the increased hydrostatic pressure on the left side raises the escaping tendency of the diluted water to that of the pure water at 1 atm, at which point osmotic equilibrium is achieved.
Osmotic flow is simply diffusion of a solvent through a membrane impermeable to solute molecules.
In the absence of the semipermeable membrane, diffusion would continue until the concentrations of all substances are uniform throughout the liquid phase. With the semipermeable membrane in place, and if one compartment contains the pure solvent, this can never happen; no matter how much liquid flows through the membrane, the solvent in the right side will always be more concentrated than that in the left side. Osmosis will continue indefinitely until we run out of solvent, or something else stops it.
Osmotic equilibrium and osmotic pressure
One way to stop osmosis is to raise the hydrostatic pressure on the solution side of the membrane. This pressure squeezes the solvent molecules closer together, raising their escaping tendency from the phase. If we apply enough pressure (or let the pressure build up by osmotic flow of liquid into an enclosed region), the escaping tendency of solvent molecules from the solution will eventually rise to that of the molecules in the pure solvent, and osmotic flow will cease. The pressure required to achieve osmotic equilibrium is known as the osmotic pressure. Note that the osmotic pressure is the pressure required to stop osmosis, not to sustain it.
Osmotic pressure is the pressure required to stop osmotic flow. It is common usage to say that a solution “has” an osmotic pressure of "x atmospheres". It is important to understand that this means nothing more than that a pressure of this value must be applied to the solution to prevent flow of pure solvent into this solution through a semipermeable membrane separating the two liquids.
Osmotic Pressure and Solute Concentration
The Dutch scientist Jacobus Van't Hoff (1852-1911) was one of the giants of physical chemistry. He discovered this equation after a chance encounter with a botanist friend during a walk in a park in Amsterdam; the botanist had learned that the osmotic pressure increases by about 1/273 for each degree of temperature increase. van’t Hoff immediately grasped the analogy to the ideal gas law. The osmotic pressure $\Pi$ of a solution containing $n$ moles of solute particles in a solution of volume $V$ is given by the van 't Hoff equation:
$\Pi = \dfrac{nRT}{V} \label{8.4.3}$
in which
• $R$ is the gas constant (0.0821 L atm mol–1 K–1) and
• $T$ is the absolute temperature.
In contrast to the need to employ solute molality to calculate the effects of a non-volatile solute on changes in the freezing and boiling points of a solution, we can use solute molarity to calculate osmotic pressures.
Note that the fraction $n/V$ corresponds to the molarity ($M$) of a solution of a non-dissociating solute, or to twice the molarity of a totally-dissociated solute such as $NaCl$. In this context, molarity refers to the summed total of the concentrations of all solute species. Hence, Equation \ref{8.4.3} can be expressed as
$\Pi =MRT \label{8.4.3B}$
Recalling that $\Pi$ is the Greek equivalent of P, the re-arranged form $\Pi V = nRT$ of the above equation should look familiar. Much effort was expended around the end of the 19th century to explain the similarity between this relation and the ideal gas law, but in fact, the Van’t Hoff equation turns out to be only a very rough approximation of the real osmotic pressure law, which is considerably more complicated and was derived after van 't Hoff's formulation. As such, this equation gives valid results only for extremely dilute ("ideal") solutions.
According to the Van't Hoff equation, an ideal solution containing 1 mole of dissolved particles per liter of solvent at 0° C will have an osmotic pressure of 22.4 atm.
Example $1$
Sea water contains dissolved salts at a total ionic concentration of about 1.13 mol L–1. What pressure must be applied to prevent osmotic flow of pure water into sea water through a membrane permeable only to water molecules?
Solution
This is a simple application of Equation \ref{8.4.3B}.
\begin{align*} \Pi &= MRT \\[4pt] &= (1.13\; mol/L)(0.0821\; L\,atm\,mol^{–1}\,K^{–1})(298\; K) \\[4pt] &= 27.6\; atm \end{align*}
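This calculation is a natural one-liner; the sketch below simply restates it (variable names are ours):

```python
M, R, T = 1.13, 0.0821, 298           # mol/L, L atm mol–1 K–1, K
print(f"Π ≈ {M * R * T:.1f} atm")     # ≈ 27.6 atm
```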
Molecular Weight Determination by Osmotic Pressure
Since all of the colligative properties of solutions depend on the concentration of solute particles (and thus on the mole fraction of the solvent), their measurement can serve as a convenient experimental tool for determining the concentration, and thus the molecular weight, of a solute. Osmotic pressure is especially useful in this regard, because a small amount of solute will produce a much larger change in this quantity than in the boiling point, freezing point, or vapor pressure; even a 10–6 molar solution would have a measurable osmotic pressure. Molecular weight determinations are very frequently made on proteins or other high molecular weight polymers. These substances, owing to their large molecular size, tend to be only sparingly soluble in most solvents, so measurement of osmotic pressure is often the only practical way of determining their molecular weights.
Example $2$: average molecular weight
The osmotic pressure of a benzene solution containing 5.0 g of polystyrene per liter was found to be 7.6 torr at 25°C. Estimate the average molecular weight of the polystyrene in this sample.
Solution:
osmotic pressure:
\begin{align*} \Pi &= \dfrac{7.6\, torr}{760\, torr\, atm^{–1}} \\[4pt] &= 0.0100 \,atm \end{align*}
Using the van 't Hoff equation in the form $\Pi V = nRT$ (Equation \ref{8.4.3}), the number of moles of polystyrene is
n = (0.0100 atm)(1 L) ÷ (0.0821 L atm mol–1 K–1)(298 K) = 4.09 x 10–4 mol
Molar mass of the polystyrene:
(5.0 g) ÷ (4.09 x 10–4 mol) = 12200 g mol–1.
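For readers who prefer to see the arithmetic laid out, here is a short sketch of the same calculation (variable names are ours):

```python
R, T = 0.0821, 298
Pi = 7.6 / 760                        # torr → atm
n = Pi * 1.0 / (R * T)                # mol of polystyrene in 1 L
print(f"M ≈ {5.0 / n:.0f} g/mol")     # ≈ 12,200 g/mol
```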
The experiment to demonstrate this is quite simple: pure solvent is introduced into one side of a cell that is separated into two parts by a semipermeable membrane. The polymer solution is placed in the other side, which is enclosed and connected to a manometer or some other kind of pressure gauge. As solvent molecules diffuse into the solution cell the pressure builds up; eventually this pressure matches the osmotic pressure of the solution and the system is in osmotic equilibrium. The osmotic pressure is read from the measuring device and substituted into the van’t Hoff equation to find the number of moles of solute.
Learning Objectives
• What is reverse osmosis, and what is its principal application?
• Explain the role of osmotic pressure in food preservation, and give an example.
• Describe the role osmosis plays in the rise of water in plants (where is the semipermeable membrane?), and why it cannot be the only cause in very tall trees.
If it takes a pressure of \(\Pi\) atm to bring about osmotic equilibrium, then it follows that applying a hydrostatic pressure greater than this to the high-solute side of an osmotic cell will force water to flow back into the fresh-water side. This process, known as reverse osmosis, is now the major technology employed to desalinate ocean water and to reclaim "used" water from power plants, runoff, and even from sewage. It is also widely used to deionize ordinary water and to purify it for industrial uses (especially beverage and food manufacture) and drinking purposes.
Pre-treatment commonly employs activated-carbon filtration to remove organics and chlorine (which tends to damage RO membranes). Although bacteria are unable to pass through semipermeable membranes, the latter can develop pinhole leaks, so some form of disinfection is often advised. The efficiency and cost of RO are critically dependent on the properties of the semipermeable membrane.
Osmotic Generation of Electric Power
The osmotic pressure of seawater is almost 26 atm. Since a pressure of 1 atm will support a column of water 10.3 m high, this means that osmotic flow of fresh water through a semipermeable membrane into seawater could in principle support a column of the latter 26 × 10.3 ≈ 270 m (nearly 900 ft) high!
So imagine an osmotic cell in which one side is supplied with fresh water from a river, and the other side with seawater. Osmotic flow of fresh water into the seawater side forces the latter up through a riser containing a turbine connected to a generator, thus providing a constant and fuel-less source of electricity. The key component of such a scheme, first proposed by an Israeli scientist in 1973 and known as pressure-retarded osmosis (PRO) is of course a semipermeable membrane capable of passing water at a sufficiently high rate.
The world's first experimental PRO plant was opened in 2009 in Norway. Its capacity is only 4 kW, but it serves as proof-in-principle of a scheme that is estimated capable of supplying up to 2000 terawatt-hours of energy worldwide. The semipermeable membrane operates at a pressure of about 10 atm and passes 10 L of water per second, generating about 1 watt per m2 of membrane. PRO is but one form of salinity gradient power that depends on the difference between the salt concentrations in different bodies of water.
1 atm is equivalent to 1034 g cm–2, so from the density of water we get (1034 g cm–2) ÷ (1 g cm–3) = 1034 cm = 10.3 m.
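The footnote's conversion and the column-height arithmetic can be checked in a couple of lines (a sketch of ours):

```python
h_per_atm = 1034 / 1.0 / 100          # (g cm–2)/(g cm–3) = 1034 cm, in metres
print(f"{h_per_atm:.1f} m per atm")   # ≈ 10.3 m of water per atm
print(f"{26 * h_per_atm:.0f} m")      # ≈ 270 m for 26 atm of seawater
```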
Osmosis in Biology and Physiology
Because many plant and animal cell membranes and tissues tend to be permeable to water and other small molecules, osmotic flow plays an essential role in many physiological processes.
Normal saline solution
The interiors of cells contain salts and other solutes that dilute the intracellular water. If the cell membrane is permeable to water, placing the cell in contact with pure water will draw water into the cell, tending to rupture it. This is easily and dramatically seen if red blood cells are placed in a drop of water and observed through a microscope as they burst. This is the reason that "normal saline solution", rather than pure water, is administered in order to maintain blood volume or to infuse therapeutic agents during medical procedures.
In order to prevent irritation of sensitive membranes, one should always add some salt to water used to irrigate the eyes, nose, throat or bowel. Normal saline contains 0.91% w/v of sodium chloride, corresponding to 0.154 M, making its osmotic pressure close to that of blood.
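As a sketch (the body temperature of 310 K and the factor of 2 for NaCl dissociation are our assumptions, not stated in the text), the quoted figures are easy to verify; the small difference from 0.154 M reflects rounding of the molar mass:

```python
M_NaCl = 9.1 / 58.44                  # 0.91% w/v = 9.1 g/L → ≈ 0.156 M
Pi = 2 * M_NaCl * 0.0821 * 310        # 2 particles per NaCl; T ≈ 310 K (body)
print(f"{M_NaCl:.3f} M, Π ≈ {Pi:.1f} atm")    # ≈ 0.156 M, ≈ 7.9 atm
```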
Food preservation
The drying of fruit, the use of sugar to preserve jams and jellies, and the use of salt to preserve certain meats, are age-old methods of preserving food. The idea is to reduce the water concentration to a level below that in living organisms. Any bacterial cell that wanders into such a medium will have water osmotically drawn out of it, and will die of dehydration. A similar effect is noticed by anyone who holds a hard sugar candy against the inner wall of the mouth for an extended time; the affected surface becomes dehydrated and noticeably rough when touched by the tongue.
In the food industry, what is known as water activity is measured on a scale of 0 to 1, where 0 indicates no water and 1 indicates all water. Food spoilage micro-organisms, in general, are inhibited in food where the water activity is below 0.6. However, if the pH of the food is less than 4.6, micro-organisms are inhibited (but not immediately killed) when the water activity is below 0.85.
Diarrhea
The presence of excessive solutes in the bowel draws water from the intestinal walls, giving rise to diarrhea. This can occur when a food is eaten that cannot be properly digested (as, for example, milk in lactose-intolerant people). The undigested material contributes to the solute concentration, raising its osmotic pressure. The situation is made even worse if the material undergoes bacterial fermentation which results in the formation of methane and carbon dioxide, producing a frothy discharge.
Water Transport in Plants
Osmotic flow plays an important role in the transport of water from its source in the soil to its release by transpiration from the leaves; it is helped along by hydrogen-bonding forces between the water molecules. Capillary rise is not believed to be a significant factor.
Water enters the roots via osmosis, driven by the low water concentration inside the roots that is maintained by both the active [non-osmotic] transport of ionic nutrients from the soil and by the supply of sugars that are photosynthesized in the leaves. This generates a certain amount of root pressure which sends the water molecules on their way up through the vascular channels of the stem or trunk. But the maximum root pressures that have been measured can push water up only about 20 meters, whereas the tallest trees exceed 100 meters. Root pressure can be the sole driver of water transport in short plants, or even in tall ones such as trees that are not in leaf. Anyone who has seen apparently tender and fragile plants pushing their way up through asphalt pavement cannot help but be impressed!
But when taller plants are actively transpiring (losing water to the atmosphere), osmosis gets a boost from what plant physiologists call cohesion tension or transpirational pull. As each H2O molecule emerges from the opening in the leaf it pulls along the chain of molecules beneath it. So hydrogen-bonding is no less important than osmosis in the overall water transport process. If the soil becomes dry or saline, the osmotic pressure outside the root becomes greater than that inside the plant, and the plant suffers from “water tension”, i.e., wilting.
Do fish drink water? Do they Urinate?
The following section is a bit long, but for those who are interested in biology it offers a beautiful example of how the constraints imposed by osmosis have guided the evolution of ocean-living creatures into fresh-water species. It concerns ammonia, NH3, a product of protein metabolism that is generated within all animals, but is highly toxic and must be eliminated.
Marine invertebrates (those that live in seawater) are covered in membranes that are fairly permeable to water and to small molecules such as ammonia. So water can diffuse in either direction as required, and ammonia can diffuse out as quickly as it forms. Nothing special here.
Invertebrates that live in fresh water do have a problem: the salt concentrations within their bodies are around 1%, much greater than in fresh water. For this reason they have evolved surrounding membranes that are largely impermeable to salts (to prevent their diffusion out of the body) and to water (to prevent osmotic flow in). But these organisms must also be able to exchange oxygen and carbon dioxide with their environment. The special respiratory organs (gills) that mediate this process, as a consequence of being permeable to these two gases, will also allow water molecules (whose sizes are comparable to those of the respiratory gases) to pass through. In order to protect fresh-water invertebrates from the disastrous effects of unlimited water inflow through the gill membranes, these animals possess special excretory organs that expel excess water back into the environment. Thus in such animals, there is a constant flow of water passing through the body. Ammonia and other substances that need to be excreted are taken up by this stream, which constitutes a continual flow of dilute urine.
Fishes fall into two general classes: most fish have bony skeletons and are known as teleosts. Sharks and rays have cartilage instead of bones, and are called elasmobranchs. For the teleosts that live in fresh water, the situation is very much the same as with fresh-water invertebrates; they take in and excrete water continuously. The fact that an animal lives in the water does not mean that it enjoys an unlimited supply of water. Marine teleosts have a more difficult problem. Their gills are permeable to water, as are those of marine invertebrates. But the salt content of seawater (about 3%), being higher than the about 1% in the fish’s blood, would draw water out of the fish. Thus these animals are constantly losing water, and would be liable to desiccation if water could freely pass out of their gills. Some does, of course, and with it goes most of its nitrogen in the form of NH3.
Thus most of the waste nitrogen exits not through the usual excretory organs as with most vertebrates, but through the gills. But in order to prevent excessive loss of water, the gills have reduced permeability to water, and with it, to the comparably-sized NH3. So in order to prevent ammonia toxicity, the remainder of it is converted to a non-toxic substance (trimethylamine oxide, (CH3)3NO) which is excreted via the kidneys.
The marine elasmobranchs solve the loss-of-water problem in another way: they convert waste ammonia to urea, (NH2)2CO, which is highly soluble and non-toxic. Their kidneys are able to control the quantity of urea excreted so that their blood retains about 2-2.5 percent of this substance. Combined with the 1 percent of salts and other substances in their blood, this raises the osmotic pressure within the animal to slightly above that of seawater. Thus the same mechanism that protects them from ammonia poisoning also ensures them an adequate water supply.
All four solution effects (reduced vapor pressure, freezing point depression, boiling point elevation, and osmotic pressure) result from “dilution” of the solvent by the added solute. Because of this commonality they are referred to as colligative properties (Lat. co ligare, to bind together). The key role of the solvent concentration is obscured by the greatly-simplified expressions used to calculate the magnitude of these effects, in which only the solute concentration appears. The details of how to carry out these calculations and the many important applications of colligative properties are covered elsewhere. Our purpose here is to offer a more complete explanation of why these phenomena occur.
Basically, these all result from the effect of dilution of the solvent on its entropy, and thus in the increase in the density of energy states of the system in the solution compared to that in the pure liquid. Equilibrium between two phases (liquid-gas for boiling and solid-liquid for freezing) occurs when the energy states in each phase can be populated at equal densities. The temperatures at which this occurs are depicted by the shading.
• Dilution of the solvent adds new energy states to the liquid, but does not affect the vapor phase. This raises the temperature required to make equal numbers of microstates accessible in the two phases.
• Dilution of the solvent adds new energy states to the liquid, but does not affect the solid phase. This reduces the temperature required to make equal numbers of states accessible in the two phases.
Effects of pressure on the entropy: Osmotic Pressure
When a liquid is subjected to hydrostatic pressure— for example, by an inert, non-dissolving gas that occupies the vapor space above the surface, the vapor pressure of the liquid is raised. The pressure acts to compress the liquid very slightly, effectively narrowing the potential energy well in which the individual molecules reside and thus increasing their tendency to escape from the liquid phase. (Because liquids are not very compressible, the effect is quite small; a 100-atm applied pressure will raise the vapor pressure of water at 25°C by only about 2 torr.) In terms of the entropy, we can say that the applied pressure reduces the dimensions of the "box" within which the principal translational motions of the molecules are confined within the liquid, thus reducing the density of energy states in the liquid phase.
Applying hydrostatic pressure to a liquid increases the spacing of its microstates, so that the number of energetically accessible states in the gas, although unchanged, is relatively greater— thus increasing the tendency of molecules to escape into the vapor phase. In terms of free energy, the higher pressure raises the free energy of the liquid, but does not affect that of the gas phase.
This phenomenon can explain osmotic pressure. Osmotic pressure, students must be reminded, is not what drives osmosis, but is rather the hydrostatic pressure that must be applied to the more concentrated solution (more dilute solvent) in order to stop osmotic flow of solvent into the solution. The effect of this pressure \(\Pi\) is to slightly increase the spacing of solvent energy states on the high-pressure (dilute-solvent) side of the membrane to match that of the pure solvent, restoring osmotic equilibrium.
Osmotic pressure does not drive osmosis, but is rather the hydrostatic pressure that must be applied to the more concentrated solution (more dilute solvent) in order to stop osmotic flow of solvent into the solution.
Learning Objectives
Make sure you thoroughly understand the following essential ideas:
• Describe the physical reasons that a binary liquid solution might exhibit non-ideal behavior
The popular liquor vodka consists mainly of ethanol (ethyl alcohol) and water in roughly equal portions. Ethanol and water both have substantial vapor pressures, so both components contribute to the total pressure of the gas phase above the liquid in a closed container of the two liquids. One might expect the vapor pressure of a solution of ethanol and water to be simply the sum of the values predicted by Raoult's law for the two liquids individually, but in general, this does not happen. The reason for this can be understood if you recall that Raoult's law reflects a single effect: the smaller proportion of vaporizable molecules (and thus their reduced escaping tendency) when the liquid is diluted by an otherwise "inert" (non-volatile) substance.
Ideal Solutions
There are some solutions whose components follow Raoult's law quite closely. An example of such a solution is one composed of hexane C6H14 and heptane C7H16. The total vapor pressure of this solution varies in a straight-line manner with the mole fraction composition of the mixture.
Note that the mole fraction scales at the top and bottom run in opposite directions, since by definition,
$\chi_{hexane} = 1 - \chi_{heptane}$
If this solution behaves ideally, then the total vapor pressure is the sum of the Raoult's law plots for the two pure compounds:
$P_{total} = P_{ heptane } + P_{ hexane }$
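To see the straight-line behavior numerically, the sketch below evaluates the two Raoult's-law terms across the composition range. The pure-liquid vapor pressures are assumed, roughly representative room-temperature values rather than figures from the text:

```python
P0_hexane, P0_heptane = 151.0, 45.5   # torr, assumed values near 25 °C

def total_vp(x_hexane):
    """Total vapor pressure of an ideal hexane–heptane mixture."""
    return x_hexane * P0_hexane + (1 - x_hexane) * P0_heptane

for x in (0.0, 0.25, 0.50, 0.75, 1.0):
    print(f"x(hexane) = {x:.2f}   P_total ≈ {total_vp(x):.0f} torr")
# output rises linearly from ≈ 46 to ≈ 151 torr
```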
An ideal solution is one whose vapor pressure follows Raoult's law throughout its range of compositions. Experience has shown solutions that approximate ideal behavior are composed of molecules having very similar structures. Thus hexane and heptane are both linear hydrocarbons that differ only by a single –CH2 group. This provides a direct clue to the underlying cause of non-ideal behavior in solutions of volatile liquids. In an ideal solution, the interactions are there, but they are all energetically identical. Thus in an ideal solution of molecules A and B, A—A and B—B attractions are the same as A—B attractions. This is the case only when the two components are chemically and structurally very similar.
Ideal Solutions vs. Ideal Gases
The ideal solution differs in a fundamental way from the definition of an ideal gas, defined as a hypothetical substance that follows the ideal gas law. The kinetic molecular theory that explains ideal gas behavior assumes that the molecules occupy no space and that intermolecular attractions are totally absent.
The definition of an ideal gas is clearly inapplicable to liquids, whose volumes directly reflect the volumes of their component molecules. And of course, the very ability of the molecules to form a condensed phase is due to the attractive forces between the molecules. So the most we can say about an ideal solution is that the attractions between all of its molecules are identical — that is, A-type molecules are as strongly attracted to other A molecules as to B-type molecules. Ideal solutions are perfectly democratic: there are no favorites.
Real Solutions
Real solutions are more like real societies, in which some members are "more equal than others." Suppose, for example, that unlike molecules are more strongly attracted to each other than are like molecules. This will cause A–B pairs that find themselves adjacent to each other to be energetically more stable than A–A and B–B pairs. At compositions in which significant numbers of both kind of molecules are present, their tendencies to escape the solution — and thus the vapor pressure of the solution, will fall below what it would be if the interactions between all the molecules were identical. This gives rise to a negative deviation from Raoult's law. The chloroform-acetone system, illustrated above, is a good example.
Conversely, if like molecules of each kind are more attracted to each other than to unlike ones, then the molecules that happen to be close to their own kind will be stabilized. At compositions approaching 50 mole-percent, A and B molecules near each other will more readily escape the solution, which will therefore exhibit a higher vapor pressure than would otherwise be the case. It should not be surprising that molecules as different as benzene and $CS_2$ should interact more strongly with their own kind, hence the positive deviation illustrated here.
You will recall that all gases approach ideal behavior as their pressures approach zero. In the same way, as the mole fraction of either component approaches unity, the behavior of the solution approaches ideality. This is a simple consequence of the fact that at these limits, each molecule is surrounded mainly by its own kind, and the few A-B interactions will have little effect. Raoult's law is therefore a limiting law:
$P_i \rightarrow \chi_i P_i^o \;\; \text{as} \;\; \chi_i \rightarrow 1$
it gives the partial pressure of a substance in equilibrium with the solution more and more closely as the mole fraction of that substance approaches unity.
Learning Objectives
Make sure you thoroughly understand the following essential ideas:
• Sketch out a typical boiling point diagram for a binary liquid solution, and use this to show how a simple one-stage distillation works.
• Explain the role of the lever rule in fractional distillation
• Describe the purpose and function of a fractionating column
• Sketch out boiling point diagrams for high- and low-boiling azeotropes
• Describe the role of distillation in crude oil refining, and explain, in a very general way, how further processing is used to increase the yield of gasoline motor fuel.
Distillation is a process whereby a mixture of liquids having different vapor pressures is separated into its components. At first one might think that this would be quite simple: if you have a solution consisting of liquid A that boils at 50°C and liquid B with a boiling point of 90°C, all that would be necessary would be to heat the mixture to some temperature between these two values; this would boil off all the A (whose vapor could then be condensed back into pure liquid A), leaving pure liquid B in the pot. But that overlooks the fact that these liquids will have substantial vapor pressures at all temperatures, not only at their boiling points.
Vapor Pressure vs. Composition Phase Diagrams
To fully understand distillation, we will consider an ideal binary liquid mixture of $\ce{A}$ and $\ce{B}$. If the mole fraction of $A$ in the mixture is $\chi_A$, then by the definition of mole fraction, that of $B$ is
$\chi_B = 1 - \chi_A$
Since distillation depends on the different vapor pressures of the components to be separated, let's first consider the vapor pressure vs. composition plots (Figure $1$) for a hypothetical mixture at some arbitrary temperature at which both liquid and gas phases can exist, depending on the total pressure.
In Figure $2$, all states of the system (i.e., combinations of pressure and composition) in which the solution exists solely as a liquid are shaded in green. Since liquids are more stable at higher pressures, these states occupy the upper part of the diagram. At any given total vapor pressure, the composition of the vapor in equilibrium with the liquid (designated by $x_A$) corresponds to the intercept with the diagonal equilibrium line. The diagonal line is just an expression of the linearity between vapor pressure and composition according to Raoult's law.
The two liquid-vapor equilibrium lines (one curved, the other straight) now enclose an area in which liquid and vapor can coexist; outside of this region, the mixture will consist entirely of liquid or of vapor. At a particular total pressure within this two-phase region, the intercept with the upper boundary gives the mole fractions of A and B in the liquid phase, while the intercept with the lower boundary gives the mole fractions of the two components in the vapor.
Exercise $1$
Take a moment to study Figure $5$ and to confirm that
• because both intercepts occur on equilibrium lines, they describe the compositions of the liquid and vapor that can simultaneously exist;
• the compositions of the vapor and liquid are not the same;
• in the vapor, the mole fraction of $\ce{B}$ (the more volatile component of the solution) is greater than that in the liquid;
• in the liquid, the mole fraction of $\ce{A}$ (the less volatile component) is smaller than that of the vapor.
The vapor in equilibrium with a solution of two or more liquids is always richer in the more volatile component.
Temperatures vs. Composition Phase Diagrams (Boiling Point Diagrams)
The rule shown above suggests that if we heat a mixture sufficiently to bring its total vapor pressure into the two-phase region, we will have a means of separating the mixture into two portions which will be enriched in the more volatile and less volatile components respectively. This is the principle on which distillation is based. But what temperature is required to achieve this? Again, we will spare you the mathematical details, but it is possible to construct a plot similar to Figure $4$, except that the vertical axis represents temperature rather than pressure. This kind of plot is called a boiling point diagram.
Important Properties of Boiling Point Diagrams
Some important things to understand about Figure $6$:
• The shape of the two-phase region is bi-convex, as opposed to the half-convex shape of the pressure-composition plot.
• The slope of the two-phase region is opposite to what we saw in the previous plot, and the areas corresponding to the single-phase regions are reversed. This simply reflects the fact that liquids having a higher vapor pressure boil at lower temperatures, and vice versa.
• The horizontal line that defines the temperature is called the tie line. Its intercepts with the two equilibrium curves specify the composition of the liquid and vapor in equilibrium with the mixture at the given temperature.
• The vapor composition line is also known as the dew point line — the temperature at which condensation begins on cooling.
• The liquid composition line is also called the bubble point line — the temperature at which boiling begins on heating.
The tie line shown in Figure $6$ is for one particular temperature. But when we heat a liquid to its boiling point, the composition will change as the more volatile component ($\ce{B}$ in these examples) is selectively removed as vapor. The remaining liquid will be enriched in the less volatile component, and its boiling point will consequently rise. To understand this process more thoroughly, let us consider the situation at several points during the distillation of an equimolar solution of $\ce{A}$ and $\ce{B}$.
Figure $\PageIndex{5A}$: We begin with the liquid at T1, below its boiling point. When the temperature rises to T2, boiling begins and the first vapor (and thus the first drop of condensate) will have the composition y2.
Figure $\PageIndex{5B}$: As the more volatile component B is boiled off, the liquid and vapor/condensate compositions shift to the left (orange arrows). Figure $\PageIndex{5C}$: At T4, the last trace of liquid disappears. The system is now entirely vapor, of composition y4.
Notice that the vertical green system composition line remains in the same location in the three plots because the "system" is defined as consisting of both the liquid in the "pot" and that in the receiving container which was condensed from the vapor. The principal ideas you should take away from this are that
• distillation can never completely separate two volatile liquids;
• the composition of the vapor and thus of the condensed distillate changes continually as each drop forms, starting at y2 and ending at y4 in this example;
• if the liquid is completely boiled away, the composition of the distillate will be the same as that of the original solution.
Laboratory Distillation Setup
The apparatus used for a simple laboratory batch distillation is shown here. The purpose of the thermometer is to follow the progress of the distillation; as a rough rule of thumb, the distillation should be stopped when the temperature rises to about half-way between the boiling points of the two pure liquids, which should be at least 20-30 C° apart (if they are closer, then fractional distillation, described below, becomes necessary).
Condensers are available in a number of types. The simple Liebig condenser shown above is the cheapest and therefore most commonly used in student laboratories. Several other classic designs increase the surface area separating the vapor/distillate and cooling water, leading to greater heat exchange efficiency and allowing higher throughput.
Fractional Distillation
Although distillation can never achieve complete separation of volatile liquids, it can in principle be carried out in such a way that any desired degree of separation can be achieved if the solution behaves ideally and one is willing to go to the trouble. The general procedure is to distill only a fraction of the liquid, the smaller the better. The condensate, now enriched in the more volatile component, is then collected and re-distilled (again, only a small fraction), thus obtaining a condensate even-more-enriched in the more volatile component. If we repeat this sequence many times, we can eventually obtain almost-pure, if minute, samples of the two components.
But since this would hardly be practical, there is a better way. In order to understand it, you need to know about the lever rule, which provides a simple way of determining the relative quantities (not just the compositions) of two phases in equilibrium. The lever rule is easily derived from Raoult's and Dalton's laws, but we will simply illustrate it graphically (Figure $7$). The plot shows the boiling point diagram of a simple binary mixture. At the temperature corresponding to the tie line, the tie line's intercepts with the two curves give the compositions of the liquid and of the vapor that coexist at that temperature.
So now for the lever rule: the relative quantities of the liquid and the vapor we identified above are given by the lengths of the tie-line segments labeled a and b. Thus in this particular example, in which b is about four times longer than a, we can say that the mole ratio of vapor to liquid is 4.
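In code, the lever rule is a one-line mass balance. The numbers below are hypothetical, chosen only so that one tie-line segment is four times the other:

```python
def vapor_to_liquid_ratio(x_sys, x_liq, x_vap):
    """Lever rule: n(vapor)/n(liquid) for a point in the two-phase region."""
    return (x_sys - x_liq) / (x_vap - x_sys)

# hypothetical overall composition and tie-line intercepts
print(vapor_to_liquid_ratio(x_sys=0.50, x_liq=0.10, x_vap=0.60))   # → 4.0
```

Here x is the mole fraction of the more volatile component; the underlying mass balance, n_liq(x_sys – x_liq) = n_vap(x_vap – x_sys), is where the "lever" comes from.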
Steps in Fractional Distillation
It is not practical to carry out an almost-infinite number of distillation steps to obtain nearly-infinitesimal quantities of the two pure liquids we wish to separate. So instead of collecting each drop of condensate and re-distilling it, we will distill half of the mixture in each step. Suppose you want to separate a liquid mixture composed of 20 mole-% B and 80 mole-% A, with B being the more volatile component.
As we heat this mixture, the first vapor is formed at T0 and has the composition y0, found by extending the horizontal dashed line until it meets the vapor curve. This vapor is clearly enriched in B; if it is condensed, the resulting liquid will have a mole fraction of B considerably greater than that of the original liquid. But this is only the first drop; we don't want to stop there!
As the liquid continues to boil, the boiling temperature rises. When it reaches T1, we will have boiled away half of the liquid. At this point, the "system" composition (liquid plus vapor) is still the same, but it is now equally divided between the liquid, which we call the residue R1, and the condensed vapor, the distillate D1.
How do we know it is equally divided? We have picked T1 so that the tie line is centered on the system concentration, so by the lever rule, R1 and D1 contain equal numbers of moles.
We now take the condensed liquid D1 and distill half of it, obtaining a distillate D2 that is still further enriched in the more volatile component.
We then distill half of D2 to obtain D3, and carry out yet another distillation, this time using D3 as our feedstock to obtain D4.
Our four-stage fractionation has enriched the more volatile solute from 20 to slightly over 80 mole-percent in D4. The less volatile component A is most concentrated in R1. R2 through R4 are thrown away (but not down the sink, please!)
This may be sufficient for some purposes, but we might wish to do much better, using perhaps 1000 stages instead of just 4. What could be more tedious?
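Rather than drawing each step, the successive re-distillations can be mimicked numerically. The sketch below is idealized: it assumes a constant relative volatility α (a made-up value of 3) and takes the full equilibrium vapor at each pass, roughly one theoretical plate, so it enriches faster than the half-at-a-time lever-rule procedure sketched above:

```python
def vapor_mole_fraction(x, alpha):
    """Equilibrium vapor composition for an ideal binary mixture."""
    return alpha * x / (1 + (alpha - 1) * x)

x, alpha = 0.20, 3.0           # start at 20 mole-% B; assumed volatility
for stage in range(1, 5):
    x = vapor_mole_fraction(x, alpha)
    print(f"after stage {stage}: ≈ {x:.0%} B")
# → 43%, 69%, 87%, 95%: each pass enriches the condensate in B
```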
Fractionation with reflux
Not to worry! The multiple successive distillations can be carried out "virtually" by inserting a fractionating column between the boiling flask and the condenser.
These columns are made with indentations or are filled with materials that provide a large surface area extending through the vertical temperature gradient (higher temperature near the bottom, lower temperature at the top.) The idea is that hot vapors condense at various levels in the column and the resulting liquid drips down (refluxes) to a lower level where it is vaporized, which corresponds roughly to a re-distillation.
Vigreux columns having multiple indentations are widely used (above right). Simple columns can be made by filling a glass tube with beads, short glass tubes, or even stainless steel kitchen-type scouring pads. More elaborate ones have spinning steel ribbons.
Separation efficiency: theoretical plates
The operation of fractionating columns can best be understood by reference to a bubble-cap column. The one shown here consists of four sections, or "plates" through which hot vapors rise and bubble up through pools of condensate that collect on each plate. The intimate contact between vapor and liquid promotes equilibration and re-distillation at successively higher temperatures at each higher plate in the column. Unlike the case of the step-wise fractional distillation we discussed above, none of the intermediate residues is thrown away; they simply drip down back into the pot where their fractionation journey begins again, always leading to a further concentration of the less-volatile component in the remaining liquid. At the same time, the vapor emerging from the top plate (5) provides a continuing flow of volatile-enriched condensate, although in diminishing quantities as it is depleted in the boiling pot.
If complete equilibrium is attained between the liquid and vapor at each stage, then we can describe the system illustrated above as providing "five theoretical plates" of separation (remember that the pot represents the first theoretical plate). Equilibrium at each stage requires a steady-state condition in which the quantity of vapor moving upward at each stage is equal to the quantity of liquid draining downward — in other words, the column should be operating in total reflux, with no net removal of distillate. But a column operating at total reflux yields no product, so any real distillation process is run at a reflux ratio that provides optimum separation in a reasonable period of time.
Some of the more advanced laboratory-type devices (such as some spinning-steel band columns) are said to offer up to around 200 theoretical plates of separating power.
Azeotropes: the Limits of Distillation
The boiling point diagrams presented in the foregoing section apply to solutions that behave in a reasonably ideal manner — that is, to solutions that do not deviate too far from Raoult's law. As we explained above, mixtures of liquids whose intermolecular interactions are widely different do not behave ideally, and may be impossible to separate by ordinary distillation. The reason for this is that under certain conditions, the compositions of the liquid and of the vapor in equilibrium with it become identical, precluding any further separation. These cross-over points appear as "kinks" in the boiling point diagrams.
High- and low-boiling azeotropes
Thus in this boiling point diagram for a mixture exhibiting a positive deviation from Raoult's law, successive fractionations of mixtures starting on either side of the azeotrope bring the distillation closer to the azeotropic composition indicated by the dashed vertical line. Once this point is reached, further distillation simply yields more of the same "low-boiling" azeotrope.
Distillation of a mixture having a negative deviation from Raoult's law leads to a similar stalemate, in this case yielding a "high-boiling" azeotrope. High- and low-boiling azeotropes are commonly referred to as constant-boiling mixtures, and they are more common than most people think.
"Breaking" an azeotrope
There are four general ways of dealing with azeotropes. The first two of these are known collectively as azeotropic distillation.
• Addition of a third substance that alters the intermolecular attractions is the most common trick. The drawback is that another procedure is usually needed to remove this other substance.
• Pressure-swing distillation takes advantage of the fact that boiling point (T,X) diagrams are two-dimensional slices of a (T,X,P) diagram in which the pressure is the third variable. This means that the azeotropic composition depends on the pressure, so distillation at some pressure other than 1 atm may allow one to "jump" the azeotrope.
• Use of a molecular sieve — a porous material that selectively absorbs one of the liquids, most commonly water when the latter is present at a low concentration.
• Give up. It often happens that the azeotropic composition is sufficiently useful that it's not ordinarily worth the trouble of obtaining a more pure product. This accounts for the concentrations of many commercial chemicals such as mineral acids.
Some constant-boiling mixtures with water
mixture azeotrope composition, type, boiling point
Ethanol 95.6%, low-boiling, 78.1°C
Hydrochloric acid 20.2%, high-boiling, 108.6°C
Hydrofluoric acid 35.6%, high-boiling, 111.3°C
Nitric acid 68%, high-boiling, 120.5°C
Sulfuric acid 98.3%, high-boiling, 338°C
Distillation of Ethanol
Ethanol is one of the major industrial chemicals, and is of course the essential component of beverages that have been a part of civilization throughout recorded history. Most ethanol is produced by fermentation of the starch present in food grains, or of sugars formed by the enzymatic degradation of cellulose. Because ethanol is toxic to the organisms whose enzymes mediate the fermentation process, the ethanol concentration in the fermented mixture is usually limited to about 15%. The liquid phase of the mixture is then separated and distilled.
For applications requiring anhydrous ethanol ("absolute ethanol"), the most common method is the use of zeolite-based molecular sieves to adsorb the remaining water. Addition of benzene can break the azeotrope, and this was the most common production method in earlier years. For certain critical uses where the purest ethanol is required, it is synthesized directly from ethylene.
Special Distillation Methods
Here we briefly discuss two distillation methods that students are likely to encounter in more advanced organic lab courses.
Vacuum distillation: Many organic substances become unstable at high temperatures, tending to decompose, polymerize or react with other substances at temperatures around 200°C or higher. A liquid will boil when its vapor pressure becomes equal to the pressure of the gas above it, which is ordinarily that of the atmosphere. If this pressure is reduced, boiling can take place at a lower temperature. (Even pure water will boil at room temperature under a partial vacuum.) "Vacuum distillation" is of course a misnomer; a more accurate term would be "reduced-pressure distillation". Vacuum distillation is very commonly carried out in the laboratory and will be familiar to students who take more advanced organic lab courses. It is also sometimes employed on a large industrial scale.
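The payoff of reduced pressure can be estimated from the Clausius-Clapeyron equation, assuming a temperature-independent enthalpy of vaporization. The numbers below (a liquid boiling at 250°C at 760 torr, ΔHvap of 50 kJ/mol, distilled at 10 torr) are purely illustrative assumptions:

```python
# Sketch: boiling point at reduced pressure from the Clausius-Clapeyron
# equation, ln(P2/P1) = -(dHvap/R) * (1/T2 - 1/T1).
import math

R = 8.314   # J mol-1 K-1

def bp_at_pressure(T1, P1, P2, dHvap):
    """T1 in K; P1, P2 in the same units; dHvap in J/mol. Returns T2 in K."""
    return 1.0 / (1.0 / T1 - R * math.log(P2 / P1) / dHvap)

T2 = bp_at_pressure(T1=523, P1=760, P2=10, dHvap=50e3)
print(f"boils near {T2 - 273:.0f} °C at 10 torr")   # ~107 °C instead of 250 °C
```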
The vacuum distillation setup is similar to that employed in ordinary distillation, with a few additions:
• The vacuum line is connected to the bent adaptor above the receiving flask.
• In order to avoid uneven boiling and superheating ("bumping"), the boiling flask is usually provided with a fine capillary ("ebulliator") through which an air leak produces bubbles that nucleate the boiling liquid.
• The vacuum is usually supplied by a mechanical pump, or less commonly by a water aspirator or a "house vacuum" line.
• The boiling flask is preferably heated by a water- or steam bath, which provides more efficient heat transfer to the flask and avoids localized overheating. Prior to about 1960, open flames were commonly used in student laboratories, resulting in occasional fires that enlivened the afternoon, but detracted from the student's lab marks.
• A Claisen-type distillation head (below) provides a convenient means of accessing the boiling flask for inserting an air leak capillary or introducing additional liquid through a separatory funnel. This Claisen-Vigreux head includes a fractionation column.
Steam Distillation: Strictly speaking, this topic does not belong in this unit, since steam distillation is used to separate immiscible liquids rather than solutions. But because immiscible liquid mixtures are not treated in elementary courses, we present a brief description of steam distillation here for the benefit of students who may encounter it in an organic lab course. A mixture of immiscible liquids will boil when their combined vapor pressures reach atmospheric pressure. This combined vapor pressure is just the sum of the vapor pressures of each liquid individually, and is independent of the quantities of each phase present.
Because water boils at 100°C, a mixture of water and an immiscible liquid (an "oil"), even one with a high boiling point, is guaranteed to boil below 100°C, so this method is especially valuable for separating high-boiling liquids from mixtures containing non-volatile impurities. Of course the water-oil mixture in the receiving flask must itself be separated, but this is usually easily accomplished by means of a separatory funnel, since the two densities are ordinarily different.
There is a catch, however: the lower the vapor pressure of the oil, the greater is the quantity of water that co-distills with it. This is the reason for using steam: it provides a source of water able to continually restore that which is lost from the boiling flask. Distillation of a water-oil mixture without the introduction of additional steam will also work, and is actually used for some special purposes, but the yield of product will be very limited. Steam distillation is widely used in industries such as petroleum refining (where it is often called "steam stripping") and in the flavors-and-perfumes industry for the isolation of essential oils.
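A rough feel for the "catch" comes from the fact that the mole ratio of water to oil in the distillate equals the ratio of their vapor pressures at the temperature at which the mixture boils. The sketch below uses assumed illustrative numbers (an oil with a vapor pressure of 20 torr and molar mass 150 g/mol when the mixture boils, water contributing the remaining 740 torr):

```python
# Sketch: grams of water that co-distill per gram of "oil" in steam distillation.
def water_per_gram_oil(P_water, P_oil, M_oil, M_water=18.0):
    # mole ratio = P_water/P_oil; convert to a mass ratio with the molar masses
    return (P_water / P_oil) * (M_water / M_oil)

print(f"{water_per_gram_oil(P_water=740, P_oil=20, M_oil=150):.1f} g water per g oil")
# ~4.4 g of water per gram of oil: the lower P_oil, the more water co-distills.
```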
The term essential oil refers to the aromas ("essences") of these mostly rather simple organic liquids, which occur naturally in plants and are isolated from them by steam distillation or solvent extraction. Steam distillation was invented in the 13th century by Ibn al-Baitar, one of the greatest of the scientists and physicians of the Islamic Golden Age in Andalusia.
Industrial-Scale Distillation and Petroleum Fractionation
Distillation is one of the major "unit operations" of the chemical process industries, especially those connected with petroleum and biofuel refining, liquid air separation, and brewing. Laboratory distillations are typically batch operations and employ relatively simple fractionating columns to obtain a pure product. In contrast, industrial distillations are most often designed to produce mixtures having a desired boiling range rather than pure products.
Industrial operations commonly employ bubble-cap fractionating columns (seldom seen in laboratories), although packed columns are sometimes used. Perhaps the most distinctive feature of large-scale industrial distillations is that they usually operate on a continuous basis: the crude mixture is preheated in a furnace and fed into the fractionating column at some intermediate point. A reboiler unit maintains the bottom temperature at a constant value. The higher-boiling components then move down to a level at which they vaporize, while the lighter (lower-boiling) material moves upward to condense at an appropriate point.
Petroleum is a complex mixture of many types of organic molecules, mostly hydrocarbons, that were formed by the effects of heat and pressure on plant materials (mostly algae) that grew in regions that the earth's tectonic movements buried over periods of millions of years. This mixture of liquids and gases migrates up through porous rock until it is trapped by an impermeable layer of sedimentary rock. The molecular composition of crude oil (the liquid fraction of petroleum) is highly variable, although its overall elemental makeup generally reflects that of typical plants.
element	amount
carbon	83 - 87%
hydrogen	10 - 14%
nitrogen	0.1 - 2%
oxygen	0.1 - 1.5%
sulfur	0.5 - 6%
metals	trace
The principal molecular constituents of crude oil are
• alkanes: Also known as paraffins, these are saturated linear- or branched-chain molecules having the general formula CnH2n+2 in which n is mostly between 5 and 40.
• unsaturated aliphatics: Linear- or branched-chain molecules containing one or more double or triple bonds (alkenes or alkynes).
• Cycloalkanes: Also known as naphthenes, these are saturated hydrocarbons CnH2n containing one or more ring structures.
• Aromatic hydrocarbons: These contain one or more fused benzene rings CnHn, often with hydrocarbon side-chains.
The word gasoline predates its use as a motor fuel; it was first used as a topical medicine to rid people of head lice, and to remove grease spots and stains from clothing.

The first major step of refining is to fractionate the crude oil into various boiling ranges.
boiling range	fraction name	further processing
(gases)	butane and propane	gas processing
30 - 210°	straight-run gasoline	blending into motor gasoline
100 - 200°	naphtha	reforming into gasoline components
150 - 250°	kerosene	jet fuel blending
160 - 400°	light gas oil	distillate fuel blending into diesel or fuel oil
315 - 540°	heavy gas oil	catalytic cracking: large molecules are broken up into smaller ones and recycled
>450°	asphalts, bottoms	may be vacuum-distilled into more fractions
Further processing and blending
About 16% of crude oil is diverted to the petrochemical industry, where it is used to make ethylene and other feedstocks for plastics and similar products. Because the fraction of straight-run gasoline is inadequate to meet demand, some of the lighter fractions undergo reforming and the heavier ones cracking, and the products are recycled into the gasoline stream. These processes necessitate a great amount of recycling and blending, into which must be built a considerable amount of flexibility in order to meet seasonal needs (more volatile gasolines and more heating fuel oil in winter, greater total gasoline volumes in the summer).
8.10: Ions and Electrolytes
Learning Objectives
Make sure you thoroughly understand the following essential ideas:
• Describe the properties of water that make it an ideal electrolytic solvent.
• Describe the general structure of ionic hydration shells.
• Explain why all cations act as acids in water.
• Describe some of the major ways in which the conduction of electricity through a solution differs from metallic conduction.
• Define resistance, resistivity, conductance, and conductivity.
• Define molar conductivity and explain its significance.
• Explain the major factors that cause molar conductivity to diminish as electrolyte concentrations increase.
• Describe the contrasting behavior of strong, intermediate, and weak electrolytes.
• Explain the distinction between ionic diffusion and ionic migration.
• Define the limiting ionic conductivity, and comment on some of its uses.
• Explain why hydrogen- and hydroxide ions exhibit exceptionally large ionic mobilities.
Electrolytic solutions are those that are capable of conducting an electric current. A substance that, when added to water, renders it conductive, is known as an electrolyte. A common example of an electrolyte is ordinary salt, sodium chloride. Solid NaCl and pure water are both non-conductive, but a solution of salt in water is readily conductive. A solution of sugar in water, by contrast, is incapable of conducting a current; sugar is therefore a non-electrolyte.
These facts have been known since 1800 when it was discovered that an electric current can decompose the water in an electrolytic solution into its elements (a process known as electrolysis). By mid-century, Michael Faraday had made the first systematic study of electrolytic solutions. Faraday recognized that for a sample of matter to conduct electricity, two requirements must be met:
1. The matter must be composed of, or contain, electrically charged particles.
2. These particles must be mobile; that is, they must be free to move under the influence of an external applied electric field.
In metallic solids, the charge carriers are electrons rather than ions; their mobility is a consequence of the quantum-mechanical uncertainty principle which promotes the escape of the electrons from the confines of their local atomic environment. In the case of electrolytic solutions, Faraday called the charge carriers ions (after the Greek word for "wanderer"). His most important finding was that each kind of ion (which he regarded as an electrically-charged atom) carries a definite amount of charge, most commonly in the range of ±1-3 units.
The fact that the smallest charges observed had magnitudes of ±1 unit suggested an "atomic" nature for electricity itself, and led in 1891 to the concept of the "electron" as the unit of electric charge — although the identification of this unit charge with the particle we now know as the electron was not made until 1897.
An ionic solid such as NaCl is composed of charged particles, but these are held so tightly in the crystal lattice that they are unable to move about, so the second requirement mentioned above is not met and solid salt is not a conductor. If the salt is melted or dissolved in water, the ions can move freely and the molten liquid or the solution becomes a conductor.
Since positively-charged ions are attracted to a negative electrode that is traditionally known as the cathode, these are often referred to as cations. Similarly, negatively-charged ions, being attracted to the positive electrode, or anode, are called anions. (These terms were all coined by Faraday.)
The role of the solvent: what's special about water
Although we tend to think of the solvent (usually water) as a purely passive medium within which ions drift around, it is important to understand that electrolytic solutions would not exist without the active involvement of the solvent in reducing the strong attractive forces that hold solid salts and molecules such as HCl together. Once the ions are released, they are stabilized by interactions with the solvent molecules. Water is not the only liquid capable of forming electrolytic solutions, but it is by far the most important. It is therefore essential to understand those properties of water that influence the stability of ions in aqueous solution.
According to Coulomb's law, the force between two charged particles is directly proportional to the product of the two charges, and inversely proportional to the square of the distance between them:

\[f = \dfrac{q_1 q_2}{D r^2}\]
The proportionality constant D is the dimensionless dielectric constant. Its value in empty space is unity, but in other media it will be larger. Since D appears in the denominator, this means that the force between two charged particles within a gas or liquid will be less than if the particles were in a vacuum. Water has one of the highest dielectric constants of any known liquid; the exact value varies with the temperature, but 80 is a good round number to remember. When two oppositely-charged ions are immersed in water, the force acting between them is only 1/80 as great as it would be between the two gaseous ions at the same distance. It can be shown that in order to separate one mole of Na+ and Cl– ion pairs at their normal distance of 236 pm in solid sodium chloride, the work required will be 586 kJ in a vacuum, but only 7.3 kJ in water.
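A minimal sketch of that calculation, using only Coulomb's law and the values quoted above (r = 236 pm, D = 80 for water); the small difference from the quoted 586 kJ reflects rounding of the constants:

```python
# Work per mole to separate Na+ / Cl- point-charge pairs in vacuum and in water.
e   = 1.602e-19      # elementary charge, C
k   = 8.988e9        # Coulomb constant 1/(4*pi*eps0), N m2 C-2
N_A = 6.022e23       # Avogadro's number, mol-1
r   = 236e-12        # ion separation, m

for D, medium in [(1, "vacuum"), (80, "water")]:
    w = N_A * k * e**2 / (D * r)              # J/mol; D divides the Coulomb energy
    print(f"{medium:6s}: {w/1000:6.1f} kJ/mol")   # ~589 and ~7.4 kJ/mol
```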
Dielectric Constant
The dielectric constant is a bulk property of matter, rather than being a property of the molecule itself, as is the dipole moment. It is a cooperative effect of all the molecules in the liquid, and is a measure of the extent to which an applied electric field will cause the molecules to line up with the negative ends of their dipoles pointing toward the positive direction of the electric field. The high dielectric constant of water is a consequence of the small size of the H2O molecule in relation to its large dipole moment.
When one molecule is reoriented by the action of an external electric field, local hydrogen bonding tends to pull neighboring molecules into the same alignment, thus producing an additive effect that would be absent if the molecules were all acting independently.
Water's dipole moment stabilizes dissolved ions through hydrogen bonding
When an ion is introduced into a solvent, the attractive interactions between the solvent molecules must be disrupted to create space for the ion. This costs energy and would by itself tend to inhibit dissolution. However, if the solvent has a high permanent dipole moment, the energy cost is more than recouped by the ion-dipole attractions between the ion and the surrounding solvent molecules.
Water, as you know, has a sizeable dipole moment that is the source of the hydrogen bonding that holds the liquid together. The greater strength of ion-dipole attractions compared to hydrogen-bonding (dipole-dipole) attractions stabilizes the dissolved ion.
Water is not the only electrolytic solvent, but it is by far the best. For some purposes, chemists occasionally need to employ non-aqueous solvents when studying electrolytes. Here are a few examples:
solvent	melting point (°C)	boiling point (°C)	dielectric constant D	dipole moment (D)	specific conductivity (S/cm)
water	0	100	80.1	1.87	5.5 × 10–8
methanol	–98	64.7	32.7	1.7	1.5 × 10–7
ethanol	–114	78.3	24.5	1.66	1.35 × 10–9
acetonitrile	–43.8	81.6	37.5	3.45	7 × 10–6
dimethyl sulfoxide	18.5	189	46.7	3.96	3 × 10–8
ethylene carbonate	36.4	238	89.6	4.87	< 1 × 10–9
The kinds of ions we will consider in this lesson are mostly those found in solutions of common acids or salts. As is evident from the image below, most of the metallic elements form monatomic cations, but the number of monatomic anions is much smaller. This reflects the fact that many single-atom anions such as hydride H–, oxide O2–, sulfide S2– and those in Groups 15 and 16, are unstable in (i.e., react with) water, and their major forms are those in which they are combined with other elements, particularly oxygen. Some of the more familiar oxyanions are hydroxide OH–, carbonate CO32–, nitrate NO3–, sulfate SO42–, chlorate ClO3–, and arsenate AsO43–.
The most highly purified water attainable, known as "conductivity water", has κ = 0.043 × 10–6 S cm–1 at 18°C. Ordinary distilled water in equilibrium with atmospheric CO2 has a conductivity that is 16 times greater.
It is now known that ordinary distillation cannot entirely remove all impurities from water. Ionic impurities get entrained into the fog created by breaking bubbles and are carried over into the distillate by capillary flow along the walls of the apparatus. Organic materials tend to be steam-volatile ("steam-distilled").
The best current practice is to employ a special still made of fused silica in which the water is volatilized from its surface without boiling. Complete removal of organic materials is accomplished by passing the water vapor through a column packed with platinum gauze heated to around 800°C through which pure oxygen gas is passed to ensure complete oxidation of carbon compounds.
Conductance measurements are widely used to gauge water quality, especially in industrial settings in which concentrations of dissolved solids must be monitored in order to schedule maintenance of boilers and cooling towers.
The conductance of a solution depends on (1) the concentration of the ions it contains, (2) the number of charges carried by each ion, and (3) the mobilities of these ions. The last term refers to the ability of the ion to make its way through the solution, either by ordinary thermal diffusion or in response to an electric potential gradient.
The first step in comparing the conductances of different solutes is to reduce them to a common concentration. For this, we define the conductance per unit concentration which is known as the molar conductivity, denoted by the upper-case Greek lambda:
\[Λ = \dfrac{κ}{c}\]
When κ is expressed in S cm–1, c should be in mol cm–3, so Λ will have the units S cm2 mol–1. This is best visualized as the conductance of a cell having 1-cm2 electrodes spaced 1 cm apart — that is, of a 1-cm cube of solution. But because chemists generally prefer to express concentrations in mol L–1 or mol dm–3 (mol/1000 cm3), it is common to write the expression for molar conductivity as
\[Λ = \dfrac{1000κ}{c}\]
in which c is now expressed in mol L–1. This can be pictured as the conductance of the volume of solution that contains one mole of electrolyte, held between a pair of parallel electrodes, separated again by 1 cm.
But if c is the concentration in moles per liter, this will still not fairly compare two salts having different stoichiometries, such as AgNO3 and FeCl3, for example. If we assume that both salts dissociate completely in solution, each mole of AgNO3 yields two moles of charges, while FeCl3 releases six (i.e., one Fe3+ ion and three Cl– ions). So if one neglects the [rather small] differences in the ionic mobilities, the molar conductivity of FeCl3 would be three times that of AgNO3.
The most obvious way of getting around this is to note that one mole of a 1:1 salt such as AgNO3 is "equivalent" (in this sense) to 1/3 of a mole of FeCl3, and to ½ a mole of MgBr2. To find the number of equivalents that correspond to a given quantity of a salt, just multiply the number of moles by the total number of positive charges in the formula unit. (If you like, you can use the number of negative charges instead; because these substances are electrically neutral, the two numbers will be identical.)
We can refer to equivalent concentrations of individual ions as well as of neutral salts. Also, since acids can be regarded as salts of H+, we can apply the concept to them; thus a 1 mol L–1 solution of sulfuric acid H2SO4 has a concentration of 2 eq L–1.
The following diagram summarizes the relation between moles and equivalents for CuCl2:
Example \(1\)
What is the concentration, in equivalents per liter, of a solution made by dissolving 4.2 g of chromium(III) sulfate pentahydrate Cr2(SO4)3·5H2O in sufficient water to give a total volume of 500 mL? (The molar mass of the hydrate is 482 g)
Solution
Each formula unit of this salt dissociates into 6 positive charges (two Cr3+ ions) and 6 negative charges (three SO42– ions).
• Number of moles of the salt: (4.2 g) / (482 g mol–1) = 0.00871 mol
• Number of equivalents: (0.00871 mol) × (6 eq mol–1) = 0.0523 eq
• Equivalent concentration: (0.0523 eq) / (0.5 L) = 0.105 eq L–1
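The same bookkeeping in code form; the function is a hypothetical helper written for this example, with the 6 charges per formula unit taken from the dissociation assumed above:

```python
# Sketch of Example 1: grams of salt -> equivalents per liter.
def equivalents_per_liter(mass_g, molar_mass, charges_per_formula, volume_L):
    moles = mass_g / molar_mass
    return moles * charges_per_formula / volume_L   # eq/L: multiply by the charges

print(f"{equivalents_per_liter(4.2, 482, 6, 0.500):.3f} eq/L")   # ~0.105 eq/L
```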
The concept of equivalent concentration allows us to compare the conductances of different salts in a meaningful way. Equivalent conductivity is defined similarly to molar conductivity
\[Λ = \dfrac{κ}{c}\]
except that the concentration term is now expressed in equivalents per liter instead of moles per liter. (In other words, the equivalent conductivity of an electrolyte is the conductance per equivalent per liter.)
The serious study of electrolytic solutions began in the latter part of the 19th century, mostly in Germany — and before the details of dissociation and ionization were well understood. These studies revealed that the equivalent conductivities of electrolytes all diminish with concentration (or more accurately, with the square root of the concentration), but they do so in several distinct ways that are distinguished by their behaviors at very small concentrations. This led to the classification of electrolytes as weak, intermediate, and strong.
You will notice that plots of conductivities vs. √c start at c=0. It is of course impossible to measure the conductance of an electrolyte at vanishingly small concentrations (not to mention zero!), but for strong and intermediate electrolytes, one can extrapolate a series of observations to zero. The resulting values are known as limiting equivalent conductances or sometimes as "equivalent conductances at infinite dilution", designated by Λ°.
Strong electrolytes
These well-behaved systems include many simple salts such as NaCl, as well as all strong acids.
The Λ vs. √c plots closely follow the linear relation
Λ = Λ° – b√c
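The extrapolation to Λ° is just a straight-line fit against √c. The sketch below does this for a handful of KCl-like data points; the numbers are illustrative values chosen to lie close to typical measurements, not a literature dataset:

```python
# Sketch: estimating Lambda0 by fitting Lambda = Lambda0 - b*sqrt(c).
import numpy as np

c   = np.array([0.0005, 0.001, 0.005, 0.01, 0.02])    # mol/L
Lam = np.array([147.8, 146.9, 143.5, 141.2, 138.2])   # S cm2 eq-1 (illustrative)

slope, Lam0 = np.polyfit(np.sqrt(c), Lam, 1)   # slope = -b, intercept = Lambda0
print(f"Lambda0 ~ {Lam0:.1f} S cm2 eq-1,  b ~ {-slope:.0f}")  # ~149.6, cf. 149.9 for KCl
```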
Intermediate electrolytes
These "not-so-strong" salts can't quite conform to the linear equation above, but their conductivities can be extrapolated to infinite dilution.
Weak electrolytes
"Less is more" for these oddities which possess the remarkable ability to exhibit infinite equivalent conductivity at infinite dilution. Although Λ° cannot be estimated by extrapolation, there is a clever work-around.
Conductivity diminishes as concentrations increase
Since ions are the charge carriers, we might expect the conductivity of a solution to be directly proportional to their concentrations in the solution. So if the electrolyte is totally dissociated, the conductivity should be directly proportional to the electrolyte concentration. But this ideal behavior is never observed; instead, the conductivity of electrolytes of all kinds diminishes as the concentration rises.
The non-ideality of electrolytic solutions is also reflected in their colligative properties, especially freezing-point depression and osmotic pressure. The primary cause of this is the presence of the ionic atmosphere that was introduced above. To the extent that ions having opposite charge signs are more likely to be closer together, we would expect their charges to partially cancel, reducing their tendency to migrate in response to an applied potential gradient.
A secondary effect arises from the fact that as an ion migrates through the solution, its counter-ion cloud does not keep up with it. Instead, new counter-ions are continually acquired on the leading edge of the motion, while existing ones are left behind on the opposite side. It takes some time for the lost counter-ions to dissipate, so there are always more counter-ions on the trailing edge. The resulting asymmetry of the counter-ion field exerts a retarding effect on the central ion, reducing its rate of migration, and thus its contribution to the conductivity of the solution.
The quantitative treatment of these effects was first worked out by P. Debye and E. Hückel in the early 1920's, and was improved upon by L. Onsager a few years later. This work represented one of the major advances in physical chemistry in the first half of the 20th Century, and put the behavior of electrolytic solutions on a sound theoretical basis. Even so, the Debye-Hückel theory breaks down for concentrations in excess of about 10–3 mol L–1 for most ions.
Not all Electrolytes Totally Dissociate in Solution
The gradual falloff of the Λ-vs-√c plots for strong electrolytes is largely explained by the effects discussed immediately above. The existence of intermediate electrolytes served as the first indication that many salts are not completely ionized in water; this was soon confirmed by measurements of their colligative properties.
The curvature of the plots for intermediate electrolytes is a simple consequence of the Le Chatelier effect, which predicts that the equilibrium
\[MX_{(aq)} \rightleftharpoons M^+_{(aq)} + X^-_{(aq)}\]
will shift to the left as the concentration of the "free" ions increases. In more dilute solutions, the actual concentrations of these ions is smaller, but their fractional abundance in relation to the undissociated form is greater. As the solution approaches zero concentration, virtually all of the \(MX_{(aq)}\) becomes dissociated, and the conductivity reaches its limiting value.
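This behavior follows directly from the equilibrium expression K = α²c/(1 - α), where α is the fraction dissociated; solving the quadratic for α shows it approaching 1 as c falls. A sketch, with K = 0.1 as an arbitrary illustrative value:

```python
# Sketch: degree of dissociation of MX as a function of concentration,
# from K = alpha^2 * c / (1 - alpha) -> alpha = (-K + sqrt(K^2 + 4*K*c)) / (2*c).
import math

def alpha(K, c):
    return (-K + math.sqrt(K**2 + 4 * K * c)) / (2 * c)

K = 0.1   # an arbitrary "intermediate electrolyte" dissociation constant
for c in [1.0, 0.1, 0.01, 0.001, 1e-5]:
    print(f"c = {c:8.5f} M   alpha = {alpha(K, c):.4f}")
# alpha -> 1 at high dilution: virtually all of the MX is dissociated.
```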
Weak electrolytes are dissociated only at extremely high dilution
hydrofluoric acid HF	Ka = 10–3.2
acetic acid CH3COOH	Ka = 10–4.7
bicarbonate ion HCO3–	Ka = 10–10.3
ammonia NH3	Kb = 10–4.7
Dissociation, of course, is a matter of degree. The equilibrium constants for the dissociation of an intermediate electrolyte salt MX are typically in the range of 1-200. This stands in contrast to the large number of weak acids (as well as weak bases) whose dissociation constants typically range from 10–3 to smaller than 10–10.
These weak electrolytes, like the intermediate ones, will be totally dissociated at the limit of zero concentration; if the scale of the weak-electrolyte plot (blue) shown above were magnified by many orders of magnitude, the curve would resemble that for the intermediate electrolyte above it, and a value for Λ° could be found by extrapolation. But at such high dilution, the conductivity would be so minute that it would be masked by that of water itself (that is, by the H+ and OH– ions in equilibrium with the massive 55.6 mol L–1 concentration of water) — making values of Λ in this region virtually unmeasurable.
The motion of ions in solution is mainly random
The conductance of an electrolytic solution results from the movement of the ions it contains as they migrate toward the appropriate electrodes. But the picture we tend to have in our minds of these ions moving in an orderly, direct march toward an electrode is wildly mistaken. The thermally-induced random motion of molecules is known as diffusion. The term migration refers specifically to the movement of ions due to an externally-applied electrostatic field.
The average thermal energy at temperatures within water's liquid range (given by RT ) is sufficiently large to dominate the movement of ions even in the presence of an applied electric field. This means that the ions, together with the water molecules surrounding them, are engaged in a wild dance as they are buffeted about by thermal motions (which include Brownian motion).
If we now apply an external electric field to the solution, the chaotic motion of each ion is supplemented by an occasional jump in the direction dictated by the interaction between the ionic charge and the field. But this is really a surprisingly tiny effect:
It can be shown that in a typical electric field of 1 volt/cm, a given ion will experience only about one field-directed (non-random) jump for every 105 random jumps. This translates into an average migration velocity of roughly 10–7 m sec–1 (10–4 mm sec–1). Given that the radius of the H2O molecule is close to 10–10 m, it follows that about 1000 such jumps are required to advance beyond a single solvent molecule!
The ions migrate Independently
All ionic solutions contain at least two kinds of ions (a cation and an anion), but may contain others as well. In the late 1870's, the physicist Friedrich Kohlrausch noticed that the limiting equivalent conductivities of salts that share a common ion exhibit constant differences.
electrolyte pair	Λ0 values (25°C)	difference
KCl / LiCl	149.9 / 115.0	34.9
KNO3 / LiNO3	145.0 / 110.1	34.9
HCl / HNO3	426.2 / 421.1	4.9
LiCl / LiNO3	115.0 / 110.1	4.9
These differences represent the conductivities of the ions that are not shared between the two salts. The fact that the differences are identical for pairs of salts such as KCl/LiCl and KNO3/LiNO3 tells us that the mobilities of the non-common ions K+ and Li+ are not affected by the accompanying anions.
Kohlrausch's law greatly simplifies estimates of Λ0
This principle is known as Kohlrausch's law of independent migration, which states that in the limit of infinite dilution,
Each ionic species makes a contribution to the conductivity of the solution that depends only on the nature of that particular ion, and is independent of the other ions present.
Kohlrausch's law can be expressed as
Λ0 = Σ λ0+ + Σ λ0–
This means that we can assign a limiting equivalent conductivity λ0 to each kind of ion:
Limiting ionic equivalent conductivities at 25°C, S cm2 eq–1
cation H3O+ NH4+ K+ Ba2+ Ag+ Ca2+ Sr2+ Mg2+ Na+ Li+
λ0 349.98 73.57 73.49 63.61 61.87 59.47 59.43 53.93 50.89 38.66
anion OH– SO42– Br– I– Cl– NO3– ClO3– CH3COO– C2H5COO– C3H7COO–
λ0 197.60 80.71 78.41 76.86 76.30 71.80 67.29 40.83 35.79 32.57
Just as a compact table of thermodynamic data enables us to predict the chemical properties of a very large number of compounds, this compilation of equivalent conductivities of twenty different species yields reliable estimates of the Λ0 values for five times that number of salts.
We can now estimate weak electrolyte limiting conductivities
One useful application of Kohlrausch's law is to estimate the limiting equivalent conductivities of weak electrolytes which, as we observed above, cannot be found by extrapolation. Thus for acetic acid CH3COOH ("HAc"), we combine the λ0 values for H3O+ and CH3COO– given in the above table:
Λ0HAc = λ0H+ + λ0Ac– = 349.98 + 40.83 = 390.8 S cm2 eq–1
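In code, Kohlrausch's law is nothing more than a dictionary lookup and a sum; the λ0 values below are copied from the table above:

```python
# Sketch: limiting equivalent conductivities from the law of independent migration.
lam0 = {"H3O+": 349.98, "K+": 73.49, "Na+": 50.89,
        "Cl-": 76.30, "NO3-": 71.80, "CH3COO-": 40.83}   # S cm2 eq-1, 25 C

def Lambda0(cation, anion):
    return lam0[cation] + lam0[anion]

print(Lambda0("K+", "Cl-"))          # 149.79, cf. 149.9 listed for KCl above
print(Lambda0("H3O+", "CH3COO-"))    # 390.81: acetic acid, unobtainable by extrapolation
```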
How fast do ions migrate in solution?
Movement of a migrating ion through the solution is brought about by a force exerted by the applied electric field. This force is proportional to the field strength and to the ionic charge. Calculations of the frictional drag are based on the premise that the ions are spherical (not always true) and the medium is continuous (never true) as opposed to being composed of discrete molecules. Nevertheless, the results generally seem to be realistic enough to be useful.
According to Newton's law, a constant force exerted on a particle will accelerate it, causing it to move faster and faster unless it is restrained by an opposing force. In the case of electrolytic conductance, the opposing force is frictional drag as the ion makes its way through the medium. The magnitude of this force depends on the radius of the ion and its primary hydration shell, and on the viscosity of the solution.
Eventually these two forces come into balance and the ion assumes a constant average velocity which is reflected in the values of λ0 tabulated in the table above.
The relation between λ0 and the migration velocity per unit field strength (known as the ionic mobility μ0) is easily derived, but we will skip the details here and simply present the result: λ0 = Fμ0, in which F is the Faraday constant.
Anions are conventionally assigned negative μ0 values because they move in opposite directions to the cations; the values shown here are absolute values |μ0|. Note also that the units are cm/sec per volt/cm, hence the cm2 term.
Absolute limiting mobilities of ions at 25°C, (cm2 volt–1 sec–1) × 100
cation H3O+ NH4+ K+ Ba2+ Ag+ Ca2+ Sr2+ Mg2+ Na+ Li+
μ0 0.362 0.0762 0.0762 0.0659 0.0642 0.0616 0.0616 0.0550 0.0520 0.0388
anion OH– SO42– Br– I– Cl– NO3– ClO3– CH3COO– C2H5COO– C3H7COO–
μ0 .2050 0.0827 0.0812 0.0796 0.0791 0.0740 0.0705 0.0461 0.0424 0.0411
As with the limiting conductivities, the trends in the mobilities can be roughly correlated with the charge and size of the ion. (Recall that negative ions tend to be larger than positive ions.)
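The tabulated mobilities translate directly into drift velocities (v = μ0E) and into the time an ion would need to migrate across a 1-cm cell. The 1 V/cm field below matches the example used earlier; note that the table lists μ0 × 100, so the values are divided by 100 here:

```python
# Sketch: drift velocities and 1-cm transit times at E = 1 V/cm.
mu = {"H3O+": 0.362e-2, "Na+": 0.0520e-2, "Cl-": 0.0791e-2}  # cm2 V-1 s-1
E = 1.0   # V/cm

for ion, m in mu.items():
    v = m * E                              # drift velocity, cm/s
    print(f"{ion:5s} v = {v:.1e} cm/s; 1 cm takes {1/(v*60):.0f} min")
# H3O+ crosses in ~5 min; Na+ needs ~32 min even in this fairly strong field.
```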
Cations and anions carry different fractions of the current
In electrolytic conduction, ions having different charge signs move in opposite directions. Conductivity measurements give only the sum of the positive and negative ionic conductivities according to Kohlrausch's law, but they do not reveal how much of the charge is carried by each kind of ion. Unless their mobilities are the same, cations and anions do not contribute equally to the total electric current flowing through the cell.
Recall that an electric current is defined as a flow of electric charges; the current in amperes is the number of coulombs of charge moving through the cell per second. Because ionic solutions contain equal quantities of positive and negative charges, it follows that the current passing through the cell consists of positive charges moving toward the cathode, and negative charges moving toward the anode. But owing to mobility differences, cations and ions do not usually carry identical fractions of the charge.
Transference numbers are often referred to as transport numbers; either term is acceptable in the context of electrochemistry. The fraction of charge carried by a given kind of ion is known as the transference number $t_{\pm}$. For a solution of a simple binary salt,
$t_+ = \dfrac{\lambda_+}{\lambda_+ + \lambda_-}$
and
$t_- = \dfrac{\lambda_-}{\lambda_+ + \lambda_-}$
By definition,
$t_+ + t_- = 1$
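Since the transference numbers involve only ratios of the λ values, they can be computed directly from the table of limiting ionic conductivities given earlier:

```python
# Sketch: transference numbers of a binary electrolyte from ionic conductivities.
def transference(lam_plus, lam_minus):
    tot = lam_plus + lam_minus
    return lam_plus / tot, lam_minus / tot

for name, lp, lm in [("HCl", 349.98, 76.30), ("KCl", 73.49, 76.30)]:
    tp, tm = transference(lp, lm)
    print(f"{name}:  t+ = {tp:.2f}, t- = {tm:.2f}")
# HCl: t+ ~ 0.82 (H3O+ carries most of the current); KCl: nearly equal shares.
```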
To help you visualize the effects of non-identical transference numbers, consider a solution of the salt M+X– in which t+ = 0.75 and t– = 0.25. Let the cell be divided into three [imaginary] sections as we examine the distribution of cations and anions at three different stages of current flow.
Initially, the concentrations of M+ and X are the same in all parts of the cell.
After 4 faradays of charge have passed through the cell, 3 eq of cations and 1 eq of anions have crossed any given plane parallel to the electrodes. The same total quantity of charge is discharged at each electrode, so electroneutrality is preserved throughout the cell.
In the absence of diffusion, the ratio of the ionic concentrations near the electrodes equals the ratio of their transport numbers.
Transference numbers can be determined experimentally by observing the movement of the boundary between electrolyte solutions having an ion in common, such as LiCl and KCl:
In this example, K+ has a higher transference number than Li+, but don't try to understand why the KCl boundary moves to the left; the details of how this works are rather complicated and not important for the purposes of this course.
H+ and OH– ions "migrate" without moving, and rapidly!
You may have noticed from the tables above that the hydrogen and hydroxide ions have extraordinarily high equivalent conductivities and mobilities. This is a consequence of the fact that, unlike other ions which need to bump and nudge their way through the network of hydrogen-bonded water molecules, these ions are participants in that network. By simply changing the H2O partners they hydrogen-bond with, they can migrate "virtually". In effect, what migrates is the hydrogen-bonding arrangement, rather than the physical masses of the ions themselves.
This process is known as the Grotthuss mechanism. The shifting of the hydrogen bonds occurs when the rapid thermal motions of adjacent molecules bring a particular pair into a more favorable configuration for hydrogen bonding within the local molecular network. Bear in mind that what we refer to as "hydrogen ions" H+(aq) are really hydronium ions H3O+. It has been proposed that the larger aggregates H5O2+ and H9O4+ are important intermediates in this process.
It is remarkable that this virtual migration process was proposed by Theodor Grotthuss in 1805 — just five years after the discovery of electrolysis, and he didn't even know the correct formula for water; he thought its structure was H–O–O–H.
These two diagrams will help you visualize the process. The successive downward rows show the first few "hops" made by the virtual H+ and OH– ions as they move in opposite directions toward the appropriate electrodes. (Of course, the same mechanism is operative in the absence of an external electric field, in which case all of the hops will be in random directions.)
Covalent bonds are represented by black lines, and hydrogen bonds by gray lines.
From the chemist's standpoint, the most important examples of conduction are in connection with electrochemical cells, electrolysis and batteries.
Determination of equilibrium constants
Owing to their high sensitivity, conductivity measurements are well adapted to the measurement of equilibrium constants for processes that involve very small ion concentrations. These include
• Ks values for sparingly soluble solids
• Autoprotolysis constants for solvents (such as Kw )
• Acid dissociation constants for weak acids
Because the ion concentrations in such cases are so low, their values can be taken as activities, and the limiting equivalent conductivities Λ0 can be used directly.
Ion product of water
The very small conductivity of pure water makes it rather difficult to obtain a precise value for Kw; better values are obtained by measuring the potential of an appropriate galvanic cell. But the principle illustrated here might be applicable to other autoprotolytic solvents such as H2SO4.
Example \(1\)
Use the appropriate limiting molar ionic conductivities to estimate the autoprotolysis constant Kw of water at 25° C. Use the reaction equation
2 H2O → H3O+ + OH–
Solution: The data we need are λH+ = 349.6 and λOH– = 199.1 S cm2 mol–1.
The conductivity of water is κ = [H+] λH+ + [OH–] λOH–, whose units work out to (mol cm–3)(S cm2 mol–1). In order to express the ionic concentrations in molarities, we multiply the cm–3 term by (1 L / 1000 cm3), yielding
1000 κ = [H+] λH+ + [OH–] λOH– with units S cm2 L–1.
Recalling that in pure water, [H+] = [OH–] = Kw½, we obtain
1000 κ = (Kw½)(λH+ + λOH– ) = (Kw½)(548.7 S cm2 mol–1).
Solving for Kw:
Kw = (1000 κ / 548.7 S cm2 mol–1)2
Substituting Kohlrausch's water conductivity value (0.043 × 10–6 S cm–1) for κ gives
Kw = (1000 × 0.043 × 10–6 S cm–1 / 548.7 S cm2 mol–1)2 = 0.614 × 10–14 mol2 L–2.
The accepted value for the autoprotolysis constant of water at 25° C is Kw = 1.008 × 10–14 mol2 L–2.
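The arithmetic of this example is compact enough to check in a few lines:

```python
# Sketch of Example 1: Kw from the conductivity of the purest water.
kappa = 0.043e-6               # S/cm (Kohlrausch's "conductivity water")
lam_H, lam_OH = 349.6, 199.1   # S cm2 mol-1

ion_conc = 1000 * kappa / (lam_H + lam_OH)   # mol/L; equals [H+] = [OH-] = Kw**0.5
print(f"Kw ~ {ion_conc**2:.3e} mol2 L-2")    # ~0.61e-14, close to the accepted 1.0e-14
```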
Example \(2\): Solubility products
A saturated solution of silver chloride AgCl has a conductivity of 2.28 × 10–6 S cm–1 at 25°C. The conductivity of the water used to make up this solution is 0.116 × 10–6 S cm–1. The limiting ionic conductivities of the two ions are λAg+ = 61.9 and λCl– = 76.3 S cm2 mol–1. Use this information to estimate the molar solubility of AgCl.
The limiting molar conductivity of the solution is
Λo = λAg+ + λCl– = 138.2 S cm2 mol–1.
The conductivity due to the dissolved salt alone is the difference
(2.28 – 0.116) × 10–6 = 2.16 × 10–6 S cm–1.
Substituting into the expression Λ = 1000κ/c and solving for c gives
c = 1000κ / Λo = (1000 × 2.16 × 10–6 S cm–1) / (138.2 S cm2 mol–1) = 1.56 × 10–5 mol L–1.
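The same estimate in a few lines of code:

```python
# Sketch of Example 2: molar solubility of AgCl from conductivity data.
kappa_soln, kappa_water = 2.28e-6, 0.116e-6   # S/cm
Lam0 = 61.9 + 76.3                            # S cm2 mol-1, Ag+ plus Cl-

kappa_salt = kappa_soln - kappa_water         # conductivity due to the salt alone
solubility = 1000 * kappa_salt / Lam0         # c = 1000*kappa/Lambda, mol/L
print(f"solubility ~ {solubility:.2e} mol/L") # ~1.6e-5 mol/L
```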
Conductometric titration
A chemical reaction in which there is a significant change in the number or mobilities of ionic species can be followed by monitoring the change in conductance. Many acid-base reactions fall into this category. In conductometric titration, conductometry is employed to detect the end-point of a titration.
Consider, for example, the titration of the strong acid HCl by the strong base NaOH. In ionic terms, the process can be represented as
H+ + Cl– + Na+ + OH– → H2O + Na+ + Cl–
At the end point, only two ionic species remain, compared to the four during the initial stages of the titration, so the conductivity will be at a minimum. Beyond the end point, continued addition of base causes the conductivity to rise again. The very large mobilities of the H+and OH ions cause the conductivity to rise very sharply on either side of the end point, making it quite easy to locate.
The plot on the left depicts the conductivities due to the individual ionic species. But of course the conductivity we measure is just the sum of all of these (Kohlrausch's law of independent migration), so the plot on the right corresponds to what you actually see when the titration is carried out. The factor (Va + Vb)/Va compensates for the dilution of the solution as more base is added.
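The V-shaped curve is easy to generate from the limiting ionic conductivities, treating Λ as concentration-independent (an idealization). The volumes and concentrations below are illustrative assumptions:

```python
# Sketch: conductometric titration of 50 mL of 0.01 M HCl with 0.01 M NaOH.
import numpy as np

lam = {"H+": 349.98, "Na+": 50.89, "Cl-": 76.30, "OH-": 197.60}  # S cm2 eq-1
Va, Ca, Cb = 50.0, 0.010, 0.010        # mL of acid; mol/L of acid and base

for Vb in np.arange(0.0, 101.0, 10.0): # mL of base added
    V = Va + Vb                        # total volume, mL
    H  = max(Ca*Va - Cb*Vb, 0) / V     # surviving H+ (mol/L)
    OH = max(Cb*Vb - Ca*Va, 0) / V     # excess OH- past the end point
    Na, Cl = Cb*Vb / V, Ca*Va / V      # spectator ions
    kappa = (H*lam["H+"] + OH*lam["OH-"] + Na*lam["Na+"] + Cl*lam["Cl-"]) / 1000
    print(f"Vb = {Vb:5.1f} mL   kappa = {kappa:.2e} S/cm")
# kappa falls to a sharp minimum at Vb = 50 mL (the end point), then rises again.
```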
For most ordinary acid-base titrations, conductometry rarely offers any special advantage over regular volumetric analysis or potentiometry. But in some special cases such as those illustrated here, it is the only method capable of yielding useful results.
Ground- and soil conduction
Most people think of electrolytic conduction as something that takes place mainly in batteries, electrochemical plants and in laboratories, but by far the biggest and most important electrolytic system is the earth itself, or at least the thin veneer of soil sediments that coat much of its surface.
Soils are composed of sand and humic solids within which are embedded pore spaces containing gases (air and microbial respiration products) and water. This water, both that which fills the pores as well as that which is adsorbed to soil solids, contains numerous dissolved ions on which soil conductivity mainly depends. These ions include exchangeable cations present on the surfaces of the clay components of soils. Although these electrolyte channels are small and tortuously connected, they are present in huge numbers and offer an abundance of parallel conduction paths, so the ground itself acts as a surprisingly efficient conductor.
There is no better illustration of this than the use of a ground path to transmit a current of up to 1800 amperes along the 1360-km path of the Pacific DC Intertie that extends from the Celilo Converter Station (view below) in northern Oregon to Los Angeles. This system normally employs a two-conductor power line that operates at ±500,000 volts DC, but when one of these conductors is out of service, the ground path acts as a substitute conductor. This alternative path is said to have a lower resistance than the normal metallic conductor!
From an electrochemical standpoint, the most interesting aspect of this system is the manner in which the current flows into or out of the ground. The grounding system at Celilo is composed of over 1000 cast-iron electrodes buried in a circular 3.3-km trench of petroleum coke which acts as the working electrode. At the Los Angeles end, grounding is achieved by means of 24 silicon-iron alloy electrodes submerged in the nearby Pacific Ocean.
On a much smaller scale, single-wire earth return systems are often employed to supply regular single-phase ac power to remote regions, or as the return path for high-voltage DC submarine cables. For direct current submarine systems, a copper cable laid on the bottom is suitable for the cathode. The anodes are normally graphite rods surrounded by coke.
You may have noticed that the pole-mounted step-down transformers used to distribute single-phase ac power in residential neighborhoods are connected to the high-voltage (10 kV or so) primary line by only a single wire. The return circuit back to the local substation is completed by a ground connection which runs down the pole to a buried electrode.
The Celilo Converter Station is located at The Dalles, Oregon
Other applications of ground conductivity
Ground-wave radio propagation
During the daytime, radio transmission at frequencies below about 5 MHz (such as in the old standard AM broadcast band) depends entirely on so-called ground waves that follow the curvature of the earth. This occurs because the portion of the vertically-polarized wavefronts in contact with the earth induces an electrolytic current in the ground, causing the lower portions of the waves to travel more slowly and bending their pathways in toward the earth.
Agricultural and environmental soils assessment
Conductivity has long been used as a tool to assess the salinity of agricultural soils — a serious problem in irrigated regions, where evaporation of irrigation water over the years can raise salinity to levels that reduce crop yields. Because other soil characteristics (moisture content, density, mineralogy, and fertilization) also play important roles, some care is required to correctly interpret measurements. Measuring devices that can be drawn behind a tractor and are equipped with GPS receivers allow the production of conductivity maps for entire fields.
Archaeological exploration
Buried artifacts such as stone walls and foundations and large metallic objects can be located by a series of conductivity measurements at archaeological sites.
Why do some atoms join together to form molecules, but others do not? Why is the CO2 molecule linear whereas H2O is bent? How can we tell? How does hemoglobin carry oxygen through our bloodstream? There is no topic more fundamental to Chemistry than the nature of the chemical bond, and the introduction you find here will provide you with an overview of the fundamentals and a basis for further study.
• 9.1: Three Views of Chemical Bonding
These short tutorials summarize the various ways of looking at bond formation without going into too much detail.
• 9.2: Molecules - Properties of Bonded Atoms
The concept of chemical bonding lies at the very core of Chemistry; it is what enables about one hundred elements to form the more than fifty million known chemical substances that make up our physical world. Exactly what is a chemical bond? And what observable properties can we use to distinguish one kind of bond from another? This is the first of ten lessons that will help familiarize you with the fundamental concepts of this very broad subject.
• 9.3: Models of Chemical Bonding
Why do atoms bind together— sometimes? The answer to this question would ideally be a simple, easily understood theory that would not only explain why atoms bind together to form molecules, but would also predict the three-dimensional structures of the resulting compounds as well as the energies and other properties of the bonds themselves. Unfortunately, no one theory exists that accomplishes these goals in a satisfactory way for all of the many categories of compounds that are known.
• 9.4: Polar Covalence
The electrons constituting a chemical bond are simultaneously attracted by the electrostatic fields of the nuclei of the two bonded atoms. In a homonuclear molecule such as O2 the bonding electrons will be shared equally by the two atoms. In general, however, differences in the sizes and nuclear charges of the atoms will cause one of them to exert a greater attraction on the bonding pair, causing the electron cloud to be displaced toward the more strongly-attracting atom.
• 9.5: Molecular Geometry
The Lewis electron-dot structures you have learned to draw have no geometrical significance other than depicting the order in which the various atoms are connected to one another. Nevertheless, a slight extension of the simple shared-electron pair concept is capable of rationalizing and predicting the geometry of the bonds around a given atom in a wide variety of situations.
• 9.6: The Hybrid Orbital Model
As useful and appealing as the concept of the shared-electron pair bond is, it raises a somewhat troubling question that we must sooner or later face: what is the nature of the orbitals in which the shared electrons are contained? Up until now, we have been tacitly assuming that each valence electron occupies the same kind of atomic orbital as it did in the isolated atom. As we shall see below, his assumption very quickly leads us into difficulties.
• 9.7: The Hybrid Orbital Model II
This is a continuation of the previous page which introduced the hybrid orbital model and illustrated its use in explaining how valence electrons from atomic orbitals of s and p types can combine into equivalent shared-electron pairs known as hybrid orbitals. In this lesson, we extend this idea to compounds containing double and triple bonds, and to those in which atomic d electrons are involved (and which do not follow the octet rule.)
• 9.8: Molecular Orbital Theory
The molecular orbital model is by far the most productive of the various models of chemical bonding, and serves as the basis for most quantiative calculations, including those that lead to many of the computer-generated images that you have seen elsewhere in these units. In its full development, molecular orbital theory involves a lot of complicated mathematics, but the fundamental ideas behind it are quite easily understood, and this is all we will try to accomplish in this lesson.
• 9.9: Bonding in Coordination Complexes
Coordination complexes have been known and studied since the mid-nineteenth century. and their structures had been mostly worked out by 1900. Although the hybrid orbital model was able to explain how neutral molecules such as water or ammonia could bond to a transition metal ion, it failed to explain many of the special properties of these complexes. Ligand field theory was developed that is able to organize and explain most of the observed properties of these compounds.
• 9.10: Bonding in Metals
The simplest picture of metals, which regards them as a lattice of positive ions immersed in a “sea of electrons” which can freely migrate throughout the solid. In effect the electropositive nature of the metallic atoms allows their valence electrons to exist as a mobile fluid which can be displaced by an applied electric field, hence giving rise to their high electrical conductivities.
• 9.11: Bonding in Semiconductors
With the aid of simple diagrams, show how different band energy ranges in solids can produce conductors, insulators, and semiconductors. Describe the nature and behavior of a simple PN junction.
• 9.12: The Shared-Electron Covalent Bond
09: Chemical Bonding and Molecular Structure
Chemical bonds form when electrons can be simultaneously close to two or more nuclei, but beyond this, there is no simple, easily understood theory that would not only explain why atoms bind together to form molecules, but would also predict the three-dimensional structures of the resulting compounds as well as the energies and other properties of the bonds themselves. Unfortunately, no one theory exists that accomplishes these goals in a satisfactory way for all of the many categories of compounds that are known. Moreover, it seems likely that if such a theory does ever come into being, it will be far from simple.
When we are faced with a scientific problem of this complexity, experience has shown that it is often more useful to concentrate instead on developing models. A scientific model is something like a theory in that it should be able to explain observed phenomena and to make useful predictions. But whereas a theory can be discredited by a single contradictory case, a model can be useful even if it does not encompass all instances of the phenomena it attempts to explain. We do not even require that a model be a credible representation of reality; all we ask is that be able to explain the behavior of those cases to which it is applicable in terms that are consistent with the model itself. An example of a model that you may already know about is the kinetic molecular theory of gases. Despite its name, this is really a model (at least at the level that beginning students use it) because it does not even try to explain the observed behavior of real gases. Nevertheless, it serves as a tool for developing our understanding of gases, and as a starting point for more elaborate treatments.
Given the extraordinary variety of ways in which atoms combine into aggregates, it should come as no surprise that a number of useful bonding models have been developed. Most of them apply only to certain classes of compounds, or attempt to explain only a restricted range of phenomena. In this section we will provide brief descriptions of some of the bonding models; the more important of these will be treated in much more detail in later parts of this chapter.
Classical models
By classical, we mean models that do not take into account the quantum behavior of small particles, notably the electron. These models generally assume that electrons and ions behave as point charges which attract and repel according to the laws of electrostatics. Although this completely ignores what has been learned about the nature of the electron since the development of quantum theory in the 1920's, these classical models have not only proven extremely useful, but the major ones also serve as the basis for the chemist's general classification of compounds into "covalent" and "ionic" categories.
Electrostatic (Ionic Bonding)
Ever since the discovery early in the 19th century that solutions of salts and other electrolytes conduct electric current, there has been general agreement that the forces that hold atoms together must be electrical in nature. Electrolytic solutions contain ions having opposite electrical charges, and since opposite charges attract, perhaps the substances from which these ions come consist of positively and negatively charged atoms held together by electrostatic attraction.
It turns out that this is not true generally, but a model built on this assumption does a fairly good job of explaining a rather small but important class of compounds that are called ionic solids. The most well known example of such a compound is sodium chloride, which consists of two interpenetrating lattices of Na+ and Cl– ions arranged in such a way that every ion of one type is surrounded (in three-dimensional space) by six ions of opposite charge.
The main limitation of this model is that it applies really well only to the small class of solids composed of Group 1 and 2 elements with highly electronegative elements such as the halogens. Although compounds such as CuCl2 dissociate into ions when they dissolve in water, the fundamental units making up the solid are more like polymeric chains of covalently-bound CuCl2 molecules that have little ionic character.
Shared-Electrons (Covalent Bonding)
This model originated with the theory developed by G.N. Lewis in 1916, and it remains the most widely-used model of chemical bonding. The essential elements of this model can best be understood by examining the simplest possible molecule. This is the hydrogen molecule ion H2+, which consists of two nuclei and one electron. First, however, think what would happen if we tried to make the even simpler species H22+. Since this would consist only of two protons whose electrostatic charges would repel each other at all distances, it is clear that such a molecule cannot exist; something more than two nuclei is required for bonding to occur.
In the hydrogen molecule ion H2+ we have a third particle, an electron. The effect of this electron will depend on its location with respect to the two nuclei. If the electron is in the space between the two nuclei, it will attract both protons toward itself, and thus toward each other. If the total attraction energy exceeds the internuclear repulsion, there will be a net bonding effect and the molecule will be stable. If, on the other hand, the electron is off to one side, it will attract both nuclei, but it will attract the closer one much more strongly, owing to the inverse-square nature of Coulomb's law. As a consequence, the electron will now help the electrostatic repulsion to push the two nuclei apart.
We see, then, that the electron is an essential component of a chemical bond, but that it must be in the right place: between the two nuclei. Coulomb's law can be used to calculate the forces experienced by the two nuclei for various positions of the electron. This allows us to define two regions of space about the nuclei, as shown in the figure. One region, the binding region, depicts locations at which the electron exerts a net binding effect on the two nuclei. Outside of this, in the antibinding region, the electron will actually work against binding.
This simple picture illustrates the number one rule of chemical bonding: chemical bonds form when electrons can be simultaneously close to two or more nuclei. It should be pointed out that this principle applies also to the ionic model; as will be explained later in this chapter, the electron that is "lost" by a positive ion ends up being closer to more nuclei (including the one from whose electron cloud it came) in the compound.
• The polar covalent model: A purely covalent bond can only be guaranteed when the electronegativities (electron-attracting powers) of the two atoms are identical. When atoms having different electronegativities are joined, the electrons shared between them will be displaced toward the more electronegative atom, conferring a polarity on the bond which can be described in terms of percent ionic character. The polar covalent model is thus a generalization of covalent bonding to include a very wide range of behavior.
• The Coulombic model: This is an extension of the ionic model to compounds that are ordinarily considered to be non-ionic. Combined hydrogen is always considered to exist as the hydride ion H–, so that methane can be treated as if it were C4+(H–)4. This is not as bizarre as it might seem at first if you recall that the proton has essentially no size, so that it is effectively embedded in an electron pair when it is joined to another atom in a covalent bond. This model, which is not as well known as it deserves to be, has considerable predictive power, both as to bond energies and structures.
• The VSEPR model: The "valence shell electron pair repulsion" model is not so much a model of chemical bonding as a scheme for explaining the shapes of molecules. It is based on the quantum mechanical view that bonds represent electron clouds: physical regions of negative electric charge that repel each other and thus try to stay as far apart as possible.
Quantum Models
Quantum models of bonding take into account the fact that a particle as light as the electron cannot really be said to be in any single location. The best we can do is define a region of space in which the probability of finding the electron has some arbitrary value which will always be less than unity. The shape of this volume of space is called an orbital and is defined by a mathematical function that relates the probability to the (x,y,z) coordinates of the molecule. Like other models of bonding, the quantum models attempt to show how more electrons can be simultaneously close to more nuclei. Instead of doing so through purely geometrical arguments, they attempt this by predicting the nature of the orbitals which the valence electrons occupy in joined atoms.
• The hybrid orbital model: This was developed by Linus Pauling in 1931 and was the first quantum-based model of bonding. It is based on the premise that if the atomic s, p, and d orbitals occupied by the valence electrons of adjacent atoms are combined in a suitable way, the hybrid orbitals that result will have the character and directional properties that are consistent with the bonding pattern in the molecule. The rules for bringing about these combinations turn out to be remarkably simple, so once they were worked out it became possible to use this model to predict the bonding behavior in a wide variety of molecules. The hybrid orbital model is most usefully applied to the p-block elements of the first two rows of the periodic table, and is especially important in organic chemistry.
• The molecular orbital model: This model takes a more fundamental approach by regarding a molecule as a collection of valence electrons and positive cores. Just as the nature of atomic orbitals derives from the spherical symmetry of the atom, so will the properties of these new molecular orbitals be controlled by the interaction of the valence electrons with the multiple positive centers of these atomic cores. These new orbitals, unlike those of the hybrid model, are delocalized; that is, they do not "belong" to any one atom but extend over the entire region of space that encompasses the bonded atoms. The available (valence) electrons then fill these orbitals from the lowest to the highest, very much as in the Aufbau principle that you learned for working out atomic electron configurations. For small molecules (which are the only ones we will consider here), there are simple rules that govern the way that atomic orbitals transform themselves into molecular orbitals as the separate atoms are brought together. The real power of molecular orbital theory, however, comes from its mathematical formulation, which lends itself to detailed predictions of bond energies and other properties.
• The electron-tunneling model: A common theme uniting all of the models we have discussed is that bonding depends on the fall in potential energy that occurs when opposite charges are brought together. In the case of covalent bonds, the shared electron pair acts as a kind of "electron glue" between the joined nuclei. In 1962, however, it was shown that this assumption is not strictly correct, and that instead of being concentrated in the space between the nuclei, the electron orbitals become even more concentrated around the bonded nuclei. At the same time however, they are free to "move" between the two nuclei by a process known as tunneling. This refers to a well-known quantum mechanical effect that allows electrons (or other particles small enough to exhibit wavelike properties) to pass ("tunnel") through a barrier separating two closely adjacent regions of low potential energy. One result of this is that the effective volume of space available to the electron is increased, and according to the uncertainty principle this will reduce the kinetic energy of the electron.
The electron-tunneling model
According to this model, the bonding electrons act as a kind of fluid that concentrates in the region of each nucleus (lowering the potential energy) and at the same time is able to freely flow between them (reducing the kinetic energy). A summary of the concept, showing its application to a simple molecule, is shown below. Despite its conceptual simplicity and full acknowledgment of the laws of quantum mechanics, this model is not widely known and is rarely taught.
Chemical bonding occurs when one or more electrons can be simultaneously close to two nuclei. But how can this be arranged? The conventional picture of the shared electron bond places the bonding electrons in the region between the two nuclei. This makes a nice picture, but it is not consistent with the principle that opposite charges attract. This would imply that the electrons would be "happiest" (at the lowest potential energy) when they are very close to a nucleus, not half a bond-length away from two of them!
This plot shows how the potential energy of an electron in the hydrogen atom varies with its distance from the nucleus. Notice how the energy falls without limit as the electron approaches the nucleus, represented here as a proton, \(H^+\). If potential energy were the only consideration, the electron would fall right into the nucleus where its potential energy would be minus infinity.
When an electron is added to the proton to make a neutral hydrogen atom, it tries to get as close to the nucleus as possible. The Heisenberg uncertainty principle, however, requires the total energy of the electron to increase as the volume of space it occupies diminishes. As the electron gets closer to the nucleus, the nuclear charge confines it to such a tiny volume of space that its energy rises, allowing it to "float" slightly away from the nucleus without ever falling into it.
The shaded region above shows the range of energies and distances from the nucleus the electron is able to assume within the 1s orbital. The electron can thus be regarded as a fluid that occupies a vessel whose walls conform to the red potential energy curves shown above. Note that as the potential energy falls, the kinetic energy increases, but only half as fast (this is called the virial theorem). Thus close to the nucleus, the kinetic energy is large and so is the electron's effective velocity. The top of the shaded area defines the work required to raise its potential energy to zero, thus removing it from the atom; this corresponds, of course, to the ionization energy.
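To put rough numbers on this trade-off, here is a minimal sketch that uses the one-dimensional particle-in-a-box formula, E = h²/(8mL²), as a crude stand-in for an electron confined near a nucleus. The box widths are arbitrary illustrative choices, not properties of any real atom.

```python
# Sketch: why confinement raises an electron's energy (uncertainty principle).
# Uses the 1-D particle-in-a-box ground-state energy E = h^2 / (8 m L^2)
# as a crude stand-in for an electron squeezed into a small volume.
h = 6.626e-34        # Planck constant, J s
m_e = 9.109e-31      # electron mass, kg
eV = 1.602e-19       # joules per electron-volt

def box_energy_eV(L):
    """Ground-state energy of a particle in a 1-D box of width L (meters)."""
    return h**2 / (8 * m_e * L**2) / eV

for L in (2.0e-10, 1.0e-10, 0.5e-10):   # 2 A, 1 A, 0.5 A
    print(f"L = {L*1e10:.1f} A  ->  E = {box_energy_eV(L):6.1f} eV")

# Halving the box width quadruples the energy; squeezing the electron into
# a smaller volume is what keeps it "floating" away from the nucleus.
```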
The Tunneling Effect
A quantum particle can be described by a waveform which is the plot of a mathematical function related to the probability of finding the particle at a given location at any time. If the particle is confined to a box, it turns out that the wave does not fall to zero at the walls of the box, but has a finite probability of being found outside it. This means that a quantum particle is able to penetrate, or "tunnel through" its confining boundaries. This remarkable property is called the tunnel effect.
In terms of the electron fluid model introduced above, the fluid is able to "leak out" of the atom if another low-energy location can be found nearby.
Electron tunneling in the simplest molecule
Suppose we now bring a bare proton up close to a hydrogen atom. Each nucleus has its own potential well, but only that of the hydrogen atom is filled, as indicated by the shading in the leftmost potential well.
But the electron fluid is able to tunnel through the potential energy barrier separating the two wells; like any liquid, it will seek a common level in the two sides of the container as shown below. The electron is now "simultaneously close to two nuclei" while never being in between them. Bear in mind that this would be physically impossible for a real liquid composed of real molecules; this is purely a quantum effect that is restricted to a low-mass particle such as the electron.
Because the same amount of electron fluid is now shared between the two wells, its level in both is lower. The difference between what it is now and what it was before corresponds to the bond energy of the hydrogen molecule ion.
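A rough calculation shows just how strongly the mass restriction mentioned above operates. The sketch below uses the standard rectangular-barrier estimate T ≈ exp(–2κL) with κ = √(2m(V–E))/ħ; the 1-eV barrier height and 1-Å width are illustrative assumptions chosen only to represent bond-length scales.

```python
# Sketch: why tunneling at bond-length scales is an electron-only effect.
# Crude rectangular-barrier estimate: T ~ exp(-2*kappa*L), with
# kappa = sqrt(2*m*(V - E)) / hbar.  Barrier height and width below are
# illustrative choices, not fitted to any particular molecule.
import math

hbar = 1.055e-34          # J s
eV = 1.602e-19            # joules per electron-volt
m_electron = 9.109e-31    # kg
m_proton = 1.673e-27      # kg

def tunneling_probability(mass, barrier_eV=1.0, width=1.0e-10):
    kappa = math.sqrt(2 * mass * barrier_eV * eV) / hbar
    return math.exp(-2 * kappa * width)

print("electron:", tunneling_probability(m_electron))  # ~ 0.36
print("proton:  ", tunneling_probability(m_proton))    # ~ 1e-19
```

The electron leaks through such a barrier with ease, while a particle as heavy as a proton is, for all practical purposes, completely blocked.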
The dihydrogen molecule
Now let's make a molecule of dihydrogen. We start with two hydrogen atoms, each with one electron. But there is a problem here: both potential energy wells are already filled with electron fluid; there is no room for any more without pushing the energy way up.
But quantum theory again comes to the rescue! If the two electrons have opposite spins, the two fluids are able to interpenetrate each other, very much as two gases are able to occupy the same container. This is depicted by the double shading in the diagram below.
When the two hydrogen atoms are within tunneling distance, half of the electron fluid (really the probability of finding the electron) from each well flows into the other well. Because the two fluids are able to interpenetrate, the level is not much different from what it was in the H2+ ion, but the greater density of the electron-fluid between the two nuclei makes H2 a strongly bound molecule.
So why does dihelium not exist?
If we tried to join two helium atoms in this way, we would be in trouble. The electron well of He already contains two electrons of opposite spin. There is no room for more electron fluid (without raising the energy), and thus no way the electrons in either He atom can be simultaneously close to two nuclei.
Ionic Bonding
Even before G.N.Lewis developed his theory of the shared electron pair bond, it was believed that bonding in many solid salts could be satisfactorily explained on the basis of simple electrostatic forces between the positive and negative ions which are assumed to be the basic molecular units of these compounds. Lewis himself continued to recognize this distinction, which has continued to be a part of the tradition of chemistry; the shared electron pair bond is known as the covalent bond, while the other type is the ionic or electrovalent bond.
The covalent bond is formed when two atoms are able to share electrons:
whereas the ionic bond is formed when the "sharing" is so unequal that an electron from atom A is completely lost to atom B, resulting in a pair of ions:
The two extremes of electron sharing represented by the covalent and ionic models appear to be generally consistent with the observed properties of molecular and ionic solids and liquids. But does this mean that there are really two kinds of chemical bonds, ionic and covalent?
Bonding in ionic solids
According to the ionic electrostatic model, solids such as NaCl consist of positive and negative ions arranged in a crystal lattice. Each ion is attracted to neighboring ions of opposite charge, and is repelled by ions of like charge; this combination of attractions and repulsions, acting in all directions, causes the ion to be tightly fixed in its own location in the crystal lattice.
Since electrostatic forces are nondirectional, the structure of an ionic solid is determined purely by geometry: two kinds of ions, each with its own radius, will fall into whatever repeating pattern will achieve the lowest possible potential energy. Surprisingly, there are only a small number of possible structures.
Is there such a thing as a "purely" ionic bond?
When two elements form an ionic compound, is an electron really lost by one atom and transferred to the other one? In order to deal with this question, consider the data on the ionic solid LiF. The average radius of the neutral Li atom is about 2.52 Å.
Now if this Li atom reacts with an atom of F to form LiF, what is the average distance between the Li nucleus and the electron it has "lost" to the fluorine atom? The answer is 1.56 Å; the electron is now closer to the lithium nucleus than it was in neutral lithium! So the answer to the above question is both yes and no: yes, the electron that was in the 2s orbital of Li is now within the grasp of a fluorine 2p orbital, but no, the electron is now even closer to the Li nucleus than before, so how can it be "lost"? The one thing that is inarguably true about LiF is that there are more electrons closer to positive nuclei than there are in the separated Li and F atoms. But this is just the condition that gives rise to all forms of chemical bonding:
Chemical bonds form when electrons can be simultaneously near two or more nuclei
It is obvious that the electron-pair bond brings about this situation, and this is the reason for the stability of the covalent bond. What is not so obvious (until you look at the numbers such as were quoted for LiF above) is that the "ionic" bond results in the same condition; even in the most highly ionic compounds, both electrons are close to both nuclei, and the resulting mutual attractions bind the nuclei together. This being the case, is there really any fundamental difference between the ionic and covalent bond?
The answer, according to modern chemical thinking is probably "no"; in fact, there is some question as to whether it is realistic to consider that these solids consist of "ions" in the usual sense. The preferred picture that seems to be emerging is one in which the electron orbitals of adjacent atom pairs are simply skewed so as to place more electron density around the "negative" element than around the "positive" one.
This being said, it must be reiterated that the ionic model of bonding is a useful one for many purposes, and there is nothing wrong with using the term "ionic bond" to describe the interactions between the atoms in "ionic solids" such as LiF and NaCl.
Polar covalence
If there is no such thing as a "completely ionic" bond, can we have one that is completely covalent? The answer is yes, if the two nuclei have equal electron attracting powers. This situation is guaranteed to be the case with homonuclear diatomic molecules— molecules consisting of two identical atoms. Thus in Cl2, O2, and H2, electron sharing between the two identical atoms must be exactly even; in such molecules, the center of positive charge corresponds exactly to the center of negative charge: halfway between the two nuclei.
Electronegativity
This term was introduced earlier in the course to denote the relative electron-attracting power of an atom. The electronegativity is not the same as the electron affinity; the latter measures the amount of energy released when an electron from an external source "falls into" a vacancy within the outermost orbital of the atom to yield an isolated negative ion.
The products of bond formation, in contrast, are not ions and they are not isolated; the two nuclei are now drawn closely together by attraction to the region of high electron density between them. Any shift of electron density toward one atom takes place at the energetic expense of stealing it from the other atom.
It is important to understand that electronegativity is not a measurable property of an atom in the sense that ionization energy and electron affinity are; electronegativity is a property that an atom displays when it is bonded to another. Any measurement one does make must necessarily depend on the properties of both of the atoms.
By convention, electronegativities are measured on a scale on which the highest value, 4.0, is arbitrarily assigned to fluorine. A number of electronegativity scales have been proposed, each based on slightly different criteria. The most well known of these is due to Linus Pauling, and is based on a study of bond energies in a variety of compounds.
The periodic trends in electronegativity are about what one would expect; the higher the nuclear charge and the smaller the atom, the more strongly it will attract an outer-shell electron of an atom within binding distance. The division between the metallic and nonmetallic elements is largely that between those that have Pauling electronegativities smaller than about 1.7 (the metals) and those that have larger values (the nonmetals).
The greater the electronegativity difference between two elements A and B, the more polar will be their molecule AB. It is important to point out, however, that the pairs having the greatest electronegativity differences, the alkali halides, are solids under ordinary conditions and exist as molecules only in the rarefied conditions of the gas phase. Even these ionic solids possess a certain amount of covalent character, so, as discussed above, there is no such thing as a "purely ionic" bond. It has become more common to place binary compounds on a scale something like that shown here, in which the degree of shading is a rough indication of the number of compounds at any point on the covalent-ionic scale.
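Pauling also proposed an approximate empirical relation between the electronegativity difference Δχ of two bonded atoms and the fractional ionic character of the bond, roughly 1 – exp(–Δχ²/4). The sketch below applies this relation to a few bonds using standard Pauling electronegativity values; note that even K–F falls short of 100 percent ionic character, consistent with the discussion above.

```python
# Sketch: placing bonds on the covalent-ionic continuum with Pauling's
# approximate relation: fraction ionic = 1 - exp(-(dX)^2 / 4),
# where dX is the electronegativity difference of the two atoms.
import math

pauling = {"H": 2.20, "C": 2.55, "N": 3.04, "O": 3.44,
           "F": 3.98, "Na": 0.93, "Cl": 3.16, "K": 0.82}

def percent_ionic(a, b):
    dX = abs(pauling[a] - pauling[b])
    return 100 * (1 - math.exp(-dX**2 / 4))

for a, b in [("H", "H"), ("H", "Cl"), ("H", "F"), ("Na", "Cl"), ("K", "F")]:
    print(f"{a}-{b}: {percent_ionic(a, b):5.1f}% ionic")
# H-H is 0% ionic; even K-F comes out near 92%, never fully 100%.
```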
Covalent or ionic: a false dichotomy
The covalent-ionic continuum described above is certainly an improvement over the old covalent-versus-ionic dichotomy that existed only in the textbook and classroom, but it is still only a one-dimensional view of a multidimensional world, and thus a view that hides more than it reveals. The main thing missing is any allowance for the type of bonding that occurs between more pairs of elements than any other: metallic bonding. Intermetallic compounds are rarely even mentioned in introductory courses, but since most of the elements are metals, there are a lot of them, and many play an important role in metallurgy. In metallic bonding, the valence electrons lose their association with individual atoms; they form what amounts to a mobile "electron fluid" that fills the space between the crystal lattice positions occupied by the atoms (now essentially positive ions). The more readily this electron delocalization occurs, the more "metallic" the element.
Thus instead of the one-dimensional chart shown above, we can construct a triangular diagram whose corners represent the three extremes of "pure" covalent, ionic, and metallic bonding.
We can take this a step farther by taking into account a collection of weaker binding effects known generally as van der Waals forces. Contrary to what is often implied in introductory textbooks, these are the major binding forces in most of the common salts that are not alkali halides; these include NaOH, CaCl2, and MgSO4. They are also significant in solids such as CuCl2 and solid SO3 in which infinite covalently-bound chains are held together by ion-induced dipole and similar forces.
The only way to represent this four-dimensional bonding-type space in two dimensions is to draw a projection of a tetrahedron, each of its four corners representing the "pure" case of one type of bonding.
Note that some of the entries on this diagram (ice, CH4, and the two parts of NH4ClO4) are covalently bound units, and their placement refers to the binding between these units. Thus the H2O molecules in ice are held together mainly by hydrogen bonding, which is a van der Waals force, with only a small covalent contribution.
Note: the triangular and tetrahedral diagrams above were adapted from those in the excellent article by William B. Jensen, "Logic, history and the chemistry textbook", Part II, J. Chemical Education 1998: 817-828.
Learning Objectives
Make sure you thoroughly understand the following essential ideas which are presented below.
• How would you define a chemical bond?
• What is meant by the connectivity of a molecule? What additional information might be needed in order to specify its structure?
• Explain the difference between stability and reactivity, and how these factors might prevent a given structure from existing long enough to qualify as a molecule.
• Sketch out a potential energy curve for a typical diatomic molecule, and show how it illustrates the bond energy and bond length.
• Explain how the heat released or absorbed in a chemical reaction can be related to the bond energies of the reactants and products.
• State the major factors that determine the distance between two bonded atoms.
• Describe, in a general way, how the infrared spectrum of a substance can reveal details about its molecular structure.
The concept of chemical bonding lies at the very core of Chemistry; it is what enables about one hundred elements to form the more than fifty million known chemical substances that make up our physical world. Before we get into the theory of chemical bonding, we need to define what we are talking about: Exactly what is a chemical bond? And what observable properties can we use to distinguish one kind of bond from another? This is the first of ten lessons that will help familiarize you with the fundamental concepts of this very broad subject.
What is a chemical bond?
You probably learned some time ago that chemical bonds are what hold atoms together to form the more complicated aggregates that we know as molecules and extended solids. Chemists talk about bonds all the time, and draw pictures of them as lines joining atom symbols. Teachers often identify them as the little sticks that connect the spheres that represent atoms in a plastic molecular model. So it's not surprising that we sometimes tend to think of chemical bonds as “things”. But no one has ever seen a chemical bond, and there is no reason to believe that they really even exist as physical objects.
"SOMETIMES IT SEEMS to me that a bond between two atoms has become so real, so tangible, so friendly, that I can almost see it. Then I awake with a little shock, for a chemical bond is not a real thing. It does not exist. No one has ever seen one. No one ever can. It is a figment of our own imagination." C.A. Coulson (1910-1974) was an English theoretical chemist who played a central role in the development of quantum theories of chemical bonding.
It is more useful to regard a chemical bond as an effect that causes certain atoms to join together to form enduring structures that have unique physical and chemical properties.
So although the "chemical bond" is no more than a convenient fiction, chemical bonding, which leads to the near-infinity of substances (31 million in mid-2007), lies at the very core of chemistry. The forces that hold bonded atoms together are basically just the same kinds of electrostatic attractions that bind the electrons of an atom to its positively-charged nucleus:
Chemical Bonds
Chemical bonding occurs when one or more electrons are simultaneously attracted to two (or more) nuclei.
This is the most important fact about chemical bonding that you should know, but it is not of itself a workable theory of bonding because it does not describe the conditions under which bonding occurs, nor does it make useful predictions about the properties of the bonded atoms.
What is a molecule?
Even at the end of the 19th century, when compounds and their formulas had long been in use, some prominent chemists doubted that molecules (or atoms) were any more than a convenient model. Molecules suddenly became real in 1905, when Albert Einstein showed that Brownian motion, the irregular microscopic movements of tiny pollen grains floating in water, could be directly attributed to collisions with molecule-sized particles.
Most people think of molecules as the particles that result when atoms become joined together in some way. This conveys the general picture, but a somewhat better definition that we will use in these lessons is:
A molecule is an aggregate of atoms that possesses distinctive observable properties
A more restrictive definition distinguishes between a "true" molecule that exists as an independent particle, and an extended solid that can only be represented by its simplest formula. Methane, CH4, is an example of the former, while sodium chloride, which does not contain any discrete NaCl units, is the most widely-known extended solid. But because we want to look at chemical bonding in the most general way, we will avoid making this distinction here except in a few special cases. In order to emphasize this "aggregate of atoms" definition, we will often use terms such as "chemical species" and "structures" in place of "molecules" in this lesson.
The definition written above is an operational one; that is, it depends on our ability to observe and measure the molecule's properties. Clearly, this means that the molecule must retain its identity for a period of time long enough to carry out the measurements. For most of the molecules that chemistry deals with, this presents no difficulty. But it does happen that some structures that we can write formulas for, such as He2, have such brief lives that no significant properties have been observed. So to some extent, what we consider to be a molecule depends on the technology we use to observe them, and this will necessarily change with time.
Structure, structure, structure!
And what are those properties that characterize a particular kind of molecule and distinguish it from others? Just as real estate is judged by "location, location, location", the identity of a chemical species is defined by its structure. In its most fundamental sense, the structure of a molecule is specified by the identity of its constituent atoms and the sequence in which they are joined together, that is, by the bonding connectivity. This, in turn, defines the bonding geometry— the spatial relationship between the bonded atoms.
The importance of bonding connectivity is nicely illustrated by the structures of the two compounds ethanol and dimethyl ether, both of which have the simplest formula C2H6O. The structural formulas reveal the very different connectivities of these two molecules, whose physical and chemical properties are quite different:
Structures without molecules: stability and reactivity
The precise definition of bonding energy is described in another lesson and is not important here. For the moment you only need to know that in any stable structure, the potential energy of its atoms is lower than that of the individual isolated atoms. Thus the formation of methane from its gaseous atoms (a reaction that cannot be observed under ordinary conditions but for which the energetics are known from indirect evidence)
\[ \ce{ 4 H(g) + C(g) → CH4}\]
is accompanied by the release of heat, and is thus an exothermic process. The quantity of heat released is related to the stability of the molecule. The smaller the amount of energy released, the more easily the molecule can absorb thermal energy from the environment, driving the above reaction in reverse and leading to the molecule's decomposition. A highly stable molecule such as methane must be subjected to temperatures of more than 1000°C for significant decomposition to occur. But the noble-gas molecule KrF2 is so weakly bound that it decomposes even at 0°C, and the structure He2 has never been observed. If a particular arrangement of atoms is too unstable to reveal its properties at any achievable temperature, then it does not qualify to be called a molecule.
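A back-of-the-envelope check of the methane example: using the average C–H bond energy tabulated later in this lesson (about 413 kJ/mol; being an average over many compounds, it yields only an estimate), the heat released on forming CH4 from gaseous atoms is simply four times that value.

```python
# Sketch: estimating the heat released when methane forms from gaseous atoms,
# 4 H(g) + C(g) -> CH4(g), using the average C-H bond energy.
E_CH = 413  # kJ/mol, average C-H bond energy (see the table later in this lesson)

heat_released = 4 * E_CH   # four C-H bonds are formed
print(f"Estimated energy released: {heat_released} kJ per mole of CH4")  # 1652
```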
There are many molecules that are energetically stable enough to meet the above criterion, but are so reactive that their lifetimes are too brief to make their observation possible. The molecule CH3, methyl, is a good example: it can be formed by electrical discharge in gaseous CH4, but it is so reactive that it combines with almost any molecule it strikes (even another CH3) within a few collisions. It was not until the development of spectroscopic methods (in which a molecule is characterized by the wavelengths of light that it absorbs or emits) that methyl was recognized as a stable albeit shamelessly promiscuous molecule that is an important intermediate in many chemical processes ranging from flames to atmospheric chemistry.
How we Depict Chemical Structures
Chemical species are traditionally represented by structural formulas such as the one for ascorbic acid (vitamin C) which we show here. The lines, of course, represent the "chemical bonds" of the molecule. More importantly, the structural formula of a molecule defines its connectivity, as was illustrated in the comparison of ethanol and dimethyl ether shown above.
Three common ways of depicting the same molecule, methane:
• Ordinary structural formula, showing connectivity only.
• Ball-and-stick model, showing the "chemical bonds" and bonding geometry, but with the individual atoms unrealistically separated.
• Space-filling model, showing the relative sizes of the atoms and the general shape of the molecule (note how it shows CH4 to be roughly spherical), though not all atoms are visible. No obvious "chemical bonds" here!
One limitation of such formulas is that they are drawn on a two-dimensional paper or screen, whereas most molecules have a three-dimensional shape. The wedge-shaped lines in the structural formula are one way of indicating which bonds extend above or below the viewing plane. You will probably be spared having to learn this convention until you get into second-year Chemistry. Three-dimensional models (either real plastic ones or images that incorporate perspective and shading) reveal much more about a molecule's structure. The ball-and-stick and space-filling renditions are widely employed, but each has its limitations, as seen in the following examples:
But what would a molecule "really" look like if you could view it through a magical microscope of some kind? A possible answer would be this computer-generated view of nicotine. At first you might think it looks more like a piece of abstract sculpture than a molecule, but it does reveal the shape of the negative charge-cloud that envelops the collection of atom cores and nuclei hidden within. This can be very important for understanding how the molecule interacts with the similar charge-clouds that clothe solvent and bioreceptor molecules.
Finally, we get to see one! In 2009, IBM scientists in Switzerland succeeded in imaging a real molecule, using a technique known as atomic force microscopy in which an atoms-thin metallic probe is drawn ever-so-slightly above the surface of an immobilized pentacene molecule cooled to nearly absolute zero. In order to improve the image quality, a molecule of carbon monoxide was placed on the end of the probe. The image produced by the AFM probe is shown at the very bottom. What is actually being imaged is the surface of the electron clouds of the molecule, which consists of five fused hexagonal rings of carbon atoms with hydrogens on its periphery. The tiny bumps that correspond to these hydrogen atoms attest to the remarkable resolution of this experiment.
Visualization of molecular structures
The purpose of rendering a molecular structure in a particular way is not to achieve "realism" (whatever that might be), but rather to convey useful information of some kind. Modern computer rendering software takes its basic data from various kinds of standard structural databases which are compiled either from experimental X-ray scattering data, or are calculated from theory.
As was mentioned above, it is often desirable to show the "molecular surface"— the veil of negative charge that originates in the valence electrons of the atoms but which tends to be spread over the entire molecule to a distance that can significantly affect van der Waals interactions with neighboring molecules. It is often helpful to superimpose images representing the atoms within the molecule, scaled to their average covalent radii, and to draw the "bonding lines" expressing their connectivity.
Knowing the properties of molecular surfaces is vitally important to understanding any process that depends on one molecule remaining in physical contact with another. Catalysis is one example, but one of the main interests at the present time is biological signaling, in which a relatively small molecule binds to or "docks" with a receptor site on a much larger one, often a protein. Sophisticated molecular modeling software such as was used to produce these images is now a major tool in many areas of research biology.
Visualizing very large molecules such as carbohydrates and proteins that may contain tens of thousands of atoms presents obvious problems. The usual technique is to simplify the major parts of the molecule, representing major kinds of extended structural units by shapes such as ribbons or tubes which are twisted or bent to approximate their conformations. These are then gathered to reveal the geometrical relations of the various units within the overall structure. Individual atoms, if shown at all, are restricted to those of special interest.
Study of the surface properties of large molecules is crucial for understanding how proteins, carbohydrates, and DNA interact with smaller molecules, especially those involved in the transport of ions and small molecules across cell membranes, immune-system behavior, and signal transduction processes such as the "turning on" of genes.
Observable Properties of Bonded Atom Pairs
When we talk about the properties of a particular chemical bond, we are really discussing the relationship between two adjacent atoms that are part of the molecule. Diatomic molecules are of course the easiest to study, and the information we derive from them helps us interpret various kinds of experiments we carry out on more complicated molecules.
It is important to bear in mind that the exact properties of a specific kind of bond will be determined in part by the nature of the other bonds in the molecule; thus the energy and length of the C–H bond will be somewhat dependent on what other atoms are connected to the carbon atom. Similarly, the C–H bond length can vary by as much as 4 percent between different molecules. For this reason, the values listed in tables of bond energy and bond length are usually averages taken over a variety of compounds that contain a specific atom pair.
In some cases, such as C—O and C—C, the variations can be much greater, approaching 20 percent. In these cases, the values fall into groups which we interpret as representative of single and multiple (double and triple) bonds.
Potential energy curves
The energy of a system of two atoms depends on the distance between them. At large distances the energy is zero, meaning “no interaction”. At distances of several atomic diameters attractive forces dominate, whereas at very close approaches the force is repulsive, causing the energy to rise. The attractive and repulsive effects are balanced at the minimum point in the curve. Plots that illustrate this relationship are known as Morse curves, and they are quite useful in defining certain properties of a chemical bond.
The internuclear distance at which the potential energy minimum occurs defines the bond length. This is more correctly known as the equilibrium bond length, because thermal motion causes the two atoms to vibrate about this distance. In general, the stronger the bond, the smaller will be the bond length.
Attractive forces operate between all atoms, but unless the potential energy minimum is at least of the order of RT, the two atoms will not be able to withstand the disruptive influence of thermal energy long enough to result in an identifiable molecule. Thus we can say that a chemical bond exists between the two atoms in \(\ce{H2}\). The weak attraction between argon atoms does not allow \(\ce{Ar2}\) to exist as a molecule, but it does give rise to the van der Waals force that holds argon atoms together in its liquid and solid forms.
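A common analytic form for such a curve is the Morse function, V(r) = De(1 – e^(–a(r–re)))² – De, where De is the well depth (bond energy) and re the equilibrium bond length. The sketch below evaluates it with commonly quoted parameters for H2; treat the numbers as illustrative rather than definitive.

```python
# Sketch: a Morse potential-energy curve for a diatomic molecule,
# V(r) = De*(1 - exp(-a*(r - re)))^2 - De, with V = 0 at infinite separation.
# Parameters are commonly quoted values for H2, used here for illustration.
import math

De = 4.75      # well depth, eV (~458 kJ/mol)
re = 0.741     # equilibrium bond length, Angstroms
a = 1.94       # controls the width of the well, 1/Angstrom

def morse(r):
    return De * (1 - math.exp(-a * (r - re)))**2 - De

for r in (0.3, 0.5, 0.741, 1.0, 1.5, 3.0):
    print(f"r = {r:5.3f} A   V = {morse(r):7.3f} eV")
# V is lowest (-De) at r = re, climbs steeply at short r (repulsion),
# and approaches zero at large r (no interaction).
```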
Potential energy and kinetic energy
Quantum theory tells us that an electron in an atom possesses kinetic energy K as well as potential energy P, so the total energy E is always the sum of the two: E = P + K. The relation between them is surprisingly simple: K = –0.5 P. This means that when a chemical bond forms (an exothermic process with ΔE < 0), the decrease in potential energy is accompanied by an increase in the kinetic energy (embodied in the momentum of the bonding electrons), but the magnitude of the latter change is only half as much, so the change in potential energy always dominates. The bond energy –ΔE has half the magnitude of the fall in potential energy.
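The bookkeeping implied by this virial relation is easy to verify numerically; here is a minimal sketch for H2, taking the average H—H bond energy of 432 kJ/mol from the table below.

```python
# Sketch: virial-theorem bookkeeping for bond formation (K = -P/2).
bond_energy = 432                 # kJ/mol for H2, i.e. -(delta E)
delta_P = -2 * bond_energy        # potential energy falls twice as much
delta_K = -delta_P / 2            # kinetic energy rises half as much
print(delta_P, delta_K, delta_P + delta_K)   # -864, 432.0, -432.0 kJ/mol
```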
Bond energies
The bond energy is the amount of work that must be done to pull two atoms completely apart; in other words, it is the same as the depth of the “well” in the potential energy curve shown above. This is almost, but not quite the same as the bond dissociation energy actually required to break the chemical bond; the difference is the very small zero-point energy. This relationship will be clarified below in the section on bond vibrational frequencies. Bond energies are usually determined indirectly from thermodynamic data, but there are two main experimental ways of measuring them directly:
1. The direct thermochemical method involves separating the two atoms by an electrical discharge or some other means, and then measuring the heat given off when they recombine. Thus the energy of the C—C single bond can be estimated from the heat of the recombination reaction between methyl radicals, yielding ethane: \[CH_3 + CH_3 → H_3C–CH_3\] Although this method is simple in principle, it is not easy to carry out experimentally. The highly reactive components must be prepared in high purity and in a stream of moving gas.
2. The spectroscopic method is based on the principle that absorption of light whose wavelength corresponds to the bond energy will often lead to the breaking of the bond and dissociation of the molecule. For some bonds, this light falls into the green and blue regions of the spectrum, but for most bonds ultraviolet light is required. The experiment is carried out by observing the absorption of light by the substance being studied as the wavelength is decreased. When the wavelength is sufficiently small to break the bond, a characteristic change in the absorption pattern is observed.
Spectroscopy is quite easily carried out and can yield highly precise results, but this method is only applicable to a relatively small number of simple molecules. The major problem is that the light must first be absorbed by the molecule, and relatively few molecules happen to absorb light of a wavelength that corresponds to a bond energy.
Experiments carried out on diatomic molecules such as O2 and CS yield unambiguous values of bond energy, but for more complex molecules there are complications. For example, the heat given off in the CH3 combination reaction written above will also include a small component that represents the differences in the energies of the C-H bonds in methyl and in ethane. These can be corrected for by experimental data on reactions such as
\[CH_3 + H → CH_4\]
\[CH_2 + H → CH_3\]
By assembling a large amount of experimental information of this kind, a consistent set of average bond energies can be obtained (see table below.) The energies of double bonds are greater than those of single bonds, and those of triple bonds are higher still.
Use of bond energies in estimating heats of reaction
One can often get a very good idea of how much heat will be absorbed or given off in a reaction by simply finding the difference in the total bond energies contained in the reactants and products. The strength of an individual bond such as O–H depends to some extent on its environment in a molecule (that is, in this example, on what other atom is connected to the oxygen atom), but tables of "average" energies of the various common bond types are widely available and can provide useful estimates of the quantity of heat absorbed or released in many chemical reactions.
Table 1: Average Bond Energies (kJ/mol)
The first three column pairs list single bonds; the last lists multiple bonds.

| Bond | kJ/mol | Bond | kJ/mol | Bond | kJ/mol | Bond | kJ/mol |
|------|--------|------|--------|------|--------|------|--------|
| H—H | 432 | N—H | 391 | I—I | 149 | C=C | 614 |
| H—F | 565 | N—N | 160 | I—Cl | 208 | C≡C | 839 |
| H—Cl | 427 | N—F | 272 | I—Br | 175 | O=O | 495 |
| H—Br | 363 | N—Cl | 200 | | | C=O* | 745 |
| H—I | 295 | N—Br | 243 | S—H | 347 | C≡O | 1072 |
| | | N—O | 201 | S—F | 327 | N=O | 607 |
| C—H | 413 | O—H | 467 | S—Cl | 253 | N=N | 418 |
| C—C | 347 | O—O | 146 | S—Br | 218 | N≡N | 941 |
| C—N | 305 | O—F | 190 | S—S | 266 | C≡N | 891 |
| C—O | 358 | O—Cl | 203 | | | C=N | 615 |
| C—F | 485 | O—I | 234 | Si—Si | 340 | | |
| C—Cl | 339 | | | Si—H | 393 | | |
| C—Br | 276 | F—F | 154 | Si—C | 360 | | |
| C—I | 240 | F—Cl | 253 | Si—O | 452 | | |
| C—S | 259 | F—Br | 237 | | | | |
| | | Cl—Cl | 239 | | | | |
| | | Cl—Br | 218 | | | | |
| | | Br—Br | 193 | | | | |

*C=O in CO2: 799 kJ/mol
Average bond energies are the averages of bond dissociation energies (see Table T3 for a more complete list). For example, the average bond energy of O–H in H2O is 464 kJ/mol. This is due to the fact that the H–OH bond requires 498.7 kJ/mol to dissociate, while the O–H bond in the remaining OH needs 428 kJ/mol. \[\dfrac{498.7\; kJ/mol + 428\; kJ/mol}{2}=464\; kJ/mol\]
Example 1: Dichloromethane
Consider the reaction of chlorine with methane to produce dichloromethane and hydrogen chloride:
\[\ce{CH4(g) + 2 Cl2(g) → CH2Cl2(g) + 2 HCl(g)}\]
In this reaction, two C–H bonds and two Cl–Cl bonds are broken, and two new C–Cl bonds and two H–Cl bonds are formed. The net change associated with the reaction is
2(C–H) + 2(Cl–Cl) – 2(C–Cl) – 2(H–Cl) = (830 + 486 – 660 – 864) kJ
which comes to –208 kJ per mole of methane; this agrees quite well with the observed heat of reaction, which is –202 kJ/mol.
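The same bookkeeping is easy to automate. The sketch below redoes the estimate with the values from Table 1 above; because averaged bond-energy tables differ slightly from source to source, the result (–228 kJ/mol) differs a little from the –208 kJ/mol obtained above, and both are only estimates of the observed –202 kJ/mol.

```python
# Sketch: estimating a heat of reaction from average bond energies,
# delta_H ~ sum(bonds broken) - sum(bonds formed).
# Values from Table 1 above; averaged tables vary slightly between sources,
# so estimates of this kind are good only to within a few tens of kJ.
bond_energy = {"C-H": 413, "Cl-Cl": 239, "C-Cl": 339, "H-Cl": 427}  # kJ/mol

def delta_H(broken, formed):
    energy_in = sum(bond_energy[b] * n for b, n in broken.items())
    energy_out = sum(bond_energy[b] * n for b, n in formed.items())
    return energy_in - energy_out

# CH4 + 2 Cl2 -> CH2Cl2 + 2 HCl
dH = delta_H(broken={"C-H": 2, "Cl-Cl": 2}, formed={"C-Cl": 2, "H-Cl": 2})
print(f"Estimated delta H = {dH} kJ/mol")   # about -228; observed is -202
```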
Bond lengths and angles
The length of a chemical bond is the distance between the centers of the two bonded atoms (the internuclear distance). Bond lengths have traditionally been expressed in Ångstrom units, but picometers are now preferred (1 Å = 10–8 cm = 100 pm). Bond lengths are typically in the range 1–2 Å, or 100–200 pm. Even though the bond is vibrating, equilibrium bond lengths can be determined experimentally to within ±1 pm.
Bond lengths depend mainly on the sizes of the atoms, and secondarily on the bond strengths, the stronger bonds tending to be shorter. Bonds involving hydrogen can be quite short; the shortest bond of all, H–H, is only 74 pm. Multiply-bonded atoms are closer together than singly-bonded ones; this is a major criterion for experimentally determining the multiplicity of a bond. This trend is clearly evident in the above plot, which depicts the sequence of carbon-carbon single, double, and triple bonds.
The most common method of measuring bond lengths in solids is by analysis of the diffraction or scattering of X-rays when they pass through the regularly-spaced atoms in the crystal. For gaseous molecules, neutron- or electron-diffraction can also be used.
The complete structure of a molecule requires a specification of the coordinates of each of its atoms in three-dimensional space. This data can then be used by computer programs to construct visualizations of the molecule as discussed above. One such visualization of the water molecule, with bond distances and the HOH bond angle superimposed on a space-filling model, is shown here. (It is taken from an excellent reference source on water). The colors show the results of calculations that depict the way in which electron charge is distributed around the three nuclei.
Bond stretching and infrared absorption
When an atom is displaced from its equilibrium position in a molecule, it is subject to a restoring force which increases with the displacement. A spring follows the same law (Hooke’s law); a chemical bond is therefore formally similar to a spring that has weights (atoms) attached to its two ends. A mechanical system of this kind possesses a natural vibrational frequency which depends on the masses of the weights and the stiffness of the spring. These vibrations are initiated by the thermal energy of the surroundings; chemically-bonded atoms are never at rest at temperatures above absolute zero.
On the atomic scale in which all motions are quantized, a vibrating system can possess only a discrete series of vibrational energy states. These are depicted by the horizontal lines in the potential energy curve shown here. Notice that the very bottom of the curve does not correspond to an allowed state, because at this point the positions of the atoms would be precisely specified, which would violate the uncertainty principle. The lowest-allowed, or ground vibrational state is the one denoted by 0, and it is normally the only state that is significantly populated in most molecules at room temperature. In order to jump to a higher state, the molecule must absorb a photon whose energy is equal to the distance between the two states.
For ordinary chemical bonds, the energy differences between these natural vibrational frequencies correspond to those of infrared light. Each wavelength of infrared light that excites the vibrational motion of a particular bond will be absorbed by the molecule. In general, the stronger the bond and the lighter the atoms it connects, the higher will be its natural stretching frequency and the shorter the wavelength of light absorbed by it. Studies on a wide variety of molecules have made it possible to determine the wavelengths absorbed by each kind of bond. By plotting the degree of absorption as a function of wavelength, one obtains the infrared spectrum of the molecule which allows one to "see" what kinds of bonds are present.
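The ball-and-spring picture can be made quantitative with the harmonic-oscillator formula ν = (1/2π)√(k/μ), where k is the bond's force constant (stiffness) and μ the reduced mass of the two atoms. In the sketch below, the force constant of about 500 N/m is an assumed round value typical of a C–H stretch; note how replacing H by the heavier D lowers the frequency, just as the mass argument above predicts.

```python
# Sketch: the "ball-and-spring" estimate of a bond's stretching frequency,
# nu = (1/(2*pi)) * sqrt(k/mu), converted to the wavenumbers (cm^-1) used
# in IR spectroscopy.  k ~ 500 N/m is an assumed typical value for a
# C-H stretch, chosen for illustration only.
import math

amu = 1.6605e-27   # kg per atomic mass unit
c = 2.998e10       # speed of light, cm/s

def stretch_wavenumber(k, m1, m2):
    mu = (m1 * m2) / (m1 + m2) * amu          # reduced mass, kg
    nu = math.sqrt(k / mu) / (2 * math.pi)    # frequency, Hz
    return nu / c                             # wavenumber, cm^-1

print(f"C-H: {stretch_wavenumber(500, 12, 1):.0f} cm^-1")   # ~3000
print(f"C-D: {stretch_wavenumber(500, 12, 2):.0f} cm^-1")   # lower: heavier atom
```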
Infrared spectrum of alcohol
The low points in the plot below indicate the frequencies of infrared light that are absorbed by ethanol (ethyl alcohol), CH3CH2OH. Notice how stretching frequencies involving hydrogen are higher, reflecting the smaller mass of that atom. Only the most prominent absorption bands are noted here.
Actual infrared spectra are complicated by the presence of more complex motions (stretches involving more than two atoms, wagging, etc.), and absorption to higher quantum states (overtones), so infrared spectra can become quite complex. This is not necessarily a disadvantage, however, because such spectra can serve as a "fingerprint" that is unique to a particular molecule and can be helpful in identifying it. Largely for this reason, infrared spectrometers are standard equipment in most chemistry laboratories. Now that you know something about bond stretching vibrations, you can impress your friends by telling them why water is blue!
Infrared Absorption and Global Warming
The aspect of bond stretching and bending frequencies that impacts our lives most directly is the way that some of the gases of the atmosphere absorb infrared light and thus affect the heat balance of the Earth. Owing to their symmetrical shapes, the principal atmospheric components N2 and O2 do not absorb infrared light, but the minor components water vapor and carbon dioxide are strong absorbers, especially in the long-wavelength region of the infrared. Absorption of infrared light by a gas causes its temperature to rise, so any source of infrared light will tend to warm the atmosphere; this phenomenon is known as the greenhouse effect.
The incoming radiation from the Sun (which contains relatively little long-wave infrared light) passes freely through the atmosphere and is absorbed by the Earth's surface, warming it up and causing it to re-emit some of this energy as long-wavelength infrared. Most of the latter is absorbed by H2O and CO2, the major greenhouse gases in the unpolluted atmosphere, effectively trapping the radiation as heat. Thus the atmosphere is heated by the Earth, rather than by direct sunlight. Without the "greenhouse gases" in the atmosphere, the Earth's heat would be radiated away into space, and our planet would be too cold for life.
Radiation balance of the Earth
In order to maintain a constant average temperature, the quantity of radiation (sunlight) absorbed by the surface must be exactly balanced by the quantity of long-wavelength infrared emitted by the surface and atmosphere and radiated back into space. Atmospheric gases that absorb this infrared light partially block this emission and become warmer, raising the Earth's temperature.
Since the beginning of the Industrial Revolution in the 19th century, huge quantities of additional greenhouse gases have been accumulating in the atmosphere. Carbon dioxide from fossil fuel combustion has been the principal source, but intensive agriculture also contributes significant quantities of methane (CH4) and nitrous oxide (N2O) which are also efficient far-infrared absorbers. The measurable increase in these gases is believed by many to be responsible for the increase in the average temperature of the Earth that has been noted over the past 50 years— a trend that could initiate widespread flooding and other disasters if it continues. | textbooks/chem/General_Chemistry/Chem1_(Lower)/09%3A_Chemical_Bonding_and_Molecular_Structure/9.02%3A_Molecules_-_Properties_of_Bonded_Atoms.txt |
Learning Objectives
Make sure you thoroughly understand the following essential ideas which have been presented.
• Comment on the distinction between a theory and a model in the context of chemical bonding.
• What is meant by a classical model of chemical bonding?
Why do atoms bind together— sometimes? The answer to this question would ideally be a simple, easily understood theory that would not only explain why atoms bind together to form molecules, but would also predict the three-dimensional structures of the resulting compounds as well as the energies and other properties of the bonds themselves. Unfortunately, no one theory exists that accomplishes these goals in a satisfactory way for all of the many categories of compounds that are known. Moreover, it seems likely that if such a theory does ever come into being, it will be far from simple.
About Models in Science
When we are faced the need to find a scientific explanation for a complex phenomenon such as bonding, experience has shown that it is often best to begin by developing a model. A scientific model is something like a theory in that it should be able to explain observations and to make useful predictions. But whereas a theory can be discredited by a single contradictory case, a model can be useful even if it does not encompass all instances of the effects it attempts to explain. We do not even require that a model be a credible representation of reality; all we ask is that it be able to explain the behavior of those cases to which it is applicable in terms that are consistent with the model itself.
An example of a model that you may already know about is the kinetic molecular theory of gases. Despite its name, this is really a model (at least at the level that beginning students use it) because it does not even try to explain the observed behavior of real gases. Nevertheless, it serves as a tool for developing our understanding of gases, and as an essential starting point for more elaborate treatments.
One thing is clear: chemical bonding is basically electrical in nature, the result of attraction between bodies of opposite charge; bonding occurs when outer-shell electrons are simultaneously attracted to the positively-charged nuclei of two or more nearby atoms. The need for models arises when we try to understand why
• Not all pairs of atoms can form stable bonds
• Different elements can form different numbers of bonds (this is expressed as "combining power" or "valence".)
• The geometric arrangement of the bonds ("bonding geometry") around a given kind of atom is a property of the element.
Given the extraordinary variety of ways in which atoms combine into aggregates, it should come as no surprise that a number of useful bonding models have been developed. Most of them apply only to certain classes of compounds or attempt to explain only a restricted range of phenomena. In this section we will provide brief descriptions of some of the bonding models; the more important of these will be treated in much more detail in later lessons in this unit.
Some early views of chemical bonding
Intense speculation about "chemical affinity" began in the 18th century. Some likened the tendency of one atom to "close" with another to a human-like kind of affection. Others attributed bonding to magnetic-like forces (left) or to varying numbers of "hooks" on different kinds of atoms (right). The latter constituted a primitive (and extremely limited) way of explaining the different combining powers (valences) of the different elements.
"There are no such things..."
Napoleon's definition of history as a set of lies agreed on by historians seems to have a parallel with chemical bonding and chemists. At least in Chemistry, we can call the various explanations "models" and get away with it even if they are demonstrably wrong, as long as we find them useful. In a provocative article (J Chem Educ 1990 67(4) 280-298), J. F. Ogilvie tells us that there are no such things as orbitals, or, for that matter, non-bonding electrons, bonds, or even uniquely identifiable atoms within molecules. This idea disturbed a lot of people (teachers and textbook authors preferred to ignore it) and prompted a spirited rejoinder (J Chem Ed 1992 69(6) 519-521) from Linus Pauling, father of the modern quantum-mechanical view of the chemical bond.
But the idea has never quite gone away. Richard Bader of McMaster University has developed a quantitative "atoms in molecules" model that depicts molecules as a collection of point-like nuclei embedded in a diffuse cloud of electrons. There are no "bonds" in this model, but only "bond paths" that correspond to higher values of electron density along certain directions that are governed by the manner in which the positive nuclei generate localized distortions of the electron cloud.
Classical models of the chemical bond
By classical, we mean models that do not take into account the quantum behavior of small particles, notably the electron. These models generally assume that electrons and ions behave as point charges which attract and repel according to the laws of electrostatics. Although this completely ignores what has been learned about the nature of the electron since the development of quantum theory in the 1920’s, these classical models have not only proven extremely useful, but the major ones also serve as the basis for the chemist’s general classification of compounds into “covalent” and “ionic” categories.
The Ionic Model
Ever since the discovery early in the 19th century that solutions of salts and other electrolytes conduct electric current, there has been general agreement that the forces that hold atoms together must be electrical in nature. Electrolytic solutions contain ions having opposite electrical charges; opposite charges attract, so perhaps the substances from which these ions come consist of positively and negatively charged atoms held together by electrostatic attraction.
It turns out that this is not true generally, but a model built on this assumption does a fairly good job of explaining a rather small but important class of compounds that are called ionic solids. The most well known example of such a compound is sodium chloride, which consists of two interpenetrating lattices of Na+ and Cl– ions arranged in such a way that every ion of one type is surrounded (in three dimensional space) by six ions of opposite charge.
One can envision the formation of a solid NaCl unit by a sequence of events in which one mole of gaseous Na atoms lose electrons to one mole of Cl atoms, followed by condensation of the resulting ions into a crystal lattice:
Na(g) → Na+(g) + e–   +494 kJ   (ionization energy)
Cl(g) + e– → Cl–(g)   –368 kJ   (electron affinity)
Na+(g) + Cl–(g) → NaCl(s)   –498 kJ   (lattice energy)
Sum: Na(g) + Cl(g) → NaCl(s)   –372 kJ   (Na–Cl bond energy)
Note: positive energy values denote endothermic processes, while negative ones are exothermic.
Since the first two energies are known experimentally, as is the energy of the sum of the three processes, the lattice energy can be found by difference. It can also be calculated by averaging the electrostatic forces exerted on each ion over the various directions in the solid, and this calculation is generally in good agreement with observation, thus lending credence to the model. The sum of the three energy terms is clearly negative, and corresponds to the liberation of heat in the net reaction (bottom row of the table), which defines the Na–Cl “bond” energy.
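Spelled out with the figures in the table, the by-difference bookkeeping is simply
$E_{lattice} = E_{sum} - (E_{IE} + E_{EA}) = (-372\; kJ) - (+494\; kJ - 368\; kJ) = -498\; kJ$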
The ionic solid is more stable than the equivalent number of gaseous atoms simply because the three-dimensional NaCl structure allows more electrons to be closer to more nuclei. This is the criterion for the stability of any kind of molecule; all that is special about the “ionic” bond is that we can employ a conceptually simple electrostatic model to predict the bond strength.
The main limitation of this model is that it applies well only to the small class of solids composed of Group 1 and 2 elements combined with highly electronegative elements such as the halogens. Although compounds such as CuCl2 dissociate into ions when they dissolve in water, the fundamental units making up the solid are more like polymeric chains of covalently-bound CuCl2 molecules that have little ionic character.
Shared-electron (covalent) model
This model originated with the theory developed by G.N. Lewis in 1916, and it remains the most widely-used model of chemical bonding. It is founded on the idea that a pair of electrons shared between two atoms can create a mutual attraction, and thus a chemical bond.
Usually each atom contributes one electron (one of its valence electrons) to the pair, but in some cases both electrons come from one of the atoms. For example, the bond between hydrogen and chlorine in the hydrogen chloride molecule is made up of the single 1s electron of hydrogen paired up with one of chlorine's seven valence (3p) electrons. The stability afforded by this sharing is thought to derive from the noble gas configurations (helium for hydrogen, argon for chlorine) that surround the bound atoms.
The origin of the electrostatic binding forces in this model can best be understood by examining the simplest possible molecule. This is the hydrogen molecule ion H2+, which consists of two nuclei and one electron.
First, however, think what would happen if we tried to make the even simpler molecule H2²⁺. Since this would consist only of two protons whose electrostatic charges repel each other at all distances, it is clear that such a molecule cannot exist; something more than two nuclei is required for bonding to occur.
In H2+ we have a third particle, the electron. The effect of this electron will depend on its location with respect to the two nuclei. If the electron is in the space between the two nuclei (the binding region), it will attract both protons toward itself, and thus toward each other. If the total attraction energy exceeds the internuclear repulsion, there will be a net bonding effect and the molecule will be stable. If, on the other hand, the electron is off to one side (in an antibinding region), it will attract both nuclei, but it will attract the closer one much more strongly, owing to the inverse-square nature of Coulomb’s law. As a consequence, the electron will now actively work against bonding by helping to push the two nuclei apart.
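To see this numerically, here is a minimal classical sketch (a toy model of our own devising, not the quantum treatment: point charges on a line, with the Coulomb constant and all charge magnitudes set to 1) comparing the net effect on the internuclear separation when the electron sits at the midpoint versus outside one of the protons:

```python
# Toy electrostatic model of H2+: two protons at x = -1 and x = +1,
# one electron placed either between them or off to one side.
def axial_force(x_on, x_src, q_on, q_src):
    """Coulomb force along x on the charge at x_on due to the charge at x_src."""
    d = x_on - x_src
    return q_on * q_src * d / abs(d)**3

def net_force_on_proton(xp, x_other, xe):
    """Repulsion from the other proton plus attraction toward the electron."""
    return axial_force(xp, x_other, +1, +1) + axial_force(xp, xe, +1, -1)

for xe, label in [(0.0, "binding region (midpoint)"),
                  (2.5, "antibinding region (beyond one proton)")]:
    f_right = net_force_on_proton(+1.0, -1.0, xe)
    f_left = net_force_on_proton(-1.0, +1.0, xe)
    # Positive (f_right - f_left): the protons are being pushed apart;
    # negative: they are being pulled together.
    print(label, f_right - f_left)   # -1.5 (bonding) vs. +0.86 (antibonding)
```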
Polar covalent model
A purely covalent bond can only be guaranteed when the electronegativities (electron-attracting powers) of the two atoms are identical. When atoms having different electronegativities are joined, the electrons shared between them will be displaced toward the more electronegative atom, conferring a polarity on the bond which can be described in terms of percent ionic character.
Coulombic model
This is an extension of the ionic model to compounds that are ordinarily considered to be non-ionic. Combined hydrogen is always considered to exist as the hydride ion H–, so that methane can be treated as if it were C4+(H–)4.
This is not as bizarre as it might seem at first if you recall that the proton is vanishingly small, so that it is essentially embedded in an electron pair when it is joined to another atom in a covalent bond. This model, which is not as well known as it deserves to be, has surprisingly good predictive power, both as to bond energies and structures.
VSEPR model
The “valence shell electron pair repulsion” (VSEPR) model is not so much a model of chemical bonding as a scheme for explaining the shapes of molecules. It is based on the quantum mechanical view that bonds represent electron clouds— physical regions of negative electric charge that repel each other and thus try to stay as far apart as possible. We will explore this concept in much greater detail in a later unit.
Quantum-mechanical models
These models of bonding take into account the fact that a particle as light as the electron cannot really be said to be in any single location. The best we can do is define a region of space in which the probability of finding the electron has some arbitrary value which will always be less than unity. The shape of this volume of space is called an orbital and is defined by a mathematical function that relates the probability to the (x,y,z) coordinates of the molecule.
Like other models of bonding, the quantum models attempt to show how more electrons can be simultaneously close to more nuclei. Instead of doing so through purely geometrical arguments, they attempt this by predicting the nature of the orbitals which the valence electrons occupy in joined atoms.
The hybrid orbital model
This was developed by Linus Pauling in 1931 and was the first quantum-based model of bonding. It is based on the premise that if the atomic s,p, and d orbitals occupied by the valence electrons of adjacent atoms are combined in a suitable way, the hybrid orbitals that result will have the character and directional properties that are consistent with the bonding pattern in the molecule. The rules for bringing about these combinations turn out to be remarkably simple, so once they were worked out it became possible to use this model to predict the bonding behavior in a wide variety of molecules. The hybrid orbital model is most usefully applied to the p-block elements in the first few rows of the periodic table, and is especially important in organic chemistry.
The molecular orbital model
This model takes a more fundamental approach by regarding a molecule as a collection of valence electrons and positive cores. Just as the nature of atomic orbitals derives from the spherical symmetry of the atom, so will the properties of these new molecular orbitals be controlled by the interaction of the valence electrons with the multiple positive centers of these atomic cores.
These new orbitals, unlike those of the hybrid model, are delocalized; that is, they do not “belong” to any one atom but extend over the entire region of space that encompasses the bonded atoms. The available (valence) electrons then fill these orbitals from the lowest to the highest, very much as in the Aufbau principle that you learned for working out atomic electron configurations. For small molecules (which are the only ones we will consider here), there are simple rules that govern the way that atomic orbitals transform themselves into molecular orbitals as the separate atoms are brought together. The real power of molecular orbital theory, however, comes from its mathematical formulation which lends itself to detailed predictions of bond energies and other properties.
The electron-tunneling model
A common theme uniting all of the models we have discussed is that bonding depends on the fall in potential energy that occurs when opposite charges are brought together. In the case of covalent bonds, the shared electron pair acts as a kind of “electron glue” between the joined nuclei. In 1962, however, it was shown that this assumption is not strictly correct, and that instead of being concentrated in the space between the nuclei, the electron orbitals become even more concentrated around the bonded nuclei. At the same time however, they are free to “move” between the two nuclei by a process known as tunneling.
This refers to a well-known quantum mechanical effect that allows electrons (or other particles small enough to exhibit wavelike properties) to pass (“tunnel”) through a barrier separating two closely adjacent regions of low potential energy. One result of this is that the effective volume of space available to the electron is increased, and according to the uncertainty principle this will reduce the kinetic energy of the electron.
According to this model, the bonding electrons act as a kind of fluid that concentrates in the region of each nucleus (lowering the potential energy) and at the same time is able to freely flow between them (reducing the kinetic energy). Despite its conceptual simplicity and full acknowledgment of the laws of quantum mechanics, this model is less known than it deserves to be and is unfortunately absent from most textbooks.
Learning Objectives
Make sure you thoroughly understand the following essential ideas which have been presented below.
• Define electronegativity, and describe the general way in which electronegativities depend on the location of an element in the periodic table.
• Explain how electronegativity values relate to the polar nature of a chemical bond.
• What two factors determine the magnitude of an electric dipole moment?
• Sketch structural diagrams that illustrate how the presence of polar bonds in a molecule can either increase or diminish the magnitude of the molecular dipole moment.
• Calculate the formal charge of each atom in a structure, and comment on its significance in relation to the polarity of that structure.
• Select the more likely of two or more electron-dot structures for a given species.
• Explain the difference between formal charge and oxidation number.
• Describe the limiting cases of covalent and ionic bonds. Explain what both categories of bonds have in common.
The electrons constituting a chemical bond are simultaneously attracted by the electrostatic fields of the nuclei of the two bonded atoms. In a homonuclear molecule such as O2 the bonding electrons will be shared equally by the two atoms. In general, however, differences in the sizes and nuclear charges of the atoms will cause one of them to exert a greater attraction on the bonding pair, causing the electron cloud to be displaced toward the more strongly-attracting atom.
Electronegativity
The electronegativity of an atom denotes its relative electron-attracting power in a chemical bond. It is important to understand that electronegativity is not a measurable property of an atom in the sense that ionization energies and electron affinities are, although it can be correlated with both of these properties. The actual electron-attracting power of an atom depends in part on its chemical environment (that is, on what other atoms are bonded to it), so tabulated electronegativities should be regarded as no more than predictors of the behavior of electrons, especially in more complicated molecules. There are several ways of computing electronegativities, which are expressed on an arbitrary scale. The concept of electronegativity was introduced by Linus Pauling and his 0-4 scale continues to be the one most widely used.
The 0-4 electronegativity scale of Pauling is the best known of several arbitrary scales of this kind. Electronegativity values are not directly observable, but are derived from measurable atomic properties such as ionization energy and electron affinity. The place of any atom on this scale provides a good indication of its ability to compete with another atom in attracting a shared electron pair to it, but the presence of bonds to other atoms, and of multiple- or nonbonding electron pairs, may make predictions about the nature of a given bond less reliable.
An atom that has a small electronegativity is said to be electropositive. As electronegativity maps of the periodic table show, the metallic elements are generally electropositive. The position of hydrogen in this regard is worth noting; although physically a nonmetal, much of its chemistry is metal-like.
Dipole moments
When non-identical atoms are joined in a covalent bond, the electron pair will be attracted more strongly to the atom that has the higher electronegativity. As a consequence, the electrons will not be shared equally; the center of the negative charges in the molecule will be displaced from the center of positive charge. Such bonds are said to be polar and to possess partial ionic character, and they may confer a polar nature on the molecule as a whole.
A polar molecule acts as an electric dipole which can interact with electric fields that are created artificially or that arise from nearby ions or polar molecules. Dipoles are conventionally represented as arrows pointing in the direction of the negative end. The magnitude of interaction with the electric field is given by the permanent electric dipole moment of the molecule. The dipole moment corresponding to an individual bond (or to a diatomic molecule) is given by the product of the quantity of charge displaced q and the bond length r:
$μ = q \times r$
In SI units, q is expressed in coulombs and r in meters, so μ has the dimensions of $C \cdot m$. If one entire electron charge is displaced by 100 pm (a typical bond length), then
$μ = (1.6022 \times 10^{–19}\; C) \times (10^{–10}\; m) = 1.6 \times 10^{–29}\; C \cdot m = 4.8 \;D$
The quantity denoted by D, the Debye unit, is still commonly used to express dipole moments. It was named after Peter Debye (1884-1966), the Dutch-American physicist who pioneered the study of dipole moments and of electrical interactions between particles; he won the Nobel Prize for Chemistry in 1936.
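As a quick check of the arithmetic above, here is a minimal sketch in Python; the only input not quoted in the text is the conversion factor 1 D ≈ 3.336 × 10–30 C·m:

```python
E_CHARGE = 1.6022e-19   # elementary charge, C
DEBYE = 3.336e-30       # C·m per debye

q = E_CHARGE            # one full electron charge displaced...
r = 100e-12             # ...through a typical 100 pm bond length
mu = q * r              # dipole moment in C·m
print(mu, "C·m =", mu / DEBYE, "D")   # 1.6022e-29 C·m, about 4.8 D
```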
How dipole moments are measured
When a solution of polar molecules is placed between two oppositely-charged plates, the molecules will tend to align themselves along the direction of the field. This process consumes energy, which is returned to the electrical circuit when the field is switched off; the alignment shows up as an increase in the electrical capacitance of the plates.
Measurement of the capacitance of a gas or solution is easy to carry out and serves as a means of determining the magnitude of the dipole moment of a substance.
Example $1$
Estimate the percent ionic character of the bond in the hydrogen fluoride molecule. (Literature values: the H–F bond length is 92 pm, and the measured dipole moment is 1.82 D.)
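A minimal solution sketch, assuming the literature values quoted above: the percent ionic character is the ratio of the observed moment to the moment that transferring one full electron across the bond would produce.

```python
E_CHARGE = 1.6022e-19   # C
DEBYE = 3.336e-30       # C·m per debye

r = 92e-12                                # H–F bond length, m
mu_full = E_CHARGE * r / DEBYE            # full electron transfer: about 4.4 D
mu_obs = 1.82                             # observed dipole moment, D
print(100 * mu_obs / mu_full, "% ionic")  # about 41% ionic character
```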
Dipole moments as vector sums
In molecules containing more than one polar bond, the molecular dipole moment is just the vector combination of what can be regarded as individual "bond dipole moments". Being vectors, these can reinforce or cancel each other, depending on the geometry of the molecule; it is therefore not uncommon for molecules containing polar bonds to be nonpolar overall, as in the example of carbon dioxide:
The zero dipole moment of CO2 is one of the simplest experimental methods of demonstrating the linear shape of this molecule.
H2O, by contrast, has a very large dipole moment which results from the two polar H–O components oriented at an angle of 104.5°. The nonbonding pairs on oxygen are a contributing factor to the high polarity of the water molecule. In molecules containing nonbonding electrons or multiple bonds, the electronegativity difference may not correctly predict the bond polarity. A good example of this is carbon monoxide, in which the partial negative charge resides on the carbon, as predicted by its negative formal charge (below).
Electron densities in a molecule (and the dipole moments that unbalanced electron distributions can produce) are now easily calculated by molecular modeling programs. In this example, for methanol CH3OH, the blue area centered on hydrogen represents a positive charge, the red area centered where we expect the lone pairs to be located represents a negative charge, while the light green around methyl is approximately neutral. The manner in which the individual bonds contribute to the dipole moment of the molecule is nicely illustrated by the series of chloromethanes:
(Bear in mind that all four positions around the carbon atom are equivalent in this tetrahedral molecule, so there are only four chloromethanes.)
Formal charge and oxidation number
Although the total number of valence electrons in a molecule is easily calculated, there is not always a simple and unambiguous way of determining how many reside in a particular bond or as non-bonding pairs on a particular atom. For example, one can write valid Lewis octet structures for carbon monoxide showing either a double or triple bond between the two atoms, depending on how many nonbonding pairs are placed on each: C::O::: and :C:::O: (see Problem Example 3 below). The choice between structures such as these is usually easy to make on the principle that the more electronegative atom tends to surround itself with the greater number of electrons. In cases where the distinction between competing structures is not all that clear, an arbitrarily-calculated quantity known as the formal charge can often serve as a guide.
The formal charge on an atom is the electric charge it would have if all bonding electrons were shared equally with its bonded neighbors.
The formal charge on an atom is calculated by the following formula:
$FC = (\text{core charge}) - (\text{number of unshared electrons}) - (\text{number of bonds})$
in which the core charge is the electric charge the atom would have if all its valence electrons were removed, and the number of bonds counts each shared electron pair once. In simple cases, the formal charge can be worked out visually directly from the Lewis structure, as is illustrated farther on.
Example $2$: Formal Charge
Find the formal charges of all the atoms in the sulfuric acid structure shown here.
Solution
The atoms here are hydrogen, sulfur, and double- and single-bonded oxygens. Remember that a double bond is made up of two electron-pairs.
• hydrogen: FC = 1 – 0 – 1 = 0
• sulfur: FC = 6 – 0 – 6 = 0
• hydroxyl oxygen: FC = 6 – 4 – 2 = 0
• double-bonded oxygen: FC = 6 – 4 – 2 = 0
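The bookkeeping above is simple enough to capture in a short function; this sketch (the function name is ours, not part of any library) just re-checks the four results of this example:

```python
def formal_charge(core_charge, unshared, bonds):
    """FC = (core charge) - (unshared electrons) - (number of bonds)."""
    return core_charge - unshared - bonds

print(formal_charge(1, 0, 1))   # hydrogen: 0
print(formal_charge(6, 0, 6))   # sulfur, with two single and two double bonds: 0
print(formal_charge(6, 4, 2))   # hydroxyl oxygen: 0
print(formal_charge(6, 4, 2))   # double-bonded oxygen: 0
```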
Using formal charge to select the best Lewis structure
The general rule for choosing between alternative structures is that the one involving the smallest formal charges is most favored, although the following example shows that this is not always the case.
Example $3$: Carbon Monoxide
Write out some structures for carbon monoxide CO, both those that do and do not obey the octet rule, and select the "best" on the basis of the formal charges.
Solution
Structure that obeys the octet rule:
a) For :C:::O: Carbon: 4 – 2 – 3 = –1; Oxygen: 6 – 2 – 3 = +1
Structures that do not obey the octet rule (for carbon):
b) For :C:O::: Carbon: 4 – 2 – 1 = +1; Oxygen: 6 – 6 – 1 = –1
c) For :C::O:: Carbon: 4 – 2 – 2 = 0; Oxygen: 6 – 4 – 2 = 0
Comment: All three structures are acceptable (because the formal charges add up to zero for this neutral molecule) and contribute to the overall structure of carbon monoxide, although not equally. Both experiment and more advanced models show that the triple-bonded form (a) predominates. Formal charge, which is no more than a bookkeeping scheme for electrons, is by itself unable to predict this fact.
In a species such as the thiocyanate ion $SCN^-$ in which two structures having the same minimal formal charges can be written, we would expect the one in which the negative charge is on the more electronegative atom to predominate.
The electrons in the structures of the top row are the valence electrons for each atom; an additional electron (purple) completes the nitrogen octet in this negative ion. The electrons in the bottom row are divided equally between the bonded atoms; the difference between these numbers and those above gives the formal charges.
Formal charge can also help answer the question “where is the charge located?” that is frequently asked about polyatomic ions. Thus by writing out the Lewis structure for the ammonium ion NH4+, you should be able to convince yourself that the nitrogen atom has a formal charge of +1 and each of the hydrogens has 0, so we can say that the positive charge is localized on the central atom.
Oxidation number
This is another arbitrary way of characterizing atoms in molecules. In contrast to formal charge, in which the electrons in a bond are assumed to be shared equally, oxidation number is the electric charge an atom would have if the bonding electrons were assigned exclusively to the more electronegative atom. Oxidation number serves mainly as a tool for keeping track of electrons in reactions in which they are exchanged between reactants, and for characterizing the “combining power” of an atom in a molecule or ion.
The following diagram compares the way electrons are assigned to atoms in calculating formal charge and oxidation number in carbon monoxide.
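The arithmetic behind that comparison, for the triple-bonded structure :C:::O: (one lone pair on each atom, six shared electrons), can be spelled out as follows:

```python
shared, lone = 6, 2   # triple bond; one lone pair on each atom

# Formal charge: each atom is credited with half of the shared electrons.
fc_C = 4 - (lone + shared // 2)   # -1
fc_O = 6 - (lone + shared // 2)   # +1

# Oxidation number: all shared electrons go to the more electronegative O.
on_C = 4 - lone                   # +2
on_O = 6 - (lone + shared)        # -2
print(fc_C, fc_O, on_C, on_O)
```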
Ionic compounds
The shared-electron pair model introduced by G.N. Lewis showed how chemical bonds could form in the absence of electrostatic attraction between oppositely-charged ions. As such, it has become the most popular and generally useful model of bonding in all substances other than metals. A chemical bond occurs when electrons are simultaneously attracted to two nuclei, thus acting to bind them together in an energetically-stable arrangement. The covalent bond is formed when two atoms are able to share a pair of electrons:
In general, however, different kinds of atoms exert different degrees of attraction on their electrons, so in most cases the sharing will not be equal. One can even imagine an extreme case in which the sharing is so unequal that the resulting "molecule" is simply a pair of ions:
The resulting substance is sometimes said to contain an ionic bond. Indeed, the properties of a number of compounds can be adequately explained using the ionic model. But does this mean that there are really two kinds of chemical bonds, ionic and covalent? According to the ionic electrostatic model, solids such as NaCl consist of positive and negative ions arranged in a crystal lattice. Each ion is attracted to neighboring ions of opposite charge, and is repelled by ions of like charge; this combination of attractions and repulsions, acting in all directions, causes the ion to be tightly fixed in its own location in the crystal lattice.
Since electrostatic forces are nondirectional, the structure of an ionic solid is determined purely by geometry: two kinds of ions, each with its own radius, will fall into whatever repeating pattern will achieve the lowest possible potential energy. Surprisingly, there are only a small number of possible structures; one of the most common of these, the simple cubic lattice of NaCl, is shown here.
Is there such a thing as an ionic bond?
When two elements form an ionic compound, is an electron really lost by one atom and transferred to the other one? In order to deal with this question, consider the data on the ionic solid LiF. The average radius of the neutral Li atom is about 2.52 Å. Now if this Li atom reacts with an atom of F to form LiF, what is the average distance between the Li nucleus and the electron it has “lost” to the fluorine atom? The answer is 1.56 Å; the electron is now closer to the lithium nucleus than it was in neutral lithium!
So the answer to the above question is both yes and no: yes, the electron that was in the 2s orbital of Li is now within the grasp of a fluorine 2p orbital, but no, the electron is now even closer to the Li nucleus than before, so how can it be “lost”? The one thing that is inarguably true about LiF is that there are more electrons closer to positive nuclei than there are in the separated Li and F atoms. But this is just the rule we stated at the beginning of this unit: chemical bonds form when electrons can be simultaneously near two or more nuclei.
It is obvious that the electron-pair bond brings about this situation, and this is the reason for the stability of the covalent bond. What is not so obvious (until you look at the numbers such as are quoted for LiF above) is that the “ionic” bond results in the same condition; even in the most highly ionic compounds, both electrons are close to both nuclei, and the resulting mutual attractions bind the nuclei together. This being the case, is there really any fundamental difference between the ionic and covalent bond?
The answer, according to modern chemical thinking is probably “no”; in fact, there is some question as to whether it is realistic to consider that these solids consist of “ions” in the usual sense. The preferred picture that seems to be emerging is one in which the electron orbitals of adjacent atom pairs are simply skewed so as to place more electron density around the “negative” element than around the “positive” one.
This being said, it must be reiterated that the ionic model of bonding is a useful one for many purposes, and there is nothing wrong with using the term “ionic bond” to describe the interactions between the atoms in the very small class of “ionic solids” such as LiF and NaCl.
"Covalent, ionic or metallic" is an oversimplification!
If there is no such thing as a “completely ionic” bond, can we have one that is completely covalent? The answer is yes, if the two nuclei have equal electron attracting powers. This situation is guaranteed to be the case with homonuclear diatomic molecules-- molecules consisting of two identical atoms. Thus in Cl2, O2, and H2, electron sharing between the two identical atoms must be exactly even; in such molecules, the center of positive charge corresponds exactly to the center of negative charge: halfway between the two nuclei.
Categorizing all chemical bonds as either ionic, covalent, or metallic is a gross oversimplification; as this diagram shows, there are examples of substances that exhibit varying degrees of all three bonding characteristics.
Dative (coordinate) covalent bonds
In most covalent bonds, we think of the electron pair as having a dual parentage, one electron being contributed by each atom. There are, however, many cases in which both electrons come from only one atom. This can happen if the donor atom has a non-bonding pair of electrons and the acceptor atom has a completely empty orbital that can accommodate them. These are called dative or coordinate covalent bonds.
This is the case, for example, with boron trifluoride and ammonia. In BF3, one of the 2p orbitals is unoccupied and can accommodate the lone pair on the nitrogen atom of ammonia. The electron acceptor, BF3, acts as a Lewis acid here, and NH3 is the Lewis base. Bonds of this type (sometimes known as coordinate covalent or dative bonds) tend to be rather weak (usually 50-200 kJ/mol); in many cases the two joined units retain sufficient individuality to justify writing the formula as a molecular complex or adduct.
Learning Objectives
Make sure you thoroughly understand the following essential ideas:
• Describe the manner in which repulsion between electron-pairs affects the orientation of the regions that contain them.
• Define coordination geometry, and describe the particular geometry associated with electron-pair repulsion between two, three, four, five, or six identical bonding regions.
• Explain the distinction between coordination geometry and molecular geometry, and provide an illustration based on the structure of water or ammonia.
• Draw a diagram of a tetrahedral or octahedral molecule.
The Lewis electron-dot structures you have learned to draw have no geometrical significance other than depicting the order in which the various atoms are connected to one another. Nevertheless, a slight extension of the simple shared-electron pair concept is capable of rationalizing and predicting the geometry of the bonds around a given atom in a wide variety of situations.
Electron-pair repulsion
The valence shell electron pair repulsion (VSEPR) model that we describe here focuses on the bonding and nonbonding electron pairs present in the outermost (“valence”) shell of an atom that connects with two or more other atoms. Like all electrons, these occupy regions of space which we can visualize as electron clouds— regions of negative electric charge, also known as orbitals— whose precise character can be left to more detailed theories.
The covalent model of chemical bonding assumes that the electron pairs responsible for bonding are concentrated into the region of space between the bonded atoms. The fundamental idea of VSEPR theory is that these regions of negative electric charge will repel each other, causing them (and thus the chemical bonds that they form) to stay as far apart as possible. Thus the two electron clouds contained in a simple triatomic molecule AX2 will extend out in opposite directions; an angular separation of 180° places the two bonding orbitals as far away from each other as they can get. We therefore expect the two chemical bonds to extend in opposite directions, producing a linear molecule.
If the central atom also contains one or more pairs of nonbonding electrons, these additional regions of negative charge will behave very much like those associated with the bonded atoms. The orbitals containing the various bonding and nonbonding pairs in the valence shell will extend out from the central atom in directions that minimize their mutual repulsions. If the central atom possesses partially occupied d-orbitals, it may be able to accommodate five or six electron pairs, forming what is sometimes called an “expanded octet”.
Digonal and trigonal coordination
Linear molecules
As we stated above, a simple triatomic molecule of the type \(AX_2\) has its two bonding orbitals 180° apart, producing a molecule that we describe as having linear geometry. Examples of triatomic molecules for which VSEPR theory predicts a linear shape are BeCl2 (which, you will notice, doesn't possess enough electrons to conform to the octet rule) and CO2. If you write out the electron dot formula for carbon dioxide, you will see that the C-O bonds are double bonds. This makes no difference to VSEPR theory; the central carbon atom is still joined to two other atoms, and the electron clouds that connect the two oxygen atoms are 180° apart.
Trigonal molecules
In an AX3 molecule such as BF3, there are three regions of electron density extending out from the central atom. The repulsion between these will be at a minimum when the angle between any two is (360° ÷ 3) = 120°. This requires that all four atoms be in the same plane; the resulting shape is called trigonal planar, or simply trigonal.
Tetrahedral coordination
Methane, CH4, contains a carbon atom bonded to four hydrogens. What bond angle would lead to the greatest possible separation between the electron clouds associated with these bonds? In analogy with the preceding two cases, where the bond angles were 360°/2=180° and 360°/3=120°, you might guess 360°/4=90°; if so, you would be wrong. The latter calculation would be correct if all the atoms were constrained to be in the same plane (we will see cases where this happens later), but here there is no such restriction. Consequently, the four equivalent bonds will point in four geometrically equivalent directions in three dimensions corresponding to the four corners of a tetrahedron centered on the carbon atom. The angle between any two bonds will be 109.5°. This is called tetrahedral coordination.
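Where does 109.5° come from? Four bonds pointing toward alternate corners of a cube are tetrahedral, and the angle between any two of them, seen from the center of the cube, is arccos(–1/3). A short calculation confirms this:

```python
import math

# Two alternate corners of a cube, seen from its center at the origin:
a, b = (1, 1, 1), (1, -1, -1)
cos_angle = sum(x * y for x, y in zip(a, b)) / 3   # each vector has length sqrt(3)
print(math.degrees(math.acos(cos_angle)))          # 109.47°, the tetrahedral angle
```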
This is the most important coordination geometry in Chemistry: it is imperative that you be able to sketch at least a crude perspective view of a tetrahedral molecule.
It is interesting to note that the tetrahedral coordination of carbon in most of its organic compounds was worked out in the nineteenth century on purely geometrical grounds and chemical evidence, long before direct methods of determining molecular shapes were developed. For example, it was noted that there is only one dichloromethane, CH2Cl2.
If the coordination around the carbon were square, then there would have to be two isomers of CH2Cl2, as shown in the pair of structures here. The distances between the two chlorine atoms would be different in the two forms, giving rise to differences in physical properties that would allow the two isomers to be distinguished and separated.
The existence of only one kind of CH2Cl2 molecule means that all four positions surrounding the carbon atom are geometrically equivalent, which requires a tetrahedral coordination geometry. If you study the tetrahedral figure closely, you may be able to convince yourself that it represents the connectivity shown on both of the "square" structures at the top. A three-dimensional ball-and-stick mechanical model would illustrate this very clearly.
Tetrahedrally-coordinated carbon chains
A simple hydrocarbon chain can be regarded as a series of these carbon tetrahedra joined end-to-end.
Similar alkane chains having the general formula H3C–(CH2)n–CH3 (or CnH2n+2) can be built up; a view of pentane, C5H12, is shown below.
Notice that these "straight chain hydrocarbons" (as they are often known) have a carbon "backbone" structure that is not really straight, as is illustrated by the zig-zag figure that is frequently used to denote hydrocarbon structures.
Coordination geometry and molecular geometry
Coordination number refers to the number of electron pairs that surround a given atom; we often refer to this atom as the central atom even if this atom is not really located at the geometrical center of the molecule. If all of the electron pairs surrounding the central atom are shared with neighboring atoms, then the coordination geometry is the same as the molecular geometry. The application of VSEPR theory then reduces to the simple problem of naming (and visualizing) the geometric shapes associated with various numbers of points surrounding a central point (the central atom) at the greatest possible angles. Both classes of geometry are named after the shapes of the imaginary geometric figures (mostly regular solid polygons) that would be centered on the central atom and would have an electron pair at each vertex.
If one or more of the electron pairs surrounding the central atom is not shared with a neighboring atom (that is, if it is a lone pair), then the molecular geometry is simpler than the coordination geometry, and it can be worked out by inspecting a sketch of the coordination geometry figure.
Tetrahedral coordination with lone pairs
In the examples we have discussed so far, the shape of the molecule is defined by the coordination geometry; thus the carbon in methane is tetrahedrally coordinated, and there is a hydrogen at each corner of the tetrahedron, so the molecular shape is also tetrahedral.
The AXE Method
It is common practice to represent bonding patterns by "generic" formulas such as \(AX_4\), \(AX_2E_2\), etc., in which "X" stands for bonding pairs and "E" denotes lone pairs. This convention is known as the "AXE Method."
The bonding geometry will not be tetrahedral when the valence shell of the central atom contains nonbonding electrons, however. The reason is that the nonbonding electrons are also in orbitals that occupy space and repel the other orbitals. This means that in figuring the coordination number around the central atom, we must count both the bonded atoms and the nonbonding pairs.
The water molecule: \(AX_2E_2\)
In the water molecule, the central atom is O, and the Lewis electron dot formula predicts that there will be two pairs of nonbonding electrons. The oxygen atom will therefore be tetrahedrally coordinated, meaning that it sits at the center of the tetrahedron as shown below.
Two of the coordination positions are occupied by the shared electron-pairs that constitute the O–H bonds, and the other two by the non-bonding pairs. Thus although the oxygen atom is tetrahedrally coordinated, the bonding geometry (shape) of the H2O molecule is described as bent.
There is an important difference between bonding and non-bonding electron orbitals. Because a nonbonding orbital has no atomic nucleus at its far end to draw the electron cloud toward it, the charge in such an orbital will be concentrated closer to the central atom. As a consequence, nonbonding orbitals exert more repulsion on other orbitals than do bonding orbitals. Thus in H2O, the two nonbonding orbitals push the bonding orbitals closer together, making the H–O–H angle 104.5° instead of the tetrahedral angle of 109.5°.
Ammonia: \(AX_3E\)
The electron-dot structure of NH3 places one pair of nonbonding electrons in the valence shell of the nitrogen atom. This means that there are three bonded atoms and one lone pair, for a coordination number of four around the nitrogen, the same as occurs in H2O. We can therefore predict that the three hydrogen atoms will lie at the corners of a tetrahedron centered on the nitrogen atom. The lone pair orbital will point toward the fourth corner of the tetrahedron, but since that position will be vacant, the NH3 molecule itself cannot be tetrahedral. Instead, it assumes a pyramidal shape. More precisely, the shape is that of a trigonal pyramid (i.e., a pyramid having a triangular base). The hydrogen atoms are all in the same plane, with the nitrogen above (or below, or to the side; molecules of course don’t know anything about “above” or “below”!) The fatter orbital containing the non-bonding electrons pushes the bonding orbitals together slightly, making the H–N–H bond angles about 107°.
Computer-generated image of NH3 molecule showing electrostatic potential (red=+, blue=–.)
Central atoms with five bonds
Compounds of the type AX5 are formed by some of the elements in Group 15 of the periodic table; PCl5 and AsF5 are examples.
In what directions can five electron pairs arrange themselves in space so as to minimize their mutual repulsions? In the cases of coordination numbers 2, 3, 4, and 6, we could imagine that the electron pairs distributed themselves as far apart as possible on the surface of a sphere; for the two higher numbers, the resulting shapes correspond to the regular polyhedron having the same number of sides. The problem with coordination number 5 is that there is no such thing as a regular polyhedron with five vertices.
Regular Polyhedra
The fact that there are only five regular convex polyhedra, known as the Platonic solids, has been known since antiquity: the tetrahedron (4 triangular faces), octahedron (8 triangular faces), icosahedron (20 triangular faces), cube (6 square faces), and dodecahedron (12 pentagonal faces). Chemical examples of all are known; the first icosahedral molecule, \(LaC_{60}\) (in which the La atom has 20 nearest C neighbors) was prepared in 1986.
Besides the five regular solids, there can be 15 semi-regular isogonal solids in which the faces have different shapes, but the vertex angles are all the same. These geometrical principles are quite important in modern structural chemistry.
The shape of PCl5 and similar molecules is a trigonal bipyramid. This consists simply of two triangular-base pyramids joined base-to-base. Three of the chlorine atoms are in the plane of the central phosphorus atom (equatorial positions), while the other two atoms are above and below this plane (axial positions). Equatorial and axial atoms have different geometrical relationships to their neighbors, and thus differ slightly in their chemical behavior.
In 5-coordinated molecules containing lone pairs, these non-bonding orbitals (which you will recall are closer to the central atom and thus more likely to be repelled by other orbitals) will preferentially reside in the equatorial plane. This will place them at 90° angles with respect to no more than two axially-oriented bonding orbitals.
Using this reasoning, we can predict that an AX4E molecule (that is, a molecule in which the central atom A is coordinated to four other atoms “X” and to one nonbonding electron pair) such as SF4 will have a “see-saw” shape; substitution of more nonbonding pairs for bonded atoms reduces the triangular bipyramid coordination to even simpler molecular shapes, as shown below.
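These standard VSEPR assignments for 5-coordination can be collected in one place (the example molecules are conventional illustrations of each shape):

```python
# Shapes obtained as lone pairs (E) replace bonded atoms (X) in
# trigonal-bipyramidal coordination:
shapes = {
    "AX5":   "trigonal bipyramid (PCl5)",
    "AX4E":  "see-saw (SF4)",
    "AX3E2": "T-shaped (ClF3)",
    "AX2E3": "linear (XeF2)",
}
for pattern, shape in shapes.items():
    print(pattern, "->", shape)
```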
Octahedral coordination
Just as four electron pairs experience the minimum repulsion when they are directed toward the corners of a tetrahedron, six electron pairs will try to point toward the corners of an octahedron. An octahedron is not as complex a shape as its name might imply; it is simply two square-based pyramids joined base to base. You should be able to sketch this shape as well as that of the tetrahedron.
The shaded plane shown in this octahedrally-coordinated molecule is only one of three equivalent planes defined by a four-fold symmetry axis. All the ligands are geometrically equivalent; there are no separate axial and equatorial positions in an AX6 molecule.
At first, you might think that a coordination number of six is highly unusual; it certainly violates the octet rule, and there are only a few molecules (SF6 is one) where the central atom is hexavalent. It turns out, however, that this is one of the most commonly encountered coordination numbers in inorganic chemistry. There are two main reasons for this:
• Many transition metal ions form coordinate covalent bonds with lone-pair electron donor atoms such as N (in NH3) and O (in H2O). Since transition elements can have an outer configuration of d10s2, up to six electron pairs can be accommodated around the central atom. A coordination number of 6 is therefore quite common in transition metal hydrates, such as [Fe(H2O)6]3+.
• Although the central atom of most molecules is bonded to fewer than six other atoms, there is often a sufficient number of lone pair electrons to bring the total number of electron pairs to six.
Octahedral coordination with lone pairs
There are well-known examples of 6-coordinate central atoms with 1, 2, and 3 lone pairs: AX5E (square pyramidal), AX4E2 (square planar), and AX3E3 (T-shaped), respectively. Thus all three of the molecules whose shapes are depicted below possess octahedral coordination around the central atom. Note also that the orientation of the shaded planes shown in the two rightmost images is arbitrary; since all six vertices of an octahedron are identical, the planes could just as well be drawn in any of the three possible vertical orientations.
Summary of VSEPR theory
The VSEPR model is an extraordinarily powerful one, considering its great simplicity. Its application to predicting molecular structures can be summarized as follows:
1. Electron pairs surrounding a central atom repel each other; this repulsion will be minimized if the orbitals containing these electron pairs point as far away from each other as possible.
2. The coordination geometry around the central atom corresponds to the polyhedron whose number of vertices is equal to the number of surrounding electron pairs (coordination number). Except for the special case of 5, and the trivial cases of 2 and 3, the shape will be one of the regular polyhedra.
3. If some of the electron pairs are nonbonding, the shape of the molecule will be simpler than that of the coordination polyhedron.
4. Orbitals that contain nonbonding electrons are more concentrated near the central atom, and therefore offer more repulsion than bonding pairs to other orbitals.
While VSEPR theory is quite good at predicting the general shapes of most molecules, it cannot yield exact details. For example, it does not explain why the bond angle in H2O is 104.5°, but that in H2S is about 90°. This is not surprising, considering that the emphasis is on electronic repulsions, without regard to the detailed nature of the orbitals containing the electrons, and thus of the bonds themselves.
The Valence Shell Electron Pair Repulsion theory was developed by Ronald Gillespie of McMaster University (Hamilton, Ontario, Canada) and Ronald Nyholm (University College, London). It is remarkable that what seems to be a logical extension of the 1916 Lewis shared-electron pair model of bonding took so long to be formulated; it was first presented in the authors' classic article Inorganic Stereochemistry, published in the 1957 Chemical Society of London Quarterly Reviews (Vol. 11, p. 339). Although it post-dates the more complete quantum mechanical models, it is easy to grasp and within a decade had become a staple of every first-year college chemistry course.
Learning Objectives
• Explain why the sharing of atomic orbitals (as implied in the Lewis model) cannot adequately account for the observed bonding patterns in simple molecules.
• Sketch out a diagram illustrating how the plots of atomic s- and p- orbital wave functions give rise to a pair of hybrid orbitals.
• Draw "orbital box" diagrams showing how combinations of an atomic s orbital and various numbers of p orbitals create sp, sp2, and sp3 hybrid orbitals.
• Show how hybrid orbitals are involved in the molecules methane, water, and ammonia.
As useful and appealing as the concept of the shared-electron pair bond is, it raises a somewhat troubling question that we must sooner or later face: what is the nature of the orbitals in which the shared electrons are contained? Up until now, we have been tacitly assuming that each valence electron occupies the same kind of atomic orbital as it did in the isolated atom. As we shall see below, this assumption very quickly leads us into difficulties.
Atomic orbitals alone do not work for Molecules
Consider how we might explain the bonding in a compound of divalent beryllium, such as beryllium hydride, BeH2. The beryllium atom, with only four electrons, has a configuration of 1s22s2. Note that the two electrons in the 2s orbital have opposite spins and constitute a stable pair that has no tendency to interact with unpaired electrons on other atoms.
The only way that we can obtain two unpaired electrons for bonding in beryllium is to promote one of the 2s electrons to the 2p level. However, the energy required to produce this excited-state atom would be sufficiently great to discourage bond formation; yet Be is observed to form reasonably stable bonds with other atoms. Moreover, the two bonds in BeH2 and similar molecules are completely equivalent; this would not be the case if the electrons in the two bonds shared Be orbitals of different types, as in the "excited state" diagram above.
These facts suggest that it is incorrect to assume that the distribution of valence electrons that are shared with other atoms can be described by atomic-type s, p, and d orbitals at all.
Remember that these different orbitals arise in the first place from the interaction of the electron with the single central electrostatic force field associated with the positive nucleus. An outer-shell electron in a bonded atom will be under the influence of a force field emanating from two positive nuclei, so we would expect the orbitals in the bonded atoms to have a somewhat different character from those in free atoms. In fact, as far as valence electrons are concerned, we can throw out the concept of atomic orbital altogether and reassign the electrons to a new set of molecular orbitals that are characteristic of each molecular configuration. This approach is indeed valid, but we will defer a discussion of it until a later unit.
For now, we will look at a less-radical model that starts out with the familiar valence-shell atomic orbitals, and allows them to combine to form hybrid orbitals whose shapes conform quite well to the bonding geometry that we observe in a wide variety of molecules.
What are hybrid orbitals?
First, recall that the electron, being a quantum particle, cannot have a distinct location; the most we can do is define the region of space around the nucleus in which the probability of finding the electron exceeds some arbitrary value, such as 90% or 99%. This region of space is the orbital. Because of the wavelike character of matter, the orbital corresponds to a standing wave pattern in 3-dimensional space which we can often represent more clearly in 2-dimensional cross section. The quantity that is varying (“waving”) is a number denoted by ψ (psi) whose value varies from point to point according to the wave function for that particular orbital.
Orbitals of all types are simply mathematical functions that describe particular standing-wave patterns that can be plotted on a graph but have no physical reality of their own. Because of their wavelike nature, two or more orbitals (i.e., two or more functions ψ) can be combined both in-phase and out-of-phase to yield a pair of resultant orbitals which, to be useful, must have squares that describe actual electron distributions in the atom or molecule.
The s,p,d and f orbitals that you are familiar with are the most convenient ones for describing the electron distribution in isolated atoms because assignment of electrons to them according to the usual rules always yields an overall function Ψ2 that predicts a spherically symmetric electron distribution, consistent with all physical evidence that atoms are in fact spherical. For atoms having more than one electron, however, the s,p,d, f basis set is only one of many possible ways of arriving at the same observed electron distribution. We use it not because it is unique, but because it is the simplest.
In the case of a molecule such as BeH2, we know from experimental evidence that the molecule is linear and therefore the electron density surrounding the central atom is no longer spherical, but must be concentrated along two directions 180° apart, and we need to construct a function Ψ2 having these geometrical properties. There are any number of ways of doing this, but it is convenient to use a particular set of functions ψ (which we call hybrid orbitals) that are constructed by combining the atomic s, p, d, and f functions that are already familiar to us.
You should understand that hybridization is not a physical phenomenon; it is merely a mathematical operation that combines the atomic orbitals we are familiar with in such a way that the new (hybrid) orbitals possess the geometric and other properties that are reasonably consistent with what we observe in a wide range (but certainly not in all) molecules. In other words, hybrid orbitals are abstractions that describe reality fairly well in certain classes of molecules (and fortunately, in much of the very large class of organic substances) and are therefore a useful means of organizing a large body of chemical knowledge... but they are far from infallible.
This approach, which assumes that the orbitals remain more or less localized on one central atom, is the basis of the theory which was developed in the early 1930s, mainly by Linus Pauling.
Linus Pauling
Linus Pauling (1901-1994) was the most famous American chemist of the 20th century and the author of the classic book The Nature of the Chemical Bond. His early work pioneered the application of X-ray diffraction to determine the structure of complex molecules; he then went on to apply quantum theory to explain these observations and predict the bonding patterns and energies of new molecules. Pauling, who spent most of his career at Cal Tech, won the Nobel Prize for Chemistry in 1954 and the Peace Prize in 1962.
"In December 1930 Pauling had his famous 'breakthrough' where, in a rush of inspiration, he 'stayed up all night, making, writing out, solving the equations, which were so simple that I could solve them in a few minutes'. This flurry of calculations would eventually become the first of Pauling's germinal series of papers on the nature of the chemical bond. 'I just kept getting more and more euphorious as time went by', Pauling would recall. "
Although the hybrid orbital approach has proven very powerful (especially in organic chemistry), it does have its limitations. For example, it predicts that both H2O and H2S will be tetrahedrally coordinated bent molecules with bond angles slightly smaller than the tetrahedral angle of 109.5° owing to greater repulsion by the nonbonding pair. This description fits water (104.5°) quite well, but the bond angle in hydrogen sulfide is only 92°, suggesting that atomic p orbitals (which are 90° apart) provide a better description of the electron distribution about the sulfur atom than do sp3 hybrid orbitals.
The hybrid orbital model is fairly simple to apply and understand, but it is best regarded as one special way of looking at a molecule that can often be misleading. Another viewpoint, called the molecular orbital theory, offers us a complementary perspective that it is important to have if we wish to develop a really thorough understanding of chemical bonding in a wider range of molecules.
Constructing hybrid orbitals
Below: "Constructive" and "destructive" combinations of 2p and 2s wave functions (line plots) give rise to the sp hybrid function shown at the right. The solid figures depict the corresponding probability functions ψ2.
Hybrid orbitals are constructed by combining the ψ functions for atomic orbitals. Because wave patterns can combine both constructively and destructively, a pair of atomic wave functions such as the s- and p- orbitals shown at the left can combine in two ways, yielding the sp hybrids shown.
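In terms of wave functions, the two combinations take the standard normalized form (assuming the atomic 2s and 2p functions are themselves normalized):
$\psi_{sp\pm} = \frac{1}{\sqrt{2}}\left(\psi_{2s} \pm \psi_{2p}\right)$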
From an energy standpoint, we can represent the transition from atomic s- and p-orbitals to an sp hybrid orbital in this way:
Notice here that (1) the total number of orbitals is conserved, and (2) the two sp hybrid orbitals are intermediate in energy between their parent atomic orbitals. In terms of plots of the actual orbital functions ψ we can represent the process as follows:
The probability of finding the electron at any location is given not by ψ, but by ψ2, whose form is roughly conveyed by the solid figures in this illustration.
Hybrids derived from atomic s- and p orbitals
Digonal bonding: sp-hybrid orbitals
Returning to the example of BeH2, we can compare the valence orbitals in the free atoms with those in the beryllium hydride molecule as shown here. It is, of course, the overlap between the hydrogen-1s orbitals and the two lobes of the beryllium sp-hybrid orbitals that constitutes the two Be—H "bonds" in this molecule. Notice that whereas a single p-orbital has lobes on both sides of the atom, a single sp-hybrid has most of its electron density on one side, with a minor and more spherical lobe on the other side. This minor lobe is centered on the central atom (some textbook illustrations don't get this right.)
As far as the shape of the molecule is concerned, the result is exactly the same as predicted by the VSEPR model (although hybrid orbital theory predicts the same result in a more fundamental way.) We can expect any central atom that uses sp-hybridization in bonding to exhibit linear geometry when incorporated into a molecule.
Trigonal (sp2) hybridization
We can now go on to apply the same ideas to some other simple molecules. In boron trifluoride, for example, we start with the boron atom, which has three outer-shell electrons in its normal or ground state, and three fluorine atoms, each with seven outer electrons. As is shown in this configuration diagram, one of the three boron electrons is unpaired in the ground state. In order to explain the trivalent bonding of boron, we postulate that the atomic s- and p- orbitals in the outer shell of boron mix to form three equivalent hybrid orbitals. These particular orbitals are called sp2 hybrids, meaning that this set of orbitals is derived from one s- orbital and two p-orbitals of the free atom.
This illustration shows how an s-orbital mixes with two p orbitals to form a set of three sp2 hybrid orbitals. Notice again how the three atomic orbitals yield the same number of hybrid orbitals.
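One conventional way of writing the three normalized sp2 combinations, assuming the first hybrid points along the +x direction, is
$\psi_1 = \frac{1}{\sqrt{3}}\,\psi_{s} + \sqrt{\frac{2}{3}}\,\psi_{p_x}$
$\psi_{2,3} = \frac{1}{\sqrt{3}}\,\psi_{s} - \frac{1}{\sqrt{6}}\,\psi_{p_x} \pm \frac{1}{\sqrt{2}}\,\psi_{p_y}$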
Boron Trifluoride BF3 is a common example of sp2 hybridization. The molecule has plane trigonal geometry.
Tetrahedral (sp3) hybridization
Let us now look at several tetravalent molecules, and see what kind of hybridization might be involved when four outer atoms are bonded to a central atom. Perhaps the commonest and most important example of this bond type is methane, CH4.
In methane, the 2s and the three 2p orbitals of carbon mix into four sp3 hybrid orbitals which are chemically and geometrically identical; the latter condition implies that the four hybrid orbitals extend toward the corners of a tetrahedron centered on the carbon atom.
Methane is the simplest hydrocarbon; the molecule is approximately spherical, as is shown in the space-filling model:
By replacing one or more of the hydrogen atoms in CH4 with other sp3-hybridized carbon fragments, hydrocarbon chains of any degree of complexity can be built up. The simplest of these is ethane:
This shows how an sp3 orbital on each of two carbon atoms joins (overlaps) to form a carbon-carbon bond; the remaining three sp3 orbitals on each carbon then overlap with six hydrogen 1s orbitals to form the ethane molecule.
Lone pair electrons in hybrid orbitals
If lone pair electrons are present on the central atom, these can occupy one or more of the sp3 orbitals. This causes the molecular geometry to be different from the coordination geometry, which remains tetrahedral. In the ammonia molecule, for example, the nitrogen atom normally has three unpaired p electrons, but by mixing the 2s and 2p orbitals, we can create four sp3-hybrid orbitals just as in carbon. Three of these can form shared-electron bonds with hydrogen, resulting in ammonia, NH3. The fourth of the sp3 hybrid orbitals contains the two remaining outer-shell electrons of nitrogen which form a non-bonding lone pair. In acidic solutions these can coordinate with a hydrogen ion, forming the ammonium ion NH4+.
Although no bonds are formed by the lone pair in NH3, these electrons do give rise to a charge cloud that takes up space just like any other orbital.
In the water molecule, the oxygen atom can form four sp3 orbitals. Two of these are occupied by the two lone pairs on the oxygen atom, while the other two are used for bonding. The observed H-O-H bond angle in water (104.5°) is less than the tetrahedral angle (109.5°); one explanation for this is that the non-bonding electrons tend to remain closer to the central atom and thus exert greater repulsion on the other orbitals, thus pushing the two bonding orbitals closer together.
Molecular ions
Hybridization can also help explain the existence and structure of many inorganic molecular ions. Consider, for example, electron configurations of zinc in the compounds in the illustrations below. The tetrachlorozinc ion (top row) is another structure derived from zinc and chlorine. As we might expect, this ion is tetrahedral; there are four chloride ions surrounding the central zinc ion. The zinc ion has a charge of +2, and each chloride ion is –1, so the net charge of the complex ion is –2.
At the bottom is shown the electron configuration of atomic zinc, and just above it, of the divalent zinc ion. Notice that this ion has no electrons at all in its 4-shell. In zinc chloride, shown in the next row up, there are two equivalent chlorine atoms bonded to the zinc. The bonding orbitals are of sp character; that is, they are hybrids of the 4s and one 4p orbital of the zinc atom. Since these orbitals are empty in the isolated zinc ion, the bonding electrons themselves are all contributed by the chlorine atoms, or rather, the chloride ions, for it is these that are the bonded species here. Each chloride ion possesses a complete octet of electrons, and two of these electrons occupy each sp bond orbital in the zinc chloride complex ion. This is an example of a coordinate covalent bond, in which the bonded atom contributes both of the electrons that make up the shared pair.
Learning Objectives
Make sure you thoroughly understand the following essential ideas:
• Sketch out diagrams showing the hybridization and bonding in compounds containing single, double, and triple carbon-carbon bonds.
• Define sigma and pi bonds.
• Describe the hybridization and bonding in the benzene molecule.
This is a continuation of the previous page which introduced the hybrid orbital model and illustrated its use in explaining how valence electrons from atomic orbitals of s and p types can combine into equivalent shared-electron pairs known as sp, sp2, and sp3 hybrid orbitals. In this lesson, we extend this idea to compounds containing double and triple bonds, and to those in which atomic d electrons are involved (and which do not follow the octet rule.)
Hybrid types and Multiple bonds
We have already seen how sp hybridization in carbon leads to its combining power of four in the methane molecule. Two such tetrahedrally coordinated carbons can link up together to form the molecule ethane C2H6. In this molecule, each carbon is bonded in the same way as the other; each is linked to four other atoms, three hydrogens and one carbon. The ability of carbon-to-carbon linkages to extend themselves indefinitely and through all coordination positions accounts for the millions of organic molecules that are known.
Trigonal hybridization in carbon: the double bond
Carbon and hydrogen can also form a compound ethylene (ethene) in which each carbon atom is linked to only three other atoms. Here, we can regard carbon as being trivalent. We can explain this trivalence by supposing that the orbital hybridization in carbon is in this case not sp3, but is sp2 instead; in other words, only two of the three p orbitals of carbon mix with the 2s orbital to form hybrids; the remaining p orbital (the pz orbital) remains unhybridized. Each carbon is bonded to three other atoms in the same kind of plane trigonal configuration that we saw in the case of boron trifluoride, where the same kind of hybridization occurs. Notice that the bond angles around each carbon are all 120°.
This alternative hybridization scheme explains how carbon can combine with four atoms in some of its compounds and with three other atoms in other compounds. You may be aware of the conventional way of depicting carbon as being tetravalent in all its compounds; it is often stated that carbon always forms four bonds, but that sometimes, as in the case of ethylene, one of these may be a double bond. This concept of the multiple bond preserves the idea of tetravalent carbon while admitting the existence of molecules in which carbon is clearly combined with fewer than four other atoms.
These three views of the ethylene molecule emphasize different aspects of the disposition of shared electron pairs in the various bonding orbitals of ethene (ethylene). (a) The "backbone" structure consisting of σ (sigma) bonds formed from the three sp2-hybridized orbitals on each carbon. (b) The π (pi) bonding system formed by overlap of the unhybridized pz orbital on each carbon. The π orbital has two regions of electron density extending above and below the plane of the molecule. (c) A cutaway view of the combined σ and π system.
Each carbon atom is also left with one half-filled pz orbital that is perpendicular to the molecular plane. These two parallel pz orbitals will interact with each other; the two orbitals merge, forming a sausage-like charge cloud (the π bond) that extends both above and below the plane of the molecule. It is the pair of electrons that occupies this new extended orbital that constitutes the “fourth” bond to each carbon, and thus the “other half” of the double bond in the molecule.
More about sigma and pi bonds
The σ (sigma) bond has its maximum electron density along the line-of-centers joining the two atoms (below left). Viewed end-on, the σ bond is cylindrically symmetrical about the line-of-centers. It is this symmetry, rather than its parentage, that defines the σ bond, which can be formed from the overlap of two s-orbitals, from two p-orbitals arranged end-to-end, or from an s- and a p-orbital. They can also form when sp hybrid orbitals on two atoms overlap end-to-end.
Pi orbitals, on the other hand, require the presence of two atomic p orbitals on adjacent atoms. Most important, the charge density in the π orbital is concentrated above and below the molecular plane; it is almost zero along the line-of-centers between the two atoms. It is this perpendicular orientation with respect to the molecular plane (and the consequent lack of cylindrical symmetry) that defines the π orbital. The combination of a σ bond and a π bond extending between the same pair of atoms constitutes the double bond in molecules such as ethylene.
Carbon-carbon triple bonds: sp hybridization in acetylene
We have not yet completed our overview of multiple bonding, however. Carbon and hydrogen can form yet another compound, acetylene (ethyne), in which each carbon is connected to only two other atoms: a carbon and a hydrogen. This can be regarded as an example of divalent carbon, but is usually rationalized by writing a triple bond between the two carbon atoms.
We assume here that since two geometrically equivalent bonds are formed by each carbon, this atom must be sp-hybridized in acetylene. On each carbon, one sp hybrid bonds to a hydrogen and the other bonds to the other carbon atom, forming the σ bond skeleton of the molecule. In addition to the sp hybrids, each carbon atom has two half-occupied p orbitals oriented at right angles to each other and to the interatomic axis. These two sets of parallel and adjacent p orbitals can thus merge into two sets of π orbitals.
The triple bond in acetylene is seen to consist of one σ bond joining the line-of-centers between the two carbon atoms, and two π bonds whose lobes of electron density are in mutually-perpendicular planes. The acetylene molecule is of course linear, since the angle between the two sp hybrid orbitals that produce the σ skeleton of the molecule is 180°.
Multiple bonds between unlike atoms
Multiple bonds can also occur between dissimilar atoms. For example, in carbon dioxide each carbon atom has two unhybridized atomic p orbitals, and each oxygen atom still has one p orbital available. When the two O-atoms are brought up to opposite sides of the carbon atom, one of the p orbitals on each oxygen forms a π bond with one of the carbon p-orbitals. In this case, sp-hybridization is seen to lead to two double bonds. Notice that the two C–O π bonds are mutually perpendicular.
Similarly, in hydrogen cyanide, HCN, we assume that the carbon is sp-hybridized, since it is joined to only two other atoms, and is hence in a divalent state. One of the sp-hybrid orbitals overlaps with the hydrogen 1s orbital, while the other overlaps end-to-end with one of the three unhybridized p orbitals of the nitrogen atom. This leaves us with two nitrogen p-orbitals which form two mutually perpendicular π bonds to the two atomic p orbitals on the carbon. Hydrogen cyanide thus contains one single and one triple bond. The latter consists of a σ bond from the overlap of a carbon sp hybrid orbital with a nitrogen p orbital, plus two mutually perpendicular π bonds deriving from parallel atomic p orbitals on the carbon and nitrogen atoms.
The nitrate ion
Pi bond delocalization furnishes a means of expressing the structures of other molecules that require more than one electron-dot or structural formula for their accurate representation. A good example is the nitrate ion, which contains 24 electrons:
The electron-dot formula shown above is only one of three equivalent resonance structures that are needed to describe the trigonal symmetry of this ion.
Nitrogen has three half-occupied p orbitals available for bonding, all perpendicular to one another. Since the nitrate ion is known to be planar, we are forced to assume that the nitrogen outer electrons are sp2-hybridized. The addition of an extra electron fills all three hybrid orbitals completely. Each of these filled sp2 orbitals forms a σ bond by overlap with an empty oxygen 2pz orbital; this, you will recall, is an example of coordinate covalent bonding, in which one of the atoms contributes both of the bonding electrons. The empty oxygen 2p orbital is made available when the oxygen electrons themselves become sp2 hybridized; we get three filled sp2 hybrid orbitals, and an empty 2p atomic orbital, just as in the case of nitrogen.
The π bonding system arises from the interaction of one of the occupied oxygen sp2 orbitals with the unoccupied 2px orbital of the nitrogen. Notice that this, again, is a coordinate covalent sharing, except that in this instance it is the oxygen atom that donates both electrons.
Pi bonds can form in this way between the nitrogen atom and any of the three oxygens; there are thus three equivalent π bonds possible, but since nitrogen can only form one complete π bond at a time, the π bonding is divided up three ways, so that each N–O bond has a bond order of 4/3.
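Stated as simple arithmetic, each N–O linkage consists of one full σ bond plus a one-third share of the single π bond:

$\text{bond order} = 1 + \frac{1}{3} = \frac{4}{3}$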
Conjugated Double Bonds
We have seen that the π bonding orbital is distinctly different in shape and symmetry from the σ bond. There is another important feature of the π bond that is of far-reaching consequence, particularly in organic and coordination chemistry. Consider, for example, an extended hydrocarbon molecule in which alternate pairs of carbon atoms are connected by double and single bonds. Each non-terminal carbon atom forms σ bonds to two other carbons and to a hydrogen (not shown.) This molecule can be viewed as a series of ethylene units joined together end-to-end. Each carbon, being sp2 hybridized, still has a half-filled atomic p orbital. Since these p orbitals on adjacent carbons are all parallel, we can expect them to interact with each other to form π bonds between alternate pairs of carbon atoms as shown below.
But since each carbon atom possesses a half-filled p orbital, there is nothing unique about the π bond arrangement; an equally likely arrangement might be one in which the π bonding orbitals are shifted to neighboring pairs of carbons (middle illustration above). You will recall that when there are two equivalent choices for the arrangement of single and double bonds in a molecule, we generally consider the structure to be a resonance hybrid. In keeping with this idea, we would expect the electron density in a π system of this kind to be extended or shared out evenly along the entire molecular framework, as shown in the bottom figure.
A system of alternating single and double bonds, as we have here, is called a conjugated system. Chemists say that the π bonds in a conjugated system are delocalized; they are, in effect, “smeared out” over the entire length of the conjugated part of the molecule. Each pair of adjacent carbon atoms is joined by a σ bond and "half" of a π bond, resulting in a C–C bond order of 1.5. An even higher degree of conjugation exists in compounds containing extended (C=C)n chains. These compounds, known as cumulenes, exhibit interesting electrical properties, and their derivatives can act as "organic wires".
Benzene
The classic example of π bond delocalization is found in the cyclic molecule benzene (C6H6) which consists of six carbon atoms bound together in a hexagonal ring. Each carbon has a single hydrogen atom attached to it. The lines in this figure represent the σ bonds in benzene. The basic ring structure is composed of σ bonds formed from overlap of sp2 hybrid orbitals on adjacent carbon atoms. The unhybridized carbon pz orbitals project above and below the plane of the ring. They are shown here as they might appear if they did not interact with one another.
But what happens, of course, is that the lobes of these atomic orbitals meld together to form circular rings of electron density above and below the plane of the molecule. These two rings together constitute the "second half" of the carbon-carbon double bonds in benzene. This computer-generated plot of electron density in the benzene molecule is derived from a more rigorous theory that does not involve hybrid orbitals; the highest electron density (blue) appears around the periphery of the ring, while the lowest (red) is in the "doughnut hole" in the center.
Hybrids involving d orbitals
In atoms that are below those in the first complete row of the periodic table, the simple octet rule begins to break down. For example, we have seen that PCl3 does conform to the octet rule but PCl5 does not. We can describe the bonding in PCl3 very much as we do NH3: four sp3-hybridized orbitals, three of which are shared with electrons from other atoms and the fourth containing a nonbonding pair.
Trigonal bipyramid molecules: sp3d hybridization
In phosphorus pentachloride, PCl5, the five bonds to chlorine are formed from five sp3d hybrid orbitals directed toward the corners of a trigonal bipyramid, as is predicted by VSEPR theory.
Octahedral coordination: sp3d2 hybridization
The molecule sulfur hexafluoride SF6 exemplifies one of the most common types of d-orbital hybridization. The six bonds in this octahedrally-coordinated molecule are derived from mixing six atomic orbitals into a hybrid set. The easiest way to understand how these come about is to imagine that the molecule is made by combining an imaginary S6+ ion (which we refer to as the S(VI) valence state) with six F– ions to form the neutral molecule. The 3s and 3p orbitals of the sulfur, now empty, mix with two 3d orbitals to form the sp3d2 hybrids.
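The correspondence between the number of electron pairs around the central atom, the hybridization scheme, and the coordination geometry can be collected into a small lookup table. The following sketch (ours, written in Python purely as a summary device) covers the cases treated in this lesson; bear in mind that when lone pairs are present, the molecular shape differs from the coordination geometry, as in NH3 and H2O.

```python
# Hybridization and coordination geometry for 2-6 electron pairs
# around a central atom, summarizing the cases discussed above.
HYBRIDIZATION = {
    2: ("sp",    "linear"),                # e.g. BeH2
    3: ("sp2",   "trigonal planar"),       # e.g. BF3
    4: ("sp3",   "tetrahedral"),           # e.g. CH4, NH3, H2O
    5: ("sp3d",  "trigonal bipyramidal"),  # e.g. PCl5
    6: ("sp3d2", "octahedral"),            # e.g. SF6
}

def describe(n_pairs):
    hybrid, geometry = HYBRIDIZATION[n_pairs]
    return f"{n_pairs} pairs -> {hybrid} hybrids, {geometry} coordination"

print(describe(6))   # 6 pairs -> sp3d2 hybrids, octahedral coordination
```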
Some of the most important and commonly encountered compounds which involve the d orbitals in bonding are the transition metal complexes. The term “complex” in this context means that the molecule is composed of two or more kinds of species, each of which can have an independent existence.
Square-planar molecules: dsp2 hybridization
For example, the ions Pt2+ and Cl– can form the ion [PtCl4]2–. To understand the hybridization scheme, it helps to start with the neutral Pt atom, then imagine it losing two electrons to become an ion, followed by grouping of the two unpaired 5d electrons into a single d orbital, leaving one vacant. This vacant orbital, along with the 6s and two of the 6p orbitals, can then accept an electron pair from each of four chlorines.
All of the four-coordinated molecules we have discussed so far have tetrahedral geometry around the central atom. Methane, CH4, is the most well known example. It may come as something of a surprise, then, to discover that the tetrachloroplatinum(II) ion [PtCl4]2– has an essentially two-dimensional square-planar configuration. This type of bonding pattern is quite common when the parent central ion (Pt2+ in this case) contains only eight electrons in its outermost d-subshell.
Octahedral coordination: sp3d2 and d2sp3
Many of the most commonly encountered transition metal ions accept electron pairs from donors such as CN– and NH3 (or lacking these, even from H2O) to form octahedral coordination complexes. The hexamminezinc(II) cation depicted below is typical.
In sp3d2 hybridization the bonding orbitals are derived by mixing atomic orbitals having the same principal quantum number (n = 4 in the preceding example). A slightly different arrangement, known as d2sp3 hybridization, involves d orbitals of lower principal quantum number. This is possible because of the rather small energy differences between the d orbitals in one “shell” and the s and p orbitals of the next higher one — hence the term “inner orbital” complex which is sometimes used to describe ions such as hexamminecobalt(III), shown below. Both arrangements produce octahedral coordination geometries.
In some cases, the same central atom can form either inner or outer complexes depending on the particular ligand and the manner in which its electrostatic field affects the relative energies of the different orbitals. Thus the hexacyanoiron(II) ion utilizes the iron 3d orbitals, whereas hexaaquoiron(II) achieves a lower energy by accepting two of the H2O electron pairs into its 4d orbitals.
Final remarks about hybrid orbitals
As is the case with any scientific model, the hybridization model of bonding is useful only to the degree to which it can predict phenomena that are actually observed. Most models contain weaknesses that place limits on their general applicability. The need for caution in accepting this particular model is made more apparent when we examine the shapes of the molecules below the first full row of the periodic table. For example, we would expect the bonding in hydrogen sulfide to be similar to that in water, with tetrahedral geometry around the sulfur atom. Experiments, however, reveal that the H–S–H bond angle is only 92°. Hydrogen sulfide thus deviates much more from tetrahedral geometry than does water, and there is no apparent and clear reason why it should. It is certainly difficult to argue that electron-repulsion between the two nonbonding orbitals is pushing the H–S bonds closer together (as is supposed to happen to the H–O bonds in water); many would argue that this repulsion would be less in hydrogen sulfide than in water, since sulfur is a larger atom and is hence less electronegative.
It appears, then, that the hybridization model based on sp3 orbitals does not apply to H2S. It looks like the “simple” explanation that bonding occurs through two half-occupied atomic p orbitals 90° apart comes closer to the mark. Perhaps hybridization is not an all-or-nothing phenomenon; perhaps the two 3p orbitals are substantially intact in hydrogen sulfide, or are hybridized only slightly. In general, the hybridization model does not work very well with nonmetallic elements farther down in the periodic table, and there is as yet no clear explanation why. We must simply admit that we have reached one of the many points in chemistry where our theory is not sufficiently developed to give a clear and unequivocal answer. This does not detract, however, from the wide usefulness of the hybridization model in elucidating the bond character and bond shapes in the millions of molecules based on first-row elements, particularly of carbon.
Are hybrid orbitals real?
The justification we gave for invoking hybridization in molecules such as BeH2, BF3 and CH4 was that the bonds in each are geometrically and chemically equivalent, whereas the atomic s- and p-orbitals on the central atoms are not. By combining these into new orbitals of sp, sp2 and sp3 types we obtain the required number of completely equivalent orbitals. This seemed easy enough to do on paper; we just drew little boxes and wrote “sp2” or whatever below them. But what is really going on here?
The full answer is beyond the scope of this course, so we can only offer the following very general explanation. First, recall what we mean by “orbital”: a mathematical function ψ having the character of a standing wave whose square ψ2 is proportional to the probability of finding the electron at any particular location in space. The latter, the electron density distribution, can be observed (by X-ray scattering, for example), and in this sense is the only thing that is “real”.
A given standing wave (ψ-function) can be synthesized by combining all kinds of fundamental wave patterns (that is, atomic orbitals) in much the same way that a color we observe can be reproduced by combining different sets of primary colors in various proportions. In neither case does it follow that these original orbitals (or colors) are actually present in the final product. So one could well argue that hybrid orbitals are not “real”; they simply turn out to be convenient for understanding the bonding of simple molecules at the elementary level, and this is why we use them.
An alternative to hybrids: the Bent-Bond model
It turns out, in fact, that the electron distribution and bonding in ethylene can be equally well described by assuming no hybridization at all. The "bent bond" model requires only that the directions of some of the atomic-p orbitals be distorted sufficiently to provide the overlap needed for bonding; these are sometimes referred to as "banana bonds".
The smallest of the closed-ring hydrocarbons is cyclopropane, a planar molecule in which the C–C–C bond angles are only 60° — quite a departure from the tetrahedral angle of 109.5° associated with sp3 hybridization! Theoretical studies suggest that the bent-bond model does quite well in predicting its properties.
Learning Objectives
Make sure you thoroughly understand the following essential ideas
• In what fundamental way does the molecular orbital model differ from the other models of chemical bonding that have been described in these lessons?
• Explain how bonding and antibonding orbitals arise from atomic orbitals, and how they differ physically.
• Describe the essential difference between a sigma and a pi molecular orbital.
• Define bond order, and state its significance.
• Construct a "molecular orbital diagram" of the kind shown in this lesson for a simple diatomic molecule, and indicate whether the molecule or its positive and negative ions should be stable.
The molecular orbital model is by far the most productive of the various models of chemical bonding, and serves as the basis for most quantitative calculations, including those that lead to many of the computer-generated images that you have seen elsewhere in these units. In its full development, molecular orbital theory involves a lot of complicated mathematics, but the fundamental ideas behind it are quite easily understood, and this is all we will try to accomplish in this lesson.
This is a big departure from the simple Lewis and VSEPR models that were based on the one-center orbitals of individual atoms. The more sophisticated hybridization model recognized that these orbitals will be modified by their interaction with other atoms. But all of these valence-bond models, as they are generally called, are very limited in their applicability and predictive power, because they fail to recognize that the distribution of the pooled valence electrons is governed by the totality of positive centers.
Molecular Orbitals
Chemical bonding occurs when the net attractive forces between an electron and two nuclei exceed the electrostatic repulsion between the two nuclei. For this to happen, the electron must be in a region of space which we call the binding region. Conversely, if the electron is off to one side, in an anti-binding region, it actually adds to the repulsion between the two nuclei and helps push them away.
The easiest way of visualizing a molecular orbital is to start by picturing two isolated atoms and the electron orbitals that each would have separately. These are just the orbitals of the separate atoms, by themselves, which we already understand. We will then try to predict the manner in which these atomic orbitals interact as we gradually move the two atoms closer together. Finally, we will reach some point where the internuclear distance corresponds to that of the molecule we are studying. The corresponding orbitals will then be the molecular orbitals of our new molecule.
The hydrogen molecule ion: the simplest molecule
To see how this works, we will consider the simplest possible molecule, $\ce{H2^{+}}$. This is the hydrogen molecule ion, which consists of two nuclei of charge +1, and a single electron shared between them.
As two H nuclei move toward each other, the 1s atomic orbitals of the isolated atoms gradually merge into a new molecular orbital in which the greatest electron density falls between the two nuclei. Since this is just the location in which electrons can exert the most attractive force on the two nuclei simultaneously, this arrangement constitutes a bonding molecular orbital. Regarding it as a three-dimensional region of space, we see that it is symmetrical about the line of centers between the nuclei; in accord with our usual nomenclature, we refer to this as a σ (sigma) orbital.
Bonding and Antibonding Molecular Orbitals
There is one minor difficulty: we started with two orbitals (the 1s atomic orbitals), and ended up with only one orbital. Now according to the rules of quantum mechanics, orbitals cannot simply appear and disappear at our convenience. For one thing, this would raise the question of at just what internuclear distance do we suddenly change from having two orbitals, to having only one? It turns out that when orbitals interact, they are free to change their forms, but there must always be the same number. This is just another way of saying that there must always be the same number of possible allowed sets of electron quantum numbers.
How can we find the missing orbital? To answer this question, we must go back to the wave-like character of orbitals that we developed in our earlier treatment of the hydrogen atom. You are probably aware that wave phenomena such as sound waves, light waves, or even ocean waves can combine or interact with one another in two ways: they can either reinforce each other, resulting in a stronger wave, or they can interfere with and partially destroy each other. A roughly similar thing occurs when the “matter waves” corresponding to the two separate hydrogen 1s orbitals interact; both in-phase and out-of-phase combinations are possible, and both occur. The in-phase, reinforcing interaction yields the bonding orbital that we just considered. The other, corresponding to out-of-phase combination of the two orbitals, gives rise to a molecular orbital that has its greatest electron probability in what is clearly the antibonding region of space. This second orbital is therefore called an antibonding orbital.
When the two 1s wave functions combine out-of-phase, the regions of high electron probability do not merge. In fact, the orbitals act as if they actually repel each other. Notice particularly that there is a region of space exactly equidistant between the nuclei at which the probability of finding the electron is zero. This region is called a nodal surface, and is characteristic of antibonding orbitals. It should be clear that any electrons that find themselves in an antibonding orbital cannot possibly contribute to bond formation; in fact, they will actively oppose it.
We see, then, that whenever two orbitals, originally on separate atoms, begin to interact as we push the two nuclei toward each other, these two atomic orbitals will gradually merge into a pair of molecular orbitals, one of which will have bonding character, while the other will be antibonding. In a more advanced treatment, it would be fairly easy to show that this result follows quite naturally from the wave-like nature of the combining orbitals.
What is the difference between these two kinds of orbitals, as far as their potential energies are concerned? More precisely, which kind of orbital would enable an electron to be at a lower potential energy? Clearly, the potential energy decreases as the electron moves into a region that enables it to “see” the maximum amount of positive charge. In a simple diatomic molecule, this will be in the internuclear region— where the electron can be simultaneously close to two nuclei. The bonding orbital will therefore have the lower potential energy.
Molecular Orbital Diagrams
This scheme of bonding and antibonding orbitals is usually depicted by a molecular orbital diagram such as the one shown here for the dihydrogen ion H2+. Atomic valence electrons (shown in boxes on the left and right) fill the lower-energy molecular orbitals before the higher ones, just as is the case for atomic orbitals. Thus, the single electron in this simplest of all molecules goes into the bonding orbital, leaving the antibonding orbital empty.
Since any orbital can hold a maximum of two electrons, the bonding orbital in H2+ is only half-full. This single electron is nevertheless enough to lower the potential energy of one mole of hydrogen nuclei pairs by 270 kJ— quite enough to make them stick together and behave like a distinct molecular species. Although H2+ is stable in this energetic sense, it happens to be an extremely reactive molecule— so much so that it even reacts with itself, so these ions are not commonly encountered in everyday chemistry.
Dihydrogen
If one electron in the bonding orbital is conducive to bond formation, might two electrons be even better? We can arrange this by combining two hydrogen atoms: two nuclei, and two electrons. Both electrons will enter the bonding orbital, as depicted in the Figure.
We recall that one electron lowered the potential energy of the two nuclei by 270 kJ/mole, so we might expect two electrons to produce twice this much stabilization, or 540 kJ/mole.
Bond order is defined as the difference between the number of electron pairs occupying bonding and antibonding orbitals in the molecule. A bond order of unity corresponds to a conventional "single bond".
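If we count individual electrons rather than pairs, the same definition (in our notation) becomes

$\text{bond order} = \frac{n_{\text{bonding}} - n_{\text{antibonding}}}{2}$

so dihydrogen, with two bonding electrons and none antibonding, has the bond order (2 − 0)/2 = 1 of a conventional single bond.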
Experimentally, one finds that it takes only 452 kJ to break apart a mole of hydrogen molecules. The reason the potential energy was not lowered by the full amount is that the presence of two electrons in the same orbital gives rise to a repulsion that acts against the stabilization. This is exactly the same effect we saw in comparing the ionization energies of the hydrogen and helium atoms.
Dihelium
With two electrons we are still ahead, so let’s try for three. The dihelium positive ion is a three-electron molecule. We can think of it as containing two helium nuclei and three electrons. This molecule is stable, but not as stable as dihydrogen; the energy required to break He2+ is 301 kJ/mole. The reason for this should be obvious; two electrons were accommodated in the bonding orbital, but the third electron must go into the next higher slot— which turns out to be the sigma antibonding orbital. The presence of an electron in this orbital, as we have seen, gives rise to a repulsive component which acts against, and partially cancels out, the attractive effect of the filled bonding orbital.
Taking our building-up process one step further, we can look at the possibilities of combining two helium atoms to form dihelium. You should now be able to predict that He2 cannot be a stable molecule; the reason, of course, is that we now have four electrons— two in the bonding orbital, and two in the antibonding orbital. The one orbital almost exactly cancels out the effect of the other. Experimentally, the bond energy of dihelium is only 0.084 kJ/mol; this is not enough to hold the two atoms together in the presence of random thermal motion at ordinary temperatures, so dihelium dissociates as quickly as it is formed, and is therefore not a distinct chemical species.
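To attach a number to “random thermal motion”: the thermal energy available per mole at room temperature is roughly

$RT = (8.314 \text{ J mol}^{-1}\text{ K}^{-1})(298 \text{ K}) \approx 2.5 \text{ kJ/mol}$

which is about thirty times the 0.084 kJ/mol bond energy of dihelium, so ordinary collisions tear the molecule apart as quickly as it forms.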
Diatomic molecules containing second-row atoms
The four simplest molecules we have examined so far involve molecular orbitals that derived from two 1s atomic orbitals. If we wish to extend our model to larger atoms, we will have to contend with higher atomic orbitals as well. One greatly simplifying principle here is that only the valence-shell orbitals need to be considered. Inner atomic orbitals such as 1s are deep within the atom and well-shielded from the electric field of a neighboring nucleus, so that these orbitals largely retain their atomic character when bonds are formed.
Dilithium
For example, when lithium, whose configuration is 1s22s1, bonds with itself to form Li2, we can forget about the 1s atomic orbitals and consider only the σ bonding and antibonding orbitals. Since there are not enough electrons to populate the antibonding orbital, the attractive forces win out and we have a stable molecule.
The bond energy of dilithium is 110 kJ/mole; notice that this value is less than half of the 452 kJ bond energy in dihydrogen, which also has two electrons in a bonding orbital. The reason, of course, is that the 2s orbital of Li is much farther from its nucleus than is the 1s orbital of H, and this is equally true for the corresponding molecular orbitals. It is a general rule, then, that the larger the parent atom, the less stable will be the corresponding diatomic molecule.
Lithium hydride
All the molecules we have considered thus far are homonuclear; they are made up of one kind of atom. As an example of a heteronuclear molecule, let’s take a look at a very simple example— lithium hydride. Lithium hydride is a stable, though highly reactive molecule. The diagram shows how the molecular orbitals in lithium hydride can be related to the atomic orbitals of the parent atoms. One thing that makes this diagram look different from the ones we have seen previously is that the parent atomic orbitals have widely differing energies; the greater nuclear charge of lithium reduces the energy of its 1s orbital to a value well below that of the 1s hydrogen orbital.
There are two occupied atomic orbitals on the lithium atom, and only one on the hydrogen. With which of the lithium orbitals does the hydrogen 1s orbital interact? The lithium 1s orbital is the lowest-energy orbital on the diagram. Because this orbital is so small and retains its electrons so tightly, it does not contribute to bonding; we need consider only the 2s orbital of lithium which combines with the 1s orbital of hydrogen to form the usual pair of sigma bonding and antibonding orbitals. Of the four electrons in lithium and hydrogen, two are retained in the lithium 1s orbital, and the two remaining ones reside in the σ orbital that constitutes the Li–H covalent bond.
The resulting molecule is 243 kJ/mole more stable than the parent atoms. As we might expect, the bond energy of the heteronuclear molecule is very close to the average of the energies of the corresponding homonuclear molecules. Actually, it turns out that the correct way to make this comparison is to take the geometric mean, rather than the arithmetic mean, of the two bond energies. The geometric mean is simply the square root of the product of the two energies.
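In symbols, writing D for bond energy (our notation), the expectation for a heteronuclear molecule A–B is

$D_{AB} \approx \sqrt{D_{AA} \, D_{BB}}$

and, as the next paragraph shows, the amount by which the actual bond energy exceeds this geometric-mean estimate serves as a useful measure of bond polarity.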
The geometric mean of the H2 and Li2 bond energies is 213 kJ/mole, so it appears that the lithium hydride molecule is 30 kJ/mole more stable than it “is supposed” to be. This is attributed to the fact that the electrons in the 2σ bonding orbital are not equally shared between the two nuclei; the orbital is skewed slightly so that the electrons are attracted somewhat more to the hydrogen atom. This bond polarity, which we considered in some detail near the beginning of our study of covalent bonding, arises from the greater electron-attracting power of hydrogen— a consequence of the very small size of this atom. The electrons can be at a lower potential energy if they are slightly closer to the hydrogen end of the lithium hydride molecule. It is worth pointing out, however, that the electrons are, on the average, also closer to the lithium nucleus, compared to where they would be in the 2s orbital of the isolated lithium atom. So it appears that everyone gains and no one loses here!
$\sigma$ and $\pi$ orbitals
The molecules we have considered thus far are composed of atoms that have no more than four electrons each; our molecular orbitals have therefore been derived from s-type atomic orbitals only. If we wish to apply our model to molecules involving larger atoms, we must take a close look at the way in which p-type orbitals interact as well. Although two atomic p orbitals will be expected to split into bonding and antibonding orbitals just as before, it turns out that the extent of this splitting, and thus the relative energies of the resulting molecular orbitals, depend very much on the nature of the particular p orbital that is involved.
You will recall that there are three possible p orbitals for any value of the principal quantum number. You should also recall that p orbitals are not spherical like s orbitals, but are elongated, and thus possess definite directional properties. The three p orbitals correspond to the three directions of Cartesian space, and are frequently designated px, py, and pz, to indicate the axis along which the orbital is aligned. Of course, in the free atom, where no coordinate system is defined, all directions are equivalent, and so are the p orbitals. But when the atom is near another atom, the electric field due to that other atom acts as a point of reference that defines a set of directions. The line of centers between the two nuclei is conventionally taken as the x axis. If this direction is represented horizontally on a sheet of paper, then the y axis is in the vertical direction and the z axis would be normal to the page.
These directional differences lead to the formation of two different classes of molecular orbitals. The above figure shows how two px atomic orbitals interact. In many ways the resulting molecular orbitals are similar to what we got when s atomic orbitals combined; the bonding orbital has a large electron density in the region between the two nuclei, and thus corresponds to the lower potential energy. In the out-of-phase combination, most of the electron density is away from the internuclear region, and as before, there is a surface exactly halfway between the nuclei that corresponds to zero electron density. This is clearly an antibonding orbital— again, in general shape, very much like the kind we saw in hydrogen and similar molecules. Like the ones derived from s-atomic orbitals, these molecular orbitals are σ (sigma) orbitals.
Sigma orbitals are cylindrically symmetric with respect to the line of centers of the nuclei; this means that if you could look down this line of centers, the electron density would be the same in all directions.
When we combine two py or pz atomic orbitals, which are oriented perpendicular to the line of centers, we get the bonding and antibonding pairs that we would expect, but the resulting molecular orbitals have a different symmetry: rather than being rotationally symmetric about the line of centers, these orbitals extend in both perpendicular directions from this line of centers. Orbitals having this more complicated symmetry are called π (pi) orbitals. There are two of them, πy and πz, differing only in orientation, but otherwise completely equivalent.
The different geometric properties of the π and σ orbitals cause the σ orbitals to split more than the π orbitals, so that the σ* antibonding orbital always has the highest energy. The σ bonding orbital can be either higher or lower than the π bonding orbitals, depending on the particular atom.
Second-Row Diatomics
If we combine the splitting schemes for the 2s and 2p orbitals, we can predict bond order in all of the diatomic molecules and ions composed of elements in the first complete row of the periodic table. Remember that only the valence orbitals of the atoms need be considered; as we saw in the cases of lithium hydride and dilithium, the inner orbitals remain tightly bound and retain their localized atomic character.
Dicarbon
Carbon has four outer-shell electrons, two 2s and two 2p. For two carbon atoms, we therefore have a total of eight electrons, which can be accommodated in the first four molecular orbitals. The lowest two are the 2s-derived bonding and antibonding pair, so the “first” four electrons make no net contribution to bonding. The other four electrons go into the pair of π bonding orbitals, and there are no more electrons for the antibonding orbitals— so we would expect the dicarbon molecule to be stable, and it is. (But being extremely reactive, it is known only in the gas phase.)
You will recall that one pair of electrons shared between two atoms constitutes a “single” chemical bond; this is Lewis’ original definition of the covalent bond. In C2 there are two pairs of electrons in the π bonding orbitals, so we have what amounts to a double bond here; in other words, the bond order in dicarbon is two.
Dioxygen
The electron configuration of oxygen is 1s22s22p4. In O2, therefore, we need to accommodate twelve valence electrons (six from each oxygen atom) in molecular orbitals. As you can see from the diagram, this places two electrons in antibonding orbitals. Each of these electrons occupies a separate π* orbital because this leads to less electron-electron repulsion (Hund's Rule).
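The filling procedure just described is mechanical enough to express as a short program. The sketch below is ours, not part of the text; the level ordering it encodes is the usual assumption that the σ2p orbital lies below the π2p pair in O2 and F2 but above it in B2, C2, and N2.

```python
# Fill the 2s/2p-derived molecular orbitals of a second-row homonuclear
# diatomic and report the bond order and number of unpaired electrons.
def mo_fill(valence_electrons, sigma_2p_below_pi=True):
    # Each level is (capacity, is_antibonding), listed from lowest energy up.
    if sigma_2p_below_pi:   # ordering assumed for O2 and F2
        levels = [(2, False), (2, True), (2, False),
                  (4, False), (4, True), (2, True)]
    else:                   # ordering assumed for B2, C2 and N2
        levels = [(2, False), (2, True), (4, False),
                  (2, False), (4, True), (2, True)]
    bonding = antibonding = unpaired = 0
    remaining = valence_electrons
    for capacity, is_anti in levels:
        n = min(capacity, remaining)
        remaining -= n
        if is_anti:
            antibonding += n
        else:
            bonding += n
        degenerate = capacity // 2  # number of orbitals at this level
        # Hund's rule: electrons stay unpaired until each orbital holds one
        unpaired += n if n <= degenerate else 2 * degenerate - n
    return (bonding - antibonding) / 2, unpaired

print(mo_fill(12))   # O2, 12 valence electrons: (2.0, 2) -> paramagnetic
```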
The bond energy of molecular oxygen is 498 kJ/mole. This is smaller than the 945 kJ bond energy of N2— not surprising, considering that oxygen has two electrons in an antibonding orbital, compared to nitrogen’s one.
The two unpaired electrons of the dioxygen molecule give this substance an unusual and distinctive property: O2 is paramagnetic. The paramagnetism of oxygen can readily be demonstrated by pouring liquid O2 between the poles of a strong permanent magnet; the liquid stream is trapped by the field and fills up the space between the poles.
Since molecular oxygen contains two electrons in an antibonding orbital, it might be possible to make the molecule more stable by removing one of these electrons, thus increasing the ratio of bonding to antibonding electrons in the molecule. Just as we would expect, and in accord with our model, O2+ has a bond energy higher than that of neutral dioxygen; removing the one electron actually gives us a more stable molecule. This constitutes a very good test of our model of bonding and antibonding orbitals. In the same way, adding an electron to O2 (forming the superoxide ion O2–) results in a weakening of the bond. The bond energy of this ion has not been measured directly, but the bond length is greater, and this is indicative of a lower bond energy. These two dioxygen ions, by the way, are highly reactive and can be observed only in the gas phase.
Learning Objectives
Make sure you thoroughly understand the following essential ideas:
• Define the terms coordination complex, ligand, polydentate, and chelate.
• Explain the origins of d-orbital splitting; that is, why the energies of certain atomic-d orbitals are more affected by some ligands than others in an octahedral complex.
• Why are many coordination complexes highly colored?
• Explain the meaning of high-spin and low-spin complexes, and illustrate in a general way how a particular set of ligands can change one kind into another. Also, describe how these differences are observed experimentally.
• Describe the role of iron in heme and the general structural components of hemoglobin.
Complexes such as Cu(NH3)62+ have been known and studied since the mid-nineteenth century, and their structures had been mostly worked out by 1900. Although the hybrid orbital model was able to explain how neutral molecules such as water or ammonia could bond to a transition metal ion, it failed to explain many of the special properties of these complexes. Finally, in 1940-60, a model known as ligand field theory was developed that is able to organize and explain most of the observed properties of these compounds. Since that time, coordination complexes have played major roles in cellular biochemistry and inorganic catalysis.
What is a Complex?
If you have taken a lab course in chemistry, you have very likely admired the deep blue color of copper sulfate crystals, CuSO4·5H2O. The proper name of this substance is copper(II) sulfate pentahydrate, and it is typical of many salts that incorporate waters of hydration into their crystal structures. It is also a complex, a term used by chemists to describe a substance composed of two other substances (in this case, CuSO4 and H2O) each of which is capable of an independent existence.
The binding between the components of a complex is usually weaker than a regular chemical bond; thus most solid hydrates can be decomposed by heating, driving off the water and yielding the anhydrous salt:
$\underbrace{\ce{CuSO4 \cdot 5 H2O}}_{\text{blue}} \rightarrow \underbrace{\ce{CuSO4(s)}}_{\text{white}} + \ce{5 H2O}$
Driving off the water in this way also destroys the color, turning it from a beautiful deep blue to a nondescript white. If the anhydrous salt is now dissolved in water, the blue color now pervades the entire solution. It is apparent that the presence of water is somehow necessary for the copper(II) ion to take on a blue color, but why should this be?
A very common lab experiment that most students carry out is to add some dilute ammonia to a copper sulfate solution. At first, the solution turns milky as the alkaline ammonia causes the precipitation of copper hydroxide:
$\ce{Cu^{2+} + 2 OH^{–} \rightarrow Cu(OH)2 (s)}$
However, if more ammonia is added, the cloudiness disappears and the solution assumes an intense deep blue color that makes the original solution seem pale by comparison. The equation for this reaction is usually given as
$\ce{Cu^{2+} + 6 NH3 \rightarrow Cu(NH3)6^{2+} } \label{ammine}$
The new product is commonly known as the copper-ammonia complex ion, or more officially, hexamminecopper(II) complex ion.
Equation $\ref{ammine}$ is somewhat misleading, however, in that it implies the formation of a new complex where none existed before. In fact, since about 1895, it has been known that the ions of most transition metals dissolve in water to form complexes with water itself, so a better representation of the reaction of dissolved copper with ammonia would be
$\ce{Cu(H2O)6^{2+} + 6 NH3 \rightarrow Cu(NH3)6^{2+} + 6 H2O}$
In effect, the ammonia binds more tightly to the copper ion than does water, and it thus displaces the latter when it comes into contact with the hexaaquocopper(II) ion, as the dissolved form of Cu2+ is properly known.
Most transition metals dissolve in water to form complexes with water itself.
The basics of Coordination Complexes
Although our primary focus in this unit is on bonding, the topic of coordination complexes is so important in chemistry and biochemistry that some of their basic features are worth knowing about, even if their detailed chemistry is beyond the scope of this course. These complexes play an especially crucial role in physiology and biochemistry. Thus heme, the oxygen-carrying component of red blood cells (and the source of the red color) is basically a complex of iron, and the part of chlorophyll that converts sunlight into chemical energy within green plants is a magnesium complex.
Some Definitions
We have already defined a complex as a substance composed of two or more components capable of an independent existence. A coordination complex is one in which a central atom or ion is joined to one or more ligands (Latin ligare, to tie) through what is called a coordinate covalent bond in which both of the bonding electrons are supplied by the ligand. In such a complex the central atom acts as an electron-pair acceptor (Lewis acid — think of H+ which has no electrons at all, but can accept a pair from something like Cl–) and the ligand as an electron-pair donor (Lewis base). The central atom and the ligands coordinated to it constitute the coordination sphere. Thus the salt [Co(NH3)5Cl]Cl2 is composed of the complex ion [Co(NH3)5Cl]2+ and two Cl– ions; components within the square brackets are inside the coordination sphere, whereas the two chloride ions are situated outside the coordination sphere. These latter two ions could be replaced by other ions such as NO3– without otherwise materially changing the nature of the salt.
The central atoms of coordination complexes are most often cations (positive ions), but may in some cases be neutral atoms, as in nickel carbonyl Ni(CO)4.
Ligands composed of ions such as F– or small molecules such as H2O or CN– possess more than one set of lone pair electrons, but only one of these pairs can coordinate with a central ion. Such ligands are said to be monodentate (“one tooth”.) Larger ligands may contain more than one atom capable of coordinating with a single central ion, and are described as polydentate. Thus ethylenediamine (shown below) is a bidentate ligand. Polydentate ligands whose geometry enables them to occupy more than one coordinating position of a central ion act as chelating agents (Greek χελος, chelos, claw) and tend to form extremely stable complexes known as chelates.
Chelation is widely employed in medicine, water-treatment, analytical chemistry and industry for binding and removing metal ions of particular kinds. Some of the more common ligands (chelating agents) are shown here:
Structure and bonding in transition metal complexes
Complexes such as Cu(NH3)62+ have been known and studied since the mid-nineteenth century. Why they should form, or what their structures might be, were complete mysteries. At that time all inorganic compounds were thought to be held together by ionic charges, but ligands such as water or ammonia are of course electrically neutral. A variety of theories such as the existence of “secondary valences” were concocted, and various chain-like structures such as CuNH3-NH3-NH3-NH3-NH3-NH3 were proposed. Finally, in the mid-1890s, after a series of painstaking experiments, the chemist Alfred Werner (Swiss, 1866-1919) presented the first workable theory of complex ion structures.
Werner claimed that his theory first came to him in a flash after a night of fitful sleep; by the end of the next day he had written his landmark paper that eventually won him the 1913 Nobel Prize in Chemistry.
Werner was able to show, in spite of considerable opposition, that transition metal complexes consist of a central ion surrounded by ligands in a square-planar, tetrahedral, or octahedral arrangement. This was an especially impressive accomplishment at a time long before X-ray diffraction and other methods had become available to observe structures directly. His basic method was to make inferences of the structures from a careful examination of the chemistry of these complexes and particularly the existence of structural isomers. For example, the existence of two different compounds having the same composition MA2B2 shows that the structure must be square-planar, which allows distinct cis and trans arrangements of the ligands, rather than tetrahedral, which allows only one.
What holds them together?
An understanding of the nature of the bond between the central ion and its ligands would have to await the development of Lewis’ shared-electron pair theory and Pauling’s valence-bond picture. We have already shown how hybridization of the d orbitals of the central ion creates vacancies able to accommodate one or more pairs of unshared electrons on the ligands. Although these models correctly predict the structures of many transition metal complexes, they are by themselves unable to account for several of their special properties:
• The metal-to-ligand bonds are generally much weaker than ordinary covalent bonds;
• Some complexes utilize “inner” d orbitals of the central ion, while others are “outer-orbital” complexes;
• Transition metal ions tend to be intensely colored.
Paramagnetism of coordination complexes
Unpaired electrons act as tiny magnets; if a substance that contains unpaired electrons is placed near an external magnet, it will undergo an attraction that tends to draw it into the field. Such substances are said to be paramagnetic, and the degree of paramagnetism is directly proportional to the number of unpaired electrons in the molecule. Magnetic studies have played an especially prominent role in determining how electrons are distributed among the various orbitals in transition metal complexes.
Studies of this kind are carried out by placing a sample consisting of a solution of the complex between the poles of an electromagnet. The sample is suspended from the arm of a sensitive balance, and the change in apparent weight is measured with the magnet turned on and off. An increase in the weight when the magnet is turned on indicates that the sample is attracted to the magnet (paramagnetism) and must therefore possess one or more unpaired electrons. The precise number can be determined by calibrating the system with a substance whose electron configuration is known.
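The calibration mentioned above rests on the “spin-only” relation (a standard approximation which we quote here without derivation) between the measured magnetic moment and the number n of unpaired electrons:

$\mu_{\text{spin-only}} = \sqrt{n(n+2)} \; \mu_B$

where μB is the Bohr magneton. One unpaired electron gives √3 ≈ 1.73 μB, two give √8 ≈ 2.83 μB, and so on.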
Crystal field theory
The current model of bonding in coordination complexes developed gradually between 1930-1950. In its initial stages, the model was a purely electrostatic one known as crystal field theory which treats the ligand ions as simple point charges that interact with the five atomic d orbitals of the central ion. It is this theory which we describe below.
It is remarkable that this rather primitive model, quite innocent of quantum mechanics, has worked so well. However, an improved and more complete model that incorporates molecular orbital theory is known as ligand field theory. In an isolated transition metal atom the five outermost d orbitals all have the same energy which depends solely on the spherically symmetric electric field due to the nuclear charge and the other electrons of the atom. Suppose now that this atom is made into a cation and is placed in solution, where it forms a hydrated species in which six H2O molecules are coordinated to the central ion in an octahedral arrangement. An example of such an ion might be hexaaquotitanium(III), Ti(H2O)63+.
The ligands (H2O in this example) are bound to the central ion by electron pairs contributed by each ligand. Because the six ligands are located at the corners of an octahedron centered around the metal ion, these electron pairs are equivalent to clouds of negative charge that are directed from near the central ion out toward the corners of the octahedron. We will call this an octahedral electric field, or the ligand field.
d-orbital splitting
The differing shapes of the five kinds of d orbitals cause them to interact differently with the electric fields created by the coordinated ligands. Picture the coordinating electron pairs of the ligands as clouds of negative charge located at the six corners of the octahedron around the central atom. The two d orbitals whose regions of high electron density point directly toward these ligand positions experience electron-electron repulsion that raises their energy.
Although the five d orbitals of the central atom all have the same energy in a spherically symmetric field, their energies will not all be the same in the octahedral field imposed by the presence of the ligands. The reason for this is apparent when we consider the different geometrical properties of the five d orbitals. Two of the d orbitals, designated $d_{z^2}$ and $d_{x^2-y^2}$, have their electron clouds pointing directly toward the ligand atoms. Any electrons that occupy these orbitals are subject to repulsion by the electron pairs that bind the ligands situated at the corresponding corners of the octahedron. As a consequence, the energies of these two d orbitals will be raised in relation to the three other d orbitals ($d_{xy}$, $d_{xz}$, and $d_{yz}$) whose lobes are not directed toward the octahedral positions.
The number of d electrons in the central atom is easily determined from the location of the element in the periodic table, taking into account, of course, the number of electrons removed in order to form the positive ion. For example, iron (outer configuration $4s^2 3d^6$) loses its two 4s electrons to form the $d^6$ ion $\ce{Fe^{2+}}$.
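For first-row transition metals, which lose their 4s electrons first on ionization, this bookkeeping reduces to subtracting the ionic charge from the group number. A minimal illustrative sketch, with a function name of our own choosing:

```python
def d_electron_count(group: int, charge: int) -> int:
    """d-electron count of a first-row transition-metal ion.

    Assumes the 4s electrons are removed before any 3d electrons,
    so the count is simply the group number minus the ionic charge.
    """
    return group - charge

print(d_electron_count(group=4, charge=3))  # Ti3+ in group 4 -> d1
print(d_electron_count(group=8, charge=2))  # Fe2+ in group 8 -> d6
```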
The effect of the octahedral ligand field due to the ligand electron pairs is to split the d orbitals into two sets whose energies differ by a quantity denoted by Δ ("delta") which is known as the d orbital splitting energy. Note that both sets of central-ion d orbitals are repelled by the ligands and are both raised in energy; the upper set is simply raised by a greater amount. Both the total energy shift and Δ are strongly dependent on the particular ligands.
Why are transition metal complexes often highly colored?
Returning to our example of $\ce{Ti(H2O)6^{3+}}$, we note that Ti has an outer configuration of $4s^2 3d^2$, so that $\ce{Ti^{3+}}$ is a $d^1$ ion. This means that in its ground state one electron occupies the lower group of d orbitals, and the upper group is empty. The d-orbital splitting in this case is 240 kJ per mole, which corresponds to light of blue-green color; absorption of this light promotes the electron to the upper set of d orbitals, which represents the excited state of the complex. If we illuminate a solution of $\ce{Ti(H2O)6^{3+}}$ with white light, the blue-green light is absorbed and the solution appears violet in color.
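The conversion from a molar splitting energy to the wavelength absorbed is just the Planck relation applied per mole, $\lambda = N_A hc/\Delta$. A quick Python check of the figure quoted above, added here as an illustration:

```python
# Convert a d-orbital splitting energy (kJ/mol) into the wavelength of
# light absorbed, using lambda = N_A * h * c / delta.
N_A = 6.022e23   # Avogadro's number, mol^-1
h = 6.626e-34    # Planck's constant, J s
c = 2.998e8      # speed of light, m/s

def splitting_to_wavelength_nm(delta_kJ_mol: float) -> float:
    delta_per_photon = delta_kJ_mol * 1000 / N_A  # energy per photon, J
    return h * c / delta_per_photon * 1e9         # wavelength, nm

print(splitting_to_wavelength_nm(240))  # ~498 nm: blue-green light
```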
High- and low spin complexes
The magnitude of the d orbital splitting depends strongly on the nature of the ligand and in particular on how strong an electrostatic field is produced by its electron pair bond to the central ion.
If Δ is not too large, the electrons that occupy the d orbitals do so with their spins unpaired until a d5 configuration is reached, just as occurs in the normal Aufbau sequence for atomic electron configurations (Hund's rule). Thus a weak-field ligand such as H2O leads to a “high spin” complex with Fe(II).
In contrast to this, the cyanide ion acts as a strong-field ligand; the d orbital splitting is so great that it is energetically more favorable for the electrons to pair up in the lower group of d orbitals rather than to enter the upper group with unpaired spins. Thus hexacyanoferrate(II), $\ce{[Fe(CN)6]^{4-}}$, is a “low spin” complex (actually zero spin, in this particular case).
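The electron bookkeeping behind high- and low-spin configurations is simple enough to express as a short program. The sketch below is our own illustration of the filling rules just described; it counts unpaired electrons for an octahedral $d^n$ ion:

```python
def unpaired_electrons(n_d: int, low_spin: bool) -> int:
    """Unpaired electrons for an octahedral d^n ion.

    The five d orbitals split into a lower t2g set (3 orbitals) and an
    upper eg set (2 orbitals).  Low spin: fill t2g completely before eg.
    High spin: singly occupy all five orbitals before pairing any.
    """
    if low_spin:
        t2g = min(n_d, 6)                  # electrons in the lower set
        eg = n_d - t2g                     # remainder in the upper set
        unpaired = (min(t2g, 3) - max(t2g - 3, 0)) \
                 + (min(eg, 2) - max(eg - 2, 0))
    else:
        singles = min(n_d, 5)              # one electron per orbital first
        unpaired = singles - (n_d - singles)
    return unpaired

# Fe(II) is d6: high-spin with H2O, low-spin (zero-spin) with CN-.
print(unpaired_electrons(6, low_spin=False))  # 4
print(unpaired_electrons(6, low_spin=True))   # 0
```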
Different d orbital splitting patterns occur in square planar and tetrahedral coordination geometries, so a very large number of arrangements are possible. In most complexes the value of Δ corresponds to the absorption of visible light, accounting for the colored nature of many such compounds in solution and in solids such as $\ce{CuSO4·5H2O}$ (Figure $1$).
Coordination Complexes in Biochemistry
Approximately one-third of the chemical elements are present in living organisms. Many of these are metallic ions whose function within the cell depends on the formation of d-orbital coordination complexes with small molecules such as porphyrins (see below). These complexes are themselves bound within proteins (metalloproteins) that provide a local environment essential for their function: to transport or store diatomic molecules (oxygen or nitric oxide), to transfer electrons in oxidation-reduction processes, or to catalyze a chemical reaction. The most common of these utilize complexes of Fe and Mg, but other micronutrient elements including Cu, Mn, Mo, Ni, Se, and Zn are also important.
Hemoglobin
Hemoglobin is one of a group of heme proteins that includes myoglobin, cytochrome-c, and catalase. Hemoglobin performs the essential task of transporting dioxygen from the lungs to the tissues, where it is used to oxidize glucose; this oxidation serves as the source of energy required for cellular metabolic processes.
Hemoglobin consists of four globin protein subunits joined together by weak intermolecular forces. Each of these subunits contains, buried within it, a molecule of heme, which serves as the active site of oxygen transport. The presence of hemoglobin increases the oxygen-carrying capacity of 1 liter of blood from 5 to 250 mL. Hemoglobin is also involved in blood pH regulation and CO2 transport.
Heme itself consists of an iron atom coordinated to a tetradentate porphyrin. The iron binds oxygen only when it is in the ferrous (Fe2+) state; oxidation to Fe3+ destroys its oxygen-carrying ability. Because a bare heme molecule would simply be oxidized by oxygen rather than binding it reversibly, the adduct must be stabilized by the surrounding globin protein. In this environment the iron becomes octahedrally coordinated: a component of the protein binds in the fifth position, and the sixth position is occupied either by an oxygen molecule or by a water molecule, depending on whether the hemoglobin is in its oxygenated state (in arteries) or deoxygenated state (in veins).
The heme molecule is enfolded within the polypeptide chain. The complete hemoglobin molecule contains four of these subunits, and all four must be present for it to function. The binding of O2 to heme in hemoglobin is not a simple chemical equilibrium; the binding efficiency is regulated by the concentrations of H+, CO2, and organic phosphates. It is remarkable that the binding sites for these substances are on the outer parts of the globin units, far removed from the heme. The mechanism of this exquisite molecular remote control arises from the fact that the Fe2+ ion is too large to fit inside the porphyrin, so it sits slightly out of the porphyrin plane. The radius of the iron diminishes when it is oxygenated, allowing it to move into the plane. In doing so, it pulls the protein component to which it is bound along with it, triggering a sequence of structural changes that extend throughout the protein.
Myoglobin is another important heme protein that is found in muscles. Unlike hemoglobin, which consists of four protein subunits, myoglobin is made up of only one unit. Its principal function is to act as an oxygen storage reservoir, enabling vigorous muscle activity at a rate that could not be sustained by delivery of oxygen through the bloodstream. Myoglobin is responsible for the red color of meat. Cooking of meat releases the O2 and oxidizes the iron to the +3 state, changing the color to brown.
Carbon monoxide poisoning
Other ligands, notably cyanide ion and carbon monoxide, are able to bind to the heme iron much more strongly than does oxygen; CO binds to hemoglobin about 200 times more tightly than does $O_2$, displacing it and rendering hemoglobin unable to transport oxygen. Air containing as little as 1 percent CO will convert hemoglobin to carboxyhemoglobin in a few hours, leading to loss of consciousness and death. Even small amounts of carbon monoxide can lead to substantial reductions in the availability of oxygen: the 400-ppm concentration of CO in cigarette smoke will tie up about 6% of the hemoglobin in heavy smokers, and the increased stress this places on the heart as it works harder to compensate for the oxygen deficit is believed to be one reason why smokers are at higher risk for heart attacks.
Chlorophyll
Chlorophyll is the light-harvesting pigment present in green plants. Its name comes from the Greek word χλωρός (chloros), meaning “green”, the same root from which chlorine gets its name. Chlorophyll consists of a ring-shaped tetradentate ligand known as a porphin coordinated to a central magnesium ion. A histidine residue from one of several types of associated proteins forms a fifth coordinate bond to the Mg atom.
The light energy trapped by chlorophyll is used to drive a sequence of reactions whose net effect is to bring about the reduction of CO2 to glucose (C6H12O6), which serves as the fuel for all life processes in both plants and animals. This process is known as photosynthesis.
Learning Objectives
• Explain the fundamental difference between the bonding in metallic solids compared to that in other types of solids and within molecules. Name some physical properties of metals that reflect this difference.
• Sketch out a diagram illustrating how a simple molecular-orbital approach to bonding in metals of Groups 1 and 2 always leaves some upper MO's empty.
• Describe, at the simplest level, the origin of electron "bands" in metals.
• Describe how the electrical and thermal conductivity of metals can be explained according to band theory.
• Explain why the electrical conductivity of a metal decreases with temperature, whereas that of a semiconductor increases.
Most of the known chemical elements are metals, and many of these combine with each other to form a large number of intermetallic compounds. The special properties of metals— their bright, lustrous appearance, their high electrical and thermal conductivities, and their malleability— suggest that these substances are bound together in a very special way.
Properties of metals
The fact that the metallic elements are found on the left side of the periodic table offers an important clue to the nature of how they bond together to form solids.
• These elements all possess low electronegativities and readily form positive ions $\ce{M^{n+}}$. Because they show no tendency to form negative ions, the kind of bonding present in ionic solids can immediately be ruled out.
• The metallic elements have empty or nearly-empty outer p-orbitals, so there are never enough outer-shell electrons to place an octet around an atom.
These points lead us to the simplest picture of metals, which regards them as a lattice of positive ions immersed in a “sea of electrons” which can freely migrate throughout the solid. In effect the electropositive nature of the metallic atoms allows their valence electrons to exist as a mobile fluid which can be displaced by an applied electric field, hence giving rise to their high electrical conductivities. Because each ion is surrounded by the electron fluid in all directions, the bonding has no directional properties; this accounts for the high malleability and ductility of metals.
This view is an oversimplification: it fails to explain metals in a quantitative way, and it cannot account for the differences in the properties of individual metals. A more detailed treatment, known as the bond theory of metals, applies the idea of resonance hybrids to metallic lattices. In the case of an alkali metal, for example, this would involve a large number of hybrid structures in which a given Na atom shares its electron with its various neighbors.
Molecular orbitals in metals
Consider how the molecular orbitals of successively larger assemblies of metal atoms will look. These are all constructed by combining the individual atomic orbitals, yielding MOs that are so closely spaced in energy that they form what is known as a band of allowed energies. In metallic lithium only the lower half of this band is occupied.
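The crowding of levels into a band can be seen in even the simplest Hückel (tight-binding) model of a chain of N atoms, each contributing one valence orbital: the MO energies are $E_j = \alpha + 2\beta \cos[j\pi/(N+1)]$, $j = 1 \dots N$. The NumPy sketch below is an illustration of this model; the values of α and β are arbitrary, not parameters fitted to lithium.

```python
import numpy as np

def chain_mo_energies(n_atoms: int, alpha: float = 0.0, beta: float = -1.0):
    """MO energies of a linear chain of identical atoms, one orbital each
    (Hueckel / tight-binding model with nearest-neighbor interaction beta)."""
    j = np.arange(1, n_atoms + 1)
    return alpha + 2 * beta * np.cos(j * np.pi / (n_atoms + 1))

for n in (2, 6, 50):
    e = chain_mo_energies(n)
    print(f"N = {n:2d}: {n} levels spanning {e.min():.3f} to {e.max():.3f}")
# As N grows, the discrete levels merge into a quasi-continuous band of
# width approaching 4|beta|; with one valence electron per atom (as in
# Li) only the lower half of this band is filled.
```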
Origin of Metallic Properties
Metallic solids possess special properties that set them apart from other classes of solids and make them easy to identify and familiar to everyone. All of these properties derive from the liberation of the valence electrons from the control of individual atoms, allowing them to behave as a highly mobile fluid that fills the entire crystal lattice. What were previously valence-shell orbitals of individual atoms become split into huge numbers of closely-spaced levels known as bands that extend throughout the crystal.
Melting Point and Strength
The strength of a metal derives from the electrostatic attraction between the lattice of positive ions and the fluid of valence electrons in which they are immersed. The larger the nuclear charge (atomic number) of the atomic kernel and the smaller its size, the greater this attraction. As with many other periodic properties, these two factors work in opposite directions, as is seen by comparing the melting points of some of the Group 1-3 metals. Other factors, particularly the lattice geometry, are also important, so exceptions such as that seen in Mg are not surprising.
In general, the transition metals with their valence-level d electrons are stronger and have higher melting points: Fe 1539°C, Os 2727°C, Re 3180°C, W 3380°C.
Malleability and ductility
These terms refer respectively to how readily a solid can be shaped by pressure (forging, hammering, rolling into a sheet) and by being drawn out into a wire. Metallic solids are known and valued for these qualities, which derive from the non-directional nature of the attractions between the kernel atoms and the electron fluid. The bonding within ionic or covalent solids may be stronger, but it is also directional, making these solids subject to fracture (brittle) when struck with a hammer, for example. A metal, by contrast, is more likely to be simply deformed or dented.
Electrical conductivity: why are metals good conductors?
In order for a substance to conduct electricity, it must contain charged particles (charge carriers) that are sufficiently mobile to move in response to an applied electric field. In the case of ionic solutions and melts, the ions themselves serve this function. (Ionic solids contain the same charge carriers, but because they are fixed in place, these solids are insulators.) In metals the charge carriers are the electrons, and because they move freely through the lattice, metals are highly conductive. The very low mass and inertia of the electrons allows them to conduct high-frequency alternating currents, something that electrolytic solutions are incapable of. In terms of the band structure, application of an external field simply raises some of the electrons to previously unoccupied levels which possess greater momentum.
The conductivity of an electrolytic solution decreases as the temperature falls, owing to the increased viscosity, which inhibits ionic mobility. The mobility of the electron fluid in metals is practically unaffected by temperature, but metals do suffer a slight conductivity decrease (opposite to ionic solutions) as the temperature rises; this happens because the more vigorous thermal motions of the kernel ions disrupt the uniform lattice structure that is required for free motion of the electrons within the crystal.
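The contrast with semiconductors noted in the learning objectives can be made concrete with two rough scaling laws: metallic conductivity falls roughly as 1/T (lattice vibrations scatter the electron fluid), while semiconductor conductivity rises as $e^{-E_g/2kT}$ because carriers must be thermally excited across a band gap. The sketch below is purely illustrative; the silicon-like gap of 1.1 eV and the normalizations are assumptions, not measured data.

```python
import math

k_eV = 8.617e-5   # Boltzmann constant, eV/K
E_g = 1.1         # assumed band gap (silicon-like), eV

for T in (250, 300, 350):
    sigma_metal = 300 / T                         # ~1/T, normalized to 1 at 300 K
    sigma_semi = math.exp(-E_g / (2 * k_eV * T))  # carrier activation factor
    print(f"T = {T} K: metal ~ {sigma_metal:.2f}, semiconductor ~ {sigma_semi:.2e}")
```

The opposite signs of the two temperature coefficients are evident: warming degrades a metal's conductivity slightly but increases a semiconductor's enormously.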
Metals conduct electricity readily because of the essentially infinite supply of higher-energy empty MOs that electrons can populate as they acquire higher kinetic energies. In beryllium, for example, overlapping bands (explained further on) supply these empty levels. The MO levels are so closely spaced that even thermal energies can provide excitation and cause heat to spread rapidly through the solid.
Electrical conductivities of the metallic elements vary over a wide range. Silver is the most conductive metal, with copper close behind; these two are in a class by themselves, followed by gold and aluminum.
Thermal conductivity: why do metals conduct heat?
Everyone knows that touching a metallic surface at room temperature produces a colder sensation than touching a piece of wood or plastic at the same temperature. The very high thermal conductivity of metals allows them to draw heat out of our bodies very efficiently if they are below body temperature. In the same way, a metallic surface that is above body temperature will feel much warmer than one made of some other material. The high thermal conductivity of metals is attributed to vibrational excitations of the fluid-like electrons; this excitation spreads through the crystal far more rapidly than it does in non-metallic solids, which depend on the vibrational motions of the much heavier atoms, with their correspondingly greater inertia.
Appearance: Why are metals shiny?
We usually recognize a metal by its “metallic luster”, which refers to its ability to reflect light. When light falls on a metal, its rapidly changing electromagnetic field induces similar motions in the more loosely bound electrons near the surface (this could not happen if the electrons were confined to the atomic valence shells). A vibrating charge is itself an emitter of electromagnetic radiation, so the effect is to cause the metal to re-emit, or reflect, the incident light, producing the shiny appearance. What color is a metal? With the two exceptions of copper and gold, the closely spaced levels in the bands allow metals to absorb all wavelengths equally well, so most metals are basically black; this is ordinarily evident only when the metallic particles are so small that the band structure is not established.
The distinctive color of gold is a consequence of special relativity: the inner-shell electrons move at such high speeds that their relativistic mass increase causes their orbitals to contract. The outer 5d electrons are less affected; the net result is increased absorption of blue light and thus enhanced reflection of yellow and red light.
Thermionic Effect
The electrons within the electron fluid have a distribution of velocities very much like that of molecules in a gas. When a metal is heated sufficiently, a fraction of these electrons acquire sufficient kinetic energy to escape the metal altogether; some of the electrons are essentially “boiled out” of the metal. This thermionic effect, which was first observed by Thomas Edison, was utilized in vacuum tubes, which served as the basis of electronics from its beginning around 1910 until semiconductors became dominant in the 1960s.
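The rate at which electrons “boil off” is described quantitatively by Richardson's law, $J = A T^2 e^{-W/kT}$, where W is the metal's work function. The following sketch uses the theoretical Richardson constant and an approximate work function for tungsten (about 4.5 eV); both figures are typical textbook values supplied here for illustration, not taken from the text above.

```python
import math

A0 = 1.20e6       # Richardson constant, A m^-2 K^-2 (theoretical value)
k_eV = 8.617e-5   # Boltzmann constant, eV/K
W = 4.5           # approximate work function of tungsten, eV

def emission_current_density(T: float) -> float:
    """Thermionic emission current density (A/m^2) at temperature T (K)."""
    return A0 * T**2 * math.exp(-W / (k_eV * T))

for T in (1500, 2000, 2500):
    print(f"T = {T} K: J = {emission_current_density(T):.2e} A/m^2")
# The exponential factor makes emission utterly negligible at room
# temperature but substantial at the operating temperature of a
# vacuum-tube filament.
```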
Band Structure of Metals
Most metals are made of atoms that have an outer configuration of s2, which we would expect to completely fill the band of MOs we have described. With the band completely filled and no empty levels above, we would not expect elements such as beryllium to be metallic. What happens is that the empty p orbitals also split into a band. Although the energy of the 2p orbital of an isolated Be atom is about 160 kJ/mol greater than that of the 2s orbital, the bottom part of the 2p band overlaps the upper part of the 2s band, yielding a continuous conduction band that has plenty of unoccupied orbitals. It is only when these bands become filled with 2p electrons that the elements lose their metallic character.
The band structure of a third-row metal such as Na or Mg arises from MO splitting that can already be seen in very small units M2 through M6; in the limit of the “infinite” molecule MN, these closely spaced levels merge into continuous conduction bands.
In most metals there will be bands derived from the outermost s-, p-, and d atomic levels, leading to a system of bands, some of which will overlap as described above. Where overlap does not occur, the almost continuous energy levels of the bands are separated by a forbidden zone, or band gap. Only the outermost atomic orbitals form bands; the inner orbitals remain localized on the individual atoms and are not involved in bonding.
In its mathematical development, the band model relies strongly on the way that the free electrons within the metal interact with the ordered regularity of the crystal lattice. An alternative view emphasizes this aspect by treating the inner orbitals as localized to the atomic cores, while the valence electrons are delocalized and belong to the metal as a whole, which in a sense constitutes a huge molecule in its own right.