Thumbnail: Lime Jell-O. (CC BY 2.0; Gisela Francisco).
Contributors and Attributions
Sorangel Rodriguez-Velazquez (American University). Chemistry of Cooking by Sorangel Rodriguez-Velazquez is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
01: Thickening and Concentrating Flavors
Viscous means “sticky,” and the term viscosity refers to the way in which the chocolate flows. Chocolate comes in various viscosities, and the confectioner chooses the one that is most appropriate to his or her needs. The amount of cocoa butter in the chocolate is largely responsible for the viscosity level. Emulsifiers like lecithin can help thin out melted chocolate so it flows evenly and smoothly. Because lecithin is less expensive than cocoa butter and is effective at thinning chocolate, it can be used to help lower the cost of chocolate.
Molded pieces such as Easter eggs require a chocolate of less viscosity. That is, the chocolate should be somewhat runny so it is easier to flow into the moulds. This is also the case for coating cookies and most cakes, where a thin, attractive and protective coating is all that is needed. A somewhat thicker chocolate is advisable for things such as ganache and flavoring of creams and fillings. Where enrobers (machines to dip chocolate centers) are used, the chocolate may also be thinner to ensure that there is an adequate coat of couverture.
Viscosity varies between manufacturers, and a given type of chocolate made by one manufacturer may be available in more than one viscosity. Bakers sometimes alter the viscosity depending on the product. A vegetable oil is sometimes used to thin chocolate for coating certain squares. This makes it easier to cut afterwards.
Chips, Chunks, and Other Baking Products
Content and quality of chocolate chips and chunks vary from one manufacturer to another. This chocolate is developed to be more heat stable for use in cookies and other baking where you want the chips and chunks to stay whole. Ratios of chocolate liquor, sugar, and cocoa butter differ. All these variables affect the flavor. Chips and chunks may be pure chocolate or have another fat substituted for the cocoa butter. Some high quality chips have up to 65% chocolate liquor, but in practice, liquor content over 40% tends to smear in baking, so high ratios defeat the purpose.
Many manufacturers package their chips or chunks by count (ct) size. This refers to how many pieces there are in 1 kg of the product. As the count size number increases, the size of the chip gets smaller. With this information, you can choose the best size of chip for the product you are producing.
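To see how count size translates into piece weight, here is a minimal sketch in Python (the count values chosen are hypothetical, not manufacturer figures); it simply divides 1 kg by the count to estimate the mass of a single chip.

```python
def grams_per_piece(count_per_kg: int) -> float:
    """Approximate mass of one chip, given its count size (pieces per 1 kg)."""
    return 1000 / count_per_kg

# Hypothetical count sizes for comparison: a larger count means a smaller chip.
for count in (1000, 4000, 10000):
    print(f"{count} ct: about {grams_per_piece(count):.2f} g per chip")
```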
Other chocolate products available are chocolate sprinkles or “hail,” used as a decoration; chocolate curls, rolls, or decorative shapes for use on cakes and pastries; and chocolate sticks or “batons,” which are often baked inside croissants.
1.02: Thickening Agents
Learning Objectives
• Identify and describe thickening agents used in the food service industry
• Describe the production and properties of thickening agents
• Describe the function of thickening agents in baking
Two types of thickening agents are recognized: starches and gums. Most thickening agents are of vegetable origin; the only exception is gelatin. All the starches are products of the land; some of the gums are of marine origin.
Bakers use thickening agents primarily to:
• Make fillings easier to handle and bake
• Firm up products to enable them to be served easily
• Provide a glossy “skin” to improve finish and reduce drying
Cornstarch
Cornstarch is the most common thickening agent used in the industry. It is mixed with water or juice and boiled to make fillings and to give a glossy semi-clear finish to products. Commercial cornstarch is made by soaking maize in water containing sulphur dioxide. The soaking softens the corn and the sulphur dioxide prevents possible fermentation. It is then crushed and passed to water tanks where the germ floats off. The mass is then ground fine and, still in a semi-fluid state, passed through silk screens to remove the skin particles. After filtration, the product, which is almost 100% starch, is dried.
Cornstarch in cold water is insoluble, granular, and will settle out if left standing. However, when cornstarch is cooked in water, the starch granules absorb water, swell, and rupture, forming a translucent thickened mixture. This phenomenon is called gelatinization. Gelatinization usually begins at about 60°C (140°F), reaching completion at the boiling point.
The commonly used ingredients in a starch recipe affect the rate of gelatinization of the starch. Sugar, added in a high ratio to the starch, will inhibit the granular swelling. The starch gelatinization will not be completed even after prolonged cooking at normal temperature. The result is a filling of thin consistency, dull color, and a cereal taste. Withhold some of the sugar from the cooking step in such cases, and add it after gelatinization of the starch has been completed.
Other ingredients such as egg, fat, and dry milk solids have a similar effect. Fruits with high acidity such as rhubarb will also inhibit starch setting. Cook the starch paste first and add the fruit afterward.
In cooking a filling, about 1.5 kg (3 1/3 lb.) of sugar should be cooked with the water or juice for every 500 g (18 oz.) of starch used as a thickener. Approximately 100 g (4 oz.) of starch is used to thicken 1 L of water or fruit juice. The higher the acidity of the fruit juice, the more thickener required to hold the gel. Regular cornstarch thickens well but makes a cloudy solution. Another kind of cornstarch, waxy maize starch, makes a more fluid mix of great clarity.
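As a worked example of these ratios, the short Python sketch below (an estimate only, using the figures quoted above of roughly 100 g starch per litre of liquid and 1.5 kg sugar per 500 g starch) scales the starch and sugar for a given volume of juice; acidic juices will still need more thickener than the calculation suggests.

```python
def filling_quantities(juice_litres: float) -> dict:
    """Rough starch and sugar amounts for a cooked fruit filling,
    based on ~100 g starch per 1 L liquid and ~1.5 kg sugar per 500 g starch."""
    starch_g = 100 * juice_litres
    sugar_g = starch_g * (1500 / 500)  # 1.5 kg sugar for every 500 g starch
    return {"starch_g": starch_g, "sugar_g": sugar_g}

# Example: 2.5 L of fruit juice -> about 250 g starch and 750 g sugar.
print(filling_quantities(2.5))
```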
Pre-gelatinized Starches
Pre-gelatinized starches are mixed with sugar and then added to the water or juice. They thicken the filling in the presence of sugar and water without heating. This is due to the starch being precooked and not requiring heat to enable it to absorb and gelatinize. There are several brands of these starches on the market (e.g., Clear Jel), and they all vary in absorption properties. For best results, follow the manufacturer’s guidelines. Do not put pre-gelatinized starch directly into water, as it will form lumps immediately.
Note
If fruit fillings are made with these pre-cooked starches, there is a potential for breakdown if the fillings are kept. Enzymes in the uncooked fruit may “attack” the starch and destroy some of the gelatinized structure. For example, if you are making a week’s supply of pie filling from fresh rhubarb, use a regular cooked formula.
Arrowroot
Arrowroot is a highly nutritious farinaceous starch obtained from the roots and tubers of various West Indian plants. It is used in the preparation of delicate soups, sauces, puddings, and custards.
Agar-Agar
Agar-agar is a jelly-like substance extracted from red seaweed found off the coasts of Japan, California, and Sri Lanka. It is available in strips or slabs and in powder form. Agar-agar only dissolves in hot water and is colorless. Use it at 1% to make a firm gel. It has a melting point much higher than gelatin and its jellying power is eight times greater. It is used in pie fillings and to some extent in the stiffening of jams. It is a permitted ingredient in some dairy products, including ice cream at 0.5%. One of its largest uses is in the production of materials such as piping jelly and marshmallow.
Algin (Sodium Alginate)
Extracted from kelp, this gum dissolves in cold water, and a 1% concentration gives a firm gel. It has the disadvantage of not working well in the presence of acidic fruits. It is popular in uncooked icings because it works well in the cold state and holds a lot of moisture. It reduces stickiness and prevents recrystallization.
Carrageenan or Irish Moss
Carrageenan is another marine gum extracted from red seaweed. It is used as a thickening agent in various products, from icing stabilizers to whipping cream, at an allowable rate of 0.1% to 0.5%.
Gelatin
Gelatin is a glutinous substance made from the bones, connective tissues, and skins of animals. The calcium is removed and the remaining substance is soaked in cold water. Then it is heated to 40°C to 60°C (105°F to 140°F). The partially evaporated liquid is defatted and coagulated on glass plates and then poured into moulds. When solid, the blocks of gelatin are cut into thin layers and dried on wire netting.
Gelatin is available in sheets of leaf gelatin, powders, granules, or flakes. Use it at a 1% ratio. Like some of the other gelling agents, acidity adversely affects its gelling capacity.
The quality of gelatin often varies because of different methods of processing and manufacturing. For this reason, many bakers prefer leaf gelatin because of its reliable strength.
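Because the gelling agents above are dosed as a percentage of the liquid’s weight (agar-agar, algin, and gelatin at about 1%, carrageenan at 0.1% to 0.5%), a small calculator keeps the arithmetic straight. The Python sketch below is only an illustration of that percentage arithmetic, not a formulation guide.

```python
# Usage rates quoted in this section, expressed as a fraction of the liquid's weight.
USAGE_RATE = {
    "agar-agar": 0.01,    # ~1% for a firm gel
    "algin": 0.01,        # ~1% concentration
    "carrageenan": 0.005, # upper end of the 0.1%-0.5% range
    "gelatin": 0.01,      # ~1% ratio
}

def dose_grams(agent: str, liquid_grams: float) -> float:
    """Grams of gelling agent needed for a given weight of liquid."""
    return liquid_grams * USAGE_RATE[agent]

print(f"Agar-agar for 1 kg of liquid: {dose_grams('agar-agar', 1000):.0f} g")
```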
Gum Arabic or Acacin
This gum is obtained from various kinds of trees and is soluble in hot or cold water. Solutions of gum arabic are used in the bakery for glazing various kinds of goods, particularly marzipan fruits.
Gum Tragacanth
This gum is obtained from several species of Astragalus, low-growing shrubs found in Western Asia. It can be purchased in flakes or powdered form. Gum tragacanth was once used to make gum paste and gum paste wedding ornaments, but due to high labour costs and a prohibitive price for the product, its use nowadays is uncommon.
Pectin
Pectin is a mucilaginous substance (gummy substance extracted from plants), occurring naturally in pears, apples, quince, oranges, and other citrus fruits. It is used as the gelling agent in traditional jams and jellies.
1.04: Coagulation
Coagulation is defined as the transformation of proteins from a liquid state to a solid form. Once proteins are coagulated, they cannot be returned to their liquid state. Coagulation often begins around 38°C (100°F), and the process is complete between 71°C and 82°C (160°F and 180°F). Within the baking process, the natural structures of the ingredients are altered irreversibly by a series of physical, chemical, and biochemical interactions. The three main types of protein that cause coagulation in the bakeshop are outlined below.
Egg proteins
Eggs contain many different proteins. The white, or albumen, contains approximately 40 different proteins, the most predominant being ovalbumin (54%) and ovotransferrin (12%). The yolk contains mostly lipids (fats), but also lipoproteins. These different proteins will all coagulate when heated, but do so at different temperatures. The separated white of an egg coagulates between 60°C and 65°C (140°F and 149°F) and the yolk between 62°C and 70°C (144°F and 158°F), which is why you can cook an egg and have a fully set white and a still runny yolk. These temperatures are raised when eggs are mixed into other liquids. For example, the coagulation and thickening of an egg, milk, and sugar mixture, as in custard, will take place between 80°C and 85°C (176°F and 185°F) and will start to curdle at 88°C to 90°C (190°F and 194°F).
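Since only a few degrees separate a thickened custard from a curdled one, the temperature window above can be expressed as a simple check. The Python sketch below uses only the thresholds quoted in this paragraph (thickening at roughly 80°C to 85°C, curdling starting around 88°C) and is meant as an illustration, not a substitute for a thermometer and attention.

```python
def custard_status(temp_c: float) -> str:
    """Classify the state of an egg, milk, and sugar custard by temperature,
    using the ranges given above (thickens ~80-85 C, starts to curdle ~88 C)."""
    if temp_c < 80:
        return "below the thickening range"
    elif temp_c <= 85:
        return "thickening range -- stir constantly and watch closely"
    elif temp_c < 88:
        return "above the ideal range -- remove from the heat"
    else:
        return "curdling range -- the eggs are scrambling"

for t in (75, 82, 86, 90):
    print(f"{t} C: {custard_status(t)}")
```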
Dairy and soy proteins
Casein, the principal protein in milk, coagulates to form the semi-solid curd used primarily in cheese making. Rennet, derived from the stomach linings of cattle, sheep, and goats, is used to coagulate, or thicken, milk during the cheese-making process. Plant-based rennet is also available. Chymosin (also called rennin) is the active enzyme in rennet, and it is responsible for curdling the milk, which will then separate into solids (curds) and liquid (whey).
Milk and milk products will also coagulate when treated with an acid, such as citric acid (lemon juice) or vinegar, used in the preparation of fresh ricotta, and tartaric acid, used in the preparation of mascarpone, or will naturally curdle when sour as lactic acid develops in the milk. In some cases, as in the production of yogurt or crème fraîche, acid-causing bacteria are added to the milk product to cause the coagulation. Similarly, tofu is made from soybean milk that has been coagulated with the use of either salt, acid, or enzyme-based coagulants.
Flour proteins (gluten)
Two main proteins are found in wheat flour: glutenin and gliadin (smaller quantities are also found in other grains). During mixing and in contact with liquid, these two form into a stretchable substance called gluten. The coagulation of gluten is what happens when bread bakes; that is, it is the firming or hardening of these gluten proteins, usually caused by heat, which solidify to form a firm structure.
Hydrocolloids
A hydrocolloid is a substance that forms a gel in contact with water. There are two main categories:
• Thermo-reversible gel: A gel that melts upon reheating and sets upon cooling. Examples are gelatin and agar agar.
• Thermo-irreversible gel: A gel that does not melt upon reheating. Examples are cornstarch and pectin. Excessive heating, however, may cause evaporation of the water and shrinkage of the gel.
Hydrocolloids do not hydrate (or dissolve) instantly; hydration is associated with swelling, which easily causes lumping. It is therefore necessary to disperse hydrocolloids in water. Classically, this has been done with cornstarch, where a portion of the liquid from the recipe is mixed with the starch to form a “slurry” before being added to the cooking liquid. Dispersion can also be done with an immersion blender or a conventional blender, or by mixing the hydrocolloid with a helping agent such as sugar, oil, or alcohol prior to dispersion in water.
Starches
Starch gelatinization is the process where starch and water are subjected to heat, causing the starch granules to swell. As a result, the water is gradually absorbed in an irreversible manner. This gives the system a viscous and transparent texture. The result of the reaction is a gel, which is used in sauces, puddings, creams, and other food products, providing a pleasing texture. Starch-based gels are thermo-irreversible, meaning that they do not melt upon heating (unlike gelatin, which we will discuss later). Excessive heating, however, may cause evaporation of the water and shrinkage of the gel.
The most common examples of starch gelatinization are found in sauce and pasta preparations and baked goods. In sauces, starches are added to liquids, usually while heating.
• The starch will absorb liquid and swell, resulting in the liquid becoming thicker. The type of starch determines the final product. Some starches will remain cloudy when cooked; others will remain clear.
• Pasta is made mostly of semolina wheat (durum wheat flour), which contains high amounts of starch. When pasta is cooked in boiling water, the starch in the pasta swells as it absorbs water, and as a result the texture of the pasta softens.
Starch molecules make up the majority of most baked goods, so starch is an important part of the structure. Although starches by themselves generally can’t support the shape of the baked items, they do give bulk to the structure. Starches develop a softer structure when baked than proteins do. The softness of the crumb of baked bread is due largely to the starch. The more protein structure there is, the chewier the bread.
Starches can be fairly straightforward extracts of plants, such as cornstarch, tapioca, or arrowroot, but there are also modified starches and pre-gelatinized starches available that have specific uses. See Table 1 for a list of different thickening and binding agents and their characteristics.
Table 1 – Common starches and gels used in the bakeshop
Cornstarch
• Ratio: 20-40 g starch thickens 1 L liquid
• Preparation: A slurry (mixture of cornstarch and water) is mixed and added to a simmering liquid while whisking until it dissolves and the liquid thickens; or cornstarch is mixed with sugar and cold liquid is added, then the thickened mixture is simmered until no starch taste remains
• Characteristics and uses: Used to thicken sauces when a clear, glossy texture is desired, such as dessert sauces and in Asian-inspired dishes; translucent, thickens further as it cools and forms a “sliceable” gel; sensitive to extended heat exposure, so products become thin if held at heat for long periods of time
Agar agar
• Ratio: 15-30 g agar agar sets 1 L liquid
• Preparation: Powder dissolved in cold water, added to cold or simmering liquid; activates with heat, sets when cold
• Characteristics and uses: Extracted from seaweed; used in Asian desserts and molecular gastronomy cooking, and in place of gelatin in vegetarian dishes; clear, firm texture; does not thin when reheated, thickens more when cold
Waxy maize, waxy rice
• Ratio: 20-40 g starch thickens 1 L liquid
• Preparation: Dissolved in cold water, then added to hot liquid while whisking until it dissolves and the liquid thickens
• Characteristics and uses: Used in desserts and dessert sauces; clear, does not thicken further as it cools; does not gel at cool temperatures, good for cold sauces; quite stable at extreme temperatures (heat and freezing)
Modified starches
• Ratio: 20-40 g starch thickens 1 L liquid
• Preparation: Dissolved in cold water, then added to hot liquid while whisking until it dissolves and the liquid thickens
• Characteristics and uses: Often used in commercially processed foods and convenience products; modified to improve specific characteristics (e.g., stability or texture under extreme conditions of heat and freezing); translucent, thickens further as it cools
Pre-gelatinized starches
• Ratio: 20-40 g starch thickens 1 L liquid
• Preparation: Powder dissolved in cold liquid, added to liquid at any temperature
• Characteristics and uses: Used when thickening liquids that might lose color or flavor during cooking; become viscous without the need for additional cooking; translucent, fairly clear, shiny; does NOT gel when cold
Arrowroot
• Ratio: 20-40 g starch thickens 1 L liquid
• Preparation: Powder dissolved in cold liquid, added to hot liquid while whisking until it dissolves and the liquid thickens
• Characteristics and uses: Derived from cassava root; used in Asian cuisines; very clear, with a somewhat gooey texture; translucent, shiny, very light gel when cold
Gelatin
• Ratio: 15-30 g gelatin sets 1 L liquid
• Preparation: Powder or sheets (leaves) dissolved in cold water, added to cold or simmering liquid; activates with heat, sets when cold
• Characteristics and uses: Derived from collagens in the bones and meats of animals; used in aspic, glazes, cold sauces, and desserts; clear, firm texture; dissolves when reheated, thickens when cold
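The ratio column in Table 1 can be turned into a quick estimate of how much thickener a batch needs. The Python sketch below uses the midpoints of the ranges listed in the table and should be read as a starting point only, since the exact amount depends on the liquid and the texture wanted.

```python
# Midpoints of the ratio ranges in Table 1 (grams of agent per litre of liquid).
GRAMS_PER_LITRE = {
    "cornstarch": 30,              # 20-40 g thickens 1 L
    "agar agar": 22.5,             # 15-30 g sets 1 L
    "waxy maize": 30,              # 20-40 g thickens 1 L
    "modified starch": 30,         # 20-40 g thickens 1 L
    "pre-gelatinized starch": 30,  # 20-40 g thickens 1 L
    "arrowroot": 30,               # 20-40 g thickens 1 L
    "gelatin": 22.5,               # 15-30 g sets 1 L
}

def thickener_needed(agent: str, litres: float) -> float:
    """Approximate grams of thickener or gelling agent for a volume of liquid."""
    return GRAMS_PER_LITRE[agent] * litres

print(f"Cornstarch for 1.5 L of sauce: about {thickener_needed('cornstarch', 1.5):.0f} g")
```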
Gelling agents
Gelatin is a water-soluble protein extracted from animal tissue and used as a gelling agent, a thickener, an emulsifier, a whipping agent, a stabilizer, and a substance that imparts a smooth mouth feel to foods. It is thermo-reversible, meaning the setting properties or action can be reversed by heating. Gelatin is available in two forms: powder and sheet (leaf). Gelatin is often used to stabilize whipped cream and mousses; confectionery, such as gummy bears and marshmallows; desserts including pannacotta; commercial products like Jell-O; “lite” or low-fat versions of foods including some margarines; and dairy products such as yogurt and ice cream. Gelatin is also used in hard and soft gel capsules for the pharmaceutical industry.
Agar agar is an extract from red algae and is often used to stabilize emulsions or foams and to thicken or gel liquids. It is thermo-reversible and heat resistant. It is typically hydrated in boiling liquids and is stable across a wide range of acidity levels. It begins to gel once it cools to around 40°C (104°F) and will not melt until it reaches 85°C (185°F).
Pectin
Pectin is taken from citrus and other tree fruits (apples, pears, etc.). Pectin is found in many different foods such as jam, milk-based beverages, jellies, sweets, and fruit juices. Pectin is also used in molecular gastronomy mainly as a gelling agent, thickener, and stabilizer.
There are a variety of types of pectin that react differently according to the ingredients used. Low-methoxyl pectin (which is activated with the use of calcium for gelling) and high-methoxyl pectin that requires sugar for thickening are the two most common types used in cooking. High-methoxyl pectin is what is traditionally used to make jams and jellies. Low-methoxyl pectin is often used in modern cuisine due to the thermo-irreversible gel that it forms and its good reaction to calcium. Its natural capability to emulsify and gel creates stable preparations.
Increasingly, cooks, bakers, and pastry chefs are turning to many different gels, chemicals, and other substances used in commercial food processing as new ingredients to modify liquids or other foods. These will be outlined in detail in the section on molecular gastronomy.
Many factors can influence crystallization in food. Controlling the crystallization process can affect whether a particular product is spreadable, or whether it will feel gritty or smooth in the mouth. In some cases, crystals are something you try to develop; in others, they are something you try to avoid. It is important to know the characteristics and quality of the crystals in different foods. Butter, margarine, ice cream, sugar, and chocolate all contain different types of crystals, and many of them contain fat crystals. Ice cream, for example, has fat crystals, ice crystals, and sometimes lactose crystals.
The fact that sugar solidifies into crystals is extremely important in candy making. There are basically two categories of candies: crystalline (candies that contain crystals in their finished form, such as fudge and fondant) and non-crystalline (candies that do not contain crystals, such as lollipops, taffy, and caramels). Recipe ingredients and procedures for non-crystalline candies are specifically designed to prevent the formation of sugar crystals, because crystals give the resulting candy a grainy texture. One way to prevent the crystallization of sucrose in candy is to make sure that there are other types of sugar, usually fructose and glucose, to get in the way and slow down or inhibit the process. Acids can also be added to “invert” the sugar, and to prevent or slow down crystallization. Fats added to certain confectionery items will have a similar effect.
When boiling sugar for any application, the formation of crystals is generally not desired. These are some of the things that can promote crystal growth:
• Pot and utensils that are not clean
• Sugar with impurities in it (A scoop used in the flour bin, and then used for sugar, may have enough particles on it to promote crystallization.)
• Water with a high mineral content (“hard water”)
• Too much stirring (agitation) during the boiling phase
Crystallization may be prevented by adding an interferent, such as acid (lemon, vinegar, tartaric, etc.) or glucose or corn syrup, during the boiling procedure. As mentioned above, ice cream can have ice and fat crystals that co-exist along with other structural elements (emulsion, air cells, and hydrocolloid stabilizers such as locust bean gum) that make up the “body” of the ice cream. Some of these components crystallize either partially or completely. The bottom line is that the nature of the crystalline phase in the food will determine the quality, appearance, texture, feel in the mouth, and stability of the product. The texture of ice cream is derived, in part, from the large number of small ice crystals. These small ice crystals provide a smooth texture with excellent melt-down and cooling properties. When these ice crystals grow larger during storage (recrystallization), the product becomes coarse and less enjoyable. Similar concerns apply to sugar crystals in fondant and frostings, and to fat crystals in chocolate, butter, and margarine.
Control of crystallization in fats is important in many food products, including chocolate, margarine, butter, and shortening. In these products, the aim is to produce the appropriate number, size, and distribution of crystals in the correct shape because the crystalline phase plays such a large role in appearance, texture, spreadability, and flavor release. Thus, understanding the processes that control crystallization is critical to controlling quality in these products.
To control crystallization in foods, certain factors must be controlled:
• Number and size of crystals
• Crystal distribution
• Proper polymorph (crystal shape)
Crystallization is important in working with chocolate. The tempering process, sometimes called precrystallization, is an important step that is used for decorative and moulding purposes, and is a major contributor to the mouth feel and enjoyment of chocolate. Tempering is a process that encourages the cocoa butter in the chocolate to harden into a specific crystalline pattern, which maintains the sheen and texture for a long time. When chocolate isn’t tempered properly it can have a number of problems. For example, it may not ever set up hard at room temperature; it may become hard, but look dull and blotchy; the internal texture may be spongy rather than crisp; and it can be susceptible to fat bloom, meaning the fats will migrate to the surface and make whitish streaks and blotches.
Non-traditional thickeners
In addition to traditional starches, there are new ways to thicken sauces and to change the texture of liquids. Some of these thickening agents work without heating and are simply blended with the cold liquid, such as modified starch or xanthan gum. These allow the creation of sauces and other liquids with a fresh, uncooked taste.
Foams, froths, and bubbles
Liquids can be stabilized with gelatin, lecithin, and other ingredients, and then used to create foams by whipping or using a special dispenser charged with nitrogen gas. A well-made foam adds an additional flavor dimension to the dish without adding bulk, and an interesting texture as the foam dissolves in the mouth (Figure 1).
Figure 1. “Dinner in the Dark 21-Dessert” by Esther Little is licensed under CC BY SA 2.0
Espuma
Espuma is the Spanish term for froth or foam, and it is created with the use of a siphon (ISO) bottle. This is a specific term, since culinary foams may be attained through other means.
Espuma from a siphon creates foam without the use of an emulsifying agent such as egg. As a result, it offers an unadulterated flavor of the ingredients used. It also introduces much more air into a preparation compared to other culinary aerating processes.
Espuma is created mainly with liquid that has air incorporated in it to create froth. But solid ingredients can be used too; these can be liquefied by cooking, puréeing, and extracting natural juices. It should be noted, though, that the best flavors to work with are those that are naturally diluted. Otherwise, the espuma tends to lose its flavor as air is introduced into it.
Stabilizers may be used alongside the liquids to help retain their shape longer; however, this is not always necessary. Prepared liquids can also be stored in a siphon bottle and kept for use. The pressure from the bottle will push out the aerated liquid, producing the espuma.
Foam
Foam is created by trapping air within a solid or liquid substance. Although culinary foams are most recently associated with molecular gastronomy, they are part of many culinary preparations that date back to even earlier times. Mousse, soufflé, whipped cream, and froth in cappuccino are just some examples of common foams. Common examples of “set” foams are bread, pancakes, and muffins.
Foam does not rely on pressure to encase air bubbles into a substance. Like espuma, foam may also be created with the help of a surfactant and gelling or thickening agents to help it hold shape. The production of a culinary foam starts with a liquid or a solid that has been puréed. The thickening or gelling agent is then diluted into this to form a solution. Once dissolved, the solution is whipped to introduce air into it.
The process of whipping is done until the foam has reached the desired stiffness. Note that certain ingredients may break down if they are whipped for too long, especially without the presence of a stabilizing agent.
Gels
Turning a liquid, such as a vegetable juice or raspberry purée, into a solid not only gives it a different texture but also allows the food to be cut into many shapes, enabling different visual presentations (Figure 2). Regular gelatin can be used as well as other gelling agents, such as agar agar, which is derived from red algae.
Figure 2. “Papayagelee” by hedonistin is licensed under CC BY NC 2.0
Brittle gels
Gelling agents are often associated with jelly-like textures, which may range from soft to firm. However, certain gels produced by specific agents may not fit this description.
Rather than forming an elastic or pliable substance, brittle gels may also be formed. These are gels that are firm in nature yet fragile at the same time. This characteristic is caused by the formation of a gel network that is weak and susceptible to breaking. This property allows brittle gels to crumble in the mouth and create a melt-in-the-mouth feeling. As a result, new sensations and textures are experienced while dining. At the same time, tastes within a dish are also enhanced due to the flavour release caused by the gel breakdown.
Brittle gels are made by diluting the gelling agent into a liquid substance such as water, milk, or a stock. This mixture is left to set to attain a gelled end product. It should be noted that the concentration of gelling agents used, as well as the amount of liquid, both affect gelation.
Agar agar is a common agent used to create brittle gels. However, when combined with sugar it tends to create a more elastic substance. Low-acyl gellan gum, locust bean gum, and carrageenan also create brittle gels.
Fluid gels
A fluid gel is a cross between a sauce, gel, and purée. It is a controlled liquid that has properties of all three preparations. A fluid gel displays viscosity and fluidity at the same time, being thick yet still spreadable.
Fluid gels behave as solids when undisturbed, and flow when exposed to sufficient agitation. They are used in many culinary dishes where fluids need to be controlled, and they provide a rich, creamy texture.
A fluid gel is created using a base liquid that can come from many different sources. The base liquid is commonly extracted from fruits and vegetables, taken from stocks, or even puréed from certain ingredients. The longer the substance is exposed to stress, and the more intense the outside stress, the more fluidity is gained. More fluidity causes a finer consistency in the gel.
Fluid gels can be served either hot or cold, as many of the gelling agents used for such preparations are stable at high temperatures.
Drying and powdering
Drying a food intensifies its flavour and, of course, changes its texture. Eating a piece of apple that has been cooked and then dehydrated until crisp is very different from eating a fresh fruit slice. If the dehydrated food is powdered, it becomes yet another flavour and texture experience.
When maltodextrin (or tapioca maltodextrin) is mixed with fat, it changes to a powder. Because maltodextrin dissolves in water, peanut butter (or olive oil) that has been changed to a powder changes back to an oil in the mouth.
Freezing
In molecular gastronomy, liquid nitrogen is often used to freeze products or to create a frozen item without the use of a freezer.
Liquid nitrogen is the element nitrogen in a liquefied state. It is a clear, colourless liquid with a temperature of -196°C (-321°F). It is classified as a cryogenic fluid, which causes rapid freezing when it comes into contact with living tissues.
The extremely cold temperatures provided by this liquefied gas are most often used in modern cuisine to produce frozen foams and ice cream. After freezing food, nitrogen boils away, creating a thick nitrogen fog that may also add to the aesthetic features of a dish.
Given the extreme temperature of liquid nitrogen, it must be handled with care. Mishandling may cause serious burns to the skin. Nitrogen must be stored in special flasks and handled only by trained people. Aprons, gloves, and other specially designed safety gear should be used when handling liquid nitrogen.
Used mainly in the form of a coolant for molecular gastronomy, liquid nitrogen is not ingested. It is poured directly onto the food that needs to be cooled, causing it to freeze. Any remaining nitrogen evaporates, although sufficient time must be provided to allow the liquefied gas to be eliminated and for the dish to warm up to the point that it will not cause damage during consumption.
Spherification
Spherification is a modern cuisine technique that involves creating semi-solid spheres with thin membranes out of liquids. Spheres can be made in various sizes and of various firmnesses, such as the “caviar” shown in Figure 3. The result is a burst-in-the-mouth effect, achieved with the liquid. Both flavour and texture are enhanced with this culinary technique.
There are two versions of the spherification process: direct and reverse.
In direct spherification, a flavoured liquid (containing either sodium alginate, gellan gum, or carrageenan) is dripped into a water bath that is mixed with calcium (either calcium chloride or calcium lactate). The outer layer is induced by calcium to form a thin gel layer, leaving a liquid centre. In this version, the spheres are easily breakable and should be consumed immediately.
Calcium chloride and sodium alginate are the two basic components used for this technique. Calcium chloride is a type of salt used in cheese making, and sodium alginate is taken from seaweed. The sodium alginate is used to gel the chosen liquid by dissolving it directly into the fluid. This causes the liquid to become sticky, and proper dissolving must be done by mixing. The liquid is then left to set to eliminate any bubbles.
Once ready, a bath is prepared with calcium chloride and water. The liquid is then dripped into the bath using a spoon or syringe depending on the desired sphere size. The gel forms a membrane encasing the liquid when it comes into contact with the calcium chloride. Once set, the spheres are then removed and rinsed with water to remove any excess calcium chloride.
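To plan the quantities for the procedure just described, a small calculator helps. The percentages in the Python sketch below (0.5% sodium alginate in the flavoured liquid and 1% calcium chloride in the bath) are assumed typical starting points and are not specified in this text, so treat them as placeholders to adjust.

```python
def direct_spherification_plan(flavoured_liquid_g: float, bath_water_g: float) -> dict:
    """Rough quantities for direct spherification.
    The 0.5% sodium alginate and 1% calcium chloride rates are assumed
    typical values, not figures from this text -- adjust for the product."""
    return {
        "sodium_alginate_g": round(flavoured_liquid_g * 0.005, 1),
        "calcium_chloride_g": round(bath_water_g * 0.01, 1),
    }

# Example: 500 g of flavoured liquid dripped into a 1 kg water bath.
print(direct_spherification_plan(flavoured_liquid_g=500, bath_water_g=1000))
```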
In reverse spherification, a calcium-containing liquid (or ingredients mixed with a soluble calcium salt) is dripped into a setting bath containing sodium alginate. Surface tension causes the drop to become spherical. A skin of calcium alginate immediately forms around the top. Unlike in the direct version, the gelling stops and does not continue into the liquid orb. This results in thicker shells so the products do not have to be consumed immediately.
Figure 3. “White chocolate spaghetti with raspberry sauce and chocolate martini caviar” by ayngelina is licensed under CC BY NC-ND 2.0
Specialty ingredients used in molecular gastronomy
There are a number of different ingredients used in molecular gastronomy as gelling, thickening, or emulsifying agents. Many of these are available in specialty food stores or can be ordered online.
Algin
Another name for sodium alginate, algin is a natural gelling agent taken from the cell walls of certain brown seaweed species.
Calcium chloride
Calcium chloride, also known as CaCl2, is a compound of chlorine and calcium that is a by-product of sodium bicarbonate (baking soda) manufacturing. At room temperature it is a solid salt, which is easily dissolved in water.
This is very salty and is often used for preservation, pickling, cheese production, and adding taste without increasing the amount of sodium. It is also used in molecular gastronomy in the spherification technique (see above) for the production of ravioli, spheres, pearls, and caviar (Figure 3).
Calcium lactate
Calcium lactate is a calcium salt resulting from the fermentation of lactic acid and calcium. It is a white crystalline powder when solid and is highly soluble in cold liquids. It is commonly used as a calcium fortifier in various food products including beverages and supplements.
Calcium lactate is also used to regulate acidity levels in cheese and baking powder, as a food thickener, and as a preservative for fresh fruits. In molecular gastronomy, it is most commonly used for basic spherification and reverse spherification due to the lack of bitterness in the finished products.
Like calcium chloride, calcium lactate is used alongside sodium alginate. In regular spherification, it is used in the bath. It is also used as a thickener in reverse spherification.
Carob bean gum
Carob bean gum is another name for locust bean gum. It is often used to stabilize, texturize, thicken, and gel liquids in the area of modern cuisine, although it has been a popular thickener and stabilizer for many years.
Carrageenan
Carrageenan refers to any linear sulfated polysaccharide taken from the extracts of red algae. This seaweed derivative is classified mainly as iota, kappa, and lambda. It is a common ingredient in many foods.
There are a number of purposes that it serves, including binding, thickening, stabilizing, gelling, and emulsifying. Carrageenan can be found in ice cream, salad dressings, cheese, puddings, and many more foods. It is often used with dairy products because of its good interaction with milk proteins. Carrageenan also works well with other common kitchen ingredients and offers a smooth texture and taste that blends well and does not affect flavour.
More often than not, carrageenan is found in powder form, which is hydrated in liquid before being used. For best results, carrageenan powder should be sprinkled in cold liquid and blended well to dissolve, although it may also be melted directly in hot liquids.
Citric acid
Classified as a weak organic acid, citric acid is a naturally occurring preservative that can be found in citrus fruits. Produced as a result of the fermentation of sugar, it has a tart to bitter taste and is usually in powder form when sold commercially. It is used mainly as a preservative and acidulent, and it is a common food additive in a wide range of foods such as candies and soda. Other than extending shelf life by adjusting the acidity or pH of food, it can also help enhance flavours. It works especially well with other fruits, providing a fresh taste.
In modern cooking, citric acid is often used as an emulsifier to keep fats and liquids from separating. It is also a common component in spherification, where it may be used as an acid buffer.
Gellan gum
Gellan gum is a water-soluble, high-molecular-weight polysaccharide gum that is produced by the bacterium Pseudomonas elodea through the fermentation of carbohydrates. This fermented carbohydrate is purified with isopropyl alcohol, then dried and milled to produce a powder.
Gellan gum is used as a stabilizer, emulsifier, thickener, and gelling agent in cooking. Aspics and terrines are only some of the dishes that use gellan. It comes in both high-acyl and low-acyl forms. High-acyl gellan gum produces a flexible elastic gel, while low-acyl gellan gum will give way to a more brittle gel.
Like many other hydrocolloids, gellan gum is used with liquids. The powder is normally dispersed in the chosen liquid to dissolve it. Once dissolved, the solution is then heated to facilitate liquid absorption and gelling by the hydrocolloid. A temperature between 85°C and 95°C (185°F and 203°F) will start the dissolution process. Gelling will begin upon cooling, between about 10°C and 80°C (50°F and 176°F).
Gellan gum creates a thermo-irreversible gel and can withstand high heat without reversing in form. This makes it ideal for the creation of warm gels.
Guar gum
Guar gum, or guaran, is a carbohydrate. This galactomannan is taken from the seeds of the guar plant by dehusking, milling, and screening. The end product is a pale, off-white, loose powder. It is most commonly used as a thickening agent and stabilizer for sauces and dressings in the food industry. Baked goods such as bread may also use guar gum to increase the amount of soluble fibre. At the same time, it also aids with moisture retention in bread and other baked items.
Being a derivative of a legume, guar gum is considered to be vegan and a good alternative to starches. In modern cuisine, guar gum is used for the creation of foams from acidic liquids, for fluid gels, and for stabilizing foams.
Guar gum must first be dissolved in cold liquid. The higher the percentage of guar gum used, the more viscous the liquid will become. Dosage may also vary according to the ingredients used as well as desired results and temperature.
Iota carrageenan
Iota carrageenan is a hydrocolloid taken from red seaweed (Eucheuma denticulatum). It is one of three varieties of carrageenan and is used mainly as a thickening or gelling agent.
Gels produced from iota carrageenan are soft and flexible, especially when used with calcium salts. It produces a clear gel that exhibits little syneresis. Iota is a fast-setting gel that is thermo-reversible and remains stable through freezing and thawing. In modern cuisine it is used to create hot foams as well as custards and jellies with a creamy texture.
Like most other hydrocolloids, iota carrageenan must first be dispersed and hydrated in liquid before use. Unlike lambda carrageenan, it is best dispersed in cold liquid. Once hydrated, the solution must be heated to about 70°C (158°F) with shear to facilitate dissolution. Gelling will happen between 40°C and 70°C (104°F and 158°F) depending on the number of calcium ions present.
Kappa carrageenan
Kappa carrageenan is another type of red seaweed extract taken specifically from Kappaphycus alvarezii. Like other types of carrageenan, it is used as a gelling, thickening, and stabilizing agent. When mixed with water, kappa carrageenan creates a strong and firm solid gel that may be brittle in texture.
This particular variety of carrageenan blends well with milk and other dairy products. Since it is taken from seaweed, it is considered to be vegan and is an alternative to traditional gelling agents such as gelatin.
Kappa carrageenan is used in various cooking preparations including hot and cold gels, jelly toppings, cakes, breads, and pastries. When used in molecular gastronomy preparations and other dishes, kappa carrageenan should be dissolved in cold liquid.
Once dispersed, the solution must be heated between 40°C and 70°C (104°F and 158°F). Gelling will begin between 30°C and 60°C (86°F and 140°F). Kappa carrageenan is a thermo-reversible gel and will stay stable up to 70°C (158°F). Temperatures beyond this will cause the gel to melt and become liquid once again.
Locust bean gum
Locust bean gum, also known as LBG and carob bean gum, is a vegetable gum derived from Mediterranean-region carob tree seeds. This hydrocolloid is used to stabilize, texturize, thicken, and gel liquids in modern cuisine, although it has been a popular thickener and stabilizer for many years.
It has a neutral taste that does not affect the flavour of food that it is combined with. It also provides a creamy mouth feel and has reduced syneresis when used alongside pectin or carrageenan for dairy and fruit applications. The neutral behaviour of this hydrocolloid makes it ideal for use with a wide range of ingredients.
To use locust bean gum, it must be dissolved in liquid. It is soluble with both hot and cold liquids.
Maltodextrin
Maltodextrin is a sweet polysaccharide that is produced from starch, corn, wheat, tapioca, or potato through partial hydrolysis and spray drying. This modified food starch is a white powder that has the capacity to absorb and hold water as well as oil. It is an ideal additive since it has fewer calories than sugar and is easily absorbed and digested by the body in the form of glucose.
Coming from a natural source, it ranges from nearly flavourless to fairly sweet without any odour. Maltodextrin is a common ingredient in processed foods such as soda and candies. In molecular gastronomy, it can be used both as a thickener and a stabilizer for sauces and dressings, for encapsulation, and as a sweetener. In many cases, it is also used as an aroma carrier due to its capacity to absorb oil. It is also often used to make powders or pastes out of fat.
Sodium alginate
Sodium alginate, which is also called algin, is a natural gelling agent taken from the cell walls of certain brown seaweed species. This salt is obtained by drying the seaweed, followed by cleaning, boiling, gelling, and pulverizing it. A light yellow powder is produced from the process. When dissolved in liquids, sodium alginate acts as a thickener, creating a viscous fluid. Conversely, when it is used with calcium it forms a gel through a cold process.
In molecular gastronomy, sodium alginate is most commonly used as a texturizing agent. Foams and sauces may be created with it. It is also used in spherification for the creation of pearls, raviolis, mock caviar, marbles, and spheres. Sodium alginate can be used directly by dissolving it into the liquid that needs to be gelled, as in the case of basic spherification. It may also be used inversely by adding it directly to a bath, as in the case of reverse spherification.
This versatile product is soluble in both hot and cold liquids, and gels made with it will set at any temperature.
Soy lecithin
Soy lecithin, also called just lecithin, is a natural emulsifier that comes from fatty substances found in plant tissues. It is derived from soybeans either mechanically or chemically, and is a by-product of soybean oil creation. The end product is a light brown powder that has low water solubility.
As an emulsifier, it works to blend immiscible ingredients together, such as oil and water, giving way to stable preparations. It can be whisked directly into the liquid of choice.
Soy lecithin is also used in creating foams, airs, mousses, and other aerated dishes that are long lasting and full of flavour. It is used in pastries, confections, and chocolate to enhance dough and increase moisture tolerance.
As with most ingredients, dosage and concentration for soy lecithin will depend on the ingredients used, specific properties desired in the resulting preparation, as well as other conditions.
Tapioca maltodextrin
Tapioca maltodextrin is a form of maltodextrin made from tapioca starch. It is a common ingredient in molecular gastronomy because it can be used both as a thickener and stabilizer for sauces and dressings, for encapsulation, and as a sweetener. In many cases it is also used as an aroma carrier due to its capacity to absorb oil. It is often used to make powders or pastes out of fat.
Xanthan gum
Xanthan gum is a food additive used as a thickening agent. It is produced through the fermentation of glucose. As a gluten-free additive it can be used as a substitute in cooking and baking.
As a thickener, when used in low dosages, xanthan gum produces a weak gel with high viscosity that is shear reversible with a high pourability. It also displays excellent stabilizing abilities that allow for particle suspension.
Moreover, xanthan gum mixes well with other flavours without masking them and provides an improved mouth feel to preparations. The presence of bubbles within the thickened liquids often makes way for light and creamy textures. It is used in the production of emulsions, suspensions, raviolis, and foams.
Being a hydrocolloid, xanthan gum must be hydrated before use. High versatility allows it to be dissolved over a wide range of temperatures, acid, and alcohol levels. Once set, xanthan gum may lose some of its effectiveness when exposed to heat.
Sauces enhance desserts by both their flavor and their appearance, just as savory sauces enhance meats, fish, and vegetables. Crème anglaise, chocolate sauce, caramel sauce, and the many fruit sauces and coulis are the most versatile. One or another of these sauces will complement nearly every dessert.
Examples of dessert sauces
• Caramel sauce: A proper caramel flavor is a delicate balance between sweetness and bitterness. As sugar cooks and begins to change color, a flavor change will occur. The darker the sugar, the more bitter it will become. Depending on the application for the finished caramel, it can be made mild or strong. At this point, a liquid is added. This liquid will serve several roles: it will stop the cooking process, it can add richness and flavor, and it will soften the sauce. The fluidity of the finished sauce will depend on the amount of liquid added to it, and the temperature it is served at. Dairy products, such as cream, milk, or butter, will add richness; use water for a clear sauce; use fruit purées to add different flavor elements.
• Chocolate sauce: Sometimes called fudge sauce, chocolate sauce is generally made from cream (or milk), butter, and chocolate, and can be served hot or cold. The proportion of each of the ingredients will affect the thickness of the final product.
• Compote: French for “mixture,” a compote is cooked fruit served in its own cooking liquid, usually a sugar syrup. Compotes can be made with fresh, frozen, or dried fruits, and served hot or cold.
• Coulis: French for “strained liquid,” a coulis is most often an uncooked, strained purée. Flavors remain pure, and the colors bright. One of the drawbacks of using a coulis is that it may separate quickly when used as a plating sauce, so it is best used à la minute.
• Crème anglaise: French for “English custard,” crème anglaise is a rich, pourable custard sauce that can be served hot or cold over cake, fruits, or other desserts. Made with eggs, sugar, and milk or cream, it is stirred over heat until it thickens into a light sauce. However, it’s a delicate operation: too much heat turns it into scrambled eggs! It should not get above 85°C (185°F) during the cooking process. Vanilla is the classic flavoring, but coffee, spices, chocolate, or liqueurs can be added. With additional yolks and heavy cream, it becomes the “custard” used for French ice cream. With additional yolks, gelatin, whipped cream, and flavoring, it becomes Bavarian cream.
• Curd: A curd is creamy and fruit based, with citrus and berry flavors being the most popular. Made from fruit juices, eggs, butter, and sugar cooked in a process similar to crème anglaise, curds can be thick, pourable sauces or spreads.
• Fruit butter: Fruit butter is a spread made from whole fruits, cooked, reduced, and puréed (if you don’t want any chunks in it) until very thick. It does not contain any butter; the term refers to the consistency.
• Fruit sauce: A fruit sauce is a fruit purée, cooked and thickened with a starch. It is normally served cold.
• Hard sauce: This traditional sauce for Christmas pudding, or any steamed pudding, is made by combining butter, sugar, and flavorings, often liqueurs. It is normally piped into shapes and chilled, then placed on the warm dessert just before serving.
• Sabayon: Sabayon is a mixture of egg yolks, flavoring, and sugar beaten over simmering water until thick, then beaten until cool. It is traditionally flavored with sweet white wine or liquor, then served over fresh fruit and grilled (when it is called a gratin). The Italian version of this is called a zabaglione and is flavored with Madeira wine.
• Whipped cream: This very popular dessert topping can be served plain, sweetened, or flavored. Crème chantilly, a classic version of this, is a combination of whipped cream, sugar, and vanilla.
Applying dessert sauces
Except in the case of some home-style or frozen desserts, sauces are usually not ladled over the dessert because doing so would mar the appearance. Instead, the sauce is applied in a decorative fashion to the plate rather than the dessert. Many different styles of plate saucing are available.
Pouring a pool of sauce onto the plate is known as flooding. Although plate flooding often looks old-fashioned today, it can still be a useful technique for many desserts. Flooded plates can be made more attractive by applying a contrasting sauce and then blending or feathering the two sauces decoratively with a pick or the end of a knife. For this technique to work, the two sauces should be at about the same fluidity or consistency.
Rather than flooding the entire plate, it may be more appropriate for some desserts to apply a smaller pool of sauce to the plate, as this avoids overwhelming the dessert with too much sauce.
A variation of the flooding technique is outlining, where a design is piped onto the plate with chocolate and allowed to set. The spaces can then be flooded with colorful sauces.
A squeeze bottle is useful for making dots, lines, curves, and streaks of sauce in many patterns. Or just a spoon is needed to drizzle random patterns of sauce onto a plate. Another technique for saucing is applying a small amount of sauce and streaking it with a brush, an offset spatula, or the back of a spoon.
Sauces are a great way to highlight flavors. Choose ones that will create balance on the plate, not just for color, but with all the components. A tart berry sauce will complement a rich cheesecake or chocolate dessert because sourness (acid) will cut through fat, making it taste lighter than it is. A sweet sauce served with a sweet dessert will have the overall effect of hiding flavors in both. Hold back on sweetness in order to intensify other flavors.
Many modern presentations may have a minimal amount of sauce. Sometimes this is done just for aesthetic reasons and not for how it will complement the dessert. Think of the dish and the balance of the components. This is the most important factor: flavor first, presentation second.
Sous-vide cooking is about immersing a food item in a precisely controlled water bath, where the temperature of the water is the same as the target temperature of the food being cooked. Food is placed in a food-grade plastic bag and vacuum-sealed before going into the water bath. Temperatures will vary depending on desired end result. This allows the water in the bath to transfer heat into the food while preventing the water from coming into direct contact with it. This means the water does not chemically interact with the food: the flavors of the food remain stronger, because the water is unable to dissolve or carry away any compounds in the food (Figure 1).
Figure 1. “Img_0081” by Derek is licensed under CC BY- SA-ND 2.0
Sous-vide fruits and vegetables
Cooking vegetables and fruits sous-vide is a great way to tenderize them without losing as many of the vitamins and minerals that are normally lost through blanching or steaming. Fruits can also be infused with liquid when cooked at lower temperatures by adding liquid to the bag. Sous-vide helps preserve the nutrients present in fruits and vegetables by not cooking them above the temperatures that cause the cell walls to fully break down. This allows them to tenderize without losing all their structure. The bag also helps to catch any nutrients that do come out of the vegetable.
While time and temperature do not factor into safety for fruits and vegetables, they do have a unique effect on their structure. There are two components in fruits and vegetables that make them crisp: pectin and starch. Pectin, which is a gelling agent commonly used in jams and jellies for structure, breaks down at 83°C (183°F) at a slower rate than the starch cells do. In many cases this allows for more tender fruits and vegetables that have a unique texture to them.
Custards
The term custard spans so many possible ingredients and techniques that it is most useful to think of a custard as simply a particular texture and mouth feel. Custards have been made for centuries by lightly cooking a blend of eggs, milk, and heavy cream, but modernist chefs have invented myriad ways to make custards.
Using the sous-vide method to prepare crème anglaise, curds, ice cream bases, custard bases, sabayons, and dulce de leche is possible. The technique offers greater consistency and more control over the texture, which can range from airy, typical of a sabayon, to dense, as in a posset. For custards, eggs will be properly cooked at 82°C (180°F), so if the water bath is set to this temperature, no overcooking can happen. The one constant among custards is the use of plenty of fat, which not only provides that distinctive mouth feel but also makes custard an excellent carrier of fat-soluble flavors and aromas. Lighter varieties of custard, prepared sous-vide style and cooled, can be aerated in a whipping siphon into smooth, creamy foams.
Fruit compression
Vacuum-compressing fruits and vegetables is a popular modern technique that can give many plant foods
an attractive, translucent appearance (as shown in the watermelon in Figure 2) and a pleasant, surprising texture. This technique exploits the ability of a vacuum chamber to reduce surrounding pressure, which causes air and moisture within the plant tissue to rapidly expand and rupture the structures within the food. When the surrounding pressure is restored to a normal level, the labyrinth of air-filled spaces collapses. As
a result, light tends to pass through the food rather than being scattered and diffused, which is why vacuum-compressed plant foods appear translucent. Causing the porous structure of a plant food to collapse also imparts a somewhat dense, toothsome texture that can give a familiar ingredient, such as watermelon, an entirely new appeal.
Figure 2. “WD-50 (7th Course)” by Peter Dillon is licensed under CC BY 2.0
Infusions
When liquids are added, the vacuum-seal process creates a rapid infusion, especially with more porous foods (for example, infusing spices into cream or herbs into melon). This can add flavor and texture in a shorter time than traditional infusions.
• 2.1: Introduction - Understanding Ingredients
Ingredients play an important role in baking. Not only do they provide the structure and flavour of all of the products produced in the bakery or pastry shop, their composition and how they react and behave in relation to each other are critical factors in understanding the science of baking. This is perhaps most evident when it comes to adapting formulas and recipes to accommodate additional or replacement ingredients while still seeking a similar outcome to the original recipe.
• 2.2: The History of Wheat Flour
Archaeologists who did excavations in the region of the lake dwellers of Switzerland found grains of wheat, millet, and rye 10,000 years old. The Romans perfected the rotary mill for turning wheat into flour. By the time of Christ, Rome had more than 300 bakeries, and Roman legions introduced wheat throughout their empire.
• 2.3: Milling of Wheat
Milling of wheat is the process that turns whole grains into flours. The overall aims of the miller are to produce a consistent product, a range of flours suitable for a variety of functions, and flours with predictable performance.
• 2.4: Flour Streams and Types of Wheat Flour
Modern milling procedures produce many different flour streams (approximately 25) that vary in quality and chemical analysis. These are combined into four basic streams of edible flour, with four other streams going to feed.
• 2.5: Flour Terms and Treatments
In addition to types of flour, you may come across various other terms when purchasing flour. These include some terms that refer to the processing and treatment of the flour, and others outlining some of the additives that may be added during the milling and refining process.
• 2.6: Flour Additives
A number of additives may be found in commercial flours, from agents used as dough conditioners, to others that aid in the fermentation process. Why use so many additives? Many of these products are complementary – that is, they work more effectively together and the end product is as close to “ideal” as possible.
• 2.7: Whole Grain and Artisan Milling
Whole grain and artisan milling is the type of milling that was practiced before the consumer market demanded smooth white flours that are refined and have chemical additives to expedite aging of flours. Artisan milling produces flours that are less refined and better suited to traditional breads, but also contain little to no additives and have higher nutritional content.
• 2.8: Flour in Baking
Flour forms the foundation for bread, cakes, and pastries. It may be described as the skeleton, which supports the other ingredients in a baked product. This applies to both yeast and chemically leavened products.
• 2.9: Rye Flour
Rye is a hardy cereal grass cultivated for its grain. Its use by humans can be traced back over 2,000 years. Once a staple food in Scandinavia and Eastern Europe, rye declined in popularity as wheat became more available through world trade. A crop well suited to northern climates, rye is grown on the Canadian Prairies and in the northern states such as the Dakotas and Wisconsin.
• 2.10: Other Grains and Flours
Several other types of grains are commonly used in baking. In particular, corn and oats feature predominantly in certain types of baking (quick breads and cookies respectively, for instance) but increasingly rice flour is being used in baked goods, particularly for people with gluten sensitivities or intolerances. The trend to whole grains and the influence of different ethnic cultures has also meant the increase in the use of other grains and pulses for flours used in breads.
Thumbnail: All-purpose flour. (CC BY-SA 2.0; Veganbaking.net).
02: Flour
Ingredients play an important role in baking. Not only do they provide the structure and flavour of all of the products produced in the bakery or pastry shop, their composition and how they react and behave in relation to each other are critical factors in understanding the science of baking. This is perhaps most evident when it comes to adapting formulas and recipes to accommodate additional or replacement ingredients while still seeking a similar outcome to the original recipe.
In this book, we look at each of the main categories of baking ingredients, listed below, and then explore their composition and role in the baking process. In addition to these categories, we will discuss the role that salt and water play in the baking process.
The main categories of baking ingredients are:
• Grains and flours
• Sweeteners
• Fats and oils
• Leavening agents
• Eggs
• Dairy products
• Chocolate and other cocoa products
• Nuts and seeds
• Thickening agents
• Spices and other flavourings
• Fruit
Note: For most measurements used in the open textbook series, both S.I. (metric) and U.S./imperial values are given. The exception is nutritional information, which is always portrayed using metric values in both Canada and the United States.
Archaeologists who did excavations in the region of the lake dwellers of Switzerland found grains of wheat, millet, and rye 10,000 years old. The Romans perfected the rotary mill for turning wheat into flour. By the time of Christ, Rome had more than 300 bakeries, and Roman legions introduced wheat throughout their empire. Improved milling processes were needed because even when wheat was milled twice and bolted (sifted) through silk gauze, the result was still a yellowish flour of uneven texture and flecked with germ and bran.
In the second half of the 19th century, there were great changes in the flour milling process. An American inventor, Edmund LaCroix, improved the process with a purifier to separate the middlings (bran, germ, and other coarse particles) from the particles that form smooth-textured white flour. In recent years, the demand for whole grain milling has increased because whole grain food products have proved to be more nutritious than products made from white flour. (More information on whole grain and artisan milling is provided later in this section.)
In Canada, large-scale wheat growing didn’t occur until after the Prairies were settled in the 1800s. Hard wheat, such as Red Fife, Marquis, and Selkirk, earned Canada a position as the granary for Britain and many other European countries. Today, most of the wheat grown in Western Canada is the hard Red Spring variety. Soft wheats, such as soft red and soft white, are primarily grown in Quebec and Ontario. Many of the original wheat growers have passed on their farms to the next generations, while others branched out to organic farming and milling. One of these farms, Nunweiler’s, has a heritage that goes back to the early 1900s when the original wheat in Canada, Red Fife and Marquis, was grown on this farm.
Today, the major wheat growing areas of North America are in the central part of the continent, in the Great Plains of the United States and the Canadian Prairies. From Nebraska south, winter wheat can be grown, while to the north through Saskatchewan spring wheat dominates. Many American states and some Canadian provinces grow both kinds. In fact, there are very few states that don’t grow some wheat. Kansas, the site of the American Institute of Baking, could be said to be at the heart of the U.S. wheat growing area, while Saskatchewan is the Canadian counterpart.
2.03: Milling of Wheat
Milling of wheat is the process that turns whole grains into flours. The overall aims of the miller are to produce:
• A consistent product
• A range of flours suitable for a variety of functions
• Flours with predictable performance
The very first mill operation is analyzing the grain, which determines criteria such as the gluten content and amylase activity. It is at this point that decisions about blending are made.
Following analysis, milling may be divided into three stages:
• Cleaning and conditioning – ridding the grain of all impurities and readying it for milling
• Crushing or breaking – breaking down the grain in successive stages to release its component parts
• Reduction – progressive rollings and siftings to refine the flour and separate it into various categories, called streams
Cleaning
Wheat received at the mill contains weeds, seeds, chaff, and other foreign material. Strong drafts of air from the aspirator remove lighter impurities. The disc separator removes barley, oats, and other foreign materials. From there, the wheat goes to the scourers in which it is driven vigorously against perforated steel casings by metal beaters. In this way, much of the dirt lodged in the crease of the wheat berry is removed and carried away by a strong blast of air. Then the magnetic separator removes any iron or steel.
At this point, the wheat is moistened. Machines known as whizzers take off the surface moisture. The wheat
is then tempered, or allowed to lie in bins for a short time while still damp, to toughen the bran coat, thus making possible a complete separation of the bran from the flour-producing portion of the wheat berry. After tempering, the wheat is warmed to a uniform temperature before the crushing process starts.
Crushing or Breaking
The objectives at this stage are twofold:
• Separate as much bran and germ as possible from the endosperm
• Maximize the flour from the resulting endosperm
Household grain mills create flour in one step — grain in one end, flour out the other — but the commercial mill breaks the grain down in a succession of very gradual steps, ensuring that little bran and germ are mixed with any endosperm.
Although the process is referred to as crushing, flour mills crack rather than crush the wheat with large steel rollers. The rollers at the beginning of the milling system are corrugated and break the wheat into coarse particles. The grain passes through screens of increasing fineness. Air currents draw off impurities from the middlings. Middlings is the name given to coarse fragments of endosperm, somewhere between the size of semolina and flour. Middlings occur after the “break” of the grain.
Bran and germ are sifted out, and the coarse particles are rolled, sifted, and purified again. This separation of germ and bran from the endosperm is an important goal of the miller. It is done to improve dough-making characteristics and colour. As well, the germ contains oil and can affect keeping qualities of the flour.
Reduction
In the reduction stage, the coarser particles go through a series of fine rollers and sieves. After the first crushing, the wheat is separated into five or six streams. This is accomplished by means of machines called plansifters that contain sieves, stacked vertically, with meshes of various sizes. The finest mesh is as fine as the finished flour, and some flour is created at an early stage of reduction.
Next, each of the divisions or streams passes through cleaning machines, known as purifiers, a series of
sieves arranged horizontally and slightly angled. An upcurrent draught of air assists in eliminating dust. The
product is crushed a little more, and each of the resulting streams is again divided into numerous portions
by means of sifting. The final crushings are made by perfectly smooth steel rollers that reduce the middlings into flour. The flour is then bleached and put into bulk storage. From bulk storage, the flour is enriched (thiamine, niacin, riboflavin, and iron are added), and either bagged for home and bakery use or made ready for bulk delivery.
Extraction Rates
The extraction rate is a figure representing the percentage of flour produced from a given quantity of grain. For example, if 82 kg of flour is produced from 100 kg of grain, the extraction rate is 82% (82÷100×100). Extraction rates vary depending on the type of flour produced. A whole grain flour, which contains all of the germ, bran, and endosperm, can have an extraction rate of close to 100%, while white all-purpose flours generally have extraction rates of around 70%. Since many of the nutrients are found in the germ and bran, flours with a higher extraction rate have a higher nutritional value.
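The arithmetic is simple enough to express directly. The short Python sketch below is only an illustration (the function name and the second example are ours, not the miller's); it reproduces the extraction-rate calculation described above.

```python
def extraction_rate(flour_kg, grain_kg):
    """Percentage of flour produced from a given quantity of grain."""
    return flour_kg / grain_kg * 100

# The example from the text: 82 kg of flour milled from 100 kg of grain
print(extraction_rate(82, 100))  # 82.0, i.e., an 82% extraction rate

# A typical white all-purpose flour sits near a 70% extraction rate
print(extraction_rate(70, 100))  # 70.0
```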
Modern milling procedures produce many different flour streams (approximately 25) that vary in quality and chemical analysis. These are combined into four basic streams of edible flour, with four other streams going to feed.
• Top patent flour: This stream is composed of only the purest and most highly refined streams from the mill. It is low in ash and is approximately 50% of the flour extracted. The term ash indicates the mineral content (e.g., phosphorus) of the flour. When flour is burned, all that is left is the burned mineral elements that constitute ash.
• Second patent flour: This flour is composed of streams with an intermediate degree of refinement. It has an average ash content of approximately 0.45% and represents about 35% of the total flour.
• First clear flour: This stream contains the balance of the flour that possesses baking properties, and is high in ash and protein content. It is usually about 15% of the total flour.
• Second clear flour: This grade contains the poorest flour streams. It is very high in ash (approximately 0.75%), and has little or no baking quality. It is about 2% of the total flour.
• Feed streams: The balance of the streams from the mill are classed as feed. Feeds are marketed as bran, wheat shorts, flour middlings, and wheat germ.
Within the streams of edible flours, there are a number of different types of flour used in food preparation. Each has different characteristics, and with those come different uses, as described below.
All-Purpose Flour
General purpose or home use flours are usually a blend of hard spring wheats that are lower in protein (gluten) content than bread flours. They are top patent flours and contain sufficient protein to make good yeast breads, yet not too much for good quick breads, cakes, and cookies.
Note: A word about gluten quality as opposed to gluten quantity: The fact that a particular flour contains a high quantity of protein, say 13% to 15%, does not necessarily mean that it is of high quality. It may contain too much ash or too much damaged starch to warrant this classification. High quality is more important in many bread applications than high quantity. All-purpose flour is an example of a high-quality flour, with a protein content of about 12%.
Graham Flour
A U.S. patented flour, graham flour is a combination of whole wheat flour (slightly coarser), with added bran and other constituents of the wheat kernel.
Bread Flour
Bread flour is milled from blends of hard spring and hard winter wheats. They average about 13% protein and are slightly granular to the touch. This type of flour is sold chiefly to bakers because it makes excellent bread with bakery equipment, but has too much protein for home use. It is also called strong flour or hard flour and is second patent flour.
For example, the specification sheet on bread flour produced by a Canadian miller might include the following information:
• Ingredients: Wheat flour, amylase, ascorbic acid, niacin, iron, thiamine mononitrate, riboflavin, azodicarbonamide, folic acid.
• Moisture: 14.2%
• Ash: 0.54%
• Protein (5.7 x N) 13.00%
Along with this information there is microbiological data and an allergen declaration. (Note that the formula in parentheses beside “Protein” is simply the laboratory’s way of deriving the protein figure from the nitrogen content.)
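To make the note on the protein figure concrete, the sketch below applies the conversion shown on the specification sheet: the laboratory's measured nitrogen content is multiplied by 5.7 to give the declared protein percentage. This is an illustration only; the function name and the sample nitrogen reading (back-calculated from the 13.00% figure above) are ours.

```python
def protein_from_nitrogen(nitrogen_pct, factor=5.7):
    """Convert a measured nitrogen percentage to a protein percentage.

    The factor of 5.7 is the one quoted on the bread flour
    specification sheet above ("Protein (5.7 x N)").
    """
    return nitrogen_pct * factor

# A nitrogen reading of roughly 2.28% corresponds to the 13.00% protein declared above
print(round(protein_from_nitrogen(2.28), 1))  # 13.0
```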
Cake Flour
Cake flour is milled from soft winter wheats. The protein content is about 7% and the granulation is so uniform and fine that the flour feels satiny. An exception is a high-protein cake flour formulated especially for fruited pound cakes (to prevent the fruit from sinking).
Clear Flour
Clear flour comes from the part of the wheat berry just under the outer covering. Comparing it to first patent flour is like comparing cream to skim milk. It is dark in colour and has a very high gluten content. It
is used in rye and other breads requiring extra strength.
Gluten Flour
Gluten flour is made from wheat flour by removing a large part of the starch. It contains no more than 10% moisture and no more than 44% starch.
Pastry Flour
Pastry flour is made from either hard or soft wheat, but more often from soft. It is fairly low in protein and is finely milled, but not so fine as cake flour. It is unsuitable for yeast breads but ideal for cakes, pastries, cookies, and quick breads.
Self-Rising Flour
Self-rising flour has leavening and salt added to it in controlled amounts at the mill.
Wheat Germ Flour
Wheat germ flour consists entirely of the little germ or embryo part of the wheat separated from the rest of the kernel and flattened into flakes. This flour should be refrigerated.
Whole Wheat Flour
Whole wheat flour contains all the natural parts of the wheat kernel up to 95% of the total weight of the wheat. It contains more protein than all-purpose flour and produces heavier products because of the bran particles.
Whole Wheat Pastry Flour
Whole wheat pastry flour is milled from the entire kernel of soft wheat, is low in gluten, and is suitable for pastry, cakes, and cookies.
Hovis Flour
Most of the germ goes away with the shorts, and only a small fraction of the total quantity can be recovered in a fairly pure form. At the mill, this fraction is cooked by a special process developed in England to improve its keeping qualities and flavour. It is then combined with white flour to make Hovis flour, which produces a loaf that, though small for its weight, has a rich, distinctive flavour.
Triticale Flour
The world’s first new grain, triticale is a hybrid of wheat and rye. It combines the best qualities of both grains. It is now grown commercially in Manitoba.
Semolina
Semolina is the granular product consisting of small fragments of the endosperm of the durum wheat kernel. (The equivalent particles from other hard wheat are called farina.) The commonest form of semolina available commercially is the breakfast cereal Cream of Wheat.
No-Time Flour
The primary goal of all bakers has been to reduce production time and keep costs to a minimum without
losing quality, flavour, or structure. After extensive research, millers have succeeded in eliminating bulk fermentation for both sponge and straight dough methods. No-time flour is flour with additives such as ascorbic acid, bromate, and cysteine. It saves the baker time and labour, and reduces floor space requirements. The baker can use his or her own formulas with only minor adjustments.
Blending Flours
Blending of flours is done at the mill, and such is the sophistication of the analysis and testing of flours (test baking, etc.) that when problems occur it is generally the fault of the baker and not the product. Today the millers and their chemists ensure that bakers receive the high grade of flour that they need to produce marketable products for a quality-conscious consumer. Due to the vagaries of the weather and its effect on growing conditions, the quality of the grain that comes into the mill is hardly ever constant. For example, if damp weather occurs at harvest time, the grain may start to sprout and will cause what is known as damaged starch. Through analysis and adjustments in grain handling and blending, the miller is able to furnish a fairly constant product.
Bakers do blend flours, however. A portion of soft flour may be blended with the bread flour to reduce the toughness of a Danish pastry or sweet dough, for example. Gluten flour is commonly used in multigrain bread to boost the aeration.
In addition to types of flour, you may come across various other terms when purchasing flour. These include some terms that refer to the processing and treatment of the flour, and others outlining some of the additives that may be added during the milling and refining process.
Bleached
Bleaching and maturing agents are added to whiten and improve the baking quality quickly, making it possible to market the freshest flour. Even fine wheat flours vary in colour from yellow to cream when freshly milled. At this stage, the flour produces doughs that are usually sticky and do not handle well. Flour improves with age under proper storage conditions up to one year, both in color and quality.
Because storing flour is expensive, toward the close of the 19th century, millers began to treat freshly milled flour with oxidizing agents to bleach it and give it the handling characteristics of naturally aged flour. Under the category of maturing agents are included materials such as chlorine dioxide, chlorine gas plus a small amount of nitrosyl chloride, ammonium persulfate, and ascorbic acid. No change occurs in the nutritional value of the flour when these agents are present.
There are two classes of material used to bleach flour. A common one, an organic peroxide, reacts with the yellow pigment only, and has no effect on gluten quality. Chlorine dioxide, the most widely used agent in North America, neutralizes the yellow pigment and improves the gluten quality. It does, however, destroy the tocopherols (vitamin E complex).
Enriched
Iron and three of the most necessary B vitamins (thiamin, riboflavin, and niacin), which are partially removed during milling, are returned to white flour by a process known as enrichment. No change occurs in taste, colour, texture, baking quality, or caloric value of the flour.
Pre-sifted
During the milling process, flour is sifted many times through micro-fine silk. This procedure is known as pre-sifting. The mesh size used for sifting varies from flour to flour. There are more holes per square inch for cake flour than, for example, bread flour, so that a cup of cake flour has significantly more minute particles than does a cup of bread flour, is liable to be denser, and weigh slightly more. Sifted flour yields more volume in baked bread than does unsifted flour, simply because of the increased volume of air.
2.06: Flour Additives
A number of additives may be found in commercial flours, from agents used as dough conditioners, to others that aid in the fermentation process. Why use so many additives? Many of these products are complementary – that is, they work more effectively together and the end product is as close to “ideal” as possible. Nevertheless, in some countries the number of additives allowed in flour is limited. For instance, in Germany, ascorbic acid remains the only permitted additive. Some of the additives that are commonly added to flour include those described below.
Bromate
Until the early 1990s, bromate was added to flour because it greatly sped up the oxidation or aging of flour. Millers in Canada stopped using it after health concerns raised by the U.S. Food and Drug Administration (FDA). In the United States, bromate is allowed in some states but banned in others (e.g., California).
Azodicarbonamide (ADA)
Approved in the United States since 1962, but banned in Europe, ADA falls under the food additives permitted in Canada. ADA is a fast-acting flour treatment resulting in a cohesive, dry dough that tolerates high water absorption. It is not a bleach, but because it helps produce bread with a finer texture it gives an apparently whiter crumb. It does not destroy any vitamins in the dough. Bakers who want to know if their flours contain ADA or other chemical additives can request the information from their flour suppliers.
L-Cysteine
An amino acid, L-cysteine speeds up reactions within the dough, thus reducing or almost eliminating bulk fermentation time. In effect, it gives the baker a “no-time” dough. It improves dough elasticity and gas retention.
Ascorbic Acid
Ascorbic acid was first used as a bread improver in 1932, after it was noticed that old lemon juice added to dough gave better results because it improved gas retention and loaf volume. Essentially vitamin C (ascorbic acid) has the advantage of being safe even if too much is added to the dough, as the heat of baking destroys the vitamin component. The addition of ascorbic acid consistent with artisan bread requirements is now routine for certain flours milled in North America.
Calcium Peroxide
Calcium peroxide (not to be confused with the peroxide used for bleaching flour) is another dough-maturing agent.
Glycerides
Glycerides are multi-purpose additives used in both cake mixes and yeast doughs. They are also known as surfactants, which is a contraction for “surface-acting agents.” In bread doughs, the main function of glycerides is as a crumb-softening agent, thus retarding bread staling. Glycerides also have some dough strengthening properties.
Sodium Stearoyl Lactylate
Approved for use in the United States since 1961, this additive improves gas retention, shortens proofing time, increases loaf volume, and works as an anti-staling agent.
2.07: Whole Grain and Artisan Milling
Whole grain and artisan milling is the type of milling that was practiced before the consumer market demanded smooth white flours that are refined and have chemical additives to expedite aging of flours. Artisan milling produces flours that are less refined and better suited to traditional breads, but also contain little to no additives and have higher nutritional content. For that reason, demand for these types of flour is on the rise.
Artisan millers (also known as micro millers) process many non-stream grains, including spelt, kamut, buckwheat, and other non-gluten grains and pulses. This offers bakers opportunities to work with different grains and expand their businesses. Artisan flours are readily available directly from millers or through a distributor. Knowing the origin of the grains and the quality of the ingredients in baking is important for artisan bakers.
Whole grain flours are on the increase as consumers become more aware of their benefits. Whole grain flour, as the name suggests, is made from whole grains.
Many artisan millers purchase their grains directly from growers. This method of purchasing establishes trustworthy working relationships with the grain growers and promotes transparency in grain growing and food safety practices. Grain growers that sell their grains to artisan millers apply conventional or organic growing practices. Grain growers and millers have to go through vigorous processes to obtain the certified organic certification for their grains or products, which guarantees that no chemical additives have been used.
How organic grain is processed varies. Stone milling and impact hammer milling methods are typical when minimally refined whole grain flour is preferred. Several American artisan millers produce various whole grain flours, among them Faitrebid Mills, Hayden Flour Mills, and Baker Miller Chicago. Organic flours have gained popularity in the baking industry. As consumers become more aware of them, we see the demand swinging back toward whole grain and artisan milling as a preference.
Flour forms the foundation for bread, cakes, and pastries. It may be described as the skeleton, which supports the other ingredients in a baked product. This applies to both yeast and chemically leavened products.
The strength of flour is represented in protein (gluten) quality and quantity. This varies greatly from flour to flour. The quality of the protein indicates the strength and stability of the flour, and the result in bread making depends on the method used to develop the gluten by proper handling during the fermentation. Gluten is a rubber-like substance that is formed by mixing flour with water. Before it is mixed it contains two proteins. In wheat, these two proteins are gliadin and glutenin. Although we use the terms protein and gluten interchangeably, gluten only develops once the flour is moistened and mixed. The protein in the flour becomes gluten.
Hard spring wheat flours are considered the best for bread making as they have a larger percentage of good quality gluten than soft wheat flours. It is not an uncommon practice for mills to blend hard spring wheat with hard winter wheat for the purpose of producing flour that combines the qualities of both. Good bread flour should have about 13% gluten.
Storing Flour
Flour should be kept in a dry, well-ventilated storeroom at a fairly uniform temperature. A temperature of about 21°C (70°F) with a relative humidity of 60% is considered ideal. Flour should never be stored in a damp place. Moist storerooms with temperatures greater than 23°C (74°F) are conducive to mould growth, bacterial development, and rapid deterioration of the flour. A well-ventilated storage room is necessary because flour absorbs and retains odors. For this reason, flour should not be stored in the same place as onions, garlic, coffee, or cheese, all of which give off strong odors.
Flour Tests
Wheat that is milled and blended with modern milling methods produces flours that have a fairly uniform quality all year round and, if purchased from a reliable mill, should not require any testing for quality. The teacher, student, and professional baker, however, should be familiar with qualitative differences in flours and should know the most common testing methods.
Flours are mainly tested for:
• Color
• Absorption
• Gluten strength
• Baking quality
Other tests, done in a laboratory, are done for:
• Albumen
• Starch
• Sugar
• Dextrin
• Mineral and fat content
Color
The color of the flour has a direct bearing on baked bread, providing that fermentation has been carried out properly. The addition of other ingredients to the dough, such as brown sugar, malt, molasses, salt, and colored margarine, also affects the color of bread.
To test the color of the flour, place a small quantity on a smooth glass, and with a spatula, work until a firm smooth mass about 5 cm (2 in.) square is formed. The thickness should be about 2 cm (4/5 in.) at the back
of the plate to a thin film at the front. The test should be made in comparison with a flour of known grade and quality, both flours being worked side by side on the same glass. A creamy white color indicates a hard flour of good gluten quality. A dark or greyish color indicates a poor grade of flour or the presence of dirt. Bran specks indicate a low grade of flour.
After making a color comparison of the dry samples, dip the glass on an angle into clean water and allow to partially dry. Variations in color and the presence of bran specks are more easily identified in the damp samples.
Absorption
Flours are tested for absorption because different flours absorb different amounts of water and therefore make doughs of different consistencies. The absorption ability of a flour is usually between 55% and 65%. To determine the absorption factor, place a small quantity of flour (100 g/4 oz.) in a bowl. Add water gradually from a beaker containing a known amount of water. As the water is added, mix with a spoon until the dough reaches the desired consistency. You can knead the dough by hand for final mixing and determination of consistency. Weigh the unused water. Divide the weight of the water used by the weight of the flour used. The result is the absorption ability in percentage. For example:
• Weight of flour used: 100 g (4 oz.)
• Weight of water used: 60 g (2.7 oz.)
• Therefore absorption = 60/100, or 60%
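Expressed as code, the absorption test is just the ratio of water taken up to flour used. The sketch below is illustrative only: the beaker quantities are invented, while the 60 g of water used matches the worked example above.

```python
def absorption_pct(flour_g, water_added_g, water_unused_g):
    """Absorption ability: water actually used divided by flour used, as a percentage."""
    water_used_g = water_added_g - water_unused_g
    return water_used_g / flour_g * 100

# 100 g of flour, with 60 g of water worked in before the dough reached the right consistency
print(absorption_pct(flour_g=100, water_added_g=80, water_unused_g=20))  # 60.0
```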
Prolonged storage in a dry place results in a natural moisture loss in flour and has a noticeable effect on the dough. For example, a sack of flour that originally weighed 40 kg (88 lb.) with a moisture content of 14% may be reduced to 39 kg (86 lb.) during storage. This means that 1 kg (2 lb.) of water is lost and must be made up when mixing. The moisture content of the wheat used to make the flour is also important from an economic standpoint.
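The effect of that storage loss can also be worked out directly. The following sketch is our own arithmetic built on the sack figures in the text; the resulting moisture percentage is not stated in the original and is shown only for illustration.

```python
def moisture_after_loss(original_kg, moisture_pct, water_lost_kg):
    """New moisture percentage of flour after losing water during storage."""
    water_kg = original_kg * moisture_pct / 100   # e.g., 5.6 kg of water in a 40 kg sack at 14%
    new_total_kg = original_kg - water_lost_kg    # e.g., 39 kg after 1 kg of water evaporates
    return (water_kg - water_lost_kg) / new_total_kg * 100

# The 40 kg sack at 14% moisture that drops to 39 kg in storage
print(round(moisture_after_loss(40, 14, 1), 1))  # about 11.8% moisture remains
# The missing 1 kg of water has to be made up when mixing.
```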
Hard wheat flour absorbs more liquid than soft flour. Good hard wheat flour should feel somewhat granular when rubbed between the thumb and fingers. A soft, smooth feeling indicates a soft wheat flour or a blend of soft and hard wheat flour. Another indicator is that hard wheat flour retains its form when pressed in the hollow of the hand and falls apart readily when touched. Soft wheat flour tends to remain lumped together after pressure.
Gluten Strength
The gluten test is done to find the variation of gluten quality and quantity in different kinds of flour. Hard flour has more gluten of better quality than soft flour. The gluten strength and quality of two different kinds of hard flour may also vary with the weather conditions and the place where the wheat is grown. The difference may be measured exactly by laboratory tests, or roughly assessed by the variation of gluten balls made from different kinds of hard flours.
For example, to test the gluten in hard flour and all-purpose flour, mix 250 g (9 oz.) of each in separate mixing bowls with enough water to make each dough stiff. Mix and develop each dough until smooth. Let the dough rest for about 10 minutes. Wash each dough separately while kneading it under a stream of cold water until the water runs clean and all the starch is washed out. (Keep a flour sieve in the sink to prevent dough pieces from being washed down the drain.) What remains will be crude gluten. Shape the crude gluten into round balls, then place them on a paper-lined baking pan and bake at 215°C (420°F) for about one hour. The gluten ball made from the hard flour will be larger than the one made from all-purpose flour. This illustrates the ability of hard flour to produce a greater volume because of its higher gluten content.
Ash Content
Ash or mineral content of flour is used as another measurement of quality. Earlier in the chapter, we talked about extraction rates as an indicator of how much of the grain has been refined. Ash content refers to the amount of ash that would be left over if you were to burn 100 g of flour. A higher ash content indicates that the flour contains more of the germ, bran, and outer endosperm. Lower ash content means that the flour is more highly refined (i.e., a lower extraction rate).
Baking Quality
The final and conclusive test of any flour is the kind of bread that can be made from it. The baking test enables the baker to check on the completed loaf that can be expected from any given flour. Good volume is related to good quality gluten; poor volume to young or green flour. Flour that lacks stability or power to hold during the entire fermentation may result in small, flat bread. Flour of this type may sometimes respond to an increase in the amount of yeast. More yeast shortens the fermentation time and keeps the dough in better condition during the pan fermentation period.
Rye is a hardy cereal grass cultivated for its grain. Its use by humans can be traced back over 2,000 years. Once a staple food in Scandinavia and Eastern Europe, rye declined in popularity as wheat became more available through world trade. A crop well suited to northern climates, rye is grown on the Canadian Prairies and in the northern states such as the Dakotas and Wisconsin.
Rye flour is the only flour other than wheat that can be used without blending (with wheat flour) to make yeast-raised breads. Nutritionally, it is a grain comparable in value to wheat. In some cases, for example, its lysine content (an amino acid), is even biologically superior.
The brown grain is cleaned, tempered, and milled much like wheat grain. One difference is that the rye endosperm is soft and breaks down into flour much quicker than wheat. As a result, it does not yield semolina, so purifiers are seldom used. The bran is separated from the flour by the break roller, and the flour is further rolled and sifted while being graded into chop, meal, light flour, medium flour, and dark flour:
• Chop: This is the miller’s name for the coarse stock after grinding in a break roller mill.
• Meal: Like chop, meal is made of 100% extraction obtained by grinding the entire rye kernel.
• Light rye flour: This is obtained from the centre of the rye kernel and is low in protein and high in starch content. It can be compared to white bread flour and is used to make light rye breads.
• Medium rye flour: This is straight flour and consists of all the kernels after the bran and shorts have been removed. It is light grey in colour, has an ash content of 1%, and is used for a variety of sourdough breads.
• Dark rye flour: This is comparable to first clear wheat flour. It has an ash content of 2% and a protein content of 16%. It is used primarily for heavier types of rye bread.
The lighter rye flours are generally bleached, usually with a chlorine treatment. The purpose of bleaching is to lighten the colour, since there is no improvement on the gluten capability of the flour.
Extraction of Rye Flour
The grade of extraction of rye flour is of great importance to the yield of the dough and the creation of a particular flavour in the baked bread. Table 1 shows the percentage of the dry substances of rye flour by grade of extraction.
Table 1: Table of extraction for rye flour (percentage of dry substances by grade of extraction)

Substance      70% Extraction   85% Extraction
Ash            0.8%             1.4%
Fat            1.2%             1.7%
Protein        8.1%             9.6%
Sugar          6.5%             7.5%
Starch         72.5%            65.1%
Crude fibre    0.5%             1.3%
Pentosans      5.2%             7.6%
Undefinable    5.2%             5.8%
Note that ash, fibre, and pentosans are higher in the 85% extraction rate flour, and starch is lower. Pentosans are gummy carbohydrates that tend to swell when moistened and, in baking, help to give the rye loaf its cohesiveness and structure. The pentosan level in rye flour is greater than that of wheat flour and is of more significance for successful rye bread baking.
Rye flours differ from wheat flours in the type of gluten that they contain. Although some dark rye flours can have a gluten content as high as 16%, this is only gliadin. The glutenin, which forms the elasticity in dough, is absent, and therefore doughs made only with rye flour will not hold the gas produced by the yeast during fermentation. This results in a small and compact loaf of bread.
Starch and pentosans are far more important to the quality of the dough yield than gluten. Starch is the chief component of the flour responsible for the structure of the loaf. Its bread-making ability hinges on the age of the flour and the acidity. While rye flour does not have to be aged as much as wheat flour, it has both a “best after” and a “best before” date. Three weeks after milling is considered to be good.
When the rye flour is freshly milled, the starch gelatinizes (sets) quickly at a temperature at which amylases are still very active. As a result, bread made from fresh flour may be sticky and very moist. At the other extreme, as the starch gets older, it gelatinizes less readily, the enzymes cannot do their work, and the loaf may split and crack. A certain amount of starch breakdown must occur for the dough to be able to swell.
The moisture content of rye flour should be between 13% and 14%. The less water in the flour, the better its storage ability. Rye should be stored under similar conditions to wheat flour.
Differences between Rye and Wheat
Here is a short list of the differences between rye and wheat:
• Rye is more easily pulverized.
• Rye does not yield semolina.
• Gluten content in rye is not a significant dough-making factor.
• Starch is more important for bread making in rye flour than in wheat flour.
• The pentosan level in rye flour is higher and more important for bread making.
• Rye flour has greater water binding capability than wheat flour, due to its starch and pentosan content.
In summary, both wheat and rye have a long history in providing the “staff of life.” They are both highly nutritious. North American mills have state-of-the-art technology that compensates for crop differences, thus ensuring that the baker has a reliable and predictable raw material. Flour comes in a great variety of types, specially formulated so that the baker can choose according to product and customer taste.
Several other types of grains are commonly used in baking. In particular, corn and oats feature predominantly in certain types of baking (quick breads and cookies respectively, for instance) but increasingly rice flour is being used in baked goods, particularly for people with gluten sensitivities or intolerances. The trend to whole grains and the influence of different ethnic cultures has also meant the increase in the use of other grains and pulses for flours used in breads and baking in general.
Corn
Corn is one of the most widely used grains in the world, and not only for baking. Corn is used in breads and cereals, but also to produce sugars (such as dextrose and corn syrup), starch, plastics, adhesives, fuel (ethanol), and alcohol (bourbon and other whisky). It is produced from the maize plant (the preferred scientific and formal name of the plant that we call corn in North America). There are different varieties of corn, some of which are soft and sweet (corn you use for eating fresh or for cooking) and some of which are starchy and are generally dried to use for baking, animal feed, and popcorn.
Varieties Used in Baking
• Cornmeal has a sandy texture and is ground to fine, medium, and coarse consistencies. It has most of the husk and germ removed, is used in recipes from the American South (e.g., cornbread), and can be used to add texture to other types of breads and pastry.
• Stone-ground cornmeal has a texture not unlike whole wheat flour, as it contains some of the husk and germ. Stone ground cornmeal has more nutrients, but it is also more perishable. In baking, it acts more like cake flour due to the lack of gluten.
• Corn flour in North America is very finely ground cornmeal that has had the husk and germ removed. It has a very soft, powdery texture. In the U.K. and Australia, corn flour refers to cornstarch.
• Cornstarch is the starch extracted from the maize kernel. It is primarily used as a thickener in baking and other cooking. Cornstarch has a very fine, powdery consistency and can be dissolved easily in water. As a thickening agent, it requires heat to set, and will produce products with a shiny, clear consistency.
• Blue cornmeal has a light blue or violet colour and is produced from whole kernels of blue corn. It is most similar to stone-ground cornmeal and has a slightly sweet flavour.
Rice
Rice is another of the world’s most widely used cereal crops and forms the staple for much of the world’s diet. Because rice is not grown in Canada, it is not regulated by the Canadian Grain Commission.
Varieties Used in Baking
• Rice flour is prepared from finely ground rice that has had the husks removed. It has a fine, slightly sandy texture, and provides crispness while remaining tender due to its lack of gluten. For this reason, many gluten-free breads are based on rice flours or blends that contain rice flour.
• Short grain or pearl rice is also used in the pastry shop to produce rice pudding and other desserts.
Oats
Oats are widely used for animal feed and food production, as well as for making breads, cookies, and dessert toppings. Oats add texture to baked goods and desserts.
Varieties Used in Baking
• Bakers will most often encounter rolled oats, which are produced by pressing the de-husked whole kernels through rollers.
• Oat bran and oat flour are produced by grinding the oat kernels and separating out the bran and endosperm.
• Whole grain oat flour is produced by grinding the whole kernel and leaving all of its components in the ground flour.
• Steel-cut oats are the chopped oat kernels and are more commonly used in cooking and making breakfast cereals.
Other Grains and Pulses
A wide range of additional flours and grains that are used in ethnic cooking and baking are becoming more and more widely available in Canada. These may be produced from grains (such as kamut, spelt, and quinoa), pulses (such as lentils and chickpeas), and other crops (such as buckwheat) that have a grain-like consistency when dried. Increasingly, with allergies and intolerances on the rise, these flours are being used in bakeshops as alternatives to wheat-based products for customers with special dietary needs.
• 3.1: Understanding Fats and Oils
Fats and oils are organic compounds that, like carbohydrates, are composed of the elements carbon (C), hydrogen (H), and oxygen (O), arranged to form molecules. There are many types of fats and oils and a number of terms and concepts associated with them, which are detailed further here.
• 3.2: Sources of Bakery Fats and Oils
Edible fats and oils are obtained from both animal and vegetable sources. Animal sources include: Beef, Pork, Sheep, and Fish. In North America, the first two are the prime sources.
• 3.3: Major Fats and Oils Used in Bakeries
All fats become oils and vice versa, depending on temperature. Physically, fats consist of minute solid fat particles enclosing a microscopic liquid oil fraction. The consistency of fat is very important to the baker. It is very difficult to work with butter (relatively low melting point) in hot weather, for example. At the other extreme, fats with a very high melting point are not very palatable, since they tend to stick to the palate.
• 3.4: Functions of Fat in Baking
Thumbnail: Butter is often served for spreading on bread with a butter knife. (CC BY-SA 3.0; Jonathunder).
03: Fat
Fats and oils are organic compounds that, like carbohydrates, are composed of the elements carbon (C), hydrogen (H), and oxygen (O), arranged to form molecules. There are many types of fats and oils and a number of terms and concepts associated with them, which are detailed further here.
Lipids
In baking, lipids are generally a synonym for fats. Baking books may talk about the “lipid content of eggs,” for example.
Triglycerides
Triglycerides is another chemical name for the most common type of fats found in the body, indicating that they are usually made up of three (tri) fatty acids and one molecule of glycerol (glycerine is another name) as shown in Figure 1. (The mono- and diglycerides that are used as emulsifiers have one and two fatty acids respectively.)
Figure 1 Composition of fats (triglycerides)
Fatty Acids
Each kind of fat or oil has a different combination of fatty acids. The nature of the fatty acid will determine the consistency of the fat or oil. For example, stearic acid is the major fatty acid in beef fat, and linoleic acid is dominant in seed oils. Fatty acids are defined as short, medium, or long chain, depending on the number of carbon atoms in the molecule. The reason that some fat melts gradually is that as the temperature rises, each fatty acid will, in turn, soften as its melting point is reached. A fat that melts all at once contains fatty acids of the same or similar type, with melting points within a narrow range. An example of such a fat is coconut fat: one second it is solid, the next, liquid.
Table 1 shows the characteristics of three fatty acids.
Table 1: Characteristics of Fatty Acids

Type of Fatty Acid   Melting Point    Physical State (at room temperature)
Stearic              69°C (157°F)     Solid
Oleic                16°C (61°F)      Liquid
Linoleic             -12°C (9°F)      Liquid
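As a quick illustration of the point about melting ranges, the sketch below uses the melting points from Table 1 to check which fatty acids are liquid at a given temperature; a fat containing all three would soften progressively as the temperature climbs past each value. The dictionary and function are ours, built only from the table above.

```python
# Melting points from Table 1, in degrees Celsius
MELTING_POINTS_C = {"stearic": 69, "oleic": 16, "linoleic": -12}

def molten_at(temperature_c):
    """Return the fatty acids from Table 1 that are liquid at the given temperature."""
    return [name for name, mp in MELTING_POINTS_C.items() if temperature_c > mp]

print(molten_at(20))  # ['oleic', 'linoleic'] -- at room temperature only stearic stays solid
print(molten_at(70))  # all three are liquid once the highest melting point is passed
```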
Rancid
Rancid is a term used to indicate that fat has spoiled. The fat takes on an unpleasant flavor when exposed to air and heat. Unsalted butter, for example, will go rancid quickly if left outside the refrigerator, especially in warm climates.
Oxidation/Antioxidants
Oxidation (exposure to air) causes rancidity in fats over time. This is made worse by combination with certain metals, such as copper. This is why doughnuts are never fried in copper pans!
Some oils contain natural antioxidants, such as tocopherols (vitamin E is one kind), but these are often destroyed during the processing. As a result, manufacturers add synthetic antioxidants to retard rancidity. BHA and BHT are synthetic antioxidants commonly used by fat manufacturers.
Saturated/Unsaturated
Saturated and unsaturated refer to the extent to which the carbon atoms in a fatty acid molecule are bonded (saturated) with hydrogen atoms. One system of fatty acid classification is based on the number of double bonds.
• 0 double bonds: saturated fatty acids. Stearic acid is a typical long-chain saturated fatty acid (Figure 2).[1]
Figure 2 Stearic Acid
• 1 double bond: monounsaturated fatty acids. Oleic acid is a typical monounsaturated fatty acid (Figure 3).[2]
Figure 3 Oleic Acid
• 2 or more double bonds: polyunsaturated fatty acids. Linoleic acid is a typical polyunsaturated fatty acid (Figure 4).[3]
Figure 4 Linoleic Acid
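The classification above comes down to counting carbon-carbon double bonds. A minimal sketch (the function name is ours, not a standard library):

```python
def classify_fatty_acid(double_bonds):
    """Classify a fatty acid by its number of carbon-carbon double bonds."""
    if double_bonds == 0:
        return "saturated"
    if double_bonds == 1:
        return "monounsaturated"
    return "polyunsaturated"

# The three examples pictured above
print(classify_fatty_acid(0))  # saturated       (stearic acid, Figure 2)
print(classify_fatty_acid(1))  # monounsaturated (oleic acid, Figure 3)
print(classify_fatty_acid(2))  # polyunsaturated (linoleic acid, Figure 4)
```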
Saturated fat is a type of fat found in food. For many years, there has been a concern that saturated fats may lead to an increased risk of heart disease; however, there have been studies to the contrary and the literature is far from conclusive. The general assumption is that the less saturated fat the better as far as health is concerned. For the fat manufacturer, however, low saturated fat levels make it difficult to produce oils that will stand up to the high temperatures necessary for processes such as deep-frying. Hydrogenation has been technology’s solution. Hydrogenation will be discussed later in the chapter.
Saturated fat is found in many foods:
• Animal foods (like beef, chicken, lamb, pork, and veal)
• Coconut, palm, and palm kernel oils
• Dairy products (like butter, cheese, and whole milk)
• Lard
• Shortening
Unsaturated fat is also in the foods you eat. Replacing saturated and trans fats (see below) with unsaturated fats has been shown to help lower cholesterol levels and may reduce the risk of heart disease. Unsaturated fat is also a source of omega-3 and omega-6 fatty acids, which are generally referred to as “healthy” fats. Choose foods with unsaturated fat as part of a balanced diet using the U.S. Department of Health and Human Services’ Dietary Guidelines.
Even though unsaturated fat is a “good fat,” having too much in your diet may lead to having too many calories, which can increase your risk of developing obesity, type 2 diabetes, heart disease, and certain types of cancer.
There are two main types of unsaturated fats:
• Monounsaturated fat, which can be found in:
1. Avocados
2. Nuts and seeds (like cashews, pecans, almonds, and peanuts)
3. Vegetable oils (like canola, olive, peanut, safflower, sesame, and sunflower)
• Polyunsaturated fat, which can be found in:
1. Fatty fish (like herring, mackerel, salmon, trout and smelt)
2. Fish oils
3. Nuts and seeds (like cashews, pecans, almonds and peanuts)
4. Vegetable oils (like canola, corn, flaxseed, soybean and sunflower)
Hydrogenation
Simply put, hydrogenation is a process of adding hydrogen gas to alter the melting point of the oil or fat. The injected hydrogen bonds to carbon atoms at the double bonds of unsaturated fatty acids, which raises the melting point and changes liquid oil into solid fat. This is practical, in that it makes fats versatile. Think of the different temperature conditions within a bakery during which fat must be workable; think of the different climatic conditions encountered in bakeries.
Trans Fat
Trans fat is made from a chemical process known as “partial hydrogenation.” This is when liquid oil is made into a solid fat. Like saturated fat, trans fat has been shown to raise LDL or “bad” cholesterol levels, which may in turn increase your risk for heart disease. Unlike saturated fat, trans fat also lowers HDL or “good” cholesterol. A low level of HDL-cholesterol is also a risk factor for heart disease.
Until recently, most of the trans fat found in a typical American diet came from:
• Fried foods (like doughnuts)
• Baked goods, including cakes, pie crusts, biscuits, frozen pizza, cookies, and crackers
• Stick margarine and other spreads
The US Food and Drug Administration (FDA) specifically prescribes what information must be displayed on a label. The trans fat content of food is one piece of core nutrition information that is required to be declared in a nutrition facts table. More information on a nutrition facts table and labeling details can be found at www.fda.gov/food/ingredientsp.../ucm274590.htm
Emulsification (Emulsified Shortenings)
Emulsification is the process by which normally unmixable ingredients (such as oil and water) can be combined into a stable substance. Emulsifiers are substances that can aid in this process. There are natural emulsifiers such as lecithin, found in egg yolks. Emulsifiers are generally made up of monoglycerides and diglycerides and have been added to many hydrogenated fats, improving the fat’s ability to:
• Develop a uniformly fine structure
• Absorb a high percentage of sugar
• Hold in suspension a high percentage of liquid
Emulsified shortenings are ideal for cakes and icings, but they are not suitable for deep-frying.
Stability
Stability refers to the ability of a shortening to have an extended shelf life. It refers especially to deep-frying fats, where a smoke point (see below) of 220°C to 230°C (428°F to 446°F) indicates a fat of high stability.
Smoke Point
The smoke point is the temperature reached when fat first starts to smoke. The smoke point will decline over time as the fat breaks down (see below).
Fat Breakdown
The technical term for fat breakdown is hydrolysis, which is the chemical reaction of a substance with water. In this process, fatty acids are separated from their glycerol molecules and accumulate over time in the fat. When their concentration reaches a certain point, the fat takes on an unpleasant taste, and continued use of the fat will yield a nasty flavor. The moisture at the root of this problem comes from the product being fried, which is a good reason to turn off the fryer or switch it to “standby” between batches of frying foods such as doughnuts. Another cause of fat breakdown is excessive flour on the product or particles breaking off the product.

Attribution
Figure 2: Stearic Acid. Retrieved from http://library.med.utah.edu/NetBioch...Acids/3_3.html
Figure 3: Oleic Acid. Retrieved from http://library.med.utah.edu/NetBioch...Acids/3_3.html
Figure 4: Linoleic Acid. Retrieved from http://library.med.utah.edu/NetBioch...Acids/3_3.html
3.02: Sources of Bakery Fats and Oils
Edible fats and oils are obtained from both animal and vegetable sources. Animal sources include beef, pork, sheep, and fish. In North America, the first two are the prime sources. Vegetable sources include canola, coconut, corn, cotton, olive, palm fruit and palm kernel, peanut, soya bean, safflower, and sunflower.
Refining of Fats and Oils
The major steps in refining fats and oils are as follows:
• Free fatty acids are neutralized by treatment with an alkali.
• Color is removed.
• The fat is hydrogenated.
• The fat is deodorized.
• The fat is chilled and beaten to make it softer and whiter. This is done by a votator (a machine that cools and kneads liquid margarine).
• Fat is stored to facilitate the correct crystallization (tempering).
Lard
Lard is obtained from the fatty tissues of pigs, with a water content of 12% to 18%. Due to dietary concerns, lard has gradually lost much of its former popularity. It is still extensively used, however, for:
• Yeast dough additions
• Pie pastry
• Pan greasing
Lard has a good plastic range, which enables it to be worked in a pie dough at fairly low temperatures (try the same thing with butter!). It has a fibrous texture and does not cream well. It is therefore not suitable for cake making. Some grades of lard also have a distinctive flavor, which is another reason it is unsuitable for cake making.
Butter
Butter is made from sweet, neutralized, or ripened creams pasteurized and standardized to a fat content of 30% to 40%. When cream is churned or overwhipped, the fat particles separate from the watery liquid known as buttermilk. The separated fat is washed and kneaded in a water wheel to give it plasticity and consistency. Color is added during this process to make it look richer, and salt is added to improve its keeping quality.
In Canada, the following regulations apply to butter:
• Minimum 80% milk fat by weight
• Permitted ingredients: milk solids, salt, air or inert gas, permitted food color, permitted bacterial culture
• The grade and grade name for butter and butter products is Canada 1.
Sweet (or unsalted) butter is made from a cream that has a very low acid content and no salt is added to it. It is used in some baking products like French butter cream, where butter should be the only fat used in the recipe. Keep sweet butter in the refrigerator.
From the standpoint of flavor, butter is the most desirable fat used in baking. Its main drawback is its relatively high cost. It has moderate but satisfactory shortening and creaming qualities. When used in cake mixing, additional time, up to five minutes more, should be allowed in the creaming stage to give maximum volume. Adding an emulsifier (about 2% based on flour weight) will also help in cake success, as butter has a poor plastic range of 18°C to 20°C (64°F to 68°F).
Butter and butter products may also be designated as “whipped” where they have had air or inert gas uniformly incorporated into them as a result of whipping. Whipped butter may contain up to 1% added edible casein or edible caseinates.
Butter and butter products may also be designated as “cultured” where they have been produced from cream to which a permitted bacterial culture has been added.
Margarine
Margarines are made primarily from vegetable oils (to some extent hydrogenated) with a small fraction of milk powder and bacterial culture to give a butter-like flavor. Margarines are very versatile and include:
• General purpose margarine with a low melting point, suitable for blending in dough and general baking
• Cake margarine with excellent creaming qualities
• Roll-in margarine, which is plastic and suitable for Danish pastries
• Puff pastry roll-in, which is the most waxy and has the highest melting point
Margarine may be obtained white, but is generally colored. Margarine has a fat content ranging from 80% to 85%, with the balance pretty much the same as butter.
Oil content claims on margarine
The claim that margarine contains a certain percentage of a specific oil in advertisements should always be based on the percentage of oil by weight of the total product. All the oils used in making the margarine should be named. For example, if a margarine is made from a mixture of corn oil, cottonseed oil, and soybean oil, it would be considered misleading to refer only to the corn oil content in an advertisement for the margarine. On the other hand, the mixture of oils could be correctly referred to as vegetable oils.
It used to be that you could only buy margarines in solid form full of saturated and trans fat. The majority of today’s margarines come in tubs, are soft and spreadable, and are non-hydrogenated, which means they have low levels of saturated and trans fat. Great care must be taken when attempting to substitute spreadable margarine for solid margarine in recipes.
Shortenings
Since the invention of hydrogenated vegetable oil in the early 20th century, shortening has come almost exclusively to mean hydrogenated vegetable oil. Vegetable shortening shares many properties with lard: both are semi-solid fats with a higher smoke point than butter and margarine. They contain less water and are thus less prone to splattering, making them safer for frying. Lard and shortening have a higher fat content (close to 100%) compared to about 80% for butter and margarine. Cake margarines and shortenings tend to contain a somewhat higher percentage of monoglycerides than margarines. Such “high-ratio shortenings” blend better with hydrophilic (attracts water) ingredients such as starches and sugar.
Health concerns and reformulation
Early in this century, vegetable shortening became the subject of some health concerns due to its traditional formulation from partially hydrogenated vegetable oils that contain trans fats, which have been linked to a number of adverse health effects. Consequently, a low trans-fat variant of Crisco brand shortening was introduced in 2004. In January 2007, all Crisco products were reformulated to contain less than one gram of trans fat per serving, and the separately marketed trans-fat free version introduced in 2004 was consequently discontinued. Since 2006, many other brands of shortening have also been reformulated to remove trans fats. Non-hydrogenated vegetable shortening can be made from palm oil.
Hydrogenated vegetable shortenings
Hydrogenated shortenings are the biggest group of fats used in the commercial baking industry. They feature the following characteristics:
• They are made from much the same oils as margarine.
• They are versatile fats with good creaming ability.
• Their hydrogenation differs according to the specific use for which the fat is designed.
• They are 100% fat – no water.
• They keep well for six to nine months.
Variations on these shortenings are: emulsified vegetable shortenings, roll-in pastry shortenings, and deep-frying fats.
Emulsified vegetable shortenings
Emulsified vegetable shortenings are also termed high-ratio fats. The added emulsifiers (mono- and diglycerides) increase fat dispersion and give added fineness to the baked product. They are ideal for high-ratio cakes, where relatively large amounts of sugar and liquid are incorporated. The result is a cake:
• Fine in texture
• Light in weight and of excellent volume
• Superior in moisture retention (good shelf life)
• Tender to eat
This is also the fat of choice for many white cake icings.
Roll-in pastry shortenings
This type of shortening is also called special pastry shortening (SPS). These fats have a semi-waxy consistency and offer:
• Large plastic range
• Excellent extensibility
• Excellent lifting ability
They are primarily used in puff pastry and Danish pastry products where lamination is required. They come in various specialized forms, with varying qualities and melting points. It is all a matter of compromise between cost, palatability, and leavening power. A roll-in that does not have “palate cling” may have a melting point too low to guarantee maximum lift in a puff pastry product.
Deep-Frying Fats
Deep-frying fats are special hydrogenated fats that have the following features:
• High smoke point of up to 250°C (480°F)
• High heat stability and resistance to fat breakdown
• No undesirable flavor on finished products
• No greasiness when cold
• An added anti-foaming agent
Vegetable Oils
Vegetable oil is an acceptable common name for an oil that contains more than one type of vegetable oil. Generally, when such a vegetable oil blend is used as an ingredient in another food, it may be listed in the ingredients as “vegetable oil.”
There are two exceptions: if the vegetable oils are ingredients of a cooking oil, salad oil, or table oil, the oils must be specifically named in the ingredient list (e.g., canola oil, corn oil, safflower oil), and using the general term vegetable oil is not acceptable. As well, if any of the oils are coconut oil, palm oil, palm kernel oil, peanut oil, or cocoa butter, the oils must be specifically named in the ingredient list.
When two or more vegetable oils are present and one or more of them has been modified or hydrogenated, the common name on the principal display panel and in the list of ingredients must include the word “modified” or “hydrogenated,” as appropriate (e.g., modified vegetable oil, hydrogenated vegetable oil, modified palm kernel oil).
Vegetable oils are used in:
• Chemically leavened batters (e.g., muffin mixes)
• Dough additives (to replace the fat)
• Short sponges (to replace the butter or fat)
Coconut Fat
Coconut fat is often used to stabilize butter creams as it has a very small plastic range. It has a quite low melting point and its hardness is due to other factors. It can be modified to melt at different temperatures, generally between 32°C and 36°C (90°F and 96°F).
The Importance of Melting Points
As mentioned above, all fats become oils and vice versa, depending on temperature. Physically, fats consist of minute solid fat particles enclosing a microscopic liquid oil fraction. The consistency of fat is very important to the baker. It is very difficult to work with butter (relatively low melting point) in hot weather, for example. At the other extreme, fats with a very high melting point are not very palatable, since they tend to stick to the palate. Fat manufacturers have therefore attempted to customize fats to accommodate the various needs of the baker.
Fats with a melting range between 40°C and 44°C (104°F and 112°F) are considered to be a good compromise between convenience in handling and palatability. New techniques allow fats with quite high melting points without unpleasant palate-cling. Table 1 shows the melting points of some fats.
Table 1 Melting points of typical fats.
Type of Fat Melting Point
Coconut fat 32.5°C-34.5°C (90.5°F-94.1°F)
Regular margarine 34°C (93°F)
Butter 38°C (100°F)
Regular shortenings 44°C-47°C (111°F-116°F)
Roll-in shortenings 40°C-50°C (104°F-122°F)
Roll-in margarine 44°C-54°C (111°F-130°F)
Blending
It is probably safe to say that most fats are combinations or blends of different oils and/or fats. They may be all vegetable sources. They may be combined vegetable and animal sources. A typical ratio is 90% vegetable source to 10% animal (this is not a hard and fast rule). Formerly, blends of vegetable and animal oils and fats were termed compound fats. Nowadays, this term, if used at all, may refer also to combinations of purely vegetable origin.
3.04: Functions of Fat in Baking
The following summarize the various functions of fat in baking.
Tenderizing Agents
Used in sufficient quantity, fats tend to “shorten” the gluten strands in flour; hence their name: shortenings. Traditionally, the best example of such fat was lard.
Creaming Ability
This refers to the extent to which fat, when beaten with a paddle, will build up a structure of air pockets. This aeration, or creaming ability, is especially important for cake baking; the better the creaming ability, the lighter the cake.
Plastic Range
Plastic range relates to the temperature range over which the fatty acid component melts and the shortening stays workable, “stretching” without either cracking (too cold) or softening (too warm). A fat that stays “plastic” over a temperature range of 4°C to 32°C (39°F to 90°F) would be rated as excellent. A dough made with such a fat could be taken from the walk-in cooler to the bench in a hot bakeshop and handled interchangeably. Butter, on the other hand, does not have a good plastic range; it is almost too hard to work at 10°C (50°F) and too soft at 27°C (80°F).
Lubrication
In dough making, the fat portion makes it easier for the gluten network to expand. The dough is also easier to mix and to handle. This characteristic is known as lubrication.
Moistening Ability
Whether in dough or in a cake batter, fat retards drying out. For this purpose, a 100% fat shortening will be superior to either butter or margarine.
Nutrition
As one of the three major food categories, fats provide a very concentrated source of energy. They contain many of the fatty acids essential for health. | textbooks/chem/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/03%3A_Fat/3.03%3A_Major_Fats_and_Oils_Used_in_Bakeries.txt |
• 4.1: Sugar Chemistry (ADD US)
• 4.2: Sugar Refining
• 4.3: The Application of Sugar
Sugar is the third most used ingredient in the bakeshop. Sugar has several functions in baking. The most recognized purpose is, of course, to sweeten food, but there are many other reasons sugar is used in cooking and baking.
• 4.4: Agave
Agave has gained popularity in the food industry due to some of its nutritional properties. The agave nectar is obtained from the sap of the heart of the agave plant, a desert succulent, which is also used to produce tequila. The syrup/sugar production process of agave is similar to that of sugar.
• 4.5: Glucose/Dextrose
The sugar known as glucose has two origins: (1) in a natural form in most fruits and (2) in a processed form from corn (corn syrup). In baking, we usually refer to industrially made glucose. It is made from corn, and the resulting product, a thick syrup, is then adjusted to a uniform viscosity or consistency.
• 4.6: Honey
Honey is a natural food, essentially an invert sugar. Bees gather nectar and, through the enzyme invertase, change it into honey. Honey varies in composition and flavor depending on the source of the nectar. The average composition of honey is about 40% levulose, 35% dextrose, and 15% water, with the remainder being ash, waxes, and gum.
• 4.7: Malt
Malt is the name given to a sweetening agent made primarily from barley. The enzymes from the germ of the seeds become active, changing much of the starch into maltose, a complex sugar. Maltose has a distinct flavor and is used for making yeast products such as bread and rolls. Malt is considered to be relatively nutritious compared to other sweeteners.
• 4.8: Maple Syrup (ADD US)
Maple syrup is made by boiling and evaporating the sap of the sugar maple tree. Because sap is only 2% or 3% sugar, it takes almost 40 liters of sap to make 1 liter of syrup. This makes maple syrup a very expensive sweetener. It is prized for its unique flavor and sweet aroma. Don’t confuse maple-flavored pancake or table syrup with real maple syrup. Table syrup is made from inexpensive glucose or corn syrup, with added caramel coloring and maple flavoring.
• 4.9: Sugar Substitutes (ADD US)
Food additives such as sugar substitutes, which cover both artificial sweeteners and intense sweeteners obtained from natural sources, are subject to rigorous controls under the Food and Drugs Act and Regulations. New food additives (or new uses of permitted food additives) are permitted only once a safety assessment has been conducted and regulatory amendments have been enacted.
Thumbnail: Sugars; clockwise from top-left: White refined, unrefined, brown, unprocessed cane. (Public Domain; Romain Behar).
Contributors and Attributions
Sorangel Rodriguez-Velazquez (American University). Chemistry of Cooking by Sorangel Rodriguez-Velazquez is licensed under a Creative Commons Attribution-NonCommercial ShareAlike 4.0 International License, except where otherwise noted
04: Sugar
Chemically, sugar consists of carbon (C), oxygen (O), and hydrogen (H) atoms, and is classified as a carbohydrate. There are three main groups of sugars, classified according to the way the atoms are arranged together in the molecular structure. These groups are the following:
• Monosaccharides or simple sugars. Dextrose (glucose) is the major monosaccharide. Others are levulose or fructose (found in honey and many fruits), and galactose, which is a milk sugar. Such sugars do not readily crystallize. (Mono means one, indicating that the sugar consists of only one molecule.)
• Disaccharides or complex sugars. Sucrose (common sugar) is the primary example of a disaccharide. Maltose, found in cereals, and lactose, found in milk, are others.
• Polysaccharides. Examples are starches, dextrins, and cellulose.
Bakers are not concerned with polysaccharides but rather with the monosaccharides and disaccharides. The latter two both sweeten, but they cannot be used interchangeably because they have different effects on the end product. These differences are touched on later in the book.
Sugar Names
It is helpful to understand some of the conventions of the names of different sugars. Note that sugar names often end in “ose”: sucrose, dextrose, maltose, lactose, etc. Sucrose is the chemical name for sugar that comes from the cane and beet sugar plants.
Note that glucose is the chemical name for a particular type of sugar. What is sometimes confusing is that glucose occurs naturally, as a sugar molecule in substances such as honey, but it is also produced industrially from the maize plant (corn).
The Canadian Food and Drug Regulations (FDR) govern the following definitions:
• Sugars: All monosaccharides and disaccharides. Used for nutrition labelling purposes.
• Sweetening agent: Any food for which a standard is provided in Division 18 of the Food and Drug Regulation, or any combination of these. Includes sugar (sucrose), sugar syrups, and molasses derived from sugar cane or sugar beet, dextrose, glucose and syrups, honey and lactose. Excludes sweeteners considered to be food additives.
• Sweetening ingredient: Any sugar, invert sugar, honey, dextrose, glucose, or glucose solids, or any combination of these in dry or liquid form. Designed for sweetening fruits, vegetables, and their products and substitutes.
• Maple syrup: The syrup obtained by the concentration of maple sap or by the dilution or solution of a maple product, other than maple sap, in potable water.
• Sweetener: Any food additive listed as a sweetener. Includes both sugar alcohols and high-intensity sweeteners such as acesulfame-potassium, aspartame, and sucralose.
• Sugar alcohols: Food additives that may be used as sweeteners. Includes isomalt, lactitol, maltitol, maltitol syrup, mannitol, sorbitol, sorbitol syrup, xylitol, and erythritol.
While some refining usually occurs at source, most occurs in the recipient country. The raw sugar that arrives at the ports is not legally edible, being full of impurities.
At the refinery, the raw brown sugar goes through many stages:
• Washing and boiling
• Filtering to remove impurities
• Evaporation to the desired crystal size under vacuum to avoid caramelization
• Centrifuging, in which the fluid is spun off, leaving the crystals
• Drying in a rotating drum with hot air
• Packaging in various sizes, depending on the intended market
Sugar beet undergoes identical steps after the initial processing, which involves:
• Slicing the beets and extracting the sugar with hot water
• Removing impurities
• Filtration
• Concentration in evaporators
From here, the process is identical to the final steps in cane processing. See Figure 2 which illustrates the process.
Some of the sugar passes through a machine that presses the moist sugar into cubes and wraps and packages them; still other sugar is made into icing sugar. The sugar refining process is completely mechanical, and machine operators’ hands never touch the sugar.
Brown and yellow sugars are produced only in cane sugar refineries. When sugar syrup flows from the centrifuge machine, it passes through further filtration and purification stages and is re-boiled in vacuum pans such as the two illustrated in Figure 2. The sugar crystals are then centrifuged but not washed, so the sugar crystals still retain some of the syrup that gives the product its special flavour and colour.
During the whole refining process almost 100 scientific checks for quality control are made, while workers in research laboratories at the refineries constantly carry out experiments to improve the refining process and the final product. Sugar is carefully checked at the mills and is guaranteed to have a high purity. Government standards both in the United States and Canada require a purity of at least 99.5% sucrose.
Are animal ingredients included in white sugar?
Bone char — often referred to as natural carbon — is widely used by the sugar industry as a decolourizing
filter, which allows the sugar cane to achieve its desirable white colour. Other types of filters involve granular carbon or an ion-exchange system rather than bone char.
Bone char is made from the bones of cattle, and it is heavily regulated by the European Union and the USDA. Only countries that are deemed BSE-free can sell the bones of their cattle for this process.
Bone char is also used in other types of sugar. Brown sugar is created by adding molasses to refined sugar,
so companies that use bone char in the production of their regular sugar also use it in the production of their brown sugar. Confectioner’s sugar — refined sugar mixed with cornstarch — made by these companies also involves the use of bone char. Fructose may, but does not typically, involve a bone-char filter.
Bone char is not used at the sugar beet factory in Taber, Alberta, or in Montreal’s cane refinery. Bone char is used only at the Vancouver cane refinery. All products under the Lantic trademark are free of bone char. For the products under the Rogers trademark, all Taber sugar beet products are also free of bone char. In order to differentiate the Rogers Taber beet products from the Vancouver cane products, you can verify the ink-jet code printed on the product. Products with the code starting with the number “22” are from Taber, Alberta, while products with the code starting with the number “10” are from Vancouver.
If you want to avoid all refined sugars, there are alternatives such as sucanat and turbinado sugar, which are not filtered with bone char. Additionally, beet sugar — though normally refined — never involves the use of bone char.
Sugar is the third most used ingredient in the bakeshop. Sugar has several functions in baking. The most recognized purpose is, of course, to sweeten food, but there are many other reasons sugar is used in cooking and baking:
• It can be used for browning effect, both caramelization and the Maillard reaction, on everything from breads to cookies to cakes. Browning gives a pleasant colour and flavour to the finished product. Caramelization results from the action of heat on sugars. At high temperatures, the chemical changes associated with melting sugars result in a deep brown colour and new flavours. The Maillard reaction results from chemical interactions between sugars and proteins at high heat. An amino group from a protein combines with a reducing sugar to produce a brown colour in a variety of foods (e.g., brewed coffee, fried foods, and breads).
• It acts as the most important tenderizing agent in all baked goods, and one of the factors responsible for the spread in cookies. It helps delay the formation of gluten, which is essential for maintaining a soft or tender product.
• It makes an important contribution to the way we perceive the texture of food. For example, adding sugar to ice cream provides body and texture, which is perceived as smoothness. This addition helps prevent lactose crystallization and thus reduces sugar crystal formation that otherwise causes a grainy texture sometimes associated with frozen dairy products.
• It preserves food when used in sufficient quantity.
• In baking, it increases the effectiveness of yeast by providing an immediate and more usable source of nourishment for the yeast’s growth. This hastens the leavening process by producing more carbon dioxide, which allows the dough to rise at a quicker and more consistent rate.
Just as there are many functions of sugar in the bakeshop, there are different uses for the various types of sugar as well:
• Fine granulated sugar is most used by bakers. It generally dissolves easily in mixes and is pure enough for sugar crafters to boil for “pulled” sugar decorations.
• Coarse granulated sugar may be used for a topping on sugar cookies, puff pastry, and Danish pastries as it doesn’t liquefy or caramelize so readily. In some European countries, an extra coarse sugar (called hail — a literal translation) is used for this purpose.
• Icing or powdered sugar is used in icings and fillings and in sifted form as a top decoration on many baked goods.
• Brown or yellow sugars are used where their unique flavor is important, or in bakeries where an old-fashioned or rustic image is projected. Brown sugar can usually be substituted for white sugar without technical problems in sugar/batter mixes such as cakes and muffins, and in bread dough.
4.04: Agave
Agave has gained popularity in the food industry due to some of its nutritional properties. The agave nectar is obtained from the sap of the heart of the agave plant, a desert succulent, which is also used to produce tequila. The syrup/sugar production process of agave is similar to that of sugar. See more about the nutritional properties and application of agave in the chapter Special Diets, Allergies, Intolerances, Emerging Issues, and Trends in the open textbook Nutrition and Labelling for the Canadian Baker.
A video on the production of agave syrup is available online.
Honey is a natural food, essentially an invert sugar. Bees gather nectar and, through the enzyme invertase, change it into honey. Honey varies in composition and flavor depending on the source of the nectar. The average composition of honey is about 40% levulose, 35% dextrose, and 15% water, with the remainder being ash, waxes, and gum.
Blended honey is a mixture of pure honey and manufactured invert sugar, or a blend of different types of honey mixed together to produce a good consistency, color, and aroma. Dehydrated honey is available in a granular form.
Store honey in a tightly covered container in a dry place and at room temperature because it is hygroscopic, meaning it absorbs and retains moisture. Refrigeration or freezing won’t harm the color or flavor but it may hasten granulation. Liquid honey crystallizes during storage and is re-liquefied by warming in a double boiler not exceeding a temperature of 58°C (136°F).
Honey is used in baking:
• As a sweetener
• To add unique flavor
• In gingerbread and special cookies where a certain moistness is characteristic of the product
• To improve keeping qualities
There are several types of honey available:
• Comb honey is “packed by the bees” directly from the hive.
• Liquid honey is extracted from the comb and strained. It is the type used by most bakers.
• Creamed honey has a certain amount of crystallized honey added to liquid honey to give body to the final product.
• Chunk honey consists of pieces of comb honey as well as liquid.
• Granulated honey has been crystallized.
In the United States, honey categories are based on color, from white to dark amber. Honey from orange blossom is an example of white honey. Clover honey is an amber honey, and sage and buckwheat honeys are dark amber honeys.
4.07: Malt
Malt is the name given to a sweetening agent made primarily from barley. The enzymes from the germ of the seeds become active, changing much of the starch into maltose, a complex sugar. Maltose has a distinct flavor and is used for making yeast products such as bread and rolls. Malt is considered to be relatively nutritious compared to other sweeteners.
Malt is available as:
• Flour
• Malt syrup
• Malt extract
• Dried malt
The flour is not recommended since it can lead to problems if not scaled precisely. Malt syrup is inconvenient to work with, as it is sticky, heavy, and bulky. Dried malt is the most practical, though it must be kept protected from humidity.
There are two distinct types of malt:
• Diastatic malt flour is dried at low temperature, thus retaining the activity of the diastatic enzymes.
• Non-diastatic malt flour is darker in color. It is treated at high temperature, which kills the enzymes, and the result is non-diastatic malt.
Crushing malted grain in water produces malt syrup. This dissolves the maltose and soluble enzymes. The liquid is concentrated, producing the syrup. If the process is continued, a dry crystallized product called dried malt syrup is obtained.
Malt syrup has a peculiar flavor, which many people find desirable. It is used in candy, malted milk, and many other products. The alcoholic beverage industry is the largest consumer of malt by far, but considerable quantities are used in syrup and dried malt syrup, both of which are divided into diastatic and non-diastatic malt.
Both diastatic and non-diastatic malts add sweetness, color, and flavor to baked products. Both are valuable since they contain malt sugar, which is fermented by the yeast in the later stages of fermentation. Other sugars such as glucose and levulose are used up rapidly by fermenting yeast in the early stages of fermentation. Diastatic malt is made with various levels of active enzymes. Malt with medium diastatic activity is recommended. Normally, bread bakers will find sufficient enzymes in well-balanced flour from a good mill, so it is unnecessary to use diastatic malt.
When using dry diastatic malt, about the same weight should be used as for liquid diastatic malt; the adjustment is made at the factory, where the enzyme level is increased in the dry product to compensate. Since the dry type contains about 20% less moisture than the liquid type, add water to make up the difference if dry diastatic malt is substituted for malt syrup.
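As a rough illustration of that substitution, the short Python sketch below keeps the malt weight the same and adds water for the missing moisture. The function name and the 20% default are simply the rule of thumb above expressed in code; treat it as an illustrative sketch rather than a manufacturer's formula.

```python
def dry_malt_substitution(liquid_malt_g, moisture_difference=0.20):
    """Substitute dry diastatic malt for liquid malt syrup.

    Rule of thumb above: keep the malt weight the same and add water
    equal to the moisture the dry product lacks (about 20%).
    """
    dry_malt_g = liquid_malt_g                            # same weight of malt
    extra_water_g = liquid_malt_g * moisture_difference   # make up the lost moisture
    return dry_malt_g, extra_water_g

# Example: a dough formula calling for 500 g of liquid diastatic malt
malt, water = dry_malt_substitution(500)
print(f"Use {malt:.0f} g dry diastatic malt and add {water:.0f} g water")
# -> Use 500 g dry diastatic malt and add 100 g water
```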
The main uses of malt in the bakery are to:
• Add nutritive value, as it is rich in vitamins and essential amino acids
• Lengthen shelf life through its ability to attract moisture
• Help fermentation by strengthening the gluten and feeding the yeast
• Make products more appealing through browning of the crust
• Add unique flavor to products when used in sufficient quantity
Table 1 shows the suggested use levels for malt.
Table 1 Recommended level of malt for various baked goods
Product Percentage of Flour Weight
White pan bread 0.5-1.5
Sweet goods 1.5-3.0
French/Italian bread 0.5-2.0
Whole wheat bread 5.0-9.0
Pretzels 1.5-6.0
Hard rolls 3.0-5.5
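Table 1 gives malt as a baker's percentage, that is, as a share of the flour weight. The sketch below simply restates the table in Python and shows how those percentages translate into grams of malt for a given batch of flour; the dictionary and helper function are illustrative only.

```python
# Recommended malt levels from Table 1, as a percentage of flour weight
MALT_LEVELS = {
    "white pan bread": (0.5, 1.5),
    "sweet goods": (1.5, 3.0),
    "french/italian bread": (0.5, 2.0),
    "whole wheat bread": (5.0, 9.0),
    "pretzels": (1.5, 6.0),
    "hard rolls": (3.0, 5.5),
}

def malt_for(product, flour_g):
    """Return the low and high malt weights (g) for a given flour weight."""
    low_pct, high_pct = MALT_LEVELS[product.lower()]
    return flour_g * low_pct / 100, flour_g * high_pct / 100

low, high = malt_for("hard rolls", 10_000)  # 10 kg of flour
print(f"Hard rolls: {low:.0f} g to {high:.0f} g of malt")
# -> Hard rolls: 300 g to 550 g of malt
```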
Canada is responsible for 84% of the world’s maple syrup production, with the United States being responsible for the remaining 16%. Maple syrup is made by boiling and evaporating the sap of the sugar maple tree. Because sap is only 2% or 3% sugar, it takes almost 40 liters of sap to make 1 liter of syrup. This makes maple syrup a very expensive sweetener. It is prized for its unique flavor and sweet aroma. Don’t confuse maple-flavored pancake or table syrup with real maple syrup. Table syrup is made from inexpensive glucose or corn syrup, with added caramel coloring and maple flavoring.
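The "almost 40 liters of sap per liter of syrup" figure follows from a simple sugar mass balance: all of the sugar in the finished syrup must come from the sap. The sketch below works this through. The syrup values used (roughly two-thirds sugar by weight and a density of about 1.33 kg/L) are common reference figures rather than numbers from this text, so treat them as assumptions.

```python
# Sugar mass balance: the sugar in a litre of syrup must all come from the sap.
syrup_sugar_fraction = 0.66      # assumed: finished syrup is roughly two-thirds sugar
syrup_density_kg_per_l = 1.33    # assumed density of maple syrup
sap_density_kg_per_l = 1.0       # sap is mostly water

sugar_per_litre_syrup = syrup_sugar_fraction * syrup_density_kg_per_l  # about 0.88 kg

for sap_sugar_fraction in (0.02, 0.03):   # sap is only 2% or 3% sugar
    sap_needed = sugar_per_litre_syrup / (sap_sugar_fraction * sap_density_kg_per_l)
    print(f"At {sap_sugar_fraction:.0%} sugar: about {sap_needed:.0f} L of sap per litre of syrup")
# -> At 2% sugar: about 44 L of sap per litre of syrup
# -> At 3% sugar: about 29 L of sap per litre of syrup
# Both results are in the same ballpark as the "almost 40 L" figure quoted above.
```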
Maple syrup in Canada has two categories:
1. Canada Grade A, which has four color/flavor classes
• golden, delicate taste
• amber, rich taste
• dark, robust taste
• very dark, strong taste
2. Canada Processing Grade, which has no color descriptors (any maple syrup that possesses minimal food quality defects but still meets all government regulatory standards for food quality and safety for human consumption)
This definition and grading system gives consumers more consistent and relevant information about the varieties, and helps them make informed choices when choosing maple syrup.
Darker maple syrups are better for baking as they have a more robust flavor. Using maple sugar is also a good way to impart flavor. Maple sugar is what remains after the sap of the sugar maple is boiled for longer than is needed to create maple syrup. Once almost all the water has been boiled off, all that is left is a solid sugar. It can be used to flavor some maple products and as an alternative to cane sugar.
For a video on maple syrup production, see: https://www.youtube.com/watch?v=OFIj4pMYpTQ
4.09: Sugar Substitutes (ADD US)
In Canada, food additives such as sugar substitutes, which cover both artificial sweeteners and intense sweeteners obtained from natural sources, are subject to rigorous controls under the Food and Drugs Act and Regulations. New food additives (or new uses of permitted food additives) are permitted only once a safety assessment has been conducted and regulatory amendments have been enacted.
Several sugar substitutes have been approved for use in Canada. These include acesulfame-potassium, aspartame, polydextrose, saccharin, stevia, sucralose, thaumatin, and sugar alcohols (polyols) like sorbitol, isomalt, lactitol, maltitol, mannitol, and xylitol. Please see the Health Canada website for more information on sugar substitutes.
Bakers must be careful when replacing sugar (sucrose) with these sugar substitutes in recipes. Even though the sweetness comparison levels may be similar (or less), it is generally not possible to do straight 1-for-1 substitution. Sugar (sucrose) plays many roles in a recipe:
• It is a bulking agent.
• It absorbs moisture.
• It is a tenderizer.
• It adds moisture and extends shelf life.
• It adds color (caramelization).
Sugar substitutes may not work in a recipe in the same way. More information on sugar substitutes and their relative sweetness can be found here: http://www.sugar-and-sweetener-guide...er-values.html
Dextrose
The sugar known as glucose has two origins:
1. In a natural form in most fruits
2. In a processed form from corn (corn syrup)
In baking, we usually refer to industrially made glucose. It is made from corn and the resulting product, a thick syrup, is then adjusted to a uniform viscosity or consistency. The particular form of the syrup is defined by what is known as the dextrose equivalent, or DE for short. Corn syrup is the most familiar form of glucose.
In plant baking, high-fructose corn syrup (HFCS) is the major sweetening agent in bread and buns. It consists of roughly half fructose and half dextrose. Dextrose (chemically identical to glucose) is available in crystalline form and has certain advantages over sucrose:
• It is easily fermentable.
• It contributes to browning in bread and bun making.
• In crystalline form, it is often used in doughnut sugars as it is more inclined to stay dry and non-greasy.
• It is hygroscopic and valued as a moisture-retaining ingredient.
• It retards crystallization in syrups, candies, and fondant.
Corn syrup is made from the starch of maize (corn) and contains varying amounts of glucose and maltose, depending on the processing methods. Corn syrup is used in foods to soften texture, add volume, prevent crystallization of sugar, and enhance flavor.
Glucose/dextrose has a sweetening level of approximately three-quarters that of sugar. Table 1 shows the amount of corn syrup or HFCS needed to replace sugar in a formula.
Table 1 Replacement factors for corn syrup and high-fructose corn syrup
Type of Sugar Solids Replacement Factor
Granulated sugar 100% 1.0
Regular corn syrup 80% 1.25
High-fructose corn syrup 71% 1.41
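Put simply, the replacement factor in Table 1 is the reciprocal of the solids content: a syrup that is only 71% solids must be used at about 1.41 times the weight of granulated sugar to supply the same sugar solids. The sketch below restates that arithmetic; the dictionary and function names are illustrative, and in practice the water carried in by the syrup would normally be deducted from the formula's other liquids.

```python
# Replacement factors from Table 1; the factor is roughly 1 / solids fraction
REPLACEMENT = {
    "granulated sugar": {"solids": 1.00, "factor": 1.00},
    "regular corn syrup": {"solids": 0.80, "factor": 1.25},
    "high-fructose corn syrup": {"solids": 0.71, "factor": 1.41},
}

def replace_sugar(sugar_g, sweetener):
    """Weight of syrup needed to supply the same sugar solids as granulated sugar."""
    info = REPLACEMENT[sweetener]
    syrup_g = sugar_g * info["factor"]
    water_in_syrup_g = syrup_g * (1 - info["solids"])  # the extra weight is water
    return syrup_g, water_in_syrup_g

syrup, water = replace_sugar(1000, "high-fructose corn syrup")
print(f"Use {syrup:.0f} g of HFCS; it carries about {water:.0f} g of water")
# -> Use 1410 g of HFCS; it carries about 409 g of water
```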
Glucose, HFCS, and corn syrup are not appropriate substitutions for sucrose in all bakery products. Certain types of cakes, such as white layer cakes, will brown too much if glucose or HFCS is used in place of sugar.
The word leavening in the baking trade is used to describe the source of gas that makes a dough or batter expand in the presence of moisture and heat. Leavening agents are available in different forms, from yeast (the organic leavener) to chemical, mechanical, and physical leaveners. Bakers choose the appropriate type of leavening based on the product they are making.
Thumbnail: Active dried yeast, a granulated form in which yeast is commercially sold. (Public Domain; Ranveig).
05: Leavening Agents
The word leavening in the baking trade is used to describe the source of gas that makes a dough or batter expand in the presence of moisture and heat. Leavening agents are available in different forms, from yeast (the organic leavener) to chemical, mechanical, and physical leaveners. Bakers choose the appropriate type of leavening based on the product they are making.
5.02: Yeast
Yeast is a microscopic unicellular fungus that multiplies by budding, and under suitable conditions, causes fermentation. Cultivated yeast is widely used in the baking and distilling industries. History tells us that the early Chaldeans, Egyptians, Greeks, and Romans made leavened bread from fermented doughs. This kind of fermentation, however, was not always reliable and easy to control. It was Louis Pasteur, a French scientist who lived in the 19th century, who laid the foundation for the modern commercial production of yeast as we know it today through his research and discoveries regarding the cause and prevention of disease.
Types of Yeast
Wild yeast spores are found floating on dust particles in the air, in flour, on the outside of fruits, etc. Wild yeasts form spores faster than cultivated yeasts, but they are inconsistent and are not satisfactory for controlled fermentation purposes.
Compressed Yeast
Compressed yeast is made by cultivating a select variety, which is known by experiment to produce a yeast that is hardy, consistent, and produces a fermentation with strong enzymatic action. These plants are carefully isolated in a sterile environment free of any other type of yeast and cultivated on a plate containing nutrient agar or gelatin. Wort, a combination of sterilized and purified molasses or malt, nitrogenous matter, and mineral salts, is used to supply the food that the growing yeast plants need to make up the bulk of compressed yeast.
After growing to maturity in the fermentation tank, the yeast is separated from the used food or wort by means of centrifugal machines. The yeast is then cooled, filtered, pressed, cut, wrapped, and refrigerated. It is marketed in 454 g (1 lb.) blocks, or in large 20 kg (45 lb.) bags for wholesale bakeries.
Figure 1 illustrates the process of cultivating compressed yeast, and Table 1 summarizes its composition.
Figure 1 Cultivating compressed yeast
Table 1 Average composition of fresh (compressed) yeast
Water 68% to 73%
Protein 12% to 14%
Fat 0.6% to 0.8%
Carbohydrate 9% to 11%
Mineral Matter 1.7% to 2%
Active Dry Yeast
Active dry yeast is made from a different strain than compressed yeast. The manufacturing process is the same except that the cultivated yeast is mixed with starch or other absorbents and dehydrated. Its production began after World War II, and it was used mainly by the armed forces, homemakers, and in areas where fresh yeast was not readily available.
Even though it is a dry product, it is alive and should be refrigerated below 7°C (45°F) in a closed container for best results. It has a moisture content of about 7%. Storage without refrigeration is satisfactory only for a limited period of time. If no refrigeration is available, the yeast should be kept unopened in a cool, dry place. It should be allowed to warm up to room temperature slowly before being used. Dry yeast must be hydrated for about 15 minutes in water at least four times its weight at a temperature between 42°C and 44°C (108°F and 112°F). The temperature should never be lower than 30°C (86°F), and dry yeast should never be used before it is completely dissolved.
It takes about 550 g (20 oz.) of dry yeast to replace 1 kg (2.2 lb.) of compressed yeast, and for each kilogram of dry yeast used, an additional kilogram of water should be added to the mix. This product is hardly, if ever, used by bakers, having been superseded by instant yeast (see below).
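That substitution is easy to express as a small calculation. The Python sketch below applies the 550 g per 1 kg rule and the extra water called for above; the function is illustrative only, and the hydration step described above still applies.

```python
def substitute_active_dry(compressed_yeast_g):
    """Convert compressed yeast in a formula to active dry yeast.

    Rule of thumb above: about 550 g of dry yeast replaces 1 kg of
    compressed yeast, and each kilogram of dry yeast gets an extra
    kilogram of water added to the mix.
    """
    dry_yeast_g = compressed_yeast_g * 0.55   # 550 g per 1000 g compressed
    extra_water_g = dry_yeast_g * 1.0         # equal weight of additional water
    return dry_yeast_g, extra_water_g

dry, water = substitute_active_dry(300)  # a dough calling for 300 g compressed yeast
print(f"Use about {dry:.0f} g active dry yeast plus {water:.0f} g extra water")
# -> Use about 165 g active dry yeast plus 165 g extra water
```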
Instant Dry Yeast
Unlike active dry yeast, which must be dissolved in warm water for proper rehydration and activation, instant dry yeast can be added to the dough directly, either by:
• Mixing it with the flour before the water is added
• Adding it after all the ingredients have been mixed for one minute
This yeast can be reconstituted. Some manufacturers call for adding it to five times its weight of water at a temperature of 32°C to 38°C (90°F to 100°F). Most formulas suggest a 1:3 ratio when replacing compressed yeast with instant dry. Others vary slightly, with some having a 1:4 ratio. In rich Danish dough,
it takes about 400 g (14 oz.), and in bread dough about 250 g to 300 g (9 oz. to 11 oz.) of instant dry yeast
to replace 1 kg (2.2 lb.) of compressed yeast. As well, a little extra water is needed to make up for the moisture in compressed yeast. Precise instructions are included with the package; basically, it amounts to the difference between the weight of compressed yeast that would have been used and the amount of dry yeast used.
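The sketch below turns the compressed-to-instant conversion described above into a small calculation, using the 1:3 and 1:4 ratios and adding water equal to the weight difference. It is illustrative only; as noted, the precise instructions included with the package take precedence.

```python
def substitute_instant_dry(compressed_yeast_g, ratio=3):
    """Convert compressed yeast to instant dry yeast.

    Most formulas above suggest a 1:3 ratio (some use 1:4), with extra
    water equal to the weight difference between the compressed yeast
    removed and the instant dry yeast added.
    """
    instant_g = compressed_yeast_g / ratio
    extra_water_g = compressed_yeast_g - instant_g
    return instant_g, extra_water_g

for ratio in (3, 4):
    instant, water = substitute_instant_dry(1000, ratio)  # 1 kg compressed yeast
    print(f"1:{ratio} ratio -> {instant:.0f} g instant dry yeast + {water:.0f} g water")
# -> 1:3 ratio -> 333 g instant dry yeast + 667 g water
# -> 1:4 ratio -> 250 g instant dry yeast + 750 g water
```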
Instant dry yeast has a moisture content of about 5% and is packed in vacuum pouches. It has a shelf life of about one year at room temperature without any noticeable change in its gassing activity. After the seal is broken, the content turns into a granular powder, which should be refrigerated and used by its best-before date, as noted on the packaging.
Instant dry yeast is especially useful in areas where compressed yeast is not available. However, in any situation, it is practical to use and has the advantages of taking up less space and having a longer shelf life than compressed yeast.
Cream Yeast
Cream yeast is a soft slurry-type yeast that is used only in large commercial bakeries and is pumped into the dough.
Yeast Food
Yeast food is used in bread production to condition the dough and speed up the fermentation process. It consists of a blend of mineral salts such as calcium salt or ammonium salt and potassium iodate. It has a tightening effect on the gluten and is especially beneficial in dough where soft water is used. The addition of yeast food improves the general appearance and tasting quality of bread. The retail baker does not use it much.
Yeast has two primary functions in fermentation:
• To convert sugar into carbon dioxide gas, which lifts and aerates the dough
• To mellow and condition the gluten of the dough so that it will absorb the increasing gases evenly and hold them at the same time
In baked products, yeast increases the volume and improves the flavor, texture, grain, color, and eating quality. When yeast, water, and flour are mixed together under the right conditions, all the food required for fermentation is present as there is enough soluble protein to build new cells and enough sugar to feed them.
Activity within the yeast cells starts when enzymes in the yeast change complex sugar into invert sugar. The invert sugar is, in turn, absorbed within the yeast cell and converted into carbon dioxide gas and alcohol. Other enzymes in the yeast and flour convert soluble starch into malt sugar, which is converted again by other enzymes into fermentable sugar so that aeration goes on from this continuous production of carbon dioxide.
Proper Handling of Yeast
Compressed yeast ages and weakens gradually even when stored in the refrigerator. Fresh yeast feels moist and firm, and breaks evenly without crumbling. It has a fruity, fresh smell; as it deteriorates, it turns into a sticky mass with a cheesy odor. It is not always easy to recognize whether or not yeast has lost enough of its strength to affect the fermentation and the eventual outcome of the baked bread, but its working quality definitely depends on the storage conditions, temperature, humidity, and age.
The optimum storage temperature for yeast is -1°C (30°F). At this temperature it is still completely effective for up to two months. Yeast does not freeze at this temperature.
Other guidelines for storing yeast include:
• Rotating it properly and using the older stock first
• Avoiding overheating by spacing it on the shelves in the refrigerator
Yeast needs to breathe, since it is a living fungus. The process is continuous, proceeding slowly in the refrigerator and rapidly at the higher temperature in the shop. When respiration occurs without food, the yeast cells starve, weaken, and gradually die.
Yeast that has been frozen and thawed does not keep and should be used immediately. Freezing temperatures weaken yeast, and thawed yeast cannot be refrozen successfully.
5.04: Using Yeast in Baking
Many bakers add compressed yeast directly to their dough. A more traditional way to use yeast is to dissolve it in lukewarm water before adding it to the dough. The water should never be higher than 50°C (122°F) because heat destroys yeast cells. In general, salt should not come into direct contact with yeast, as salt dehydrates the yeast. (Table 1 indicates the reaction of yeast at various temperatures.)
It is best to add the dissolved yeast to the flour when the dough is ready for mixing. In this way, the flour is used as a buffer. (Buffers are ingredients that separate or insulate ingredients, which if in too close contact, might start to react prematurely.) In sponges where little or no salt is used, yeast buds quickly and fermentation of the sponge is rapid.
Table 1 How yeast reacts at different temperatures
Temperature Reaction
15°C-20°C (60°F-68°F) slow reaction
26°C-29°C (80°F-85°F) normal reaction
32°C-38°C (90°F-100°F) fast reaction
59°C (138°F) terminal death point
Never leave compressed yeast out for more than a few minutes. Remove only the amount needed from the refrigerator. Yeast lying around on workbenches at room temperature quickly deteriorates and gives poor results. One solution used by some bakeries to eliminate steps to the fridge is to have a small portable cooler in which to keep the yeast on the bench until it is needed. Yeast must be kept wrapped at all times because if it is exposed to air the edges and the corners will turn brown. This condition is known as air-burn.
5.05: Baking Powder
Baking powder is a dependable, high-quality chemical leavener. To be effective, all baking powders rely on the reaction between one or more acids on sodium bicarbonate to produce carbon dioxide gas. Just as with yeast leavening, the presence of carbon dioxide gas creates air bubbles that cause the product to rise.
There are two main types of baking powders available on the market:
• Continuous or single-action baking powder
• Double- or multiple-action baking powder
The difference between continuous- and double-action baking powders is simply the rate of reaction:
• Continuous-action baking powder uses one acid, which continuously reacts with the soda to release gas steadily throughout the baking process until all the gassing power is spent.
• Double-action baking powder contains two different acids, which react with soda at different stages of the baking process. One acid reacts to give off a small amount of gas at low temperature, and the other major acid reacts at baking temperatures to give off the bulk of the gas.
The Leavening Mechanism of Baking Powder
Before baking, approximately 15% of the CO2 gas is released in the cold stage. Eighty-five percent of the CO2 gas is released in the oven starting at approximately 40°C (105°F). Some leavening power is apparently lost in the cold stage, but there is usually still adequate gassing power in the remaining portion.
When the baking powder is activated through moisture and heat, the gas works its way into the many cells created by the mixing or creaming of the batter and starts to expand them. This process comes to a halt when the starch gelatinizes and the cells become rigid. This starts at about 60°C (140°F) and is more or less complete at around 75°C (167°F). After this point, some gas may still be created, but it simply escapes through the porous structure of the product.
Using Baking Powder
For even distribution throughout the batter, baking powder should be sifted with the flour or other dry ingredients. For most cakes, about 5% baking powder to the weight of the flour produces an optimum result. Accurate scaling is important, since a little too much may cause the product to collapse. (Note this is unlike yeast, where an “overdose” will usually simply cause a more rapid rise.)
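Because baking powder is scaled as a percentage of the flour weight, the amount for any batch follows directly. The minimal sketch below assumes the 5% guideline above; the function name is illustrative.

```python
def baking_powder_for(flour_g, percent=5.0):
    """Baking powder as a baker's percentage of the flour weight (about 5% for most cakes)."""
    return flour_g * percent / 100

print(f"{baking_powder_for(2000):.0f} g baking powder for 2 kg flour")
# -> 100 g baking powder for 2 kg flour
```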
When sodium bicarbonate (baking soda) is moistened and heated, it releases carbon dioxide gas. If it is moistened and heated in the presence of sufficient acid, it will release twice as much gas as if it is moistened and heated without the presence of an acid.
Slightly acidic ingredients provide the mix with some of the necessary acids for the release of carbon dioxide gas. Examples are:
• Honey
• Molasses
• Ginger
• Cocoa
• Bran
For this reason, some of the mixes contain baking powder only while others contain a combination of baking powder and baking soda. If an excessive amount of baking soda is used in a cake batter without the presence of sufficient acid, the normally white cake crumb will have a yellowish-brown color and a strong undesirable smell of soda.
The gas evolves very fast at the beginning of baking when the pH level is still on the acidic side (pH of around 5 to 6). Once the soda neutralizes the acid, the dough or batter quickly becomes alkaline and the release of gas is reduced. Mixes and doughs leavened with baking soda must be handled without delay, or the release of the gas may be almost exhausted before the product reaches the oven.
The darker color of the crumb found on the bottom half of a cake or muffins is caused by the partial dehydration of the batter that is heated first during baking. In spiced honey cookies and gingerbread, baking soda is used alone to give them quick color during baking and yet keep the products soft.
In chocolate cakes, baking soda is used in conjunction with baking powder to keep the pH at a desirable level. However, it is important to know whether the cocoa powder you are using is natural or treated by the Dutch process. In the Dutch process, some of the acid in the cocoa is already neutralized, and there is less left for the release of gas in the mix. This means more baking powder and less baking soda is used.
Baking soda in a chocolate mix not only counteracts the acid content in the baked cake but also improves the grain and color of the cake. A darker and richer chocolate color is produced if the acid level is sufficient to release all the carbon dioxide gas. On the other hand, the reddish, coarse, open-grained crumb in devil’s food cake is the result of using baking soda as the principal leavening agent.
The level of baking soda depends on the nature of the product and on the other ingredients in the formula. Cookies, for example, with high levels of fat and sugar, do not require much, if any, leavening.
Table 1 provides the recommended amounts of baking soda for different products. Note that the percentages appear small compared to the 5% level of baking powder suggested because baking powder
contains both an acid agent and a leavening agent.
Table 1 Recommended amounts of baking soda
Product Amount of Baking Soda (% of flour weight)
Cookies 0.4-0.6
Cakes 0.5-1.0
Cake doughnuts 0.7-1.0
Pancakes 1.4-2.0
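As with baking powder, the levels in Table 1 are baker's percentages of the flour weight, so the weight of soda for a batch is a straightforward calculation. The short sketch below works one line of the table as an example; the numbers are simply those given for pancakes.

```python
flour_g = 1500                 # 1.5 kg of flour for a pancake batter
low_pct, high_pct = 1.4, 2.0   # recommended range for pancakes from Table 1

low_soda = flour_g * low_pct / 100
high_soda = flour_g * high_pct / 100
print(f"Baking soda: {low_soda:.0f} g to {high_soda:.0f} g for {flour_g} g flour")
# -> Baking soda: 21 g to 30 g for 1500 g flour
```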
5.07: Ammonium Bicarbonate
Ammonium bicarbonate is a white crystalline powder used in flat, spiced cookies, such as gingerbreads, and in eclair paste. It must be dissolved in the cold liquid portion of the batter. At room temperature, decomposition (and therefore the release of \(\ce{CO2}\)) in the batter is minimal. When heated to approximately 60°C (140°F), decomposition is more noticeable, and at oven temperature, decomposition takes place in a very short time. Ammonium bicarbonate should only be used in low moisture-containing products that are not dense. Providing that these conditions are met, there will be no taste or odor of ammonia remaining in the finished product.
5.08: Water Hardness and pH
Effects on Baking
Most municipal supplies of water contain chlorine, which is used to ensure the purity of the water. Some cities add fluoride to their water supply to stop tooth decay. Neither chlorine nor fluoride is present in large enough quantities to affect dough in any way. In addition, most municipal water is treated to reduce excessive acidity, since this could be corrosive for the water lines. It is therefore unlikely that bakers using municipal water need to be concerned about extremely acidic water.
Soft water is another matter, as it can lead to sticky dough. An addition of yeast food, or a reduction in dough water, will help. Alkaline water tends to tighten the dough and retard fermentation, since enzymes work best in slightly acidic dough.
If there is a possibility of water problems, a sample should be forwarded to a laboratory for a complete analysis.
• 6.1: Introduction to Dairy Products
Milk and milk products are some of our oldest and best-known natural foods. In baking, milk is used fresh, condensed, powdered, skimmed, or whole. The great bulk, weight, and perishability of fresh milk plus the expense of refrigeration makes it a relatively high-cost ingredient, and for this reason, most modern bakeries use non-fat powdered milk or buttermilk powder.
• 6.2: Milk
• 6.3: Milk Products ADD US
• 6.4: Milk in bread baking
• 6.5: Yogurt
Yogurt is a thick or semi-solid food made from pasteurized milk fermented by lactic bacteria. The milk coagulates when a sufficient quantity of lactic acid is produced. Yogurt is a rich, versatile food capable of enhancing the flavor and texture of many recipes. It is prepared sweetened or unsweetened, and is used in baking to make yogurt-flavored cream cakes, desserts, and frozen products.
• 6.6: Lactose
Lactose is a "milk sugar" and is a complex sugar. It is available commercially spray-dried and in crystalline form. There are many advantages to using it in various baking applications:
• 6.7: Cheese
Cheese is a concentrated dairy product made from fluid milk and is defined as the fresh or matured product obtained by draining the whey after coagulation of casein.
Thumbnail: A glass of pasteurized cow's milk. (CC BY-SA 3.0; Stefan Kühn),
Contributors and Attributions
Sorangel Rodriguez-Velazquez (American University). Chemistry of Cooking by Sorangel Rodriguez-Velazquez is licensed under a Creative Commons Attribution-NonCommercial ShareAlike 4.0 International License, except where otherwise noted
06: Dairy Products
Milk and milk products are some of our oldest and best-known natural foods. In baking, milk is used fresh, condensed, powdered, skimmed, or whole. The great bulk, weight, and perishability of fresh milk plus the expense of refrigeration makes it a relatively high-cost ingredient, and for this reason, most modern bakeries use non-fat powdered milk or buttermilk powder.
Over the past 20 years, there has been a trend to lower fat content in dairy products. This reflects the high caloric value of milk fat, and also is compatible with the trend to leaner, healthier nutrition. These “low-fat” products often have the fat replaced with sugars, so care must be taken in substituting these ingredients
in a recipe. For bakers, this trend has not meant any great changes in formulas: a 35% milk fat or a 15% cream cheese product usually works equally well in a cheesecake. Some pastry chefs find lowering the richness in pastries and plated desserts can make them more enjoyable, especially after a large meal.
Table 1 provides the nutritional properties of milk products.
Table 1 Nutritional properties of milk products (per 100 g)
Whole Milk (3.5% milk fat) Skim Milk (0.1% milk fat) Coffee Cream (18% milk fat) Heavy or Whipping Cream (36% milk fat)
Protein 3.22 g 3.37 g 3 g 2 g
Fat 3.25 g 0.08 g 19 g 37 g
Cholesterol 10 mg 2 mg 66 mg 137 mg
Potassium 143 mg 156 mg 122 mg 75 mg
Calcium 113 mg 125 mg 96 mg 65 mg
Magnesium 10 mg 11 mg 9 mg 7 mg
Sodium 40 mg 42 mg 40 mg 40 mg
Vitamin A (IU) 102 IU 204 IU 656 IU 1470 IU
Note: Besides the elements shown in Table 1, all dairy products contain vitamin B-complex. IU = International Units, a term used in nutritional measurement
6.02: Milk
Homogenized milk is fresh milk in which the fat particles are so finely divided and emulsified mechanically that the milk fat cannot separate on standing. The milk fat is forced into tiny droplets. As soon as the droplets form, milk proteins and emulsifiers form a protective film around each one, preventing the fat from reuniting. The tiny droplets stay suspended indefinitely, and milk fat no longer separates and rises
to the top as a cream layer. In other words, homogenized dairy products are stable emulsions of fat droplets suspended in milk. It is also said that homogenized milk is more readily digestible.
Pasteurization of milk was developed in 1859 by the French chemist Louis Pasteur. One method of pasteurization is to heat milk to above 71°C (160°F), maintain it at this temperature for a set time, then cool it immediately to 10°C (50°F) or lower. This kills the harmful bacteria that can transmit diseases such as bovine tuberculosis and undulant fever from cows to humans.
The two main types of pasteurization used today are high-temperature, short-time (HTST, also known as “flash”) and higher-heat, shorter time (HHST). Ultra-high-temperature (UHT) processing is also used.
High-temperature, short-time (HTST) pasteurization is done by heating milk to 72°C (161°F) for 15 seconds. Milk simply labelled “pasteurized” is usually treated with the HTST method.
Higher-heat, shorter time (HHST) milk and milk products are pasteurized by applying heat continuously, generally above 100°C (212°F) for such time to extend the shelf life of the product under refrigerated conditions. This type of heat process can be used to produce dairy products with extended shelf life (ESL).
Ultra-high-temperature (UHT) processing holds the milk at a temperature of 140°C (284°F) for four seconds. During UHT processing, milk is sterilized rather than pasteurized. This process allows milk or juice to be stored several months without refrigeration. The process is achieved by spraying the milk or juice through a nozzle into a chamber that is filled with high-temperature steam under pressure. After the temperature reaches 140°C (284°F) the fluid is cooled instantly in a vacuum chamber and packed in a pre-sterilized, airtight container. Milk labelled UHT has been treated in this way.
For more information on pasteurization, visit the International Dairy Foods Association.
Cream
The usual minimum standard for cream is 10% fat content, though it ranges between 10% and 18%. Cream in this range may be sold as half and half, coffee cream, or table cream.
Whipping cream is about 32% to 36% in milk fat content. Cream with 36% or higher is called heavy cream. This percentage of fat is not a mandated standard; much less than this and the cream simply will not whip. For best whipping results, the cream should be 48 to 60 hours old and be cold. A stabilizer, some sugar, and flavour may be added during whipping. Before adding stabilizer, check the ingredients on the carton; some whipping creams nowadays have added agents such as carrageenan, in which case an additional stabilizer may not be necessary.
Canadian cream definitions are similar to those used in the United States, except for that of “light cream.” In Canada, what the U.S. calls light cream is referred to most commonly as half and half. In Canada, “light cream” is low-fat cream, usually with 5% to 6% fat. You can make your own light cream by blending milk with half-and-half.
In Quebec, country cream is sold, which contains 15% milk fat. If you are using a recipe that calls for country cream, you may substitute 18% cream.
If you have recipes from the UK, you might see references to double cream. This is cream with about 48% milk fat, which is not readily available in Canada, except in some specialty stores. Use whipping cream or heavy cream instead.
Table 1 lists some of the common cream types and their uses.
Table 1 Cream types and fat content
Name Minimum Milk Fat Additional Definition Main Uses
Whipping cream 32% Heavy cream has at least 36% milk fat Whips well, can be piped; custards, cream fillings, confectionary products
Table cream 18% Coffee cream Added to coffee, poured over puddings, used in sauces
Half-and-half 10%-12% Cereal cream Added to coffee; custards and ice cream mixes
Light cream 5%-10% Added to coffee
Buttermilk
There are two methods to produce buttermilk:
1. Inoculating milk with a specific culture to sour it
2. Churning milk and separating the liquid left over from the butter
The second method is where buttermilk gets its name, but today, most of what is commonly called buttermilk is the first type. Buttermilk has a higher acid content than regular milk (pH of 4.6 compared with milk’s pH of 6.6).
The fermented dairy product known as cultured buttermilk is produced from cow’s milk and has a characteristically sour taste caused by lactic acid bacteria. This variant is made using one of two species of bacteria — either Lactococcus lactis or Lactobacillus bulgaricus, the latter of which creates more tartness in certain recipes.
The acid in buttermilk reacts with the sodium bicarbonate (baking soda) to produce carbon dioxide, which acts as the leavening agent.
Sour Cream
Sour cream is made from cream soured with lactic acid bacteria and thickened naturally or by processing. Milk fat content may vary from 5.5% to 14%. The lactic acid causes the proteins in sour cream to coagulate to a gelled consistency; gums and starches may be added to further thicken it. The added gums and starches also keep the liquid whey in sour cream from separating.
Use sour cream in cheesecakes, coffee cakes, and pastry doughs. Low-fat and fat-free sour cream are available. Low-fat sour cream, which is essentially cultured half-and-half or light cream (and usually contains 7% to 10% milk fat), is often satisfactory as a substitute for regular sour cream in baking. These products are higher in moisture and less rich in flavor than regular sour cream.
Crème Fraîche
Crème fraîche (fresh cream) is a soured cream containing 30% to 45% milk fat and having a pH of around 4.5. It is soured with bacterial culture. Traditionally it is made by setting unpasteurized milk into a pan at room temperature, allowing the cream to rise to the top. After about 12 hours, the cream is skimmed off. During that time, natural bacteria in the unpasteurized milk ripen the cream, turning it into a mildly sour, thickened product.
An effective substitute can be made by adding a small amount of cultured buttermilk or sour cream to whipping cream and allowing it to stand in a warm spot for 10 hours or more before refrigerating. As the cream ripens from the growth of the lactic acid bacteria, it thickens and develops a sour flavour. This product is similar to sour cream, but it has a higher milk fat content.
Milk Substitutes
Milk substitutes are becoming increasingly popular as replacements for straight skim milk powders. Innumerable replacement blends are available to the baker. Their protein contents range from 11% to 40%; some are wet, some are dry-blended. Product types vary from all dairy to mostly cereal. All-dairy blends range from mostly dry skim milk to mostly whey. A popular blend is whey mixed with 40% soy flour solids and a small quantity of sodium hydroxide to neutralize the whey acidity.
Dough consistency may be a little softer if the milk in the replacement blend exceeds 3%, and this could dictate the need to increase dough mixing by at least half a minute. However, absorption and formula changes are seldom necessary when switching from dry milk to a blend, or from a blend to a blend.
For nutritional labelling, or when using a blend in a non-standardized product that must carry an itemized ingredient label, all blend components must be listed in their proper order on the label.
The Canadian Food Inspection Agency defines modified milk ingredients as any of the following in liquid, concentrated, dry, frozen, or reconstituted form:
Calcium-reduced skim milk
Casein: This is a protein in milk and is used as a binding agent. Caseins are also used in wax to shine fruits and vegetables, as an adhesive, and to fortify bread. Caseins contain common amino acids.
Caseinate: This protein is derived from skim milk. Bodybuilders sometimes take powder enriched with calcium caseinate because it releases proteins at an even, measured pace.
Cultured milk products: These are milk products that have been altered through controlled fermentation, including yogurt, sour cream, and cultured buttermilk.
Milk serum proteins
Ultra-filtered milk: The Canadian Food and Drug Regulations define this type of milk as that which “has been subjected to a process in which it is passed over one or more semi-permeable membranes to partially remove water, lactose, minerals, and water-soluble vitamins without altering the whey protein-to-casein ratio and that results in a liquid product.”
Whey: This is serum by-product created in the manufacture of cheese.
Whey butter: Typically oily in composition, whey butter is made from cream separated from whey.
Whey cream: This is cream skimmed from whey, sometimes used as a substitute for sweet cream and butter.
Any component of milk that has been altered from the form in which it is found in milk.
Milk Powder
Milk powder is available in several different forms: whole milk, skim milk (non-fat dry milk), buttermilk, or
whey. They are all processed similarly: the product is first pasteurized, then concentrated with an evaporator, and finally dried (spray or roller dried) to produce powder.
Whole milk powder must contain no less than 95% milk solids and must not exceed 5% moisture. The milk fat content must be no less than 2.6%. Vitamins A and D may be added and the emulsifying agent lecithin may also be added in an amount not exceeding 0.5%.
Skim milk powder (non-fat dry milk) must contain no less than 95% milk solids and must not exceed 4% moisture or 1.5% fat.
Buttermilk powder must contain no less than 95% milk solids and must not exceed 3% moisture or 6% fat.
Whey powder consists primarily of carbohydrate (lactose), protein (several different whey proteins, mainly lactalbumins and globulins), various minerals, and vitamins. Whey powder is a valuable addition to the functional properties of various foods as well as a source of valuable nutrients because it contains approximately 50% of the nutrients in the original milk.
Table 2 compares the composition of milk and two powdered milk products.
Table 2 Comparison of fresh and powdered milk products (% by weight)
Whole Milk Skim Milk Powder (Non-fat dry milk) Buttermilk Powder
Milk fat 3.25 0.7 5.0
Protein 3.5 36.0 34.0
Milk sugar (lactose) 4.9 51.0 48.0
Minerals 0.8 8.2 7.9
Water 87.0 3.0 3.0
Calcium 0.12 1.3 1.3
To make 10 L (22 lb.) of liquid skim milk from skim milk powder, 9.1 L (2.4 gal.) of water and 900 g (2 lb.) of skim milk powder are required.
To make 10 L (22 lb.) of whole milk from skim milk powder, 8.65 L (2.25 gal) of water, 900 g (2 lb.) of skim milk powder, and 450 g (1 lb.) of butter are needed.
When reconstituting dried milk, add it to the water and whisk in immediately. Delaying this, or adding water to the milk powder, will usually result in clogging. Water temperature should be around 21°C (70°F).
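To scale these proportions to batch sizes other than 10 L, the stated ratios can be applied directly. The short Python sketch below is an illustrative calculator only (the function name, rounding, and output format are assumptions, not part of any standard); it simply scales the quantities given above.

```
# Minimal sketch: scale the reconstitution quantities stated above.
# Per 10 L (10 kg) of reconstituted milk:
#   liquid skim milk: 9.1 L water + 900 g skim milk powder
#   whole milk:       8.65 L water + 900 g skim milk powder + 450 g butter

def reconstitute(litres, whole=False):
    """Return approximate water, powder, and butter needed for `litres` of milk."""
    scale = litres / 10.0
    water = (8.65 if whole else 9.1) * scale
    return {
        "water_L": round(water, 2),
        "skim_milk_powder_g": round(900 * scale),
        "butter_g": round(450 * scale) if whole else 0,
    }

print(reconstitute(2.5))              # roughly 2.3 L water and 225 g powder
print(reconstitute(2.5, whole=True))  # roughly 2.2 L water, 225 g powder, 112 g butter
```

The method still matters regardless of batch size: add the powder to the water, not the water to the powder, and whisk it in immediately.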
Evaporated Milk
Sometimes called concentrated milk, this includes evaporated whole, evaporated partly skimmed, and evaporated skim milks, depending on the type of milk used in its production. Canadian standards require 25% milk solids and 7.5% milk fat.
All types of evaporated milk have a darker color than the original milk because at high temperatures a browning reaction occurs between the milk protein and the lactose. After 60% of the water is removed by evaporation, the milk is homogenized, cooled, restandardized, and canned. It is then sterilized by heating for 10 to 15 minutes at 99°C to 120°C (210°F to 248°F). Controlled amounts of disodium phosphate and/or sodium citrate preserve the “salt balance” and prevent coagulation of the milk that might occur at high temperatures and during storage.
Sweetened Condensed Milk
Sweetened condensed milk is a viscous, sweet milk made by condensing milk to one-third of its original volume, after which sugar is added. It contains about 40% sugar, a minimum of 8.5% milk fat, and not less than 28% total milk solids.
In the dough stage, milk increases water absorption. Consequently, dough made with milk should come softer from the mixer than dough made with water. Other aspects of milk in yeast doughs include:
Dough may be mixed more intensively.
Milk yields dough with a higher pH compared to water dough, and the fermentation will be slower. Fermentation tolerance (the ability of the dough to work properly in a range of temperatures) will be slightly improved.
Bench time will be extended as the dough ferments more slowly at this stage. (Final proof times will be about the same, as by this time the yeast has adjusted to the condition of the dough.)
Bread made with milk will color faster in the oven and allowance should be made for this. If taken out too early after a superficial examination of crust color, it may collapse slightly and be hard to slice. The loaf should be expected to have a darker crust color than bread made without milk.
In the finished product, milk will make bread that has:
Greater volume (improved capacity to retain gas)
Darker crust (due to the lactose in the milk)
Longer shelf life (due partly to the milk fat)
Finer and more “cottony” grain
Better slicing due to the finer grain
If skim milk or skim milk powder is used, some of the above benefits will not be so evident (e.g., longer shelf life, which is a result of the fat in the milk).
The type of sugar found in milk, lactose, has little sweetening power and does not ferment, so in dough made with skim milk powder, sugar has to be added or the fermentation will be very slow. While lactose is not fermentable by baker’s yeast, it caramelizes readily in the oven and produces a healthy crust color. The recommended amount of skim milk powder used in fermented dough is 2% to 8% based on flour, and up to 15% in cakes.
Buttermilk and sour milk are used to make variety breads. They have a lower pH and require a shorter fermentation for good results.
6.05: Yogurt
Yogurt is a thick or semi-solid food made from pasteurized milk fermented by lactic bacteria. The milk coagulates when a sufficient quantity of lactic acid is produced. Yogurt is a rich, versatile food capable of enhancing the flavor and texture of many recipes. It is prepared sweetened or unsweetened, and is used in baking to make yogurt-flavored cream cakes, desserts, and frozen products.
6.06: Lactose
Lactose is a "milk sugar" and is a complex sugar. It is available commercially spray-dried and in crystalline form. There are many advantages to using it in various baking applications:
• Because of its low sweetening value compared to sucrose, it can lend texture and create browning while keeping the sweetness level low, which many consumers prefer.
• It can be used to replace sucrose up to a 50% level, or replace it entirely in products like pie pastry.
• Lactose improves dough handling properties and the color of the loaf.
• In pie crusts, it gives good color to top and bottom crusts, makes for more tender crusts, and retards sogginess.
• In machine-dropped cookies, lactose can help the dough release better from the die.
• In cakes and muffins, it gives body without excessive sweetening and improves volume.
• Lactose binds flavors that are normally volatile and thus intensifies or enhances flavor.
Cheese is a concentrated dairy product made from fluid milk and is defined as the fresh or matured product obtained by draining the whey after coagulation of casein.
Cheese making consists of four steps:
1. Curdling of the milk, either by enzyme (rennet) or by lactic curdling (natural process)
2. Draining in which the whey (liquid part) is drained from the curd (firm part)
3. Pressing, which determines the shape
4. Ripening, in which the rind forms and the curd develops flavor
Cheese can be classified, with some exceptions, into five broad categories, as follows. Examples are given of specific cheeses that may be used in baking.
1. Fresh cheese: High moisture content and no ripening characterize these products. Examples: cottage cheese, baker’s cheese, cream cheese, quark, and ricotta.
2. Soft cheeses: Usually some rind, but with a soft interior. Example: feta.
3. Semi-soft cheeses: Unripened cheeses of various moisture content. Example: mozzarella.
4. Firm cheeses: Well-ripened cheese with relatively low moisture content and fairly high fat content. Examples: Swiss, cheddar, brick.
5. Hard cheeses: Lengthy aging and very low moisture content. Example: Parmesan.
In baking, cheeses have different functions. Soft cheeses, mixed with other ingredients, are used in fillings
for pastries and coffeecakes. They are used for certain European deep-fried goods, such as cannoli. They may also be used, sometimes in combination with a richer cream cheese, for cheesecakes. The cheeses itemized under fresh cheese (see above) are more or less interchangeable for these functions. The coarser cheeses may be strained first if necessary. The firmer cheeses are used in products like cheese bread, quiches, pizza, and cheese straws.
A brief description of the cheeses most likely to be used by bakers follows.
Dry Curd Cottage Cheese
This is a soft, unripened, acid cheese. Pasteurized skim milk is inoculated with lactic-acid-producing bacteria, and a milk-clotting enzyme (rennet) is added. Following incubation, the milk starts to clot, and it is then cut into cubes. After gentle cooking, the cubes or curds become quite firm. At this point, the whey is drained off, and the curd is washed and cooled with cold water.
Creamed Cottage Cheese
Creamed or dressed cottage cheese consists of dry curd cottage cheese combined with a cream dressing. The milk fat content of the dressing determines whether the final product is “regular” (4% milk fat) or low fat (1% to 2% milk fat).
Baker’s Cheese
This is a soft, unripened, uncooked cheese. It is made following exactly the same process as for dry curd cottage cheese, up to and including the point when the milk clot is cut into cubes. This cheese is not cooked to remove the whey from the curd. Rather, the curd is drained through cloth bags or it may be pumped through a curd concentrator. The product is then ready to be packaged. The milk fat content is
generally about 4%.
Quark
Quark (or quarg) is a fresh unripened cheese prepared in a fashion similar to cottage cheese. The mild flavor and smooth texture of quark make it excellent as a topping or filling for a variety of dishes. Quark is similar to baker’s cheese, except that it is acidified (inoculated with lactic-acid-producing bacteria) and then blended with straight cream to produce a smooth spread containing approximately 7% milk fat. Today there are low-fat quarks with a lower milk fat percentage, and high-fat versions with milk fat adjusted to 18%. Quark cheese can often be used in place of sour cream, cottage cheese, or ricotta cheese.
Cream Cheese
Cream cheese is a soft, unripened, acid cheese. A milk-and-cream mixture is homogenized and pasteurized, cooled to about 27°C (80°F), and inoculated with lactic-acid-producing bacteria. The resulting curd is not cut, but it is stirred until it is smooth, and then heated to about 50°C (122°F) for one hour. The curd is drained through cloth bags or run through a curd concentrator. Regular cream cheese is fairly high fat, but much lighter versions exist now.
Ricotta
Ricotta is a fresh cheese prepared from either milk or whey that has been heated with an acidulating agent added. Traditionally lemon juice or vinegar was used for acidulation, but in commercial production, a bacterial culture is used. The curds are then strained and the ricotta is used for both sweet and savory applications.
Mascarpone
Mascarpone is a rich, fresh cheese that is a relative of both cream cheese and ricotta cheese. Mascarpone is prepared in a similar fashion to ricotta, but using cream instead of whole milk. The cream is acidified (often by the direct addition of tartaric acid) and heated to a temperature of 85°C (185°F), which results in precipitation of the curd. The curd is then separated from the whey by filtration or mechanical means. The cheese is lightly salted and usually whipped. Note that starter culture and rennet are not used in the production of this type of cheese. The high-fat content and smooth texture of mascarpone cheese make it suitable as a substitute for cream or butter. Ingredient applications of mascarpone cheese tend to focus on desserts. The most famous application of mascarpone cheese is in the Italian dessert tiramisu.
Table 1 provides the composition of various types of cheeses.
Table 1 Composition of various cheeses (% by weight)
Moisture % Milk Fat % Salt %
Dry curd cottage cheese 80 0.4 n/a
Regular creamed cottage cheese 79 4 1
Low fat (1% and 2%) creamed cottage cheese 79 1-2 1
Baker’s cheese 79 4 1
Quark 72 5-7 n/a
Quark (high fat) 59 18 n/a
Cream cheese 54 (varies) 17-37 1
Ricotta 72-75 8-13 n/a
Mascarpone 46 60-75 1
• 7.1: Eggs Grade
• 7.2: Composition and Nutrition
Worth noting is the concentration of certain food elements in different parts of the egg. Note for example that all the cholesterol is in the yolk. The yolk is relatively rich in iron and the white is high in calcium.
• 7.3: Egg Products
A number of egg products besides whole shell eggs are used in the baking and food service industry. By law, all egg products other than shell eggs are pasteurized to protect them against salmonella, and the low temperature at which they are kept inhibits bacterial activity, although under certain conditions they may spoil very rapidly.
• 7.4: The Function of Eggs
Eggs are a truly multifunctional ingredient and have many roles to play in the bakeshop. Their versatility means that product formulas may be adjusted once the properties of eggs are understood. For example, in French butter cream, egg whites may be substituted in the summer for whole eggs to give a more stable and bacteria-free product (egg white is alkaline, with pH 8.5). A yolk or two may be worked into a sweet short paste dough to improve its extensibility.
• 7.5: Storing Eggs
Whole eggs are the perfect medium for the development of bacteria and mould. Eggs with an undesirable odor may be high in bacteria or mould. While some of these odors disappear in baking, some will remain and give an off-taste to the product if the odor is concentrated and strong.
Thumbnail: Chicken eggs vary in color depending on the hen. (CC BY-SA 3.0; Fir0002).
07: Eggs
Fresh hen eggs are sold by grade in all provinces. All shell eggs that are imported, exported, or shipped from one province to another for commercial sale must be graded. In Canada, it is mandatory to have all eggs graded by the standards set by Agriculture and Agri-Food Canada (AAFC). The grade name appears on cartons. The grades Canada A and Canada B bear the maple leaf symbol with the grade name inside, and Canada C and Nest Run eggs will have the grade name printed in block text. The grades indicate the quality of the egg and should not be confused with size. Only Canada A eggs are available in different sizes. The average large size egg weighs about 56 g (2 oz.) as indicated in Table 1.
Table 1: Canada Grade A egg sizes
Size Weight (including shell)
Peewee Less than 42 g (1.5 oz.)
Small At least 42 g (1.5 oz.)
Medium At least 49 g (1.75 oz.)
Large At least 56 g (2 oz.)
Extra Large At least 63 g (2.25 oz.)
Jumbo 70 g (2.5 oz.) or more
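Because each Canada A size is defined purely by a minimum weight, classifying an egg amounts to a threshold lookup against Table 1. The Python sketch below is only an illustration of that logic; the names are hypothetical and this is not an official grading tool.

```
# Minimal sketch: map an egg's weight (g, including shell) to a Canada A size
# designation using the minimum weights in Table 1.

SIZE_THRESHOLDS = [  # (minimum weight in g, size name), heaviest first
    (70, "Jumbo"),
    (63, "Extra Large"),
    (56, "Large"),
    (49, "Medium"),
    (42, "Small"),
]

def size_designation(weight_g):
    for minimum, name in SIZE_THRESHOLDS:
        if weight_g >= minimum:
            return name
    return "Peewee"  # less than 42 g

print(size_designation(58))  # Large
print(size_designation(41))  # Peewee
```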
The Canada grade symbol does not guarantee that the eggs are of Canadian origin, but it does guarantee that the products meet Canadian government standards. Agriculture Canada inspects all egg-processing plants to ensure that the products are wholesome and processed according to sanitary standards. The pasteurization of “packaged” egg product is also monitored.
The criteria for grading eggs are:
• Weight
• Cleanliness
• Soundness and shape of shell
• Shape and relative position of yolk within the egg
• Size of air cell
• Freedom from abnormalities
• Freedom from dissolved yolk and blood spots
Canada A
Canada A eggs are clean, normal in shape with sound shells, and have the finest interior quality. They are ideal for all uses. The yolks are round and compact and surrounded by very thick, firm albumen. Canada A eggs are a premium quality and in limited supply on the retail market. If eggs are not sold within a limited time, unsold stocks are returned to the supplier. Eggs graded as A must meet the minimum weight for the declared size (see Table 1). The size designation for Canada A eggs must appear on the label.
Canada B
Canada B eggs have very slight abnormalities. This grade is fine for baking, where appearance is not important. These eggs must weigh at least 49 g (1.75 oz.). There are no size designations on the label for Canada B eggs.
Canada C
Canada C is considered a processing grade and provides a safe outlet for the disposition of cracked eggs. Canada C eggs must be shipped to a federally registered processed egg station and pasteurized as a means of controlling the higher risk of salmonella or other microbial contamination that may be found in cracked eggs.
These eggs are suitable for processing into commercially frozen, liquid, and dried egg products. Sizes are not specified.
Canada Nest Run
Since Canada Nest Run eggs are generally sent for further processing, they are usually not washed, candled (a process discussed later in this chapter), or sized. However, nest run eggs must meet the minimum quality requirements prescribed by the egg regulations. This grade, as with other Canada grades, can only be applied to eggs in a federally registered egg station.
Table 1 Composition of eggs by percent of weight. Traces of sugar and ash are also present.
Whole Egg Yolk White
Moisture 73.0 49.0 86.0
Protein 13.3 16.7 11.6
Lipid 11.5 31.6 0.2
Table 2 Nutritional content of a large egg
Whole Egg Yolk White
Weight 50 g 17 g 33 g
Protein 6 g 3 g 3 g
Fat 5 g 5 g Trace
Cholesterol 216 mg 216 mg 0
Calcium 25 mg 2 mg 27 mg
Iron 1.0 mg 0.6 mg Trace
Sodium 63 mg 7 mg 54 mg
Potassium 60 mg 16 mg 47 mg
Vitamin A 96 RE 99 RE 0 RE
Note: B-complex vitamins, not itemized, are well represented in eggs, as are amino acids. RE = retinol equivalent, a term used in nutritional measurement.
Worth noting is the concentration of certain food elements in different parts of the egg. Note for example that all the cholesterol is in the yolk. The yolk is relatively rich in iron and the white is high in calcium.
In practice, when separating large eggs, one estimates the weight of the white as 30 g (1 oz) and the yolk as 20 g (0.7 oz). The color of the shell, which is either a creamy white or brown, depends on the breed of the hen, and there is no other basic difference in the content of the egg or the shell.
The color of the yolk depends on the diet of the hens. Bakers have a preference for eggs with dark yolks. Certainly the appearance of cakes made with such eggs is richer. Tests have found that, although eggs with darker yolks tend to produce moister sponge cakes, the cakes are somewhat coarser and less tender.
7.03: Egg Products
A number of egg products besides whole shell eggs are used in the baking and food service industry. By law, all egg products other than shell eggs are pasteurized to protect them against salmonella, and the low temperature at which they are kept inhibits bacterial activity, although under certain conditions they may spoil very rapidly.
The chief categories of egg products available are:
Liquid eggs (whole eggs and whole eggs with additional yolks)
Frozen eggs (whole eggs, egg whites, and egg yolks)
Dried and powdered eggs (whole eggs, egg whites, and meringue powder)
Liquid and Frozen Eggs
Liquid and frozen whole eggs are preferred in large bakeries where cracking and emptying of shells is not economical. They are also one of the most economical ways of purchasing eggs. Liquid and frozen whole eggs are sometimes “fortified” by the addition of egg yolks. Some bakers feel that liquid or frozen eggs don’t yield the same volume in sponge cakes as fresh eggs, and there is a certain bias in favor of shell eggs.
If stored in the freezer at -18°C (0°F) or lower, liquid and frozen eggs will keep for long periods with minimum loss of quality. Thawing should take place in the refrigerator or under cold water without submerging the container. Leaving frozen eggs at room temperature to thaw is a bad practice because the outside layers of egg can reach a temperature favorable to bacteria while the centre is still frozen. Heat should never be used to defrost eggs. Unused portions must be refrigerated and used within 24 hours.
Frozen egg yolks consist of 90% egg yolks and 10% sugar to prevent the yolk from gelling and to avoid separation of the fat.
Spray-Dried Whole Eggs and Egg Whites
Dried eggs are used by some bakers as a convenience and cost saver. As with frozen eggs, some bakers doubt their performance in products such as sponge cakes. But dried eggs produce satisfactory results because of the addition of a carbohydrate to the egg before the drying process, usually corn syrup, which results in foaming comparable to fresh eggs.
Dried whole eggs should be stored unopened in a cool place not over 10°C (50°F), preferably in the refrigerator. They are reconstituted by blending 1 kg (2.2 lb.) of powdered whole egg with 3 kg (6.6 lb) of cold water. The water is added slowly while mixing. Once reconstituted, dried eggs should be used immediately or refrigerated promptly and used within an hour.
In mixes such as muffins and cake doughnuts, dried eggs can be mixed in with the other dry ingredients and do not have to be reconstituted. In layer cake formulas, dried eggs are blended with the other dry ingredients before the fat and some water are added, followed by the balance of liquid in two stages.
Spray-dried egg whites are reconstituted by mixing 1 kg (2.2 lb.) of powdered egg white with 1 kg (2.2 lb.) of cold water, letting it stand for 15 minutes, and then adding 9 kg (20 lb.) of cold water. When used in cake mixes, the powdered egg white is blended with the other dry ingredients, but only 7 L (7 qt.) of cold water is used for every 1 kg (2.2 lb.) of powdered egg white.
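The blending instructions above reduce to fixed ratios by weight: dried whole egg takes 3 parts cold water per part of powder, and spray-dried egg white takes 1 part water up front and 9 more parts after the 15-minute rest. The following Python sketch (hypothetical helper names, for illustration only) returns the water required for a given amount of powder.

```
# Minimal sketch: cold water required for the dried egg products described above.
# Dried whole egg:  1 kg powder blended into 3 kg cold water.
# Dried egg white:  1 kg powder + 1 kg water, stand 15 minutes, then add 9 kg water.

def water_for_whole_egg_powder(powder_kg):
    """Cold water (kg) to blend with `powder_kg` of dried whole egg."""
    return powder_kg * 3

def water_for_egg_white_powder(powder_kg):
    """(first addition, second addition) of cold water in kg for dried egg white."""
    return (powder_kg * 1, powder_kg * 9)

print(water_for_whole_egg_powder(0.5))   # 1.5 kg of water for 500 g of powder
print(water_for_egg_white_powder(0.25))  # (0.25, 2.25) for 250 g of powder
```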
Dry Egg Substitutes or Replacements
Egg substitutes are made from sweet cheese, whey, egg whites, dextrose, modified tapioca starch, sodium caseinate, and artificial color and flavor. They are cost-cutters and can be used alone or in combination with fresh or dried eggs in cakes, cookies, and fillings. One kg (2.2 lb.) of powder is mixed with 4 kg (9 lb.) of water to replace powdered eggs.
Meringue Powder
While it is not a pure dehydrated egg white, meringue powder is widely used by bakers to make baked Alaska, royal icing, and toppings. It contains vegetable gums and starches to absorb moisture and make it whip better.
Eggs are a truly multifunctional ingredient and have many roles to play in the bakeshop. Their versatility means that product formulas may be adjusted once the properties of eggs are understood. For example, in French butter cream, egg whites may be substituted in the summer for whole eggs to give a more stable and bacteria-free product (egg white is alkaline, with pH 8.5). A yolk or two may be worked into a sweet short paste dough to improve its extensibility. Sponge cake formulas can be adjusted, for example, with the addition of egg yolks in jelly rolls to improve rolling up.
If a recipe is changed by replacing some or all of the eggs with water, two factors must be remembered:
Water replacement is about 75% of the egg content, since egg solids constitute about 25% of the egg (see the worked sketch after this list).
Leavening ability is lessened and must be made up by the addition of chemical leavening.
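As a worked example of the first point: removing 500 g of whole egg from a formula calls for roughly 375 g of added water, since egg is about 75% water and 25% solids. The Python sketch below is an illustration only; the function name is made up.

```
# Minimal sketch: water to add when whole eggs are removed from a formula.
# Water replacement is about 75% of the removed egg weight (eggs are ~25% solids).
# Note: lost leavening still has to be made up separately, e.g., with chemical leavening.

def water_replacement(egg_weight_g):
    """Approximate water (g) to substitute for `egg_weight_g` of whole egg."""
    return egg_weight_g * 0.75

print(water_replacement(500))  # 375.0 g of water
```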
Other uses of eggs are:
Leavening: They will support many times their own weight of other ingredients through their ability to form a cell structure either alone or in combination with flour. The egg white in particular is capable of forming a large mass of cells by building a fine protein network.
Moistening and binding: The fat in eggs provides a moistening effect, and the proteins present coagulate when heated, binding ingredients together.
Thickening: Eggs are valuable thickeners in the cooking of chiffon pie fillings and custard.
Emulsifying: Lecithin, present in the yolk, is a natural emulsifier and assists in making smooth batters.
Customer appeal: Eggs enhance the appearance of products through their colour and flavour, and they improve texture and grain.
Structure: Eggs bind with other ingredients, primarily flour, creating the supporting structure for other ingredients.
Shelf life: The shelf life of eggs is extended through the fat content of the yolk.
Nutrition: Eggs are a valuable food in every respect. Note, however, that 4% of the lipid in egg yolk is cholesterol, which may be a concern to some people. Developments in poultry feed claim to have reduced or eliminated this cholesterol level.
Tenderizing: The fat in eggs acts like a shortening and improves the tenderness of the baked cake.
Keep these points in mind when using eggs:
Spots in eggs are due to blood fragments in the ovary. Such eggs are edible and may be used.
The albumen or egg white is soluble in cold water, congeals at 70°C (158°F), and remains insoluble from then on.
Cover leftover yolks or whites tightly and refrigerate. Add a little water on top of yolks, or mix in 10% sugar, to prevent crusting. Do not return unused portions to the master container.
Use clean utensils to dip egg products from their containers.
7.05: Storing Eggs
Whole eggs are the perfect medium for the development of bacteria and mould. Eggs with an undesirable odor may be high in bacteria or mould. While some of these odors disappear in baking, some will remain and give an off-taste to the product if the odor is concentrated and strong.
Store fresh eggs in the refrigerator in cartons to prevent moisture loss and absorption of odours. If refrigerator space is at a premium, eggs are stable for up to three weeks if kept at a temperature of 13°C to 15°C (55°F to 60°F). Naturally, this must be in a location with invariable conditions.
Food poisoning can result from using eggs held too long before using. Liquid or cracked eggs should be kept under refrigeration at all times.
Whole eggs can be checked for freshness with the candling or salt water method:
Candling method: Hold the egg up to a light in a darkened room or positioned so that the content or condition of the egg may be seen. If the yolk is held firmly by the white when the egg is turned, and the egg is clean and not broken, then the egg is of good quality. Smell or odor is not readily revealed unless the shell is broken.
Salt water method: Add 100 g of salt to 1 L (3.5 oz. to 1 qt.) of water. Allow to dissolve completely. When an egg is placed in this mixture, its level of buoyancy determines the age of the egg. An old egg will float to the surface, while a fresher egg will sink to the bottom.
Thumbnail: Chocolate is yummy. (CC BY-SA 3.0; Tuvalkin).
Contributors and Attributions
Sorangel Rodriguez-Velazquez (American University). Chemistry of Cooking by Sorangel Rodriguez-Velazquez is licensed under a Creative Commons Attribution-NonCommercial ShareAlike 4.0 International License, except where otherwise noted
08: Chocolate
In North America, chocolate manufacturing started in Massachusetts in 1765. Today, in the factory, the beans get cleaned, and magnets take out metallic parts, and then sand, dust, and other impurities are removed. Some starch will be changed into dextrins in the roasting process to improve flavor. Machines break the beans and grind them fine until a flowing liquid is produced, called chocolate liquor. Through hydraulic pressure, cocoa butter is reduced from 55% to approximately 10% to 24% or less, and the residue forms a solid mass called press cake.
The press cake is then broken, pulverized, cooled, and sifted to produce commercial cocoa powder. The baking industry uses primarily cocoa powders with a low fat content.
At the factory, chocolate is also subject to an additional refining step called conching. Conching has a smoothing effect. The temperature range in this process is between 55°C and 65°C (131°F and 149°F). Sugar interacts with protein to form amino sugars, and the paste loses acids and moisture and becomes smoother.
This video explains the chemical reactions related to heat, melting point, and formation of crystal structures in chocolate: science360.gov/obj/video/27d9...stry-chocolate
8.02: Chocolate Produced for the Baking Industry
True chocolate contains cocoa butter. The main types of chocolate, in decreasing order of cocoa liquor content, are:
Unsweetened (bitter) chocolate
Dark chocolate
Milk chocolate
White chocolate
Unsweetened Chocolate
Unsweetened chocolate, also known as bitter chocolate, baking chocolate, or cooking chocolate, is pure cocoa liquor mixed with some form of fat to produce a solid substance. The pure ground, roasted cocoa beans impart a strong, deep chocolate flavor. With the addition of sugar in recipes, however, it is used as the base for cakes, brownies, confections, and cookies.
Dark (Sweet, Semi-Sweet, Bittersweet) Chocolate
Dark chocolate has an ideal balance of cocoa liquor, cocoa butter, and sugar. Thus it has the attractive, rich color and flavor so typical of chocolate, and is also sweet enough to be palatable. It does not contain any milk solids. It can be eaten as is or used in baking. Its flavor does not get lost or overwhelmed, as in many cases when milk chocolate is used. It can be used for fillings, for which more flavorful chocolates with high cocoa percentages ranging from 60% to 99% are often used. Dark is synonymous with semi-sweet, and extra dark with bittersweet, although the ratio of cocoa butter to solids may vary.
Sweet chocolate has more sugar, sometimes almost equal to cocoa liquor and butter amounts (45% to 55% range).
Semi-sweet chocolate is frequently used for cooking. It is a dark chocolate with less sugar than sweet chocolate.
Bittersweet chocolate has less sugar and more liquor than semi-sweet chocolate, but the two are often interchangeable when baking. Bittersweet and semi-sweet chocolates are sometimes referred to as couverture (see below). The higher the percentage of cocoa, the less sweet the chocolate is.
Milk Chocolate
Milk chocolate is solid chocolate made with milk, added in the form of milk powder. Milk chocolate contains a higher percentage of fat (the milk contributes to this) and the melting point is slightly lower. It is used mainly as a flavoring and in the production of candies and moulded pieces.
White Chocolate
The main ingredient in white chocolate is sugar, closely followed by cocoa butter and milk powder. It has no cocoa liquor. It is used mainly as a flavoring in desserts, in the production of candies, and, in chunk form, in cookies.
The usual term for top quality chocolate is couverture. Couverture chocolate is a very high-quality chocolate that contains extra cocoa butter. The higher percentage of cocoa butter, combined with proper tempering, gives the chocolate more sheen, firmer “snap” when broken, and a creamy mellow flavor. Dark, milk, and white chocolate can all be made as couvertures.
The total percentage cited on many brands of chocolate is based on some combination of cocoa butter in relation to cocoa liquor. In order to be labelled as couverture by European Union regulations, the product must contain not less than 35% total dry cocoa solids, including not less than 31% cocoa butter and not less than 2.5% of dry non-fat cocoa solids. Couverture is used by professionals for dipping, coating, moulding, and garnishing.
What the percentages don’t tell you is the proportion of cocoa butter to cocoa solids. You can, however, refer to the nutrition label or company information to find the amounts of each. All things being equal, the chocolate with the higher fat content will be the one with more cocoa butter, which contributes to both flavor and mouthfeel. This will also typically be the more expensive chocolate, because cocoa butter is more valuable than cocoa liquor.
But keep in mind that just because two chocolates from different manufacturers have the same percentages, they are not necessarily equal. They could have dramatically differing amounts of cocoa butter and liquor, and dissimilar flavors, and substituting one for the other can have negative effects for
your recipe. Determining the amounts of cocoa butter and cocoa liquor will allow you to make informed decisions on chocolate choices.
8.04: Definitions and Regulations
The legislation for cocoa and chocolate products in Canada is found in Division 4 of the Food and Drug Regulations (FDR), under the Food and Drugs Act (FDA). The Canadian Food Inspection Agency (CFIA) is responsible for administering and enforcing the FDR and FDA. Here are some of the regulations governing cocoa and chocolate:
• Cocoa butter must be the only fat source. Chocolate sold in Canada cannot contain vegetable fats or oils.
• Chocolate must contain chocolate liquor.
• The only sweetening agents permitted in chocolate in Canada are listed in Division 18 of the Food and Drug Regulations.
• Artificial sweeteners such as aspartame, sucralose, acesulfame potassium, and sugar alcohols (sorbitol, maltitol, etc.) are not permitted.
• Milk and/or milk ingredients are permissible.
• Emulsifying agents are permissible, as are flavors such as vanilla.
Cocoa butter and sugar quantities are not defined in the regulations. Some semi-sweet chocolate may be sweeter than so-called sweet chocolate. And remember that bittersweet chocolate is not, as you might expect, sugarless. Only if the label states “unsweetened,” do you know that there is no sugar added.
Products manufactured or imported into Canada that contain non-permitted ingredients (vegetable fats or oils, artificial sweeteners) cannot legally be called chocolate when sold in Canada. A non-standardized name such as “candy” must be used.
Finally, lecithin, which is the most common emulsifying agent added to chocolate, is approved for use in chocolate in North America and Europe, but Canadian regulations state that no more than 1% can be added during the manufacturing process of chocolate. Emulsifiers like lecithin can help thin out melted chocolate so it flows evenly and smoothly. Because it is less expensive than cocoa butter at thinning chocolate, it can be used to help lower the cost. The lecithin used in chocolate is mainly derived from soy. Both GMO (genetically modified organism) and non-GMO soy lecithin are available. Check the manufacturer’s packaging and ingredient listing for the source of soy lecithin in your chocolate.
• 9.1: Elements of Taste
Essentially there are a handful of elements that compose all of the taste profiles found in the foods we eat. Western definitions of taste conventionally define four major elements of taste: salty, sweet, sour, and bitter. Asian cultures have added umami (literally “pleasant savory taste”), spiciness, and astringency to the list.
• 9.2: Introduction to Salt
Salt can be found deposited in Earth’s layers in rock salt deposits. These deposits formed when the water in the oceans that covered Earth many millions of years ago evaporated. The salt was then covered by various types of rocks. Common salt (sodium chloride) is 40% sodium and 60% chloride. An average adult consumes about 7 kg (15 lb.) per year.
• 9.3: Origins of Salt
• 9.4: Functions of Salt in Baking
Salt has three major functions in baking. It affects: (1) Fermentation, (2) Dough Conditioning, and (3) Flavor.
• 9.5: Using Salt in Fermented Doughs
• 9.6: Storing Salt
Salt is very stable and does not spoil under ordinary conditions. However, it may have a slight tendency to absorb moisture and become somewhat lumpy and hard. Therefore, it is advisable to store it in a clean, cool, and dry place. Inasmuch as salt can absorb odors, the storage room should be free from any odor that might be taken up and carried by the salt.
• 9.7: Introduction to Spices and Other Flavorings
Food touches all of the senses. We taste, we smell, we see color and shape, we feel texture and temperature, and we hear sounds as we eat. All of these elements together create a palette with an infinite number of combinations, but the underlying principles that make food taste good are unchanged.
• 9.8: Seasoning and Flavoring
Many ingredients are used to enhance the taste of foods. These ingredients can be used to provide both seasoning and flavoring.
• 9.9: Herbs
Herbs tend to be the leaves of fragrant plants that do not have a woody stem. Herbs are available fresh or dried, with fresh herbs having a more subtle flavor than dried. You need to add a larger quantity of fresh herbs (up to 50% more) than dry herbs to get the same desired flavor. Conversely, if a recipe calls for a certain amount of fresh herb, you would use about one-half of that amount of dry herb.
• 9.10: Spices
Spices are aromatic substances obtained from the dried parts of plants such as the roots, shoots, fruits, bark, and leaves. They are sold as seeds, blends of spices, whole or ground spices, and seasonings. The aromatic substances that give a spice its particular aroma and flavor are the essential oils. The flavor of the essential oil or flavoring compound will vary depending on the quality and freshness of the spice.
• 9.11: Flavorings in Baking
Flavors cannot be considered a truly basic ingredient in bakery products but are important in producing the most desirable products.
Thumbnail: Spices and herbs at a shop in Goa, India. (CC BY 2.0; judepics).
09: Spices
Essentially there are a handful of elements that compose all of the taste profiles found in the foods we eat. Western definitions of taste conventionally define four major elements of taste:
• Salty
• Sweet
• Sour
• Bitter
Asian cultures have added the following to the list:
• Umami (literally “pleasant savory taste”)
• Spiciness
• Astringency
Foods and recipes that contain a number of these elements in balance are generally those that we think of as tasting good. | textbooks/chem/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/09%3A_Spices/9.01%3A_Elements_of_Taste.txt |
Historically, salt was a prestigious commodity. “The salt of the earth” describes an outstanding person. The word salary comes from the Latin salarium, which was the payment made to Roman soldiers for the purchase of salt. In Arabic, the phrase translated as “there is salt between us” expresses the covenant between humans and the divine. Though no longer a valuable commodity in the monetary sense, salt is still valuable in the sense of being crucial to human health. Common salt (sodium chloride) is 40% sodium and 60% chloride. An average adult consumes about 7 kg (15 lb.) per year.
Salt can be found deposited in Earth's layers in rock salt deposits. These deposits formed when the water in the oceans that covered Earth many millions of years ago evaporated. The salt was then covered by various types of rocks. Today, we have three basic methods of obtaining salt from natural sources:
• Mining rock salt
• Extracting salt from salt brines created by pumping water into underground salt deposits
• Evaporating salt water from oceans, seas, and salt lakes
9.03: Origins of Salt
Mined Rock Salt
In some countries, salt is mined from salt beds approximately 150 m to 300 m (490 ft. to 985 ft.) below Earth’s surface. Sometimes, impurities such as clay make it impossible to use rock salt without purification. Purification makes it possible to get the desired flavor and color, thus making it edible. Edible salt is highly refined: pure and snow white.
Salt from Salt Brines
Salt can also be mined from natural salt beds by using water to extract the salt in the form of a brine, which saves having to construct a mine. Holes are drilled approximately 20 cm (8 in.) in diameter until the salt deposits are reached. A pipe is then driven into the salt beds and another pipe is driven inside the larger pipe further into the deposits. Pressurized water is forced through the outer pipe into the salt beds, and then pumped back out through the smaller pipe to the refineries. Through separation of the impurities, eventually all water in the brine will evaporate, leaving crystallized salt, which then can be dried, sifted, and graded in different sizes.
Ocean, Sea, and Lake Salt
In some countries, especially those with dry and warm climates, salt is recovered straight from the ocean or salt lakes. The salt water is collected in large shallow ponds (also called salt gardens) where, through the heat of the sun, the water slowly evaporates. Moving the salt solution from one pond to another until the salt crystals become clear and the water has evaporated eliminates impurities. The salt is then purified, dried completely, crushed, sifted, and graded.
9.04: Functions of Salt in Baking
Salt has three major functions in baking. It affects: (1) Fermentation, (2) Dough Conditioning, and (3) Flavor.
Fermentation
Fermentation is salt’s major function:
• Salt slows the rate of fermentation, acting as a healthy check on yeast development.
• Salt prevents the development of any objectionable bacterial action or wild types of fermentation.
• Salt assists in oven browning by controlling the fermentation and therefore lessening the destruction of sugar.
• Salt checks the development of any undesirable or excessive acidity in the dough. It thus protects against undesirable action in the dough and effects the necessary healthy fermentation required to secure a finished product of high quality.
Dough Conditioning
Salt has a binding or strengthening effect on gluten and thereby adds strength to any flour. The additional firmness imparted to the gluten by the salt enables it to hold the water and gas better, and allows the dough to expand without tearing. This influence becomes particularly important when soft water is used for dough mixing and where immature flour must be used. Under both conditions, incorporating a maximum amount of salt will help prevent soft and sticky dough. Although salt has no direct bleaching effect, its action results in a fine-grained loaf of superior texture. This combination of finer grain and thin cell walls gives the crumb of the loaf a whiter appearance.
Flavor
One of the important functions of salt is its ability to improve the taste and flavor of all the foods in which it is used. Salt is one ingredient that makes bread taste so good. Without salt in the dough batch, the resulting bread would be flat and insipid. The extra palatability brought about by the presence of salt is only partly due to the actual taste of the salt itself. Salt has the peculiar ability to intensify the flavor created in bread as a result of yeast action on the other ingredients in the loaf. It brings out the characteristic taste and flavor of bread and, indeed, of all foods. Improved palatability in turn promotes the digestibility of food, so it can be said that salt enhances the nutritive value of bakery products. The lack of salt or too much of it is the first thing noticed when tasting bread. In some bread 2% can produce a decidedly salty taste, while in others the same amount gives a good taste. The difference is often due to the mineralization of the water used in the dough. | textbooks/chem/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/09%3A_Spices/9.02%3A_Introduction_to_Salt.txt |
The average amount of salt to use in dough is about 1.75% to 2.25% based on the flour used. Some authorities recommend that the amount of salt used should be based on the actual quantity of water used in making the dough, namely about 30 g per L (1 oz. per qt.) of water.
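Expressed as a baker’s percentage, these guidelines are easy to apply to any batch. The Python sketch below is illustrative only (the names and defaults are assumptions); it computes salt either as a percentage of flour weight or from the dough water at about 30 g per litre.

```
# Minimal sketch: salt quantities for fermented dough using the guidelines above.

def salt_from_flour(flour_g, percent=2.0):
    """Salt (g) as a baker's percentage of flour weight (1.75%-2.25% is typical)."""
    return flour_g * percent / 100

def salt_from_water(water_l, g_per_litre=30.0):
    """Salt (g) based on the water used in the dough (about 30 g per litre)."""
    return water_l * g_per_litre

print(salt_from_flour(10000))  # 200.0 g of salt for 10 kg of flour at 2%
print(salt_from_water(6.5))    # 195.0 g of salt for 6.5 L of dough water
```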
During the hot summer months, many bakers find it advantageous to use slightly more salt than in the winter as a safeguard against the development of any undesirable changes in the dough fermentation. Salt should never be dissolved in the same water in which yeast is dissolved. It is an antiseptic and dehydrates yeast cells and can even kill some of them, which means the dough has less leavening power and a longer fermentation is needed. In bread made by the sponge dough method and in liquid fermentation systems, a small amount of salt included in the first stage strengthens the gluten.
9.06: Storing Salt
Salt is very stable and does not spoil under ordinary conditions. However, it may have a slight tendency to absorb moisture and become somewhat lumpy and hard. Therefore, it is advisable to store it in a clean, cool, and dry place. Inasmuch as salt can absorb odors, the storage room should be free from any odor that might be taken up and carried by the salt.
9.07: Introduction to Spices and Other Flavorings
Food touches all of the senses. We taste, we smell, we see color and shape, we feel texture and temperature, and we hear sounds as we eat. All of these elements together create a palette with an infinite number of combinations, but the underlying principles that make food taste good are unchanged.
• Variety and diversity in textures and the elements of taste make for interesting food.
• Contrast is as important as harmony, but avoid extremes and imbalance.
9.08: Seasoning and Flavoring
Many ingredients are used to enhance the taste of foods. These ingredients can be used to provide both seasoning and flavoring.
• Seasoning means to bring out or intensify the natural flavor of the food without changing it. Seasonings are usually added near the end of the cooking period. The most common seasonings are salt, pepper, and acids (such as lemon juice). When seasonings are used properly, they cannot be tasted; their job is to heighten the flavors of the original ingredients.
• Flavoring refers to something that changes or modifies the original flavor of the food. Flavoring can be used to contrast a taste such as adding liqueur to a dessert where both the added flavor and the original flavor are perceptible. Or flavorings can be used to create a unique flavor in which it is difficult to discern what the separate flavorings are. Spice blends used in pumpkin pies are a good example of this.
Knowing how to use seasonings and flavorings skillfully provides cooks and bakers with an arsenal with which they can create limitless flavor combinations. Flavoring and seasoning ingredients include wines, spirits, fruit zests, extracts, essences, and oils. However, the main seasoning and flavoring ingredients are classified as herbs and spices.
Knowing the difference between herbs and spices is not as important as knowing how to use seasonings and flavorings skillfully. In general, fresh seasonings are added late in the cooking process while dry ones tend to be added earlier. It is good practice to under-season during the cooking process and then add more seasonings (particularly if you are using fresh ones) just before presentation. This is sometimes referred to as “layering.” When baking, it is difficult to add more seasoning at the end, so testing recipes to ensure the proper amount of spice is included is a critical process.
9.09: Herbs
Herbs tend to be the leaves of fragrant plants that do not have a woody stem. Herbs are available fresh or dried, with fresh herbs having a more subtle flavor than dried. You need to add a larger quantity of fresh herbs (up to 50% more) than dry herbs to get the same desired flavor. Conversely, if a recipe calls for a certain amount of fresh herb, you would use about one-half of that amount of dry herb.
The most common fresh herbs are basil, coriander, marjoram, oregano, parsley, rosemary, sage, tarragon, and thyme. Fresh herbs should have a clean, fresh fragrance and be free of wilted or brown leaves. They can be kept for about five days if sealed inside an airtight plastic bag. Fresh herbs are usually added near the completion of the cooking process so flavors are not lost due to heat exposure.
Dried herbs lose their power rather quickly if not properly stored in airtight containers. They can last up to six months if properly stored. Dried herbs are usually added at the start of the cooking process as their flavor takes longer to develop than fresh herbs.
Spices are aromatic substances obtained from the dried parts of plants such as the roots, shoots, fruits, bark, and leaves. They are sold as seeds, blends of spices, whole or ground spices, and seasonings. The aromatic substances that give a spice its particular aroma and flavor are the essential oils. The flavor of the essential oil or flavoring compound will vary depending on the quality and freshness of the spice.
The aromas of ground spices are volatile. This means they lose their odor or flavoring when left exposed to the air for extended periods. They should be stored in sealed containers when not in use. Whole beans or unground seeds have a longer shelf life but should also be stored in sealed containers.
Allspice
Allspice is only one spice, yet it has a flavor resembling a blend of cloves, nutmeg, and cinnamon. At harvest time, the mature (but still green) berries from the allspice tree (a small tropical evergreen) are dried in the sun. During drying they turn reddish-brown and become small berries. The berries are about 0.6 cm (1/4 in.) in diameter and contain dark brown seeds.
Allspice is grown principally in Jamaica and to a lesser degree in Mexico. Allspice is available whole or ground. Bakers usually use ground allspice in cakes, cookies, spices, and pies.
Anise
Anise is the small, green-grey fruit or seed of a plant of the parsley family. The plant grows to a height of 45 cm (18 in.) and has fine leaves with clusters of small white flowers. It is native to Mexico and Spain, with the latter being the principal producer. Anise seeds are added to pastries, breads, cookies, and candies.
Caraway
Caraway is the dried fruit or seed of a biennial plant of the parsley family, harvested every second year, primarily in the Netherlands. It is also produced in Poland and Russia. The many-branched, hollow-stemmed herb grows up to 60 cm (24 in.) high and has small white flowers. Caraway is a small crescent-shaped brown seed with a pleasant aroma but somewhat sharp taste. Although it is most familiar in rye bread, caraway is also used in cookies and cakes.
Cardamom
Native to India, Sri Lanka, and Guatemala, cardamom is the fruit or seed of a plant of the ginger family. The three-sided, creamy-white, flavorless pod holds the tiny aromatic, dark brown seeds. It is available in whole and ground (pod removed). Cardamom in ground form flavors Danish pastries and coffee cakes, Christmas baking, and Easter baking such as hot cross buns.
Cinnamon
Cinnamon comes from the bark of an aromatic evergreen tree. It is native to China, Indonesia, and Indochina. Cinnamon may be purchased in ground form or as cinnamon sticks. Ground cinnamon is used in pastries, breads, puddings, cakes, candy, and cookies. Cinnamon sticks are used for preserved fruits and flavoring puddings. Cinnamon sugar is made with approximately 50 g (2 oz.) of cinnamon to 1 kg (2.2 lb.) of granulated sugar.
Cassia
Cassia, sometimes known as Chinese cinnamon, is native to Assam and Myanmar. It is similar to cinnamon but a little darker with a sharper taste. It is considered better for savory rather than sweet foods. It is prized in Germany and some other countries as a flavor in chocolate.
Cloves
Cloves are the dried, unopened buds of a tropical evergreen tree, native to Indonesia. The flavor is characterized by a sweet, pungent spiciness. The nail-shaped whole cloves are mainly used in cooking, but the ground version of this spice heightens the flavor of mincemeat, baked goods, fruit pies, and plum pudding.
Ginger
Ginger is one of the few spices that grow below the ground. It is native to southern Asia but is now imported from Jamaica, India, and Africa. The part of the ginger plant used is obtained from the root. Ground ginger is the most commonly used form in baking — in fruitcakes, cookies, fruit pies, and gingerbread. Candied ginger is used in pastries and confectionery.
Mace
Originating in the East and West Indies, mace is the fleshy growth between the nutmeg shell and outer husk, yellow-orange in color. It is usually sold ground, but sometimes whole mace (blades of mace) is available. Mace is used in pound cakes, breads, puddings, and pastries.
Nutmeg
Nutmeg is the kernel or seed of the nutmeg fruit. The fruit is similar to the peach. The fleshy husk, grooved on one side, splits, releasing the deep-brown aromatic nutmeg. It is available whole or ground. Ground nutmeg is used extensively in custards, cream puddings, spice cakes, gingerbread, and doughnuts.
Poppy Seed
Poppy seed comes from the Netherlands and Asia. The minute, blue-grey, kidney-shaped seeds are so small they seem to be round. Poppy seeds are used in breads and rolls, cakes and cookies, and fillings for pastries.
Sesame or Benne Seed
Sesame or benne seeds are the seeds of the fruit of a tropical annual herb grown in India, China, and Turkey. The seeds are tiny, shiny, and creamy-white with a rich almond-like flavor and aroma. Bakers use sesame seeds in breads, buns, coffee cakes, and cookies.
Vanilla
The Spaniards named vanilla. The word derives from vaina, meaning pod. Vanilla is produced from an orchid-type plant native to Central America. The vanilla beans are cured by a complicated process, which helps explain the high cost of genuine vanilla. The cured pods should be black in color and packed in airtight boxes. Imitation vanilla extracts are made from a colorless crystalline synthetic compound called vanillin. Pure vanilla extract is superior to imitation vanilla. Artificial vanilla is more intense than real vanilla by a factor of 3 to 4 and must be used sparingly.
To use vanilla beans, split the pod down the middle to scrape out the seeds. The seeds are the flavoring agents. Alternatively, the split pod can be simmered in the milk or cream used in dessert preparation. Its flavoring power is not spent in one cooking and it can be drained, kept frozen, and reused. A vanilla bean kept in a container of icing sugar imparts the flavor to the sugar, all ready for use in cookies and cakes.
Vanilla extract is volatile at temperatures starting at 138°C (280°F) and is therefore not ideal for flat products such as cookies. It is suitable for cakes, where the interior temperature does not get so high.
Vanilla beans and vanilla extract are used extensively by bakers to flavor a wide range of desserts and other items.
9.11: Flavorings in Baking
Flavors cannot be considered a truly basic ingredient in bakery products but are important in producing the most desirable products. Flavoring materials consist of:
• Extracts or essences
• Emulsions
• Aromas
• Spices
Note: Salt may also be classed as a flavoring material because it intensifies other flavors.
These and others (such as chocolate) enable the baker to produce a wide variety of attractively flavored pastries, cakes, and other bakery products. Flavor extracts, essences, emulsions, and aromas are all solutions of flavor mixed with a solvent, often ethyl alcohol.
The flavors used to make extracts and essences are the extracted essential oils from fruits, herbs, and vegetables, or an imitation of the same. Many fruit flavors are obtained from the natural parts (e.g., rind of lemons and oranges or the exterior fruit pulp of apricots and peaches). In some cases, artificial flavor is added to enhance the taste, and artificial coloring may be added for eye appeal. Both the Canadian and U.S. departments that regulate food restrict these and other additives. The flavors are sometimes encapsulated in corn syrup and emulsifiers. They may also be coated with gum to preserve the flavor compounds and give longer shelf life to the product. Some of the most popular essences are compounded from both natural and artificial sources. These essences have the true taste of the natural flavors.
Aromas are flavors that have an oil extract base. They are usually much more expensive than alcoholic extracts, but purer and finer in their aromatic composition. Aromas are used for flavoring delicate creams, sauces, and ice creams.
Emulsions are homogenized mixtures of aromatic oils and water plus a stabilizing agent (e.g., vegetable gum). Emulsions are more concentrated than extracts and are less susceptible to losing their flavor in the oven. They can therefore be used more sparingly.
• 1.1: What is a Fluid?
What is a fluid? Almost everything that we will discuss is soft matter under physiological temperature conditions: liquids and solutions, cytoplasm and cytosol, DNA and proteins in solution, membranes, micelles, colloids, gels... All of these materials can in some respect be considered a fluid.
• 1.2: Radial Distribution Function
The radial distribution function, g(r), is the most useful measure of the “structure” of a fluid at molecular length scales. g(r) provides a statistical description of the local packing and particle density of the system, by describing the average distribution of particles around a central reference particle.
• 1.3: Excluded Volume
One of the key concepts that arises from a particulate description of matter is excluded volume. Even in the absence of attractive interactions, at short range the particles of the fluid collide and experience repulsive forces. These repulsive forces are a manifestation of excluded volume, the volume occupied by one particle that is not available to another. This excluded volume gives rise to the structure of solvation shells that is reflected in the short-range form of g(r) and W(r).
01: Fluids
Fluids
What is a fluid? Almost everything that we will discuss is soft matter under physiological temperature conditions: liquids and solutions, cytoplasm and cytosol, DNA and proteins in solution, membranes, micelles, colloids, gels... All of these materials can in some respect be considered a fluid. So, what is a fluid?
• A substance that flows, deforms, and changes shape when subject to a force, or stress.
• It has no fixed shape, but adapts its surface to the shape of its container. Gasses are also fluids, but we will focus on fluids that are mostly incompressible.
For physicists, fluids are commonly associated with flow—a non-equilibrium property—and how matter responds to forces (i.e., "Newtonian fluids"). This topic—"rheology"—will be discussed in more detail later. From this perspective, all soft condensed matter can be considered a fluid. For chemists, fluids most commonly appear as liquids and solutions. Chemists typically use a molecular description for the solute, but less so for the solvent. However, chemists have a clear appreciation of how liquids influence chemical behavior and reactivity, a topic commonly called "solvation". The most common perspective of fluids is as continuous dielectric media, however fluids can be multicomponent heterogeneous mixtures. For our biophysical purposes, we use the perspectives above, with a particular interest in the uniquely biological fluid: water. Since we are particularly interested in molecular-scale phenomena, we will add some additional criteria:
• Composition: Fluids are dense media composed of particulate matter (atoms, molecules, proteins...) that can interact with one another. Since no two particles can occupy the same volume, each particle in a fluid has “excluded volume” that is not available to the remaining particles in the system.
• "Structure": Fluids are structured locally on the distance scale of the particle size by their packing and cohesive interactions, but are macroscopically disordered.
• The midrange or mesoscale distances involve interactions between multiple particles, leading to correlated motions of the constituents.
• "Flow" is a manifestation of these correlated structural motions in the mesoscale structure.
• Most important: The cohesive forces (intermolecular interactions) between the constituents of a fluid, and the energy barriers to changing structure, are on the order of \(k_B T\) ("thermal energy"). Thermal forces are enough to cause spontaneous flow on a microscopic level even at equilibrium.
Fluids may appear time-invariant at equilibrium, but they are microscopically dynamic. In many cases, "structure" (the positioning of constituents in space) and the "dynamics" (time-dependent changes to position) are intimately coupled.
"Structure" implies that the positioning of particles is regular and predictable. This is possible in a fluid to some degree when considering the short-range position and packing of particles. The local particle density variation should show some structure in a statistically averaged sense. Structure requires a reference point, and in the case of a fluid we choose a single particle as the reference and describe the positioning of other particles relative to that. Since each particle of a fluid experiences a different local environment, this information must be statistically averaged, which is our first example of a correlation function. For distances longer than a "correlation length", we should lose the ability to predict the relative position of a specific pair of particles. On this longer length scale, the fluid is homogeneous.
The radial distribution function, $g(r)$, is the most useful measure of the "structure" of a fluid at molecular length scales. Although it invokes a continuum description, by "fluid" we mean any dense, disordered system which has local variation in the position of its constituent particles but is macroscopically isotropic. $g(r)$ provides a statistical description of the local packing and particle density of the system, by describing the average distribution of particles around a central reference particle. We define the radial distribution function as the ratio of $\langle \rho (r) \rangle$, the average local number density of particles at a distance $r$, to the bulk density of particles, $\rho$:
$g(r) =\dfrac{\langle \rho (r) \rangle }{\rho}\nonumber$
In a dense system, $g(r)$ starts at zero (since it does not count the reference particle), rises to a peak at the distance characterizing the first shell of particles surrounding the reference particle (i.e., the $1^{\text{st}}$ solvation shell), and approaches 1 for long distances in isotropic media. The probability of finding a particle at a distance $r$ in a shell of thickness $dr$ is $P(r)=4 \pi r^{2} g(r)\ \mathrm{d} r$, so integrating $\rho\, g(r)\, 4 \pi r^{2}\, \mathrm{d} r$ over the first peak gives the average number of particles in the first shell.
The radial distribution function is most commonly used in gasses, liquids, and solutions, since it can be used to calculate thermodynamic properties such as the internal energy and pressure of the system. But it is relevant at any size scale, such as the packing of colloids, and is useful in complex heterogeneous media, such as the distribution of ions around DNA. For correlating the positions of different types of particles, the radial distribution function is defined as the ratio of the local density of "$b$" particles at a distance $r$ from "$a$" particles to the bulk density of "$b$" particles, $g_{a b}(r)= \left \langle \rho_{ab}(r)\right \rangle /\rho_b$. In practice, $\rho_{ab} (r)$ is calculated by looking radially from an "$a$" particle at a shell at distance $r$ and of thickness $\mathrm{d} r$, counting the number of "$b$" particles within that shell, and normalizing the count by the volume of that shell.
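As an illustration of this counting procedure, here is a minimal sketch (an assumed implementation, not from the text) that estimates $g(r)$ for a single particle type from an $(N, 3)$ array of coordinates in a periodic cubic box:

```python
import numpy as np

def radial_distribution(positions, box_length, dr=0.1, r_max=None):
    """Estimate g(r) from an (N, 3) array of coordinates in a periodic cubic box
    by histogramming pair distances (minimum image) and normalizing each shell
    by its volume and by the bulk number density."""
    n = len(positions)
    rho = n / box_length**3                          # bulk density
    r_max = r_max if r_max is not None else box_length / 2
    bins = np.arange(0.0, r_max + dr, dr)
    counts = np.zeros(len(bins) - 1)

    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box_length * np.round(d / box_length)   # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        counts += 2 * np.histogram(r, bins=bins)[0]  # count each pair from both partners

    shell_volume = 4.0 / 3.0 * np.pi * (bins[1:]**3 - bins[:-1]**3)
    g = counts / (n * rho * shell_volume)            # <rho(r)> / rho, averaged over reference particles
    return 0.5 * (bins[1:] + bins[:-1]), g
```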
Two-Particle Density Correlation Function1
Let’s look a little deeper, considering particles of the same type, as in an atomic liquid or granular material. If there are $N$ particles in a volume $V$, and the position of the $i^{\text{th}}$ particle is $\bar{r_i}$, then the number density describes the position of particles,
$\rho(\bar{r})=\sum_{i=1}^{N} \delta \left (\bar{r} - \bar{r_i}\right) \nonumber$
The average of a radially varying property given by $X(r)$ is determined by
$\langle X(r)\rangle=\dfrac{1}{V} \int_{V} X(r) 4 \pi r^{2} d r \nonumber$
Integrating $\rho(\bar{r})$ over a volume gives the particle number in that volume.
$\int_{V} \rho(r) 4 \pi r^{2} d r=N \nonumber$
When the integral is over the entire volume, we can use this to obtain the average particle density:
$\dfrac{1}{V} \int_{0}^{\infty} \rho (r) 4 \pi r^{2} d r = \dfrac{N}{V} = \rho \nonumber$
Next, we can consider the spatial correlations between two particles, $i$ and $j$. The two-particle density correlation function is
$\rho \left(\bar{r}, \vec{r}' \right) = \left \langle \sum_{i=1}^{N} \delta \left( \bar{r}- \bar{r_i} \right) \sum_{j=1}^{N} \delta \left(\vec{r}'-\bar{r_j}\right) \right \rangle \nonumber$
This describes the conditional probability of finding particle $i$ at position $r_i$ and particle $j$ at position $r_j$. We can expand and factor $\rho (\bar{r}, \bar{r}')$ into two terms depending on whether $i = j$ or $i \ne j$:
$\begin{array} {rcl} {\rho \left(\bar{r}, \vec{r}' \right)} & = & {N \left \langle \delta (\bar{r} - \bar{r_i}) \delta (\bar{r}' - \bar{r_i}) \right \rangle + N(N - 1) \left \langle \delta (\bar{r} - \bar{r_i}) \delta (\bar{r}' - \bar{r_j}) \right \rangle} \ {} & = & {\rho^{(1)} + \rho^{(2)} \left(\bar{r}, \vec{r}' \right)} \end{array}\nonumber$
The first term describes the self-correlations, of which there are $N$ terms: one for each atom.
$\rho^{(1)}=N \left \langle \delta \left( \bar{r} - \bar{r_i} \right) \delta \left (\bar{r}' -\bar{r_i} \right ) \right \rangle = \rho \nonumber$
The second term describes the two-body correlations, of which there are $N(N‒1)$ terms.
$\begin{array} {rcl} {\rho^{(2)} \left ( \bar{r}, \bar{r}' \right )} & = & {N(N - 1) \left \langle \delta (\bar{r} - \bar{r_i}) \delta (\bar{r}' - \bar{r_j}) \right \rangle } \ {} & = & {\dfrac{N^2}{V^2} g \left ( \bar{r}, \bar{r}' \right ) = \rho^2 g \left ( \bar{r}, \bar{r}' \right )} \end{array} \nonumber$
$g \left ( \bar{r}, \bar{r}' \right ) = \rho^{(2)} \left ( \bar{r}, \bar{r}' \right )/ \rho^2$ is the two-particle distribution function, which describes the spatial correlation between two atoms or molecules. For isotropic media, it depends only on the distance between particles, $g \left ( \left | \bar{r} - \bar{r}' \right | \right ) = g(r)$, and is therefore also called the radial pair-distribution function.
We can generalize $g(r)$ to a mixture of $a$ and $b$ particles by writing $g_{ab} (r)$:
$\begin{array} {c} {g_{ab} (r) = \dfrac{\rho_{ab}(r)}{N_{b} / V}} \ {N_{b}=\int_{V} dr 4 \pi r^{2} \rho_{ab}(r)} \end{array}\nonumber$
Potential of Mean Force
One can use $g(r)$ to describe the free energy for bringing two particles together as
$W(r)=-k_{B} T \ln g(r) \nonumber$
$W(r)$ is known as the potential of mean force. We are taking a free energy which is a function of many internal variables and projecting it onto a single coordinate. $W(r)$ is a potential function that can be used to obtain the mean effective forces that a particle will experience at a given separation $f = -\partial W /\partial r$.
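A minimal numerical sketch (assumed, not from the text) of this relationship, converting a tabulated $g(r)$ into $W(r)$ and the mean force:

```python
import numpy as np

kB = 1.380649e-23  # J/K

def potential_of_mean_force(r, g, T=300.0):
    """W(r) = -kB*T*ln g(r) and the mean force f = -dW/dr from a tabulated g(r).
    r and g are 1D arrays on the same grid; assumes g > 0 on the grid."""
    W = -kB * T * np.log(g)
    f = -np.gradient(W, r)
    return W, f
```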
_________________________________
1. J. P. Hansen and I. R. McDonald, Theory of Simple Liquids, $2^{\text{nd}}$ Ed. (Academic Press, New York, 1986); D. A. McQuarrie, Statistical Mechanics. (Harper & Row, New York, 1976).
1.03: Excluded Volume
Excluded Volume
One of the key concepts that arises from a particulate description of matter is excluded volume. Even in the absence of attractive interactions, at short range the particles of the fluid collide and experience repulsive forces. These repulsive forces are a manifestation of excluded volume, the volume occupied by one particle that is not available to another. This excluded volume gives rise to the structure of solvation shells that is reflected in the short-range form of $g(r)$ and $W(r)$. Excluded volume also has complex dynamic effects in dense fluids, because one particle cannot move far without many other particles also moving in some correlated manner.
The excluded volume can be related to $g(r)$ and $W(r)$, making note of the virial expansion. If we expand the equation of state in the density of the fluid ($\rho$):
$\frac{p}{\rho k_{B} T}=1+B_{2}(T) \rho+\cdots \nonumber$
The second virial coefficient $B_2$ is half of the excluded volume of the system. This is the leading source of non-ideality in gasses reflected in the van der Waals equation of state.
$\begin{array} {rcl} {2 B_{2}(T)} & = & {4 \pi \int_{0}^{\infty} r^{2}(1-g(r)) d r} \ {} & = & {4 \pi \int_{0}^{\infty} r^{2}\left(1-\exp \left[-W(r) / k_{B} T\right]\right) d r} \end{array} \nonumber$
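As an illustrative check (a sketch under assumed parameter values, not from the text), $B_2$ can be evaluated numerically from a model pair potential; for a hard-sphere potential of diameter $\sigma$ the result should approach the exact value $2\pi \sigma^3/3$:

```python
import numpy as np

kB = 1.380649e-23  # J/K

def B2(u, T, r_max=5e-9, n=20000):
    """B2 = 2*pi * integral_0^inf r^2 (1 - exp(-u(r)/kB T)) dr, by the trapezoid rule."""
    r = np.linspace(1e-12, r_max, n)
    integrand = r**2 * (1.0 - np.exp(-u(r) / (kB * T)))
    return 2 * np.pi * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))

sigma = 0.3e-9                                                      # assumed hard-sphere diameter (0.3 nm)
u_hard_sphere = lambda r: np.where(r < sigma, 1e6 * kB * 300, 0.0)  # effectively infinite repulsive core
print(B2(u_hard_sphere, 300.0), 2 * np.pi * sigma**3 / 3)           # numerical vs exact hard-sphere value
```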
• 2.1: Lattice Models
Lattice models provide a minimalist, or coarse-grained, framework for describing the translational, rotational, and conformational degrees of freedom of molecules, and are particularly useful for problems in which entropy of mixing, configurational entropy, or excluded volume are key variables. The lattice forms a basis for enumerating different configurations of the system, or microstates. Each microstate may have a different energy, which is then used to calculate partition functions.
• 2.2: Ideal Lattice Gas
The description of a weakly interacting fluid, gas, solution, or mixture is dominated by the translational entropy or entropy of mixing. In this case, we are dealing with how molecules occupy a volume, which leads to a translational partition function.
• 2.3: Binary Fluid
02: Lattice Model of a Fluid
Lattice Models
Lattice models provide a minimalist, or coarse-grained, framework for describing the translational, rotational, and conformational degrees of freedom of molecules, and are particularly useful for problems in which entropy of mixing, configurational entropy, or excluded volume are key variables. The lattice forms a basis for enumerating different configurations of the system, or microstates. Each of these microstates may have a different energy, which is then used to calculate a partition function.
$Q=\sum_{i} e^{-E_{i} / k_{B} T}$
The thermodynamic quantities then emerge from
$\begin{array}{l} F=-k_{B} T \ln Q \ S=-k_{B} \sum_{i} P_{i} \ln P_{i} \ U=\sum_{i} P_{i} E_{i} \end{array} \nonumber$
and other internal variables $(X)$ can be statistically described from
$\langle X\rangle=\sum_{i=1}^{N} P_{i} X_{i} \quad P_{i}\left(E_{i}\right)=\frac{e^{-E_{i} / k_{B} T}}{Q}\nonumber$
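These relations are straightforward to evaluate once the microstate energies are enumerated. The following sketch (illustrative only; energies in units where $k_B = T = 1$) computes $Q$, the probabilities $P_i$, and the resulting $U$, $S$, and $F$ for an arbitrary list of energies:

```python
import numpy as np

def thermodynamics(energies, T=1.0, kB=1.0):
    """Q, P_i, U, S, F for a list of microstate energies."""
    E = np.asarray(energies, dtype=float)
    w = np.exp(-E / (kB * T))
    Q = w.sum()                       # Q = sum_i exp(-E_i / kB T)
    P = w / Q                         # Boltzmann probabilities P_i
    U = np.sum(P * E)                 # U = sum_i P_i E_i
    S = -kB * np.sum(P * np.log(P))   # S = -kB sum_i P_i ln P_i
    F = -kB * T * np.log(Q)           # F = -kB T ln Q
    return Q, P, U, S, F

# two-level example: the returned F should equal U - T*S
print(thermodynamics([0.0, 1.0]))
```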
We will typically work with a macroscopic volume broken into cells, typically of a molecular size, which we can fill with the fundamental building blocks in our problem (atoms, molecules, functional groups) subject to certain constraints. In this section we will concern ourselves with the mixing of rigid particles, i.e., translational degrees of freedom. More generally, lattice models can include translational, rotational, and conformational degrees of freedom of molecules.
2.02: Ideal Lattice Gas
Lattice Gas
The description of a weakly interacting fluid, gas, solution, or mixture is dominated by the translational entropy or entropy of mixing. In this case, we are dealing with how molecules occupy a volume, which leads to a translational partition function. We begin by defining a lattice and the molecules that fill that lattice:
Parameters:
Total volume: $V$
Cell volume: $v$
Number of sites: $M = V/v$
Number of particles: $N\ (N \le M)$
Fill Factor: $x = N/M\ (0 \le x \le 1)$
Number of contacts each cell has with adjacent cells: $z$
We begin by assuming that all microstates (configurations of occupied sites in the volume) are equally probable, i.e., $E_i = \text{constant}$. This is the microcanonical ensemble, so the entropy of the fluid is given by Boltzmann's equation
$S=k_{B} \ln \Omega$
where $\Omega$ is the number of microstates available to the system. If $M$ is not equal to $N$, then the permutations for putting $N$ indistinguishable particles into $M$ sites is given by the binomial distribution:
Also, on cubic lattice, we have 6 contacts that each cell makes with its neighbors. The contact number is $z$, which will vary for $2D\ (z = 4)$ and $3D\ (z = 6)$ problems.
How do we choose the size of $v$? It has to be considered on a case-by-case basis. The objective of these models is to treat the cell as the volume that a particle excludes to occupation by other particles. This need not correspond to an actual molecular dimension in the atomic sense. In the case of the traditional derivation of the translational partition function for an ideal gas, $v$ is equivalent to the quantization volume $\Lambda^{3}=\left(h^{2} / 2 \pi m k_{B} T\right)^{3 / 2}$.
From $\Omega$ we can obtain the entropy of mixing from $S = k_B \ln \Omega$ with the help of Stirling’s approximation $\ln (M !) \simeq M \ln (M) - M$:
$\begin{array} {rcl} {S} & = & {k_B (M \ln M - N \ln N - (M - N) \ln (M - N))} \ {} & = & {-Mk_B (x \ln x + (1 - x) \ln (1 - x))} \end{array}$
In the last line, we introduced a particle fill factor
$x = N/M \nonumber$
which quantifies the fraction of cells that are occupied by particles, and is also known as the mole fraction or the packing ratio. Since $x < 1$, the entropy of mixing is always positive.
For the case of a dilute solution or gas, $N \ll M$, and $(1 - x) \approx 1$, so
$S_{\text{dilute}} \approx -N k_B \ln x \ \ \ \ or \ \ \ \ -nR \ln x\nonumber$
We can derive the ideal gas law $p = Nk_B T/V$ from this result by making use of the thermodynamic identity $p = T (\partial S / \partial V)_N$.
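A small symbolic check of this last statement (my own sketch using SymPy; not part of the text):

```python
# Check that the lattice-gas entropy gives p = N kB T / V in the dilute limit
# via p = T (dS/dV)_N, with M = V/v lattice sites.
import sympy as sp

kB, T, N, V, v = sp.symbols('k_B T N V v', positive=True)
M = V / v
S = kB * (M*sp.log(M) - N*sp.log(N) - (M - N)*sp.log(M - N))

p = T * sp.diff(S, V)                        # thermodynamic identity
p_dilute = sp.series(p, N, 0, 2).removeO()   # keep terms to first order in N (N << M)
print(sp.simplify(p_dilute))                 # -> N*T*k_B/V
```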
Entropy of Mixing
The thermodynamics of the mixing process is important to phase equilibria, hydrophobicity, solubility, and related solvation problems. The process of mixing two pure substances $A$ and $B$ is shown below. We define the composition of the system through the number of $A$ and $B$ particles: $N_A$ and $N_B$ and the total number of particles $N = N_A + N_B$, which also equals the number of cells. We begin with two containers of the homogeneous pure fluids and mix them together, keeping the total number of cells constant. In the case of the pure fluids before mixing, all cells of the container are initially filled, so there is only one accessible microstate, $\Omega_{\text{pure}} = 1$, and
$S_{\text{pure}} = k_B \ln 1 = 0\nonumber$
When the two containers are mixed, the number of possible microstates are given by the binomial distribution: $\Omega_{\text{mix}} = N!/N_A! N_B!$.
If these particles have no interactions, each microstate is equally probable, and similar to eq. (2.2.2) we obtain the entropy of the mixture as
$S_{\text{mix}}=-N k_{B}\left(x_{A} \ln x_{A}+x_{B} \ln x_{B}\right) \label{eq2.3.1}$
For the mixture, we define the mole fractions for the two components: $x_A = N_A / N$ and $x_B = N_B / N$. As before, since $x_A$ and $x_B < 1$, the entropy for the mixture is always positive. The entropy of mixing is then calculated from $\Delta S_{\text{mix}} = S_{\text{mix}} - (S_{\text{pure A}} + S_{\text{pure B}})$. Since the entropy of the pure substances in this model is zero, $\Delta S_{\text{mix}} = S_{\text{mix}}$. A plot of this function as a function of mole fractions illustrates that the maximum entropy mixture has $x_A = x_B = 0.5$.
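A quick numerical illustration of this maximum (an assumed sketch, not from the text):

```python
import numpy as np

def mixing_entropy_per_particle(xA):
    """Delta S_mix / (N kB) = -(xA ln xA + xB ln xB) for the ideal binary lattice mixture."""
    xB = 1.0 - xA
    return -(xA * np.log(xA) + xB * np.log(xB))

x = np.linspace(0.01, 0.99, 99)
print(x[np.argmax(mixing_entropy_per_particle(x))])  # -> 0.5, the maximum-entropy composition
```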
In the absence of interactions between particles, the free energy of mixing is purely entropic with $\Delta F_{\text{mix}} = -T \Delta S_{\text{mix}}$. The chemical potential of $A$ particles $\mu_A$ describes the free energy needed to replace a particle $B$ with an additional $A$ particle, and is obtained from
$\begin{array} {l} {\mu_i = \left (\dfrac{\partial F}{\partial N_i} \right )_{T, V, \{N_{j \ne i}\}}} \ {\mu_A = k_B T (\ln x_A - \ln x_B) = -\mu_B} \end{array} \nonumber$
This curve illustrates the increasing challenge of finding available space as the packing fraction increases.
Intermolecular Interaction
To look at real systems, we now add interactions between particles by assigning an interaction energy $\omega$ between two cells which are in contact. The interaction energy can be positive (destabilizing) or negative (favorable).
With the addition of intermolecular interactions, each microstate will have a distinct energy, the canonical partition function can be obtained from eq. (2.1.1), and other thermodynamic properties follow.
In the case of a mixture, we assign separate interaction energies for each adjoining $A-A$, $B-B$, or $A-B$ pair in a given microstate: $\omega_{AA}, \omega_{BB}, \omega_{AB}$. How do we calculate the energy of a microstate? m is the total number of molecular contacts in the volume, and these can be divided into $A-A$, $B-B$, or $A-B$ contacts:
$m = m_{AA} + m_{BB} + m_{AB} \nonumber$
While $m$ is constant, the counts of specific contacts $m_{ij}$ vary by microstate. Then the energy of the mixture for the single $i^{th}$ microstate can be written as
$E_{\text {mix}}=m_{A A} \omega_{A A}+m_{B B} \omega_{B B}+m_{A B} \omega_{A B} \label{eq2.3.2}$
and the internal energy comes from an ensemble average of this quantity. An exact calculation of the internal energy from the partition function would require a sum over all possible configurations with their individual contact numbers. Instead, we can use a simpler, approximate approach that starts by expressing each term in eq. ($\ref{eq2.3.2}$) in terms of $m_{AB}$. We know:
$\begin{array} {rcl} {m_{AA}} & = & {\text{(Total contacts for A) - (Contacts of A with B)}} \ {} & = & {\dfrac{zN_A}{2} - \dfrac{m_{AB}}{2}} \end{array}$
$m_{B B}=\dfrac{z N_{B}}{2}-\dfrac{m_{A B}}{2}$
Then we have
$\begin{array} {rcl} {E_{\text{mix}}} & = & {\left (\dfrac{z\omega_{AA} N_A}{2} \right ) + \left (\dfrac{z\omega_{BB} N_B}{2} \right ) + m_{AB} \left (\omega_{AB} - \dfrac{\omega_{AA} + \omega_{BB}}{2} \right )} \ {} & = & {U_{\text{pure A}} + U_{\text{pure B}} + m_{AB} \Delta \omega} \end{array} \label{eq2.3.5}$
The last term in this expression is half the change of interaction energy to switch an $A-A$ and a $B-B$ contact to form two $A-B$ contacts:
$\Delta \omega = \left (\omega_{A B} - \dfrac{\omega_{A A} + \omega_{B B}}{2} \right)$
We also recognize that the first two terms are just the energy of the two pure liquids before mixing. These are calculated by taking the number of cells in the pure liquid ($N_i$) times the number of contacts per cell ($z$) and then divide by two, so you do not double count the contacts.
$U_{\text{pure, i}} = \dfrac{z \omega_{ii} N_{i}}{2}$
With these expressions, eq. ($\ref{eq2.3.5}$) becomes
$E_{\text{mix}} = U_{\text{pure A}} + U_{\text{pure B}} + m_{AB} \Delta \omega \nonumber$
This equation describes the energy of a microstate in terms of the number of $A-B$ contacts present $m_{AB}$.
At this point, this is not particularly helpful because it is not practical to enumerate all of the possible microstates and their corresponding $m_{AB}$. To simplify our calculation of $U_{\text{mix}}$, we make a "mean field approximation," which replaces $m_{AB}$ with its statistical average $\langle m_{AB} \rangle$:
$\begin{array} {rcl} {\langle m_{AB} \rangle} & = & {\text{(# of contact sites for A)} \times \text{(probability of contact site being B)}} \ {} & = & {(N_A z) \left (\dfrac{N_B}{N}\right ) = zx_A x_B N} \end{array}$
Then for the energy for the mixed state $U_{\text{mix}} = \langle E_{\text{mix}} \rangle$, we obtain:
$U_{\text{mix}} = U_{\text{pure A}} + U_{\text{pure B}} + x_A x_B Nk_B T \chi_{AB}$
Here we have introduced the unitless exchange parameter,
$\chi_{A B}=\dfrac{z}{k_{B} T} \left (\omega_{A B} - \dfrac{\omega_{A A}+\omega_{B B}}{2} \right ) = \dfrac{z \Delta \omega}{k_{B} T} \label{eq2.3.10}$
which expresses $\Delta \omega$ (the change in energy on switching a single $A$ and $B$ from the pure state to the other liquid) in units of $k_B T$. Dividing by $z$ gives the average interaction energy per contact.
$\begin{array} {l} {\chi_{AB} > 0 \to \text{unfavorable A-B interaction}} \ {\chi_{AB} < 0 \to \text{favorable A-B interaction}} \end{array} \nonumber$
We can now determine the change in internal energy on mixing:
$\begin{array} {rcl} {\Delta U_{\text{mix}} } & = & {(U_{\text{mix}} - U_{\text{pure A}} - U_{\text{pure B}})} \ {} & = & {x_A x_B N k_B T \chi_{AB}} \end{array} \label{eq2.3.11}$
Note $\Delta U_{mix}$ as a function of composition has its minimum value for a mixture with $x_A = 0.5$, when $\chi_{AB} < 0$.
Note that in the mean field approximation, the canonical partition function is
$Q = \dfrac{N!}{N_A! N_B!} q_A^{N_A} q_B^{N_B} \exp [-U_{\text{mix}}/k_B T]\nonumber$
We kept the internal molecular partition functions here for completeness, but for the simple particles in this model $q_A = q_B = 1$.
Free Energy of Mixing1
Using eqs. ($\ref{eq2.3.1}$) and ($\ref{eq2.3.11}$), we can now obtain the free energy of mixing
$\begin{array} {rcl} {\Delta F_{\text{mix}} } & = & {\Delta U_{\text{mix}} - T \Delta S_{\text{mix}} } \ {} & = & {Nk_B T (x_A x_B \chi_{AB} + x_A \ln x_A + x_B \ln x_B)} \end{array}\nonumber$
This function is plotted below as a function of mole fraction for different values of the exchange parameter. When there are no intermolecular interactions ($\chi_{AB} = 0$), the mixing is spontaneous for any mole fraction and purely entropic. Any strongly favorable $A-B$ interaction ($\chi_{AB} < 0$) only serves to decrease the free energy further for all mole fractions.
As $\chi_{AB}$ increases, we see the free energy for mixing rise, with the biggest changes for the 50/50 mixture. To describe the consequences, let’s look at the curve for $\chi_{AB} = 3$, for which certain compositions are miscible $(\Delta F_{\text{mix}} < 0)$ and others immiscible $(\Delta F_{\text{mix}} > 0)$.
Consider what would happen if we prepare a 50/50 mixture of this solution. The free energy of mixing is positive at the equilibrium composition of the $x_A= 0.5$ homogeneous mixture, indicating that the two components are immiscible. However, there are other mixture compositions that do have a negative free energy of mixing. Under these conditions the solution can separate into two phases in such a way that $\Delta F_{\text{mix}}$ is minimized. This occurs at mole fractions of $x_A$ = 0.07 & 0.93, which shows us that one phase will be characterized by $x_A \gg x_B$ and the other with $x_A \ll x_B$. If we prepare an unequal mixture with positive $\Delta F_{\text{mix}}$, for example $x_A = 0.3$, the system will still spontaneously phase separate although mass conservation will dictate that the total mass of the fraction with $x_A = 0.07$ will be greater than the mass of the fraction at $x_A = 0.93$. As $\chi_{AB}$ increases beyond 3, the mole fraction of the lesser component decreases as expected for the hydrophobic effect. Consider if $A$ = water and $B$ = oil. $\omega_{BB}$ and $\omega_{AB}$ are small and negative, $\omega_{AA}$ is large and negative, and $\chi_{AB} \gg 1$.
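The coexisting compositions quoted above can be reproduced by numerically locating the minima of $\Delta F_{\text{mix}}$ per particle for $\chi_{AB} = 3$ (an illustrative sketch, not from the text):

```python
import numpy as np

def f_mix(x, chi):
    """Delta F_mix per particle, in units of kB*T, for the binary lattice mixture."""
    return chi * x * (1 - x) + x * np.log(x) + (1 - x) * np.log(1 - x)

x = np.linspace(1e-4, 1 - 1e-4, 200001)
f = f_mix(x, chi=3.0)
left = x[np.argmin(f[x < 0.5])]               # x is sorted, so the mask selects a prefix
right = x[x > 0.5][np.argmin(f[x > 0.5])]
print(left, right)                            # ~0.07 and ~0.93
```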
Critical Behavior
Note that 50/50 mixtures with $2 < \chi_{AB} < 2.8$ have a negative free energy of mixing for forming a single homogeneous phase, yet the system can still lower the free energy further by phase separating. As seen in the figure, $\chi_{AB} = 2$ marks the crossover from one-phase to two-phase mixtures, which is the signature of a critical point. We can find the conditions for phase equilibria by locating the free energy minima as a function of $\chi_{AB}$, which leads to the phase diagrams as a function of $\chi_{AB}$ and $T$ below. The critical temperature for crossover from one- to two-phase behavior is $T_0$, and $\Delta \omega$ is the average differential change in interaction energy defined in eq. ($\ref{eq2.3.10}$).
Readings
1. K. Dill and S. Bromberg, Molecular Driving Forces: Statistical Thermodynamics in Biology, Chemistry, Physics, and Nanoscience. (Taylor & Francis Group, New York, 2010).
2. W. W. Graessley, Polymeric Liquids and Networks: Structure and Properties. (Garland Science, New York, 2004), Ch. 3.
_____________________________________
1. J. H. Hildebrand and R. L. Scott, Regular Solutions. (Prentice-Hall, Englewood Cliffs, N.J., 1962).
Water's Physical Properties
Water is a structured liquid. Its unique physical properties stem from its hydrogen bond network.
• On average, each molecule can donate two hydrogen bonds and accept two hydrogen bonds.
• Strong hydrogen bond (HB) interactions give preferential directionality along tetrahedral orientation.
• Large variation in HB distances and angles.
• Structural correlations last about 1–2 solvent shells, or <1 nm.
3.02: Water Dynamics
Water Dynamics
• Hydrogen bond distances and angles fluctuate with 200 and 60 femtosecond time scales, respectively.
• Hydrogen bonded structures reorganize in a collective manner on picosecond time scales (1–8 ps).
The water HB energy is tough to measure:
• 2-6 kcal mol\(^{-1}\) depending on the method used.
• These are \(\Delta H\) values for reorganization, but we do not know how many HBs are broken or formed in the process.
3.03: Electrical Properties of Pure Water
Electrical Properties of Pure Water
The motion of water’s dipoles guide almost everything that happens in the liquid. Two important contributions:
1. Permanent dipole moment of molecule lies along symmetry axis.
2. Induced dipole moments (polarization) along the hydrogen bonds. Strengthening hydrogen bond increases $r_{OH}$ and decreases $R_{OO}$, which increases the dipole moment. The dipole moment per molecule changes from 1.7 to 3.0 D going from gas phase to liquid.
Water Dielectric Response
Pure water is a strong dielectric medium, meaning that long-range electrostatic forces acting between two charges in water are dramatically reduced. The static dielectric constant is $\varepsilon = 80$, also known as the relative permittivity $\varepsilon_r = \varepsilon /\varepsilon_0$. The dielectric response is strongly frequency and temperature dependent. Motion of water charges encoded in complex dielectric constant ($\varepsilon$) or index of refraction ($\tilde{n}$).
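To illustrate what a dielectric constant of 80 means for ion–ion interactions, here is a small sketch (my own illustration; the 0.5 nm separation is an arbitrary choice) comparing the Coulomb energy of two elementary charges in vacuum and in water:

```python
import numpy as np

e = 1.602176634e-19       # C
eps0 = 8.8541878128e-12   # F/m
NA = 6.02214076e23        # 1/mol

def coulomb_energy(r_nm, eps_r=1.0):
    """Interaction energy (kJ/mol) of two elementary charges separated by r_nm
    nanometres in a medium of relative permittivity eps_r."""
    r = r_nm * 1e-9
    return e**2 / (4 * np.pi * eps0 * eps_r * r) * NA / 1000

print(coulomb_energy(0.5), coulomb_energy(0.5, eps_r=80))  # ~278 vs ~3.5 kJ/mol
```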
Water Autoionization and pH
• Protons and hydroxide govern acid base chemistry.
• Any water molecule in the bulk lives about 10 hours before dissociating.
• In a liter, a water molecule dissociates every 30 microseconds.
Protons in Water
• Structure of $H^+$ in water and the extent to which the excess charge is delocalized is still unresolved. It is associated strongly enough to describe as covalently interacting, but its time evolution is so rapid (<1 ps) that it is difficult to define a structure.
• Much higher mobility than expected by diffusion of a cation of similar size.
• Explained by Grotthus mechanism for transfer of proton to neighboring water molecules.
• $OH^-$ is also very mobile and acts as a proton acceptor from water.
4.01: Solvation
• 4.1: Solvation
Solvation describes the intermolecular interactions of a molecule or ion in solution with the surrounding solvent, which for our purposes will refer to water. Aqueous solvation influences an enormous range of problems in molecular biophysics, including (1) charge transfer and charge stabilization; (2) chemical and enzymatic reactivity; (3) the hydrophobic effect; (4) solubility, phase separation, and precipitation; (5) binding affinity; (6) self-assembly; and (7) transport processes in water.
• 4.2: Solvation Thermodynamics
Let’s consider the thermodynamics of an aqueous solvation problem. This will help identify various physical processes that occur in solvation, and identify limitations to this approach. Solvation is described as the change in free energy to take the solute from a reference state, commonly taken to be the isolated solute in vacuum, into dilute aqueous solution.
• 4.3: Solvation Dynamics and Reorganization Energy
04: Solvation
Solvation describes the intermolecular interactions of a molecule or ion in solution with the surrounding solvent, which for our purposes will refer to water. Aqueous solvation influences an enormous range of problems in molecular biophysics, including (1) charge transfer and charge stabilization; (2) chemical and enzymatic reactivity; (3) the hydrophobic effect; (4) solubility, phase separation, and precipitation; (5) binding affinity; (6) self-assembly; and (7) transport processes in water. The terms solute and solvent commonly apply to dilute mixtures in the liquid phase in which the solute (minor component) is dispersed into the solvent (major component). For this reason, the concept of solvation is also at times extended to refer to the influence of any surrounding environment in which a biomolecule is embedded, for instance, a protein or membrane.
There are numerous types of interactions and dynamical effects that play a role in solvation. Typically, solute–solvent interactions are dominated by electrostatics (interactions of charges, dipoles, and induced dipoles), as well as hydrogen bonding and repulsion (both of which have electrostatic components). Therefore there is a tendency to think about solvation purely in terms of these electrostatic interaction energies. A common perspective—polar solvation—emphasizes how the dipoles of a polar liquid can realign themselves to energetically stabilize solute charges, as illustrated here for the case of ion solvation in water. The extent of solute stabilization in the liquid is the reorganization energy.
Unlike most solvents, the presence of water as a solvent for biological molecules fundamentally changes their properties and behavior from the isolated molecule. This means that water influences the conformation of flexible molecules, and sometimes hydrogen bonding interactions with water can be strong enough that it is hard to discern where the boundary of solute ends and water begins. But there is also a significant energetic cost to disrupting water’s hydrogen bonding network in order to insert a solute into the liquid. Furthermore, the fluctuating hydrogen bond network of water introduces a significant entropy to the system which can be competitive or even the dominant contributor to the free energy of solvation. As a result, there are competing interactions involving both solute and water that act to restructure the solute and solvent relative to their isolated structures.
It is also important to remember that solvation is a highly dynamical process. Solvation dynamics refers to the time-dependent correlated motions of solute and solvent. How does a solvent reorganize in response to changes in solute charge distribution or structure? Conversely, how do conformational changes to the intermolecular configuration of the solvent (i.e., flow) influence changes in structure or charge distribution in the solute? The latter perspective views the solute as "slaved" to the solvent dynamics. These coupled processes result in a wide variety of time scales in the solvation of biological macromolecules, spanning \(10^{-14}\) to \(10^{-7}\) seconds.
Let’s consider the thermodynamics of an aqueous solvation problem. This will help identify various physical processes that occur in solvation, and identify limitations to this approach. Solvation is described as the change in free energy to take the solute from a reference state, commonly taken to be the isolated solute in vacuum, into dilute aqueous solution:
Solute($g$) $\to$ Solute($aq$)
Conceptually, it is helpful to break this process into two steps: (1) the energy required to open a cavity in the liquid, and (2) the energy to put the solute into the cavity and turn on the interactions between solute and solvent.
Each of these terms has enthalpic and entropic contributions:
$\begin{array} {rcl} {\Delta G_{\text{sol}}} & = & {\Delta H_{\text{sol}} - T \Delta S_{\text{sol}}} \ {\Delta G_{\text{sol}}} & = & {\Delta G_1 + \Delta G_2} \ {} & = & {\Delta H_1 - T\Delta S_1 + \Delta H_2 - T \Delta S_2} \end{array} \nonumber$
$\Delta G_1$: Free energy to open a cavity in water. We are breaking the strong cohesive intermolecular interactions in water ($\Delta H_1$), creating a void against constant pressure, and reducing the configurational entropy of the water hydrogen-bond network ($\Delta S_1$). Therefore $\Delta G_1$ is large and positive. The hydrophobic effect is dominated by this term. In atomistic models, cavities for biomolecules are commonly defined through the solute’s solvent accessible surface area (SASA). In order to account for excluded volume on the distance scale of a water molecule, the SASA can be obtained by rolling a sphere with radius 1.4 $\mathring{A}$ over the solute’s van der Waals surface.
$\Delta G_2$: Free energy to insert the solute into the cavity, turn on the interactions between solute and solvent. Ion and polar solvation is usually dominated by this term. This includes the favorable electrostatic and H-bond interactions ($\Delta H_2$). It also can include a restructuring of the solute and/or solvent at their interface due to the new charges. The simplest treatment of this process describes the solvent purely as a homogeneous dielectric medium and the solute as a simple sphere or cavity embedded with point charges or dipoles. It originated from the Born–Haber cycle first used to describe $\Delta H_{rxn}$ of gas-phase ions, and formed the basis for numerous continuum and cavity-in-continuum approaches to solvation.
Given the large number of competing effects involving solute, solvent, and intermolecular interactions, predicting the outcome of this process is complicated.
Looking at the cycle above illustrates many of the complications from this approach relevant to molecular biophysics, even without worrying about atomistic details. From a practical point of view, the two steps in this cycle can often have large magnitude but opposite sign, resulting in a high level of uncertainty about $\Delta G_{\text{sol}}$—even its sign! More importantly, this simplified cycle assumes that a clean boundary can be drawn between solute and solvent—the solvent accessible surface area. It also assumes that the influence of the solvent is perturbative, in the sense that the solvent does not influence the structure of the solute or that there is conformational disorder or flexibility in the solute and/or solvent. However, even more detailed thermodynamic cycles can be used to address some of these limitations:
$\Delta G_{1a}$: Free energy to create a cavity in water for the final solvated molecule.
$\Delta G_{1b}$: Free energy to induce the conformational change to the solute for the final solvated state.
$\Delta G_{2}$: Free energy to insert the solute into the cavity, turn on the interactions between solute and solvent. This includes turning on electrostatic interactions and hydrogen bonding, as well as allowing the solvent to reorganize around the solute:
$\Delta G_{2} = \Delta G_{\text{solute-solvent}} + \Delta G_{\text{solvent reorg}}\nonumber$
Configurational entropy may matter for each step in this cycle, and can be calculated using1
$S=-k_{B} \sum_{i} P_{i} \ln P_{i} \nonumber$
Here the sum is over microstate probabilities, which can be expressed in terms of the joint probability of the solute having a given conformation and the solvent having a given configuration around that solute structure. In step 1, one can average over the conformational entropy of the solvent for the shape of the cavity (1a) and the conformation of the solute (1b). Step 2 includes solvent configurational variation and the accompanying variation in interaction strength.
With a knowledge of solvation thermodynamics for different species, it becomes possible to construct thermodynamic cycles for a variety of related solvation processes:
1) Solubility. The equilibrium between the molecule in its solid form and in solution is quantified through the solubility product $K_{sp}$, which depends on the free energy change of transferring between these phases
2) Transfer free energy. The most common empirical way of quantifying hydrophobicity is to measure the partitioning of a solute between oil and water. The partitioning coefficient $P$ is related to the free energy needed to transfer a solute from the nonpolar solvent (typically octanol) to water (see the sketch after this list).
3) Bimolecular association processes
Binding with conformational selection
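For the transfer free energy in item 2, one commonly used relation (stated here as an assumption about sign and concentration conventions, since the text does not give the formula explicitly) is $\Delta G_{\text{transfer}}(\text{octanol} \to \text{water}) = RT \ln P$ with $P = c_{\text{octanol}}/c_{\text{water}}$:

```python
import numpy as np

R = 8.314  # J / (mol K)

def transfer_free_energy(logP, T=298.0):
    """Free energy (kJ/mol) to move a solute from octanol to water, assuming
    P = c_octanol / c_water and Delta G(oct -> water) = R*T*ln(P)."""
    return R * T * np.log(10.0) * logP / 1000.0

print(transfer_free_energy(3.5))   # ~20 kJ/mol for a fairly hydrophobic solute (log P = 3.5)
```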
____________________________________________
1. See C. N. Nguyen, T. K. Young and M. K. Gilson, Grid inhomogeneous solvation theory: Hydration structure and thermodynamics of the miniature receptor cucurbit[7]uril, J. Chem. Phys. 137 (4), 044101 (2012).
4.03: Solvation Dynamics and Reorganization Energy
Some of the practical challenges of describing solvation through thermodynamic cycles include dealing with strong solute–solvent interactions, flexible solutes, and explicit solvents. Additionally, it does not reflect the fact that solvation is a highly dynamic process involving motion of the solvent. Perhaps the most common example is in charge transfer processes (i.e., electrons and protons) in which water’s dipoles can act to drive and stabilize the position of the charge. For instance, consider the transfer of an electron from a donor to an acceptor in solution:
$\ce{D + A -> D^{+} + A^{-}} \nonumber$
We most commonly consider electron transfer as dependent on a solvent coordinate in which solvent reorganizes its configuration so that dipoles or charges help to stabilize the extra negative charge at the acceptor site. This type of collective coordinate is illustrated in the figure below. These concepts are reflected in Marcus’ theory of electron transfer. The free energy change to relax the solvent configuration after switching the charges in the initial configuration is known as the reorganization energy $\lambda$.
Why do oil and water not mix? What is hydrophobicity? First, the term is a misnomer. Greasy molecules that do not mix with water typically do have favorable interaction energies, i.e., $∆H_{int} < 0$. Walter Kauzmann first used the term "hydrophobic bonding" in 1954. This naming has been controversial from the beginning, but it has stuck presumably, because in this case $\Delta G$ is what determines the affinity of one substance for another rather than just $\Delta H$. Generally speaking, the entropy of mixing governs the observation that two weakly interacting liquids will spontaneously mix. However, liquid water’s intermolecular interactions are strong enough that it would prefer to hydrogen bond with itself than solvate nonpolar molecules. It will try to avoid disrupting its hydrogen bond network if possible.
The hydrophobic effect refers to the free energy penalty that one pays to solvate a weakly interacting solute. Referring to the thermodynamic cycle above, $\Delta G_{\text{sol}}$, the reversible work needed to solvate a hydrophobic molecule, is dominated by step 1, the process of forming a cavity in water. The free energy of solvating a hydrophobic solute is large and positive, resulting from two factors:
1. $\Delta S_{\text{sol}} < 0$. The entropy penalty of creating a cavity in water. We restrict the configurational space available to the water within the cavity. This effect and the entropy of mixing (that applies to any solvation problem) contribute to $\Delta S_1$.
2. $\Delta H_{\text{sol}} > 0$. The energy penalty of breaking up the hydrogen bond network ($\Delta H _1$) is the dominant contributor to the enthalpy. This can be estimated from a count of the net number of H-bonds that needs to be broken to accommodate the solute: $\Delta H_{\text{sol}}$ increases by 1-3 kcal mol$^{-1}$ per mole of hydrogen bonds broken. The interaction energy between a hydrocarbon and water ($\Delta H_2$) is weakly favorable as a result of dispersion interactions, but this is a smaller effect. (At close contact, van der Waals forces lower the energy by $\sim$ 0.1-1.0 kcal mol$^{-1}$). Therefore $\Delta H_{\text{sol}} \approx \Delta H_1$.
The net result is that $\Delta G_{\text{sol}}$ is large and positive, which is expected since water and oil do not mix.
These ideas were originally deduced from classical thermodynamics, and put forth by Frank and Evans (1945) in the "iceberg model", which suggested that water would always seek to fulfill as many hydrogen bonds as it could—wrapping the network around the solute. This is another misnomer, because the hydrophobic effect is a boundary problem about reducing configurational space, not actual freezing of fluctuations. Hydrogen bonds continue to break and reform in the liquid, but there is considerable excluded configurational space for this to occur. Let's think of this as solute-induced hydrogen-bond network reorganization.
Water Configurational Entropy
Let’s make an estimate of $\Delta G_{\text{sol}}$. Qualitatively, we are talking about limiting the configurational space that water molecules can adopt within the constraints of a tetrahedral potential.
So an estimate for the entropy of hydrophobic solvation, if these configurations are equally probable, is $\Delta S_{\text{sol}} = k_B \ln (\Omega_{\text{surf}} /\Omega_{\text{bulk}}) = -k_B \ln 2$ per hydrogen bond of lost configurational space:
$-T \Delta S_{\text{sol}}=k_{B} T \ln 2 \nonumber$
Evaluating at $300\ K$,
$\begin{array} {rcl} {-T \Delta S_{\text{sol}}} & = & {1.7 \text{ kJ/mol water molecules @ 300 K}} \ {} & = & {\text{0.4 kcal/mol water molecules}} \end{array}\nonumber$
This value is less than the typical enthalpy for hydrogen bond formation, which is another way of saying that the hydrogen bonds like to stay mostly intact, but have large amplitude fluctuations.
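A quick numerical check of the numbers quoted above (illustrative only):

```python
import numpy as np

kB, NA = 1.380649e-23, 6.02214076e23   # J/K and 1/mol

T = 300.0
dG = kB * T * np.log(2) * NA           # -T*dS per mole of hydrogen bonds, in J/mol
print(dG / 1000, dG / 4184)            # ~1.7 kJ/mol and ~0.4 kcal/mol
```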
Temperature Dependence of Hydrophobic Solvation
From $\Delta S_{\text{sol}}$ we expect $\Delta G_{\text{sol}}$ to rise with temperature as a result of the entropic term. This is a classic signature of the hydrophobic effect: The force driving condensation or phase-separation increases with temperature. Since the hydrogen-bond strength connectivity and fluctuations in water's hydrogen-bond network change with temperature, the weighting of enthalpic and entropic factors in hydrophobic solvation also varies with $T$. Consider a typical temperature dependence of $\Delta G_{\text{sol}}$ for small hydrophobic molecules:
The enthalpic and entropic contributions are two strongly temperature-dependent effects, which compete to result in a much more weakly temperature-dependent free energy. Note, this is quite different from the temperature dependence of chemical equilibria described by the van 't Hoff equation, which assumes that $\Delta H$ is independent of temperature. The temperature dependence of all of these variables can be described in terms of a large positive heat capacity.
$\begin{array} {rcl} {\Delta C_{\text{p, sol}}} & = & {\dfrac{\partial \Delta H_{\text{sol}}^0}{\partial T} = T\dfrac{\partial \Delta S_{\text{sol}}^0}{\partial T}} \ {} & = & {-T \dfrac{\partial^2 \Delta G_{\text{sol}}^0}{\partial T^2} \ \ \ \ (\text{Curvature of } \Delta G^0)} \end{array} \nonumber$
At low temperatures, with a stronger, more rigid hydrogen-bond network, the $\Delta S$ term dominates. But at high temperature, approaching boiling, the entropic penalty is far less.
To create a new interface there are enthalpic and entropic penalties. The influence of each of these factors depends on the size of the solute (R) relative to the scale of hydrogen bonding structure in the liquid (correlation length, $\ell$, ~0.5–1.0 nm).
For small solutes ($R < \ell$ ): Network deformation
The solute can insert itself into the hydrogen bond network without breaking hydrogen bonds. It may strain the HBs ($\Delta H > 0$) and reduce the configurational entropy ($\Delta S < 0$), but the liquid mostly maintains hydrogen bonds intact. We expect the free energy of this process to scale as volume of the solute $\Delta G_{\text{sol}} (R < \ell) \propto R^3$.
For large solutes, $R > \ell$: Creating an interface
The hydrogen bond network can no longer maintain all of its HBs between water molecules. The low energy state involves dangling hydrogen bonds at the surface. One in three surface water molecules has a dangling hydrogen bond, i.e., on average five of six hydrogen bonds of the bulk are maintained at the interface.
We expect $\Delta G_{\text{sol}}$ to scale as the surface area, $\Delta G_{\text{sol}} (R > \ell) \propto R^2$. Of course, large solutes also have a large volume displacement term. However, since the $R^3$ volume cost grows faster with solute radius than the $R^2$ interface cost, and the system will always seek to minimize its free energy, beyond a crossover radius it is cheaper to create an interface, and solvation of large solutes is dominated by the surface term.
Calculating $\Delta G$ for Forming a Cavity in Water
Let’s investigate the energy required to form cavities in water using a purely thermodynamic approach. To put a large cavity ($R > \ell$) into water, we are creating a new liquid–vapor interface for the cavity. So we can calculate the energy to create a cavity using the surface tension of water. Thermodynamically, the surface tension $\gamma$ is the energy required to deform a liquid–vapor interface: $\gamma=(\partial U / \partial a)_{N, V, T}$, where $a$ is the surface area. So we can write the change in energy as a result of inserting a spherical cavity into water as the product of the surface tension of water times the surface area of the cavity,
$U(R)=4 \pi R^{2} \gamma \nonumber$
In principle, the experimentally determined $\gamma$ should include entropic and enthalpic contributions to altering the hydrogen bond network at a surface, so we associate this with $\Delta G_{\text{sol}}$. For water at $300\ K$, $\gamma = 72\ pN/nm$. $\gamma$ varies from $75\ pN/nm$ at $0\ ^{\circ}C$ to $60\ pN/nm$ at $100\ ^{\circ}C$.
The surface tension can also be considered a surface energy per unit area, i.e., $\gamma = 72\ mJ/m^2$. To relate this to a molecular scale quantity, we can estimate the surface area per water molecule in a spherical cavity. The molecular volume of bulk water deduced from its density is $3.0 \times 10^{-26}\ \text{L/molecule}$, and the corresponding surface area per molecule deduced from geometric arguments is $\sim 10 \mathring{A}^2$. This area allows us to express $\gamma \approx 4.3\ kJ/mol$, which is on the order of the strength of hydrogen bonds in water.
For small cavities ($R < \ell$), the considerations are different since we are not breaking hydrogen bonds. Here we are just constraining the configurational space of the cavity and interface, which should scale as volume. We define
$\Delta G_{\text{sol}}(R < \ell) = \dfrac{4 \pi R^{3}}{3} \rho_{E} \nonumber$
where $\rho_E$ is an energy density1.
$\rho_{E} \approx 240 \times 10^{-9} \ pJ/nm^{3} = 240 \ pN/nm^{2}\nonumber$
Remembering that $-\partial G /\left.\partial V\right|_{N, T}=p$, the energy density corresponds to units of pressure with a value $\rho_E = 2.4 \times 10^3$ atm. If we divide $\rho_E$ by the molarity of water (55M), then we find it can be expressed as $4.4\ kJ/mol$, similar to the surface free energy value deduced.
So combining the surface and volume terms we write
$\Delta G_{\text{sol}}(R)=4 \pi \gamma R^{2}+\dfrac{4}{3} \pi R^{3} \rho_{E}\nonumber$
Alternatively, we can define an effective length scale (radius) for the scaling of this interaction
$\dfrac{\Delta G_{\text{sol}}}{k_{B} T} = \left ( \dfrac{R}{R_{\text{surf}}} \right )^{2} + \left (\dfrac{R}{R_{V}} \right) ^{3} \quad \quad \quad R_{\text{surf}}=\sqrt{\dfrac{k_{B} T}{4 \pi \gamma}} \quad R_{V} = \left (\dfrac{3 k_{B} T}{4 \pi \rho_{E}} \right)^{1/3} \nonumber$
where $R_{\text{surf}} = 0.067\ nm$ and $R_V = 0.16\ nm$ at $300\ K$. We can assess the crossover from volume-dominated to area-dominated hydrophobic solvation effects by setting these terms equal and finding that this occurs when $R = R_V^3/R_{\text{surf}}^2 = 3\gamma /\rho_E = 0.9\ nm$. The figure below illustrates this behavior and compares it with results of MD simulations of a sphere in water.
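To make these length scales concrete, the sketch below (our own illustration, not part of the text) evaluates the surface and volume terms using the values quoted above, $\gamma = 72\ mJ/m^2$ and $\rho_E \approx 2.4 \times 10^3\ atm$, and locates the crossover numerically.

```python
import numpy as np

k_B, T = 1.380649e-23, 300.0
kT = k_B * T
gamma = 0.072      # J/m^2  (= 72 pN/nm)
rho_E = 2.4e8      # J/m^3  (= 2.4e3 atm)

# characteristic length scales defined in the text
R_surf = np.sqrt(kT / (4 * np.pi * gamma))
R_V = (3 * kT / (4 * np.pi * rho_E)) ** (1 / 3)
print(f"R_surf = {R_surf*1e9:.3f} nm,  R_V = {R_V*1e9:.3f} nm")

# crossover where the surface and volume terms are equal
R = np.linspace(0.01e-9, 2.0e-9, 400)
G_surf = 4 * np.pi * gamma * R**2          # large-solute (interface) scaling
G_vol = (4 / 3) * np.pi * rho_E * R**3     # small-solute (volume) scaling
R_cross = R[np.argmax(G_vol > G_surf)]     # first radius where the volume term exceeds the surface term
print(f"crossover: {R_cross*1e9:.2f} nm (analytic 3*gamma/rho_E = {3*gamma/rho_E*1e9:.2f} nm)")
print(f"dG_sol at R = 1 nm ~ {G_surf[np.argmin(abs(R - 1e-9))]/kT:.0f} k_BT (surface term)")
```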
An alternate approach to describing the molar free energy of solvation for a hydrophobic sphere of radius $r$ equates it with the probability of finding a cavity of radius $r$:
$\Delta G = -k_B T \ln P(r)\nonumber$
$\begin{array} {rcl} {P(r)} & = & {\dfrac{e^{-U(r)/k_B T}}{\int_0^{\infty} e^{-U(r)/k_B T} dr} = \dfrac{\exp \left [\dfrac{-4\pi \gamma r^2}{k_B T} \right]}{\dfrac{1}{2} \sqrt{\dfrac{k_B T}{4\gamma}}}} \\ {} & = & {\dfrac{2}{\sqrt{\pi} R_{\text{surf}}} \exp [-r^2/R_{\text{surf}}^2]} \end{array} \nonumber$
This leads to an expression much like the one we previously described for large cavities. It is instructive to evaluate the mean cavity radius for water at $300\ K$:
$\langle r\rangle=\int_{0}^{\infty} dr\ r\ P(r)=\pi^{-1 / 2} R_{\text{surf}}=\dfrac{1}{2 \pi} \left (\dfrac{k_{B} T}{\gamma} \right)^{1/2}=0.038\ nm \nonumber$
This is very small, but agrees well with simulations. (There is not much free volume in water!) However, when you repeat this to find the variance in the size of the cavities $\delta r = (\langle r^2 \rangle - \langle r \rangle^2)^{1/2}$, we find $\delta r = 0.028\ nm$. So the fluctuations in size are of the same scale as the average and therefore quite large in a relative sense, but still less than the size of a water molecule.
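Both moments follow directly from the Gaussian form of $P(r)$ above; a minimal numerical check (our own sketch) is:

```python
import math

k_B, T, gamma = 1.380649e-23, 300.0, 0.072            # SI units (gamma = 72 mJ/m^2)
R_surf = math.sqrt(k_B * T / (4 * math.pi * gamma))   # m

r_mean = R_surf / math.sqrt(math.pi)                  # <r> = R_surf / sqrt(pi)
dr = R_surf * math.sqrt(0.5 - 1.0 / math.pi)          # sqrt(<r^2> - <r>^2), with <r^2> = R_surf^2 / 2

print(f"<r> = {r_mean*1e9:.3f} nm,  dr = {dr*1e9:.3f} nm")
# prints roughly <r> = 0.038 nm and dr = 0.029 nm, in line with the values quoted above
```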
Simulations give the equilibrium distribution of cavities in water
$\Delta \mu^0 = -k_B T \ln (P) \nonumber$
_______________________________________
1. D. Chandler, Interfaces and the driving force of hydrophobic assembly, Nature 437, 640–647 (2005).
Hydrophobic Collapse1
We see that hydrophobic particles in water will attempt to minimize their surface area with water by aggregating or phase separating. This process, known as hydrophobic collapse, is considered to be the dominant effect driving the folding of globular proteins.
Let’s calculate the free energy change for two oil droplets coalescing into one. The two smaller droplets both have a radius $R_0$, and the final droplet has a radius $R$.
$\Delta G_{\text{collapse}} = \Delta G_{\text{sol}} (R) - 2 \Delta G_{\text{sol}} (R_0) \nonumber$
The total volume of oil is constant—only the surface area changes. If the total initial surface area is $A_0$, and the final total surface area is $A$, then
$\Delta G_{\text{collapse}} = (A - A_0) \gamma \nonumber$
which is always negative since $A < A_0$ and $\gamma$ is positive.
This neglects the change in translational entropy due to two drops coalescing into one. Considering only the translational degrees of freedom of the drops, this should be approximately $\Delta S_{\text{collapse}} \approx k_B \ln (3/6)$. In other words, a small number compared to the surface term.
We can readily generalize this to a chain of $m$ beads, each of radius $R_0$, which collapse toward a single sphere with the same total volume. In this case, let’s consider how the free energy of the system varies with the number of beads that have coalesced.
Again the total volume is constant, $V=m\left(\dfrac{4}{3} \pi R_{0}^{3}\right)$, and the surface area changes. The initial surface area is $A_{0}=m\, 4 \pi R_{0}^{2}$ and the final surface area is $A_{\min }=4 \pi \left (R_{\min } \right)^{2} = m^{2/3}\, 4 \pi R_{0}^{2}$. Along the path, there is a drop of total surface area for each bead that coalesces. Let’s consider one path, in which an individual bead coalesces with one growing drop. The total surface area once $n$ of the $m$ beads have coalesced is
$A_n$ = (surface area of drop formed by $n$ coalesced beads) + (total area of remaining $m-n$ beads)
$\begin{array} {rcl} {A_n} & = & {(n^{2/3} 4\pi R_0^2) + (m - n) 4\pi R_0^2} \\ {} & = & {(m + n^{2/3} - n)4\pi R_0^2} \\ {} & = & {A_0 + (n^{2/3} - n)4\pi R_0^2} \end{array}\nonumber$
The free energy change for coalescing $n$ beads is
$\begin{array} {rcl} {\Delta G_{\text{coll}}} & = & {(A_n - A_0) \gamma } \\ {} & = & {(n^{2/3} - n) 4\pi R_0^2 \gamma} \end{array} \nonumber$
This free energy is plotted as a function of the bead number at fixed volume. This is an energy landscape that illustrates that the downhill direction of spontaneous change leads to a smaller number of beads. The driving force for the collapse of this chain can be considered to be the decrease in free energy as a function of the number of beads in the chain:
$\begin{array} {c} {f_{\text{coll}} = -\dfrac{\partial \Delta G_{\text{coll}}}{\partial r} \propto - \dfrac{\partial \Delta G_{\text{coll}}}{\partial n}} \\ {-\dfrac{\partial \Delta G_{\text{coll}}}{\partial n} = 4\pi R_0^2 \gamma \left (1 - \dfrac{2}{3} n^{-1/3} \right )} \end{array} \nonumber$
This is not a real force expressed in Newtons, but we can think of it as a pseudo-force, with the bead number acting as a proxy for the chain extension. If you want to extend a hydrophobic chain, you must do work against this. Written in terms of the extension of the chain $x$ (not the drop area $A$)
$w=-\int_{x_{0}}^{x} f_{ext} d x = \int_{x_{0}}^{x} \left (\dfrac{\partial \Delta G_{coll}}{\partial A_{n}}\right) \left(\dfrac{\partial A_{n}}{\partial x}\right) d x\nonumber$
Here we still have to figure out the relationship between extension and surface area, $\partial A_{n} / \partial x$.
Alternatively, we can think of the collapse coordinate as the number of coalesced beads, $n$.
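As a numerical illustration of this landscape, the sketch below evaluates $\Delta G_{\text{coll}}(n)$ and the corresponding pseudo-force for an assumed bead radius $R_0 = 0.5\ nm$ and a chain of $m = 20$ beads; these parameter values are illustrative choices of ours, not values from the text.

```python
import numpy as np

gamma = 0.072                 # J/m^2, water surface tension
kT = 1.380649e-23 * 300.0     # J
R0 = 0.5e-9                   # m, assumed bead radius (illustrative)
m = 20                        # total number of beads (illustrative)

n = np.arange(1, m + 1)                                             # number of coalesced beads
dG = (n ** (2 / 3) - n) * 4 * np.pi * R0**2 * gamma                 # free energy, from the expression above
force = 4 * np.pi * R0**2 * gamma * (1 - (2 / 3) * n ** (-1 / 3))   # pseudo-force, -d(dG_coll)/dn

print(" n    dG_coll (k_BT)   pseudo-force (k_BT per bead)")
for i in range(0, m, 5):
    print(f"{int(n[i]):3d}   {dG[i]/kT:12.1f}   {force[i]/kT:12.1f}")
```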
Hydrophobic Collapse and Shape Fluctuations
An alternate approach to thinking about this problem is in terms of the collapse of a prolate ellipsoid to a sphere as it seeks to minimize its surface area. We take the ellipsoid to have a long radius $\ell /2$ and a short radius $r$. The area and volume are then:
$\begin{array}{l} A=2 \pi\left(r^{2}+\dfrac{\ell^{2}}{4} \dfrac{\alpha}{\tan \alpha}\right) \quad \alpha=\cos ^{-1}\left(\dfrac{2 r}{\ell}\right) \\ V=\dfrac{2}{3} \pi r^{2} \ell \quad(\text {constant}) \\ \therefore \quad r^{2}=3 V / 2 \pi \ell \\ A=\dfrac{3 V}{\ell}+\pi \dfrac{\ell^{2}}{2} \dfrac{\alpha}{\tan \alpha} \end{array} \nonumber$
Let’s plot the free energy of this ellipsoid as a function of $\ell$. For $V = 4\ nm^3$ and $k_B T = 4.1\ pN \cdot nm$, we find $\ell_{\min} = 1.96\ nm$. Note that thermal fluctuations of order $k_B T$ allow the dimensions of the ellipsoid to fluctuate by $\sim 5\ \mathring{A}$.
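A numerical version of this construction is sketched below, using the area expression above with $V = 4\ nm^3$ and $\gamma = 72\ pN/nm$; the grid of $\ell$ values and the printed elongations are our own illustrative choices.

```python
import numpy as np

gamma_kT = 72.0 / 4.1      # surface tension in k_BT/nm^2 (k_BT ~ 4.1 pN nm at 300 K)
V = 4.0                    # nm^3, fixed droplet volume

# scan long-axis lengths just above the spherical limit (2R ~ 1.97 nm for V = 4 nm^3)
ell = np.linspace(1.98, 3.0, 200)
r = np.sqrt(3 * V / (2 * np.pi * ell))               # short radius at fixed volume
alpha = np.arccos(np.clip(2 * r / ell, -1.0, 1.0))   # alpha > 0 over this grid
A = 2 * np.pi * (r**2 + (ell**2 / 4) * alpha / np.tan(alpha))

G = gamma_kT * (A - A.min())                         # k_BT, relative to the most compact shape
l_min = ell[np.argmin(A)]
print(f"most compact shape at l ~ {l_min:.2f} nm (spherical limit)")
for dl in (0.2, 0.5):
    i = int(np.argmin(np.abs(ell - (l_min + dl))))
    print(f"elongation by {dl:.1f} nm costs ~{G[i]:.1f} k_BT")
```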
Readings
1. N. T. Southall, K. A. Dill and A. D. J. Haymet, A view of the hydrophobic effect, J. Phys. Chem. B 106, 521–533 (2002).
2. D. Chandler, Interfaces and the driving force of hydrophobic assembly, Nature 437, 640–647 (2005).
3. G. Hummer, S. Garde, A. E. García, M. E. Paulaitis and L. R. Pratt, Hydrophobic effects on a molecular scale, J. Phys. Chem. B 102, 10469–10482 (1998).
4. B. J. Berne, J. D. Weeks and R. Zhou, Dewetting and hydrophobic interaction in physical and biological systems, Annu. Rev. Phys. Chem. 60, 85–103 (2009).
____________________________________________
1. See K. Dill and S. Bromberg, Molecular Driving Forces: Statistical Thermodynamics in Biology, Chemistry, Physics, and Nanoscience. (Taylor & Francis Group, New York, 2010), p. 675.
Electrical Properties of Water and Aqueous Solutions
We want to understand the energetics, electrical properties, and transport of ions and charged molecules in water. These are strong forces. Consider the example of $\ce{NaCl}$ dissociation: in the gas phase, the dissociation energy is $\Delta H_{\text{ionization}} \approx 270\ kJ/mol$ and
$K_{\text {ionization }}(\text { gas })=\dfrac{\left[\mathrm{Na}^{+}\right]\left[\mathrm{Cl}^{-}\right]}{[\mathrm{NaCl}]} \approx 10^{-89} \nonumber$
In solution, this process [$\ce{NaCl} (aq) \to \text{Na}^+ (aq) + \text{Cl}^- (aq)$] occurs spontaneously; the solubility product for $\ce{NaCl}$ is $K_{\text{sp}} = [\text{Na}^+ (aq)][\text{Cl}^- (aq)] / [\ce{NaCl} (aq)] = 37$. Similarly, water molecules are covalently bonded hydrogen and oxygen atoms, but we know that the internal forces in water can autoionize a water molecule:
$K_{\text {ionization }}(\text { gas })=\left[\mathrm{H}^{+}\right]\left[\mathrm{OH}^{-}\right] \approx 10^{-75} \text { and } K_{W}\left(\mathrm{H}_{2} \mathrm{O}\right) = \left[\mathrm{H}^{+}\right] \left [\mathrm{OH}^{-} \right]=10^{-14} \nonumber$
These tremendous differences originate in the huge collective electrostatic forces that are present in water. “Polar solvation” refers to the manner in which water dipoles stabilize charges.
These dipoles are simplifications of the rearrangements of water’s structure to accommodate and lower the energy of the ion. It is important to remember that water is a polarizable medium in which hydrogen bonding dramatically modifies the electrostatic properties.
Electrostatics
Let’s review a number of results from classical electrostatics. The interactions between charged objects can be formulated using force, the electric field, or the electrostatic potential. The potential is our primary consideration when discussing free energies in thermodynamics and the Hamiltonian in statistical mechanics. To describe these, consider the interaction between two ions $A$ and $B$, separated by a distance $r_{AB}$, with charges $q_A$ and $q_B$.
Force and Work
Coulomb’s Law gives the force that $B$ exerts on $A$.
$\boldsymbol{f}_{A B}=\dfrac{1}{4 \pi \varepsilon} \dfrac{q_{A} q_{B}}{r_{A B}^{2}} \hat{r}_{A B}\nonumber$
$\hat{r}_{AB}$ is a unit vector pointing from $\mathbf{r}_B$ to $\mathbf{r}_A$. A useful identity to remember for calculations is
$\dfrac{e^{2}}{4 \pi \varepsilon_{0}}=230\ pN \cdot nm^{2}\nonumber$
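A quick numerical check of this identity, and of how strongly a high-dielectric medium suppresses the interaction, is sketched below (standard physical constants; the choice of $r = 0.7\ nm$ is an illustrative assumption of ours).

```python
import math

e = 1.602176634e-19        # C
eps0 = 8.8541878128e-12    # F/m
kT = 1.380649e-23 * 300.0  # J

c = e**2 / (4 * math.pi * eps0)                      # J*m
print(f"e^2/(4 pi eps0) = {c/1e-30:.0f} pN nm^2")    # ~231, i.e. the ~230 quoted above

r = 0.7e-9                                           # illustrative separation
for eps_r in (1, 80):
    U = c / (eps_r * r)                              # Coulomb energy of two unit charges
    print(f"U(r = 0.7 nm, eps_r = {eps_r:3d}) = {U/kT:5.1f} k_BT")
```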
For thermodynamic purposes it is helpful to calculate the reversible work for a process. Electrical work comes from moving charges against a force
$d w=-\boldsymbol{f} \cdot d \mathbf{r} \nonumber$
As long as $q$ and $\varepsilon$ are independent of $r$, and the process is reversible, the work depends only on $r$ and is independent of path. To move particle $B$ from point 1 at a separation $r_1$ to point 2 at a separation $r_2$ requires the work
$w_{1 \rightarrow 2}=\dfrac{1}{4 \pi \varepsilon} q_{A} q_{B}\left(\dfrac{1}{r_{2}}-\dfrac{1}{r_{1}}\right) \nonumber$
and if the path returns to the initial position, $w_{rev} = 0$.
Field, E
The electric field is a vector quantity that describes the action of charges at a point in space. The field from charged particle $B$ at point $A$ is
$\mathbf{E}_{A B}\left(\mathbf{r}_{A}\right)=\dfrac{1}{4 \pi \varepsilon} \dfrac{q_{B}}{r_{A B}^{2}} \hat{r}_{A B} \nonumber$
$\mathbf{E}_{AB}$ is related to the force that particle $B$ exerts on a charged test particle $A$ with charge $q_A$ through
$\mathbf{f}_{A} = q_{A} \mathbf{E}_{A B}\left(\mathbf{r}_{A}\right)\nonumber$
While the force at point $A$ depends on the sign and magnitude of the test charge, the field does not. More generally, the field exerted by multiple charged particles at point $\mathbf{r}_A$ is the vector sum of the field from multiple charges ($i$):
$\mathbf{E}\left(\mathbf{r}_{A}\right)=\sum_{i} \mathbf{E}_{A i}\left(\mathbf{r}_{A}\right)=\dfrac{1}{4 \pi \varepsilon} \sum_{i} \dfrac{q_{i}}{r_{A i}^{2}} \hat{r}_{A i}\nonumber$
where $r_{Ai} = \left |\mathbf{r}_A - \mathbf{r}_i \right|$ and the unit vector $\hat{r}_{Ai} = (\mathbf{r}_A - \mathbf{r}_i)/r_{Ai}$. Alternatively for a continuum charge density $\rho_q (\mathbf{r})$,
$\mathbf{E}\left(\mathbf{r}_{A}\right)=\dfrac{1}{4 \pi \varepsilon} \int \rho_{q}(\mathbf{r}) \dfrac{\left(\mathbf{r}_{A}-\mathbf{r}\right)}{\left|\mathbf{r}_{A}-\mathbf{r}\right|^{3}} d \mathbf{r}\nonumber$
where the integral is over a volume.
Electrostatic Potential, $\Phi$
For thermodynamics and statistical mechanics, we wish to express electrical interactions in terms of an energy or electrostatic potential. While the force and field are vector quantities, the electrostatic potential $\Phi$ is a scalar quantity which is related to the electric field through
$\mathbf{E}=-\bar{\nabla} \Phi \nonumber$
It has units of energy per unit charge. The electrostatic potential at point $\mathbf{r}_A$, which results from a point charge at $\mathbf{r}_B$, is
$\Phi \left(r_{A}\right)=\dfrac{1}{4 \pi \varepsilon} \dfrac{q_{B}}{r_{A B}}$
The electric potential is additive in the contribution from multiple charges:
$\Phi\left(r_{A}\right)=\dfrac{1}{4 \pi \varepsilon} \sum_{i} \dfrac{q_{i}}{r_{A i}} \quad \text { or } \quad \Phi\left(r_{A}\right)=\dfrac{1}{4 \pi \varepsilon} \int \dfrac{\rho_{q}(\mathbf{r})}{\left|\mathbf{r}_{A}-\mathbf{r}\right|} d \mathbf{r} \nonumber$
The electrostatic energy of a particle $A$ as a result of the potential due to particle $B$ is
$U_{A B}\left(r_{A}\right)=q_{A} \Phi\left(r_{A}\right)=\dfrac{1}{4 \pi \varepsilon} \dfrac{q_{A} q_{B}}{r_{A B}}\nonumber$
Note that $U_{A B}=q_{A} \Phi\left(r_{A}\right)=q_{B} \Phi\left(r_{B}\right)=\tfrac{1}{2}\left(q_{A} \Phi\left(r_{A}\right)+q_{B} \Phi\left(r_{B}\right)\right)$, so we can generalize this to calculate the potential energy stored in a collection of multiple charges as
\begin{aligned} U &=\dfrac{1}{2} \sum_{i} q_{i} \Phi\left(r_{A i}\right) \\ &=\frac{1}{2} \int \Phi_{A}\left(\mathbf{r}_{A}\right) \rho_{q}\left(\mathbf{r}_{A}\right) d \mathbf{r}_{A} \end{aligned} \nonumber
6.02: Dielectric Constant and Screening

Charge interactions are suppressed in a polarizable medium, to an extent characterized by the dielectric constant. The potential energy for interacting charges is long range, scaling as $r^{-1}$.
$U(r)=\dfrac{q_{A} q_{B}}{4 \pi} \dfrac{1}{\varepsilon r}\nonumber$
You can think of $\varepsilon$ as scaling the potential interaction distance $U \propto (\varepsilon r)^{-1}$. Here we equate the dielectric constant and the relative permittivity $\varepsilon_r = \varepsilon / \varepsilon_0$, which is a unitless quantity equal to the ratio of the sample permittivity $\varepsilon$ to the vacuum permittivity $\varepsilon_0$.
The dielectric constant is used to treat the molecular structure and dynamics of the charge environment in a mean sense, describing how the polarizable medium screens the interaction of charges. Making use of a dielectric constant implies a separation of the charges of the system into a few important charges and the environment, which encompasses countless charges and their associated degrees of freedom.
Two treatments of the electrostatic force that charge b exerts on charge a in a dense medium:
Continuum
$f_{A}=\dfrac{1}{4 \pi \varepsilon_{0}} \dfrac{q_{a} q_{b}}{\varepsilon_{r} r^{2}}\nonumber$
Explicit Charges
\begin{aligned} f_{A} &=\frac{1}{4 \pi \varepsilon_{0}}\left[\frac{q_{a} q_{b}}{r^{2}}+\sum_{i=1}^{N} \frac{q_{a} q_{i}}{r_{a i}^{2}}\right] \\ &=\frac{1}{4 \pi \varepsilon_{0}} \frac{q_{a} q_{b}}{r^{2}}\left[1+\sum_{i=1}^{N} \frac{q_{i}}{q_{b}} \frac{r^{2}}{r_{a i}^{2}}\right] \end{aligned} \nonumber
$i$: charged particles of the environment
6.03: Free Energy of Ions in Solution
Let’s return to our continuum model of the solvation free energy and apply it to solvating an ion. As discussed earlier, $\Delta G_{\text{sol}}$ will require forming a small cavity in water and turning on the interactions between the ion and water. We can calculate the energy for solvating an ion in a dielectric medium as the reversible work needed to charge the ion from a charge of 0 to its final value $q$ within the dielectric medium:
$w = \int_{0}^{q} \Phi_{\text{ion}} dq$
As we grow the charge, it will induce a response from the dielectric medium (a polarization) that scales with electrostatic potential: $\Phi = q / 4\pi \varepsilon r$. We take the ion to occupy a spherical cavity with radius $a$. Although we can place a point charge at the center of the sphere, it is more easily solved assuming that the charge $q$ is uniformly distributed over the surface of the sphere. Then the electrostatic potential at the surface of the sphere is $q/4\pi \varepsilon a$ and the resulting work is
$w = \dfrac{q^2}{8\pi \varepsilon a}\nonumber$
In a similar manner, we can calculate the energy it takes to transfer an ion from one medium with $\varepsilon_1$ to another with $\varepsilon_2$. We first discharge the ion in medium 1, transfer, and recharge the ion in medium 2. The resulting work, the Born transfer energy, is
$\Delta w = \dfrac{q^2}{8\pi a} \left (\dfrac{1}{\varepsilon_2} - \dfrac{1}{\varepsilon_1} \right ) \nonumber$
If you choose to distribute the charge uniformly through the spherical cavity, the prefactor $q^2 /8\pi a$ becomes $3q^2 /20\pi a$.
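The expressions above are simple to evaluate numerically. In the sketch below, the ionic radius $a = 2\ \mathring{A}$ and the low-dielectric medium $\varepsilon_r = 2$ (a hydrocarbon-like environment) are assumed, illustrative values of ours, not numbers taken from the text.

```python
import math

e = 1.602176634e-19
eps0 = 8.8541878128e-12
N_A = 6.02214076e23
a = 2.0e-10          # m, assumed ionic (cavity) radius -- illustrative
q = e                # monovalent ion

def born_charging(eps_r):
    """Reversible work to charge a sphere of radius a to charge q in a medium eps_r (J)."""
    return q**2 / (8 * math.pi * eps0 * eps_r * a)

w_water = born_charging(80.0)
dw = born_charging(2.0) - born_charging(80.0)    # transfer from water to a low-dielectric medium

print(f"charging work in water (eps_r = 80):  {w_water*N_A/1000:6.1f} kJ/mol")
print(f"Born transfer energy, eps_r 80 -> 2:  {dw*N_A/1000:6.1f} kJ/mol")
```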
6.04: Ion Distributions in Electrolyte Solution
To gain some insight into how ions in aqueous solution at physiological temperatures behave, we begin with the thermodynamics of homogeneous ionic solutions. Let’s describe the distribution of ions relative to one another as a function of the concentration and charge of the ions. The free energy for an open system containing charged particles can be written
$dG = -SdT + Vdp + \sum_{j = 1}^{N_{\text{comp}}} \mu_j dN_j + \sum_{i = 1}^{N_{\text{charges}}} \Phi (x) dq_i$
$\mu_j$ and $N_j$ are the chemical potential and the number of solutes of type $j$, in which the solute may or may not be charged and where the contribution of electrostatics is not included. This term primarily reflects the entropy of mixing in electrolyte solutions. The sum over $i$ extends only over the charges $q_i$, which are under the influence of a spatially varying electrostatic potential. This reflects the enthalpic contribution to the free energy from ionic interactions.
In our case, we will assume that ions are the only solutes present, so that the sum over $i$ and $j$ are the same and this extends over all cations and anions in solution. We can relate the charge and number density through
$q_i = z_i e N_i\nonumber$
where $z$ is the valency of the ion $(\pm 1, 2, ...)$ and e is the fundamental unit of charge. Then expressing $dq_i$ in terms of $dN_i$, we can write the free energy under constant temperature and pressure conditions as
$dG|_{T, p} = \sum_i (\mu_i + z_i e \Phi ) d N_i = \sum_i \mu_i' dN_i \nonumber$
Here $\mu_i'$ is known as the electrochemical potential.
To address the concentration dependence of the electrochemical potential, we remember that
$\mu_i = \mu_i^{\circ} + k_B T \ln C_i\nonumber$
where $C_i$ is the concentration of species $i$ referenced to standard state, $C^{\circ} = 1M$. (Technically ionic solutions are not ideal and $C_i$ is more accurately written as an activity.) Equivalently we can relate concentration to the number density of species $i$ relative to standard state. Then the electrochemical potential of species $i$ is
$\mu_i' (x) = \mu_i^{\circ} + k_B T \ln C_i (x) + z_i e \Phi (x) \label{eq6.4.2}$
Here we write $C(x)$ to emphasize that there may be a spatial concentration profile. At equilibrium, the chemical potential must be the same at all points in space. Therefore, we equate the electrochemical potential at two points:
$\mu '(x_2) = \mu '(x_1)\nonumber$
So from eq. ($\ref{eq6.4.2}$)
$\ln \dfrac{C(x_2)}{C(x_1)} = \dfrac{-ze \Delta \Phi}{k_B T} \label{eq6.4.3}$
where the potential difference is
$\Delta \Phi = \Phi (x_2) - \Phi (x_1).\nonumber$
Equation ($\ref{eq6.4.3}$) is one version of the Nernst Equation, which describes the interplay of the temperature-dependent entropy of mixing the ions and their electrostatic interactions. Rewriting it to describe $\Delta \Phi$ as a function of concentration is sometimes used to calculate the transmembrane potential as a function of ion concentrations on either side of the membrane.
The Nernst equation predicts Boltzmann statistics for the spatial distribution of charged species, i.e., that concentration profiles around charged objects drop away exponentially in the interaction energy
$\begin{array} {rcl} {\Delta U (x)} & = & {ze \Delta \Phi (x)} \\ {C(x)} & = & {C(x_0) e^{-\Delta U (x)/k_B T}} \end{array}$
This principle will hold whether we are discussing the ion concentration profile around a macroscopic object, like a charged plate, or for the average concentration profiles about a single ion. At short distances, oppositely charged particles will have their concentrations enhanced, whereas equally charged objects will be depleted. At short range, the system is dominated by the electrostatic interaction between charges, whereas at long distance, the entropy of mixing dominates.
For the case of charged plates:
Bjerrum Length, $\ell_B$
The distance at which electrostatic interaction energy between two charges equals $k_BT$.
For $\pm 1$ charges $\ell_B = \dfrac{1}{4\pi \varepsilon} \dfrac{e^2}{k_B T}$
At $T = 300\ K$, $k_B T/e = 25 \ mV$, and:
For $\varepsilon_r = 1$: $\ell_B = 560 \ \mathring{A}$
For $\varepsilon_r = 80$: $\ell_B = 7.0 \ \mathring{A}$
For $\ell > \ell_B$: electrostatic interactions are largely screened, and motion is primarily Brownian.
For $\ell < \ell_B$: attractive and repulsive forces dominate. The Bjerrum length gives the ion-pairing threshold: for $\ell_B = 7.0\ \mathring{A}$, the corresponding ion concentration is approximately $6.9 \times 10^{26}\ m^{-3}$, or $\sim 1\ M$.
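These estimates are easy to reproduce; the sketch below computes $\ell_B$ for $\varepsilon_r = 1$ and 80, and the concentration at which the mean ion spacing reaches the Bjerrum length (taken here, as an assumption on our part, as one ion per sphere of radius $\ell_B$).

```python
import math

e, eps0 = 1.602176634e-19, 8.8541878128e-12
kT = 1.380649e-23 * 300.0
N_A = 6.02214076e23

def bjerrum_length(eps_r):
    return e**2 / (4 * math.pi * eps0 * eps_r * kT)    # m

for eps_r in (1, 80):
    print(f"eps_r = {eps_r:3d}:  l_B = {bjerrum_length(eps_r)*1e10:6.1f} Angstrom")

l_B = bjerrum_length(80)
C_pair = 3 / (4 * math.pi * l_B**3)                    # one ion per sphere of radius l_B
print(f"ion-pairing concentration ~ {C_pair:.1e} m^-3 = {C_pair/(N_A*1000):.1f} M")
```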
Poisson–Boltzmann Equation1
The Poisson–Boltzmann Equation (PBE) is used to evaluate charge distributions for ions around charged surfaces. It brings together the description of the electrostatic potential around a charged surface with the Boltzmann statistics for the thermal ion distribution. Gauss' equation relates the flux of electric field lines through a closed surface to the charge density within the volume: $\nabla \cdot \bar{E} = \rho /\varepsilon$. The Poisson equation can be obtained by expressing this in terms of the electrostatic potential using $\bar{E} = -\nabla \Phi$
$-\nabla^2 \Phi = \dfrac{\rho}{\varepsilon} \label{eq6.5.1}$
Here $\rho$ is the bulk charge density for a continuous medium.
We seek to describe the charge distribution of ions about charged surfaces of arbitrary geometry. The surface will be described by a surface charge density $\sigma$. We will determine $\rho (r)$, which is proportional to the number density or concentration of ions
$\rho (r) = \sum_{i} z_i eC_i (r) \label{eq6.5.2}$
where the sum is over all ionic species in the solution, and $z_i$ is the ion valency, which may take on positive or negative integer values. Drawing from the Nernst equation, we propose an ion concentration distribution of the Boltzmann form
$C_i (r) = C_{0, i} e^{-z_i e\Phi (r)/k_B T}$
Here we have defined the bulk ion concentration as $C_0 = C(r \to \infty)$, since $\Phi \to 0$ as $r \to \infty$. Note that the ionic composition is taken to obey the net charge neutrality condition
$\sum_i z_i C_{0, i} = 0 \label{eq6.5.4}$
The expressions above lead to the general form of the PBE:
$-\nabla^2 \Phi = \dfrac{e}{\varepsilon} \sum_i z_i C_{0, i} \exp [-z_i e \Phi / k_B T] \label{eq6.5.5}$
This is a nonlinear differential equation for the electrostatic potential and can be solved for the charge distribution of ions in solution for various boundary conditions. This can explain the ion distributions in aqueous solution about a charged structure. For instance:
• Surface (membrane): $-\dfrac{\partial^2 \Phi}{\partial x^2} = \dfrac{e}{\varepsilon} \sum_i z_i C_{0, i} e^{-z_i e \Phi (x) /k_B T}$
• Sphere (protein): $-\dfrac{1}{r^2} \dfrac{\partial}{\partial r} r^2 \dfrac{\partial \Phi}{\partial r} = \dfrac{e}{\varepsilon} \sum_i z_i C_{0, i} e^{-z_i e \Phi (r) /k_B T}$
• Cylinder (DNA): $-\left( \dfrac{1}{r} \dfrac{\partial}{\partial r} r \dfrac{\partial \Phi}{\partial r} + \dfrac{\partial^2 \Phi}{\partial z^2} \right) = \dfrac{e}{\varepsilon} \sum_i z_i C_{0, i} e^{-z_i e \Phi (r, z) /k_B T}$
These expressions only vary in the form of the Laplacian $\nabla^2$. They are solved by considering two boundary conditions: (1) $\Phi (\infty) = 0$ and (2) the surface charge density $\sigma /\epsilon = -\nabla \Phi$. We will examine the resulting ion distributions below.
In computational studies, the interactions of a solute with water and electrolyte solutions are often treated with "implicit solvent", a continuum approximation. Solving the PBE is one approach to calculating the effect of implicit solvent. The electrostatic free energy is calculated from $\Delta G_{\text{elec}} = \tfrac{1}{2} \sum_i ez_i \Phi_i$ and the electrostatic potential is determined from the PBE.
As a specific case of the PBE, let’s consider the example of a symmetric electrolyte, obtained from dissolving a salt that has positive and negative ions with equal valence $(z_+ = -z_- = z)$, resulting in equal concentration of the cations and anions $(C_{0, +} = C_{0, -} = C_0)$, as for instance when dissolving NaCl. Equation ($\ref{eq6.5.2}$) is used to describe the interactions of ions with the same charge (co-ions) versus the interaction of ions with opposite charge (counterions). For counterions, $z$ and $\Phi$ have opposite signs and the ion concentration should increase locally over the bulk concentration. For co-ions, $z$ and $\Phi$ have the same sign and we expect a lowering of the local concentration over bulk. Therefore, we expect the charge distribution to take a form
$\begin{array} {rcl} {\rho } & = & {-ze C_0 (e^{ze\Phi /k_B T} - e^{-ze\Phi /k_B T})} \\ {} & = & {-2zeC_0 \text{sinh} \left (\dfrac{ze\Phi}{k_B T} \right )} \end{array}$
Remember: $2\text{sinh} (x) = e^x - e^{-x}$. Then substituting into eq. ($\ref{eq6.5.1}$), we arrive at a common form of the PBE2
$\nabla^2 \Phi = \dfrac{2zeC_0}{\varepsilon} \text{sinh} \left (\dfrac{ze\Phi}{k_B T} \right )$
________________________________________
1. M. Daune, Molecular Biophysics: Structures in Motion. (Oxford University Press, New York, 1999); M. B. Jackson, Molecular and Cellular Biophysics. (Cambridge University Press, Cambridge, 2006).
2. Alternate forms in one dimension:
$\dfrac{\partial^2 \Phi}{\partial x^2} = \dfrac{2 e C_0}{\varepsilon} \text{sinh} \left (\dfrac{e\Phi}{k_B T} \right ) = \dfrac{k_B T}{e} \dfrac{1}{\lambda_D^2} \text{sinh} \left (\dfrac{e\Phi}{k_B T} \right ) = \dfrac{8\pi k_B T}{e} \ell_B C_0 \text{sinh} \left (\dfrac{e\Phi}{k_B T} \right ) \nonumber$
6.06: Debye–Hückel Theory

Since it is nonlinear, the PBE is not easy to solve, but for certain types of problems we can make approximations to help. The Debye–Hückel approximation holds for small electrostatic potential or high temperature conditions such that
$\dfrac{ze\Phi}{k_B T} \ll 1 \nonumber$
This is the regime in which the entropy of mixing dominates the electrostatic interactions between ions. In this limit, we can expand the exponential in eq. (6.5.5) as $\exp [-ze \Phi /k_B T] \approx 1 - ze \Phi /k_B T$. The leading term in the resulting sum drops because of the charge neutrality condition, eq. (6.5.4). Keeping the second term in the expansion leads to
$\nabla^2 \Phi = \kappa^2 \Phi \label{eq6.6.1}$
where
$\kappa^2 = \dfrac{2e^2}{\varepsilon k_B T} I \nonumber$
and the ionic strength, $I$, is defined as
$I = \dfrac{1}{2} \sum_i C_{0, i} z_i^2 \nonumber$
Looking at eq. ($\ref{eq6.6.1}$), we see that the Debye–Hückel approximation linearizes the PBE. It is known as the Debye–Hückel equation, or the linearized PBE. For the case of the 1:1 electrolyte solution described in the previous section, we again obtain eq. ($\ref{eq6.6.1}$) by using $\text{sinh} (x) \approx x$ as $x \to 0$, with
$\kappa^2 = \dfrac{2z^2 e^2 C_0}{\varepsilon k_B T} = 8\pi z^2 C_0 \ell_B \nonumber$
The constant $\kappa$ has units of inverse distance, and its inverse is known as the Debye length $\lambda_D = \kappa^{-1}$. The Debye length sets the distance scale over which the electrostatic potential decays, i.e., the distance over which charges are screened from one another. For the symmetric electrolytes
$\lambda_D = \kappa^{-1} = \sqrt{\dfrac{\varepsilon k_B T}{2z^2 e^2 C_0}}$
As an example: 1:1 electrolytes in $\text{H}_2\text{O}$: $\varepsilon = 80$; $z_+ = -z_- = 1$; $T = 300\ K$ leads to
$\begin{array} {ll} {C_0 = 100\ mM} & {\lambda_D = 9.6\ \mathring{A}} \\ {C_0 = 10\ mM} & {\lambda_D = 30.4\ \mathring{A}} \end{array} \nonumber$
$\lambda_D (\mathring{A}) \approx 3.04 \cdot [C_0 (M)]^{-1/2} \nonumber$
The Debye–Hückel approximation holds for electrostatic potentials that are small relative to $k_B T$, i.e., at distances $r > \lambda_D$. For instance, it is fine for describing the ion distribution about a large protein or vesicle, but not for water in a binding pocket.
The variation of Debye length with concentrations of electrolytes. Reprinted from P. Ghosh http://nptel.ac.in/courses/103103033/module3/lecture3.pdf.
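The numbers above can be reproduced with the short sketch below; small differences between the exact formula (with $\varepsilon_r = 80$ and $T = 300\ K$) and the $3.04/\sqrt{C_0}$ rule reflect the rounded constants behind that rule.

```python
import math

e, eps0, k_B, N_A = 1.602176634e-19, 8.8541878128e-12, 1.380649e-23, 6.02214076e23
T, eps_r, z = 300.0, 80.0, 1

def debye_length(C0_molar):
    """lambda_D (m) for a symmetric z:z electrolyte at molar salt concentration C0."""
    C0 = C0_molar * 1000 * N_A    # ions of each species per m^3
    return math.sqrt(eps_r * eps0 * k_B * T / (2 * z**2 * e**2 * C0))

for C in (0.1, 0.01):
    lD = debye_length(C) * 1e10
    rule = 3.04 / math.sqrt(C)
    print(f"C0 = {C*1000:4.0f} mM:  lambda_D = {lD:5.1f} A (formula),  {rule:5.1f} A (3.04/sqrt(C) rule)")
```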
6.07: Ion Distributions Near a Charged Interface
Debye–Hückel Approximation
Describing ions near a negatively charged plane is a way of modeling the diffuse layer of cations that forms near the negatively charged interface of a lipid bilayer. The simplest approach is to use the Debye–Hückel equation (linearized PBE) in one dimension. $x$ is the distance away from the infinite charged plane, which has a surface charge density $\sigma = q/a$.
$\dfrac{\partial^2 \Phi (x)}{\partial x^2} = \dfrac{1}{\lambda_D^2} \Phi (x)\nonumber$
Generally, the solution is
$\Phi (x) = a_1 e^{-x/\lambda_D} + a_2 e^{x/\lambda_D} \label{eq6.7.1}$
Apply boundary conditions:
1. $\lim_{x \to \infty} \Phi (x) = 0$ $\therefore a_2 = 0$
2. The electric field for surface with charge density σ (from Gauss’ theorem)
$E = - \dfrac{\partial \Phi}{\partial x} |_{\text{surface}} = \dfrac{\sigma}{\varepsilon} \label{eq6.7.2}$
Differentiate eq. ($\ref{eq6.7.1}$) and compare with eq. ($\ref{eq6.7.2}$):
$a_1 = \dfrac{\sigma \lambda_D}{\varepsilon} \nonumber$
The electrostatic potential decays exponentially away from the surface toward zero.
$\Phi (x) = \dfrac{\sigma \lambda_D}{\varepsilon} e^{-x/\lambda_D} \nonumber$
Nominally, the prefactor would be the "surface potential" at $x = 0$, but for strongly charged surfaces the Debye–Hückel approximation significantly overestimates this, as we will see later. Substituting $\Phi$ into the Poisson equation gives
$\rho (x) = \dfrac{-\sigma}{\lambda_D} e^{-x/\lambda_D} \label{eq6.7.3}$
Ion distribution density in solution decays exponentially with distance. This description is valid for weak potentials, or $x > \lambda_D$. The potential and charge density are proportional as $\Phi (x) = -\lambda_D^2 \rho (x)/\varepsilon$; both decay exponentially on the scale of the Debye length at long range.
Note:
Higher ion concentration $\to$ smaller $\lambda_D \to$ Double layer less diffuse.
Higher temperature $\to$ larger $\lambda_D \to$ Double layer more diffuse.
Note also that the surface charge is balanced by ion distribution in solution:
$\sigma = -\int_0^{\infty} \rho (x) dx$
which you can confirm by substituting eq. ($\ref{eq6.7.3}$).
Gouy–Chapman Model1
To properly describe the ion behavior for shorter distances ($x < \lambda_D$), one does not need to make the weak-potential approximation and can retain the nonlinear form of the Poisson–Boltzmann equation:
$\begin{array} {rcl} {\dfrac{\partial^2 \Phi (x)}{\partial x^2}} & = & {\dfrac{2zeC_0}{\varepsilon} \text{sinh} \left (\dfrac{ze \Phi (x)}{kT} \right )} \\ {E} & = & {-\dfrac{\partial \Phi}{\partial x}\bigg|_{\text{surf}} = \dfrac{4\pi \ell_B \sigma k_B T}{e^2}} \end{array} \nonumber$
In fact, this form does have an analytical solution. It is helpful to define a dimensionless reduced electrostatic potential, expressed in thermal electric units:
$\underline{\Phi} = \dfrac{e}{k_B T} \Phi \nonumber$
and a reduced distance which is scaled by the Debye length
$\underline{x} = x/\lambda_D \nonumber$
Then the PBE for a 1:1 electrolyte takes on a simple form
$\nabla^2 \underline{\Phi} (x) = \text{sinh} \underline{\Phi} (x)\nonumber$
with the solution:
$\underline{\Phi} (\underline{x}) = 2 \ln \left (\dfrac{1 + ge^{-\underline{x}}}{1 - ge^{-\underline{x}}} \right )\nonumber$
Here $g$ is a constant, which we can relate to the surface potential, by setting $x$ to zero.
$\exp (-\underline{\Phi} (0)/2) = \dfrac{1 - g}{1 + g} = -\text{tanh} (\ln (g) /2)\nonumber$
$\underline{\Phi} (0)$ is the scaled surface potential. Using the surface charge density $\sigma$ we can find:
$g = - \dfrac{x_0}{\lambda_D} + \sqrt{1 + \left (\dfrac{x_0}{\lambda_D} \right )^2} \text{ with } x_0 = \dfrac{e}{2\pi \ell_B \sigma} \nonumber$
Then you can get the ion distribution from the Poisson equation: $\rho (x) = -\varepsilon \nabla^2 \Phi (x)$.
In the Gouy–Chapman layer ($x < \lambda_D$), the ionic interactions are strong enough that the counterion enrichment exceeds the Debye–Hückel prediction.
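To compare the linearized (Debye–Hückel) and full (Gouy–Chapman) results for a charged plane, the sketch below evaluates both potential profiles using the expressions above. The surface charge density (one elementary charge per $0.7\ nm^2$) and the 100 mM salt concentration are assumed, illustrative values of ours; note that for this strongly charged surface the linearized result overestimates the surface potential.

```python
import numpy as np

e, eps0, k_B, N_A = 1.602176634e-19, 8.8541878128e-12, 1.380649e-23, 6.02214076e23
T, eps_r = 300.0, 80.0
eps, kT = eps_r * eps0, k_B * T

C0 = 0.1 * 1000 * N_A                          # 100 mM 1:1 salt, ions/m^3 (assumed)
lam_D = np.sqrt(eps * kT / (2 * e**2 * C0))    # Debye length
l_B = e**2 / (4 * np.pi * eps * kT)            # Bjerrum length
sigma = e / 0.7e-18                            # C/m^2, assumed: one charge per 0.7 nm^2

x = np.linspace(0.0, 5.0, 6) * lam_D

# linearized (Debye-Huckel) potential
phi_DH = (sigma * lam_D / eps) * np.exp(-x / lam_D)

# Gouy-Chapman solution, using g and x0 as defined above
x0 = e / (2 * np.pi * l_B * sigma)
s = x0 / lam_D
g = np.sqrt(1 + s**2) - s
phi_GC = (2 * kT / e) * np.log((1 + g * np.exp(-x / lam_D)) / (1 - g * np.exp(-x / lam_D)))

print(f"lambda_D = {lam_D*1e9:.2f} nm,  g = {g:.2f}")
print(" x/lambda_D   phi_DH (mV)   phi_GC (mV)")
for xi, pd, pg in zip(x / lam_D, phi_DH * 1e3, phi_GC * 1e3):
    print(f"    {xi:4.0f}      {pd:8.1f}      {pg:8.1f}")
```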
Stern Layer
In immediate proximity to a strongly charged surface, a layer of counterions can form in direct contact with the surface: the Stern layer. The Stern layer governs the slip plane for diffusion of charged particles. The zeta potential $\zeta$ is the potential energy difference between the Stern layer and the electroneutral region of the sample, and governs the electrophoretic mobility of particles. It is calculated from the work required to bring a charge from $x = \infty$ to the surface of the Stern layer.
_______________________________
1. H. H. Girault, Analytical and Physical Electrochemistry. (CRC Press, New York, 2004); M. B. Jackson, Molecular and Cellular Biophysics. (Cambridge University Press, Cambridge, 2006), Ch. 11; M. Daune, Molecular Biophysics: Structures in Motion. (Oxford University Press, New York, 1999), Ch. 18; S. McLaughlin, The electrostatic properties of membranes, Annu. Rev. Biophys. Biophys. Chem. 18, 113-136 (1989).
Ion Distributions Near a Charged Sphere1
Now let’s look at how ions will distribute themselves around a charged sphere. This sphere could be a protein or another ion. We assume a spherically symmetric charge distribution about ions, and a Boltzmann distribution for the charge distribution for the ions ($i$) about the sphere ($j$) of the form
$\rho (r) = \sum_i ez_i C_{0, i} e^{-z_i e \Phi_j (r) /k_B T}$
$\Phi_j (r)$ is the electrostatic potential at radius $r$ which results from a point charge $z_j e$ at the center of the sphere. Additionally, we assume that the sphere is a hard wall, and define a radius of closest approach by ions in solution, $b$. The PBE becomes
$-\dfrac{1}{r^2} \dfrac{d}{dr} \left (r^2 \dfrac{d\Phi}{dr} \right ) = \dfrac{1}{\varepsilon} \sum_i ez_i C_{0, i} e^{-z_i e \Phi_j (r) /k_B T} \nonumber$
To simplify this, we again apply the Debye–Hückel approximation $(ze\Phi \ll k_B T)$, expand the exponential in the Boltzmann factor, drop the leading term due to the charge neutrality condition, and obtain
$\rho (r) = -\sum_i C_{0, i} z_i^2 e^2 \Phi_j (r)/k_B T \label{eq6.8.2}$
Then the linearized PBE in the Debye–Hückel approximation is
$\dfrac{1}{r^2} \dfrac{d}{dr} \left (r^2 \dfrac{d\Phi}{dr} \right ) = \kappa^2 \Phi \label{eq6.8.3}$
As before: $\kappa^2 = \lambda_D^{-2} = 2e^2 I/\varepsilon k_B T$. Solutions to eq. ($\ref{eq6.8.3}$) will take the form:
$\Phi = A_1 \dfrac{e^{-\kappa r}}{r} + A_2 \dfrac{e^{\kappa r}}{r} \label{eq6.8.4}$
To solve this use boundary conditions:
1. $A_2 = 0$, since $\Phi \to 0$ at $r = \infty$.
2. The field at the surface of a sphere with charge $z_j e$ and radius $b$ is determined from
$4\pi b^2 E(b) = \dfrac{z_j e}{\varepsilon} \label{eq6.8.5}$
Now, using
$E(b) = -\dfrac{d\Phi}{dr}|_{r = b} \label{eq6.8.6}$
Substitute eq. ($\ref{eq6.8.4}$) into RHS and eq. ($\ref{eq6.8.5}$) into LHS of eq. ($\ref{eq6.8.6}$). Solve for $A_1$.
$A_1 = \dfrac{z_j e e^{\kappa b}}{4\pi \varepsilon (1 + \kappa b)}\nonumber$
So, the electrostatic potential for $r \ge b$ is
$\Phi (r) = \underbrace{\dfrac{z_j e}{4\pi \varepsilon_0 r}}_{\text{vacuum}} \dfrac{e^{-\kappa (r - b)}}{\varepsilon_r (1 + \kappa b)} \label{eq6.8.7}$
Setting $r = b$ gives us the surface potential of the sphere:
$\Phi (b) = \dfrac{z_j e}{4\pi \varepsilon b (1 + \kappa b)}\nonumber$
Note the exponential factor in eq. ($\ref{eq6.8.7}$) says that $\Phi$ drops faster than $r^{-1}$ as a result of screening. Now substitute eq. ($\ref{eq6.8.7}$) into eq. ($\ref{eq6.8.2}$) we obtain the charge probability density
$\rho (r) = \dfrac{-\kappa^2 z_j e}{4\pi r} \dfrac{e^{-\kappa (r - b)}}{1 + \kappa b}$
We see that the charge density about the ion drops as $e^{-\kappa (r - b)}/r$, a rapidly decaying function that emphasizes the strong tendency for ions to attract or repel at short range. However, the charge contained between $r$ and $r + dr$ is $4\pi r^2 \rho (r)\,dr$ and therefore grows linearly with $r$ before decaying exponentially: $r e^{-\kappa (r - b)}$. We plot this function to illustrate the thickness of the "ion cloud" around the sphere, which is peaked at $r = \lambda_D$. Additionally, note that the total charge of this ion cloud is equal and opposite to the charge of the sphere "$j$".
$\int_b^{\infty} \rho (r) 4 \pi r^2 dr = -z_j e\nonumber$
It is also possible to calculate radial distribution functions for ions in the Debye–Hückel limit.2 The radial pair distribution function for ions of type $i$ and $j$, $g_{ij} (r)$, is related to the potential of mean force $W_{ij}$ as
$g_{ij} (r) = \exp [-W_{ij} (r) / k_B T]$
If only considering electrostatic effects, we can approximate $W_{ij}$ as the interaction energy $U_{ij} (r) = z_i e\Phi_j (r)$. Using the Debye–Hückel result, eq. ($\ref{eq6.8.7}$),
$U_{ij} (r) = \dfrac{z_i z_j e^2}{4\pi \varepsilon (1 + \kappa b)} \dfrac{e^{-\kappa (r - b)}}{r} \nonumber$
Let’s look at the form of $g(r)$ for two singly charged ions with $\lambda_D = 0.7\ nm$, $\epsilon = 80$, and $T = 300\ K$. The Bjerrum length is calculated as $\ell_B = e^2/4\pi \epsilon k_B T = 0.7\ nm$. Since the Debye–Hückel approximation holds for $ze\Phi \ll k_B T$, we can expand the exponential in $g_{ij}(r)$ as
$g_{ij} (r) = 1 - \chi_{ij} + \dfrac{1}{2} \chi_{ij}^2 + \cdots \nonumber$
where we define $\chi_{ij} = U_{ij} (r) /k_B T = \ell_B e^{-\kappa (r - b)} r^{-1} (1 + \kappa b)^{-1}$. The resulting radial distribution function for co- and counterions calculated for $b = 0.15\ nm$ are shown below.
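The sketch below evaluates $g_{ij}(r) = \exp[-U_{ij}(r)/k_B T]$ for co- and counterions with the quoted parameters; the particular set of $r$ values is our own choice, and at the smallest separations $\chi_{ij}$ is of order one, so the linearized Debye–Hückel treatment is only qualitative there.

```python
import numpy as np

e, eps0, k_B = 1.602176634e-19, 8.8541878128e-12, 1.380649e-23
T, eps_r = 300.0, 80.0
eps, kT = eps_r * eps0, k_B * T

lam_D = 0.7e-9                            # Debye length quoted above
b = 0.15e-9                               # distance of closest approach
kappa = 1.0 / lam_D
l_B = e**2 / (4 * np.pi * eps * kT)       # ~0.7 nm

r = np.array([0.2, 0.3, 0.5, 0.7, 1.0, 2.0]) * 1e-9
chi = l_B * np.exp(-kappa * (r - b)) / (r * (1 + kappa * b))   # |U_ij|/k_BT for z_i z_j = +/-1

g_counter = np.exp(+chi)   # oppositely charged pair (z_i z_j = -1): local enrichment
g_co = np.exp(-chi)        # like-charged pair (z_i z_j = +1): local depletion

print(f"l_B = {l_B*1e9:.2f} nm")
print(" r (nm)   g_counter    g_co")
for ri, gc, gco in zip(r * 1e9, g_counter, g_co):
    print(f"  {ri:4.1f}    {gc:8.2f}   {gco:6.3f}")
```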
Readings
1. M. Daune, Molecular Biophysics: Structures in Motion. (Oxford University Press, New York, 1999), Ch. 16, 18.
2. D. A. McQuarrie, Statistical Mechanics. (Harper & Row, New York, 1976), Ch. 15.
______________________________
1. See M. Daune, Molecular Biophysics: Structures in Motion. (Oxford University Press, New York, 1999), Ch. 16; D. A. McQuarrie, Statistical Mechanics. (Harper & Row, New York, 1976), Ch. 15; Y. Marcus, Ionic radii in aqueous solutions, Chem. Rev. 88 (8), 1475-1498 (1988).
2. See D. A. McQuarrie, Statistical Mechanics. (Harper & Row, New York, 1976), Ch. 15.
There are a number of ways in which macromolecular structure is described in biophysics, which vary in type of information they are trying to convey. Consider these two perspectives on macromolecular structure that represent opposing limits: atomistic vs. statistical.
1. Atomistic: Use of atoms, small molecules, or functional groups as building blocks for biomolecular structure. This perspective is rooted in the dominant methods used for studying macromolecular structure (90% X-ray crystallography; 10% NMR). It has the most value for describing detailed Ångstrom to nanometer scale interactions of a chemical nature, but also tends to reinforce a unique and rigid view of structure, even though this cannot be the case at physiological temperatures. The atomistic perspective is inherent to molecular force fields used in computational biophysics, which allow us to explore time-dependent processes and molecular disorder. Even within the atomistic representation, there are many complementary ways of representing macromolecular structure. Below are several representations of myoglobin structure, each of which is used to emphasize specific physical characteristics of the protein.
2. Statistical/physical: More applicable for disordered or flexible macromolecules. Emphasis is on a statistical description of molecules that can have multiple configurations. Often the atomic/molecular structure is completely left out. These tools have particular value for describing configurational entropy and excluded volume, and are influenced by the constraints of covalent bonding linkages along the chain. This approach is equally important: 30–40% of primary sequences in PDB are associated with disordered or unstructured regions. Conformational preferences are described statistically.
Statistical Models
• Structure described in terms of spatial probability distribution functions.
• There may be constraints on geometry or energy functions that describe interactions between and within chains.
• We will discuss several models that emerge for a continuous chain in space that varies in stiffness, constraints on conformation, and excluded volume.
• Segment models: random coils, freely jointed chain, freely rotating chain
• Lattice models: Flory–Huggins theory
• Continuum model: worm-like chain
07: Statistical Description of Macromolecular Structure
Segment Models1
• $(n + 1)$ beads link by $n$ segments or bonds of length $\ell$.
• Each bead has a position $\vec{r_i}$.
• Each bond is assigned a vector, $\vec{\ell_i} = \vec{r_i} - \vec{r_{i - 1}}$.
• The bending angle between adjacent segments $i$ and $(i + 1)$ is $\theta_i$: $\cos \theta_i = \hat{\ell}_i \cdot \hat{\ell}_{i + 1}$
• For each bending angle there is an associated dihedral angle $\phi_i$ defined as the rotation of segment $(i+1)$ out of the plane defined by segments $i$ and $(i - 1)$.
• There are $n - 1$ separate bending and dihedral angles.
Statistical Variables for Macromolecules
End-to-end distance
The contour length is the full length of the polymer along the contour of the chain:
$L_C = n \ell\nonumber$
Each chain has the same contour length, but varying dimensions in space that result from conformational flexibility. The primary structural variable for measuring this conformational variation is the end-to-end vector between the first and last bead, $\vec{R} = \vec{r_n} - \vec{r_0}$, or equivalently
$\vec{R} = \sum_{i = 1}^{n} \vec{\ell_i}\nonumber$
Statistically, the dimensions of a polymer can be characterized by the statistics of the end-to-end distance. Consider its mean-square value:
$\langle \vec{R}^2 \rangle = \langle \vec{R} \cdot \vec{R} \rangle = \left \langle \left (\sum_{i = 1}^{n} \vec{\ell_i} \right ) \cdot \left (\sum_{j= 1}^{n} \vec{\ell_j} \right ) \right \rangle$
After expanding these sums, we can collect two sets of terms: (1) the self-terms with $i = j$ and (2) the interbond correlations $(i \ne j)$:
$\begin{array} {rcl} {\langle \vec{R}^2 \rangle } & = & {n \ell^2 + \sum_{j \ne i} \langle \vec{\ell_i} \cdot \vec{\ell_j} \rangle} \\ {} & = & {n \ell^2 + \ell^2 \sum_{j \ne i} \langle \cos \theta_{ij} \rangle} \end{array} \label{eq7.1.1}$
Here $\theta_{ij}$ is the angle between segments $i$ and $j$. This second term describes any possible conformational preferences between segments along the chain. We will call the factor $\langle \cos \theta_{ij} \rangle$ the segment orientation correlation function, which is also written
$\begin{array} {rcl} {g(k)} & = & {\langle \cos \theta_k \rangle} \\ {\cos \theta_k} & = & {\hat{\ell}_i \cdot \hat{\ell}_{i + k}, \ \ \ \ \ \ \ \ k = |j - i|} \end{array}$
Here $k$ refers to the separation between two segments. This correlation function can vary in value from 1 to -1, where +1 represents a highly aligned or extended chain and negative values would be very condensed or compact. No interbond correlation $(g = 0)$ is expected for placement of segments by a random walk.
Interbond correlations can be inserted into segment models, either through ad hoc rules or by applying an energy function that constrains the intersegment interactions. For instance, the torsional energy function below, $U_{\text{conf}}$, would be used to weight the probability that adjacent segments adopt a particular torsional angle. A general torsional energy function $U_{\text{conf}} (\Theta)$ involves all $2(n-1)$ possible angles $\Theta = \{\theta_1, \phi_1, \theta_2, \phi_2, ... \theta_{n-1}, \phi_{n-1} \}$; the joint probability density for adopting a particular conformation is
$P(\Theta) = \dfrac{e^{-U_{\text{conf}} (\Theta)/k_B T}}{\int d \Theta e^{-U_{\text{conf}} (\Theta)/k_B T}} \nonumber$
The integral over $\Theta$ reflects $2(n - 1)$ integrals over polar coordinates for all adjacent segments,
$\int d \Theta = \int_{0}^{\pi} \int_{0}^{2\pi} \sin \theta_1 d \theta_1 d \phi_1 \cdots \int_{0}^{\pi} \int_{0}^{2\pi} \sin \theta_{n - 1} d \theta_{n - 1} d \phi_{n - 1} \nonumber$
Then the alignment correlation function is
$\langle \vec{\ell_i} \cdot \vec{\ell_j} \rangle = \ell^2 \int d \Theta \cos \theta_{ij} P (\Theta) \nonumber$
This is not a practical form, so we will make simplifying assumptions about the form of this probability distribution. For instance, if any segment's configuration depends only on its nearest neighbors, then $P(\Theta) = P(\theta, \phi)^{(n - 1)}$.
Persistence Length
For any polymer, alignment of any pair of vectors in the chain becomes uncorrelated over a long enough sequence of segments. To quantify this distance we define a "persistence length" $\ell_p$.
$\ell_p = \langle \hat{\ell_i} \cdot \sum_{j = 1}^{n} \vec{\ell_j} \rangle \ \ \ \ \hat{\ell_i} = \dfrac{\vec{\ell_i}}{|\ell |} \nonumber$
This is the characteristic distance along the chain for the decay of the orientational correlation function between bond vectors,
$g(k) = \ell ^2 \langle \cos^k \theta \rangle \nonumber$
How will this behave? If you consider that $|\cos \theta | < 1$, then $\langle \cos^k \theta \rangle$ will drop with increasing $k$, approaching zero as $k \to \infty$. That is the memory of the alignment between two bond vectors drops with their separation, where the distance scale for the loss of correlation is $\ell_p$. We thus expect a monotonically decaying form to this function:
$g(k) = \ell^2 e^{-k \ell / \ell_p} \label{eq7.1.3}$
For continuous thin rod models of the polymer, this expression is written in terms of the contour distance $s$, the displacement along the contour of the chain (i.e., $s = \ell k$),
$g(s) = \ell^2 e^{- |s|/\ell_p} \nonumber$
How do we relate $\theta$ and $\ell_p$?2 Writing $\langle \cos^k \theta \rangle \approx \exp (k \ln [\langle \cos \theta \rangle ])$ and equating this with eq. ($\ref{eq7.1.3}$) indicates that
$\ell_p = -\dfrac{\ell}{\ln \langle \cos \theta \rangle} \nonumber$
For stiff chains, we can approximate $\ln (x) \approx -(1 - x)$, so
$\ell_p \approx \dfrac{\ell }{1 - \langle \cos \theta \rangle} \nonumber$
Radius of Gyration
The radius of gyration is another important structural variable that is closely related to experimental observables. Here the polymer dimensions are expressed as extension relative to the center of mass for the chain.
This proves useful for branched polymers and heteropolymers (such as proteins). Denoting the position and mass of the $i^{\text{th}}$ bead as $\vec{r_i}$ and $m_i$, we define the center of mass for the polymer as a mass-weighted mean position of the beads in space:
$\vec{R_0} = \dfrac{\sum_{i = 0}^{n} m_i \vec{r_i}}{\sum_{i = 0}^{n} m_i} \nonumber$
The sum index starting at 0 is meant to reflect the sum over $n+1$ beads. The denominator of this expression is the total mass of the polymer $M = \sum_{i = 0}^{n} m_i$. If all beads have the same mass, then $m_i/M = 1/(n + 1)$ and $R_0$ is the geometrical mean of their positions.
$\vec{R_0} = \dfrac{1}{n + 1} \sum_{i = 0}^{n} \vec{r_i}\nonumber$
The radius of gyration $R_G$ for a configuration of the polymer describes the mass-weighted distribution of beads $R_0$, and is defined through
$\langle R_G^2 \rangle = \dfrac{1}{n + 1} \sum_{i = 0}^n \langle \vec{S_i^2} \rangle \nonumber$
where $\vec{S_i}$ is gyration radius, i.e., the radial distance of the $i^{\text{th}}$ bead from the center of mass
$\begin{array} {ll} {\vec{S}_i^2 = \dfrac{m_i}{M} (\vec{r_i} - \vec{R}_0)^2} & {\text{(mass-weighted)}} \\ {\vec{S}_i^2 = \dfrac{1}{n + 1} (\vec{r_i} - \vec{R}_0)^2} & {\text{(equal mass beads)}} \end{array}\nonumber$
Additionally, we can show that the mean-squared radius of gyration is related to the average separation of all beads of the chain.
$\langle R_G^2 \rangle = \dfrac{1}{2(n + 1)^2} \sum_{i = 0}^n \sum_{j = 0}^n \langle (\vec{r_i} - \vec{r_j})^2 \rangle \nonumber$
Freely Jointed Chain
The freely jointed chain describes a macromolecule as a backbone for which all possible $\theta$ and $\phi$ are equally probable, and there are no correlations between segments. It is known as an "ideal chain" because there are no interactions between beads or excluded volume, and configuration of the polymer backbone follows a random walk. If we place the first bead at $r = 0$, we find that $\langle R \rangle = 0$, as expected for a random walk, and eq. ($\ref{eq7.1.1}$) reduces to
$\langle R^2 \rangle = n \ell^2 \nonumber$
$\text{ or } R_{rms} = \langle R^2 \rangle^{1/2} = \sqrt{n} \ell \nonumber$
While the average end-to-end distance may be zero, the variance in the end-to-end distribution is
$\sigma_r = \sqrt{\langle R^2 \rangle - \langle R \rangle^2} = \sqrt{n} \ell \nonumber$
The radius of gyration for an ideal chain is:
$R_G = \sqrt{\dfrac{\langle R^2 \rangle }{6}} = \sqrt{\dfrac{n \ell^2}{6}}\nonumber$
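These ideal-chain results are easy to verify with a short random-walk simulation; the sketch below is our own, with an arbitrary choice of $n = 100$ segments, and compares $\langle R^2 \rangle$ and $\langle R_G^2 \rangle$ to the formulas above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, ell, n_chains = 100, 1.0, 2000      # segments, segment length, number of sampled chains

# random unit vectors for every segment (isotropic, uncorrelated)
v = rng.normal(size=(n_chains, n, 3))
v /= np.linalg.norm(v, axis=2, keepdims=True)
bonds = ell * v

r = np.cumsum(bonds, axis=1)                                    # beads 1..n (bead 0 at origin)
beads = np.concatenate([np.zeros((n_chains, 1, 3)), r], axis=1)

R2 = np.mean(np.sum(r[:, -1, :]**2, axis=1))                    # <R^2>
com = beads.mean(axis=1, keepdims=True)
RG2 = np.mean(((beads - com)**2).sum(axis=2).mean(axis=1))      # <R_G^2>

print(f"<R^2> / (n l^2)       = {R2/(n*ell**2):.3f}   (ideal chain: 1)")
print(f"<R_G^2> / (n l^2 / 6) = {RG2/(n*ell**2/6):.3f}   (ideal chain: ~1 for large n)")
```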
Gaussian Random Coil
The freely jointed chain is also known as a Gaussian random coil, because the statistics of its configuration are fully described by $\langle R \rangle$ and $\langle R^2 \rangle$, the first two moments of a Gaussian end-to-end probability distribution $P(R)$. The end-to-end probability density in one dimension can be obtained from a random walk with $n$ equally sized steps of length $\ell$ in one dimension, where forward and reverse steps are equally probable. If the first bead it set at $x_0 = 0$, then the last bead is placed by the last step at position $x$. In the continuous limit:
$P(x, n) = \sqrt{\dfrac{1}{2\pi n \ell^2}} e^{-x^2/2n \ell^2}\label{eq7.1.4}$
$P(x, n) dx$ is the probability of finding the end of the chain with $n$ beads at a distance between $x$ and $x+dx$ from its first bead. Note this equates the rms end-to-end distance with the standard deviation for this distribution: $\langle R^2 \rangle = \sigma^2 = n \ell^2$.
To generalize eq. ($\ref{eq7.1.4}$) to a three-dimensional chain, we recognize that propagation in the $x, y$, and $z$ dimensions is equally probable, so that the 3D probability density can be obtained from a product of 1D probability densities $P(r) = P(x) P(y) P(z)$. Additionally, we need to consider the constraint that the distribution of end-to-end distances are equal in each dimension:
$\langle \vec{R}^2 \rangle = \sigma_x^2 + \sigma_y^2 + \sigma_z^2 = n \ell^2 \nonumber$
and since $\sigma_x^2 = \sigma_y^2 = \sigma_z^2$,
$\langle \vec{R}^2 \rangle = 3 \sigma_x^2 = n \ell^2 \nonumber$
Therefore,
$\begin{array} {rcl} {P(r, n)} & = & {\sqrt{\dfrac{1}{2\pi \sigma_x^2}} e^{-x^2/2 \sigma_x^2} \sqrt{\dfrac{1}{2\pi \sigma_y^2}} e^{-y^2/2 \sigma_y^2} \sqrt{\dfrac{1}{2\pi \sigma_z^2}} e^{-z^2/2 \sigma_z^2} } \\ {} & = & {\left (\dfrac{3}{2\pi \sigma^2} \right )^{3/2} e^{-3r^2/2\sigma^2}} \end{array} \nonumber$
To simplify, we define a scaling parameter with dimensions of inverse length
$\beta = \sqrt{\dfrac{3}{2n \ell^2}} = \sqrt{\dfrac{3}{2}} \langle R^2 \rangle^{-1/2} \nonumber$
Then, the probability density in Cartesian coordinates,
$P(x, y, z, n) = \dfrac{\beta^3}{\pi^{3/2}} e^{-\beta^2 r^2} \ \ \ \text{ where } r^2 = x^2 + y^2 + z^2 \nonumber$
Note the units of $P(x, y, z, n)$ are inverse volume or concentration. The probability of finding the end of a chain of $n$ beads in a box of volume dx dy dz at the position $x, y, z$ is $P(x, y, z, n)\ dx\ dy\ dz$. This function illustrates that the most probable position for the end of a random walk polymer is at the origin. On the other hand, we can also express this as a radial probability density that gives the probability of finding the end of a chain at a radius between $r$ and $r+dr$ from the origin. Since the volume of a spherical shell grows in proportion to its surface area:
$P(r, n) dr = 4 \pi r^2 P(x, y, z, n) dr\nonumber$
$P(r, n) = 4\pi r^2 \left (\dfrac{3}{2\pi n \ell^2} \right )^{3/2} \exp \left [-\dfrac{3}{2} \dfrac{r^2}{n \ell^2} \right ]$
The units of $P(r, n)$ are inverse length. For the freely jointed chain, we see that $\beta^{-1} = \sqrt{2\langle R^2 \rangle /3}$ is the most probable end-to-end distance.
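As a quick numerical check of these results, the sketch below (assuming Python with NumPy; the chain length, segment length, and sample size are arbitrary illustrative choices, not values from the text) samples freely jointed chains and compares the sampled end-to-end statistics with $\langle R^2 \rangle = n\ell^2$ and with the radial distribution $P(r, n)$ above.

```python
import numpy as np

# Minimal Monte Carlo check of freely jointed chain statistics.
# Illustrative parameters: n segments of length l, many independent chains.
n, l, n_chains = 100, 1.0, 200_000
rng = np.random.default_rng(0)

# Draw isotropic unit vectors for every segment of every chain.
phi = rng.uniform(0.0, 2.0 * np.pi, size=(n_chains, n))
cos_theta = rng.uniform(-1.0, 1.0, size=(n_chains, n))
sin_theta = np.sqrt(1.0 - cos_theta**2)
steps = l * np.stack((sin_theta * np.cos(phi),
                      sin_theta * np.sin(phi),
                      cos_theta), axis=-1)

# End-to-end vector R is the sum of the segment vectors.
R = steps.sum(axis=1)
R2 = (R**2).sum(axis=1)
print("<R^2> (sampled) =", R2.mean(), "  vs  n*l^2 =", n * l**2)

# Compare the sampled radial distribution to
# P(r,n) = 4 pi r^2 (3/(2 pi n l^2))^(3/2) exp(-3 r^2 / (2 n l^2)).
r = np.sqrt(R2)
hist, edges = np.histogram(r, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
P = 4 * np.pi * centers**2 * (3 / (2 * np.pi * n * l**2))**1.5 \
    * np.exp(-3 * centers**2 / (2 * n * l**2))
print("max |P_sampled - P_gaussian| =", np.abs(hist - P).max())
```

For a chain of 100 segments the sampled mean square end-to-end distance should agree with $n\ell^2$ to within sampling error, and the histogrammed end-to-end distances should follow the Gaussian radial distribution closely.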
Freely Rotating Chain
An extension to the freely jointed chain that adds a single configurational constraint that better resembles real bonding in polymers is the freely rotating chain. In this case, the backbone angle $\theta$ has a fixed value, and the dihedral angle $\phi$ can rotate freely.
To describe the chain dimensions, we need to evaluate the angular bond correlations between segments. Focusing first on adjacent segments, we know that after averaging over all $\phi$, the fixed $\theta$ assures that $\langle \vec{\ell_i} \cdot \vec{\ell_{i+1}} \rangle = \ell^2 \cos \theta$. For the next segment in the series, only the component parallel to $\vec{\ell_{i + 1}}$ will contribute to sequential bond correlations as we average over $\phi_{i + 2}$:
$\begin{array} {rcl} {\langle \vec{\ell_i} \cdot \vec{\ell_{i + 2}} \rangle} & = & {\ell^2 \langle \cos (\theta_i) \cos (\theta_{i + 1}) - \sin (\theta_i) \sin (\theta_{i + 1}) \cos (\phi_{i + 1}) \rangle} \ {} & = & {\ell^2 \cos^2 \theta} \end{array}\nonumber$
Extending this reasoning leads to the observation
$\langle \vec{\ell_i} \cdot \vec{\ell_j} \rangle = \ell^2 (\cos \theta)^{j - i} \nonumber$
To evaluate the bond correlations in this expression, it is helpful to define an index for the separation between two bond vectors:
$k = j - i\nonumber$
and
$\alpha = \cos \theta \nonumber$
Then the segment orientation correlation function is
$g(k) = \langle \vec{\ell_i} \cdot \vec{\ell_j} \rangle = \ell^2 \alpha^k \nonumber$
For a separation $k$ on a chain of length $n$, there are $n-k$ pairs of bond vectors for each ordering of $i$ and $j$, so summing over all $i \ne j$:
$\sum_{j \ne i} \langle (\cos \theta)^{j - i} \rangle = 2\sum_{k = 1}^{n - 1} (n - k) \alpha^k \nonumber$
$\therefore \ \ \ \ \ \langle R^2 \rangle = n \ell^2 + 2\ell^2 \sum_{k = 1}^{n - 1} (n - k) \alpha^k \nonumber$
From this you can obtain
$\langle R^2 \rangle = n \ell^2 \left (\dfrac{1 + \alpha}{1 - \alpha} - \dfrac{2\alpha (1 - \alpha^n)}{n (1 - \alpha)^2} \right ) \nonumber$
In the limit of long chains ($n \to \infty$), we find
$\langle R^2 \rangle \to n \ell^2 \left ( \dfrac{1 + \alpha}{1 - \alpha} \right )\nonumber$
and
$R_G = \sqrt{\dfrac{n \ell^2}{6} \left ( \dfrac{1 + \alpha}{1 - \alpha} \right )}\nonumber$
Restricted dihedrals
When the freely rotating chain is also amended to restrict the dihedral angle $\phi$, we can solve the mean square end-to-end distance in the limit $n \to \infty$. Given an average dihedral angle,
$\beta = \langle \cos \phi \rangle \nonumber$
$\langle R^2 \rangle = n \ell^2 \left (\dfrac{1 + \alpha}{1 - \alpha} \right ) \left (\dfrac{1 + \beta}{1 - \beta} \right ) \nonumber$
Nonideal Behavior
Flory characteristic ratio
Real polymers are stiff and have excluded volume, but the $R \sim \sqrt{n}$ scaling behavior usually holds at large $n$ ($R \gg \ell_p$). To characterize non-ideality, we use the Flory characteristic ratio:
$C_n = \dfrac{\langle R^2 \rangle}{n \ell^2} \nonumber$
For freely jointed chains $C_n = 1$. For nonideal chains with angular correlations, $C_n > 1$. $C_n$ depends on the chain length $n$, but should have an asymptotic value for large $n$: $C_{\infty}$. For example, if we examine long freely rotating chains
$C_{\infty} = \lim_{n \to \infty} \dfrac{\langle R^2 \rangle}{n \ell^2} = \dfrac{1 + \alpha}{1 - \alpha} \ \ \ \ \ \alpha = \cos \theta \nonumber$
(In practice, this limit typically holds for $n > 30$.) For example, consider a tetrahedrally bonded polymer with a bond angle of $109.5^{\circ}$, so that $\theta = 180^{\circ} - 109.5^{\circ} \approx 70.5^{\circ}$. Then $\cos \theta = 1/3$, and $C_{\infty} = 2$. In practice, we reach the long chain limit $C_{\infty}$ at $n \approx 10$. This relation works well for polyglycine and polyethylene glycol (PEG).
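To see how quickly the asymptotic value is reached, here is a brief numerical sketch (a Python illustration based on the finite-$n$ freely rotating chain result above, with $\cos\theta = 1/3$ assumed for a tetrahedral backbone) that evaluates $C_n = \langle R^2 \rangle / n\ell^2$ as a function of chain length.

```python
import numpy as np

# Flory characteristic ratio C_n for a freely rotating chain, using the finite-n
# result <R^2> = n l^2 [ (1+a)/(1-a) - 2a(1-a^n)/(n(1-a)^2) ] with a = cos(theta).
alpha = 1.0 / 3.0          # cos(theta) for a tetrahedral backbone (theta ~ 70.5 deg)

def C_n(n, a=alpha):
    return (1 + a) / (1 - a) - 2 * a * (1 - a**n) / (n * (1 - a)**2)

for n in [2, 5, 10, 30, 100, 1000]:
    print(f"n = {n:5d}   C_n = {C_n(n):.4f}")
print("C_infinity =", (1 + alpha) / (1 - alpha))
```

The printed values approach $C_{\infty} = 2$ within a few percent by $n \approx 10$–30, consistent with the statements above.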
Statistical segment or Kuhn length
How stiff or flexible a polymer is depends on the length scale of observation. What is stiff on one scale is flexible on another. For an infinitely long polymer, one can always find a length scale for which its statistics are those of a Gaussian random coil. As a result, for a segmented polymer, one can imagine rescaling contiguous segments into one longer "effective segment" that may not represent atomic dimensions, but rather is defined in order to correspond to a random walk polymer, with $C_n = 1$. Then, the effective length of the segment is $\ell_e$ (also known as the Kuhn length) and the number of effective segments is $n_e$. Then the freely jointed chain equations apply:
$\begin{array} {c} {L_C = n_e \ell_e} \ {\langle R^2 \rangle = n_e \ell_e^2} \end{array} \nonumber$
From these equations, $\ell_e = \langle R^2 \rangle /L_C$. We see that $\ell_e \gg \ell$ for stiff chains, whereas $\ell_e \approx \ell$ for flexible chains.
We can also write the contour length as $L_C = \gamma n \ell$, where $\gamma$ is a geometric factor < 1 that describes constraint on bond angles. For a freely rotating chain: $\gamma = \cos (\theta /2)$. Using the long chain expressions $(n \to \infty)$: $\langle R^2 \rangle = C_{\infty} n \ell^2$, we find
$\begin{array} {c} {\ell_e = \left (\dfrac{C_{\infty}}{\gamma } \right ) \ell} \ {n_e = \left (\dfrac{\gamma^2}{C_{\infty}} \right ) n} \ {\ell_p = \left (\dfrac{C_{\infty} + 1}{2} \right ) \ell } \end{array} \nonumber$
Representative values for polymer segment models
| Polymer | $C_{\infty}$ | $\ell$ (nm) | $\ell_e$ (nm) | $\gamma$ | $\ell_p$ (nm) |
|---|---|---|---|---|---|
| Polyethylene | 6.7 ($n > 10$) | 0.154 | 1.24 | 0.83 | |
| PEG | 3.8 | | | | 0.34 |
| Polyalanine | 9 ($n > 70$) | 0.38 | 3.6 | 0.95 | 0.5 |
| Polyproline | 90 ($n > 700$) | | | | 5-10 |
| dsDNA | 86 | 0.35 | 30-100 | 1 | 50 |
| ssDNA | | | | | 1.5 |
| Cellulose | | | | | 6.2 |
| Actin | 16700 | | | | 10000-20000 |
____________________________________________
1. C. R. Cantor and P. R. Schimmel, Biophysical Chemistry Part III: The Behavior of Biological Macromolecules. (W. H. Freeman, San Francisco, 1980), Ch. 18.; K. Dill and S. Bromberg, Molecular Driving Forces: Statistical Thermodynamics in Biology, Chemistry, Physics, and Nanoscience. (Taylor & Francis Group, New York, 2010); P. J. Flory, Principles of Polymer Chemistry. (Cornell University Press, Ithaca, 1953).
2. C. R. Cantor and P. R. Schimmel, Biophysical Chemistry Part III: The Behavior of Biological Macromolecules. (W. H. Freeman, San Francisco, 1980), Ch. 19 p. 1033.
Excluded Volume Effects
In real polymers, the chance of colliding with another part of the chain increases with chain length.
$\langle R^2 \rangle = n \ell^2 + \sum_{i \ne j} \langle \vec{\ell_i} \cdot \vec{\ell_j} \rangle \nonumber$
$\langle \vec{\ell_i} \cdot \vec{\ell_j} \rangle = g(s) = \langle \vec{\ell_i} \cdot \vec{\ell_{i + s}} \rangle \ \ \ \ s = |i - j|\nonumber$
$g(s)$ gives the orientational correlations between polymer segments.
Following Flory (Statistical Mechanics of Chain Molecules):
• If correlations are purely based on bond angles and rotational potential, then $g(s)$ decays exponentially with $s$. There is no excluded volume.
• With excluded volume, $g(s)$ does not vanish for large $s$. There are "long-range" interactions within the chain.
• "Long range" here means a long distance along the contour, but a short range in space.
• Excluded volume depends on the chain, the solvent, and temperature.
Virial expansion
At low densities, thermodynamic functions can be expanded in a power series in the number of particles per unit volume: $n = N/V$ (density).
$\begin{array} {rcl} {F} & = & {F^0 + F_{\text{int}}} \ {F_{\text{int}}} & = & {N_p k_B T (nB + n^2 C + ...)} \end{array} \nonumber$
• $F^0$ refers to ideal chain
• $N_p$ is # of polymer molecules
• $B$: units of volume
Excluded volume (repulsion) and attractive interactions are related to the second virial coefficient $B$. The excluded volume (or volume correlation relative to ideal behavior) for interacting beads of a polymer chain is calculated from
$V_{\text{ex}} = \int d^3 r (1 - \exp [-U(r) /k_B T]) \nonumber$
$U(r)$ is the interaction potential. In the high temperature limit $V_{\text{ex}} = 2B$. So $2B$ can be associated with the excluded volume associated with one segment (bead) of the chain.
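As an illustration of this integral, the sketch below (Python; a Lennard-Jones pair potential is assumed here purely as a generic model of a repulsive core plus attractive well, not a potential specified in the text) evaluates $V_{\text{ex}}$ numerically at a few temperatures. At high temperature the repulsive core dominates and $V_{\text{ex}}$ is positive; at low temperature the attractive well makes it negative.

```python
import numpy as np

# Numerical evaluation of V_ex(T) = Int d^3r (1 - exp[-U(r)/k_BT]) for a model
# bead-bead pair potential. A Lennard-Jones form with well depth eps and size
# sigma is assumed purely for illustration (reduced units: eps = sigma = 1).
eps, sigma = 1.0, 1.0

def U(r):
    return 4.0 * eps * ((sigma / r)**12 - (sigma / r)**6)

r = np.linspace(1e-3, 10.0 * sigma, 200_000)
dr = r[1] - r[0]

def V_ex(kT):
    integrand = 4.0 * np.pi * r**2 * (1.0 - np.exp(-U(r) / kT))
    return integrand.sum() * dr        # simple Riemann sum over the radial grid

for kT in [0.5, 1.0, 3.0, 10.0]:
    print(f"k_BT/eps = {kT:5.1f}   V_ex = {V_ex(kT):+8.3f} sigma^3   (2B ~ V_ex at high T)")
```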
Temperature dependence
• At high $T$ ($k_B T \gg \varepsilon$)
The attractive part of potential is negligible, and repulsions result in excluded volume. In this limit $2B \approx V_{\text{ex}}$.
• As $T \to 0$, the attractive part of potential matters more and more, resulting in collapse relative to ideal chain.
• Cross over: Theta point $T = \Theta$
Near $\Theta$ $2B \sim V_{\text{ex}} \left (\dfrac{T - \Theta}{\Theta} \right )$
$T > \Theta$ High $T$. Repulsion dominates. Polymer swells (good solvent)
$T < \Theta$ Low $T$. Attractions dominate. Polymer collapses (globule, poor solvent)
Polymer swelling
At high temperatures $(T \gg \Theta)$, the free energy of a coil can be expressed in terms of the interaction potential, which is dominated by repulsions that expand the chain, and the entropic elasticity that opposes it (see next chapter).
$F = U - TS = nk_B TB \dfrac{3n}{4\pi R^3} + k_B T \dfrac{3R^2}{2n \ell^2} + const. \nonumber$
By minimizing $F$ with respect to the end-to-end distance, $R$, and solving for $R$, we can find how the $R$ scales with polymer size:
$R \propto (B \ell^2)^{1/5} n^{3/5}\nonumber$
We see that the end-to-end distance of the chain with excluded volume scales with monomer number ($n$) with a slightly larger exponent than an ideal chain: $n^{3/5}$ rather than $n^{1/2}$. Generally, the relationship between $R$ and $n$ is expressed in terms of the Flory exponent, $ν$, which is related to several physical properties of polymer chains:
$R \propto n^{\nu}\nonumber$

Polymer Loops
For certain problems, we are concerned with cyclic polymer chains:
• Bubbles/loops in DNA melting
• Polypeptide and RNA hairpins
• DNA strand separation in transcription
• Cyclic DNA, chromosomal looping, and supercoiling
In describing macromolecules in closed loop form, the primary new variable that we need to address is the loop’s configurational entropy. Because of configurational constraints that tie the ends of a loop together $(R_{ee} \to 0)$ the loop has lower configurational entropy than an unrestrained coil.
Let’s describe how the configurational entropy of a loop $S_L$ depends on the size of the loop. We will consider the segment model with $n_L$ segments in the loop. We start with the radial probability distribution for an unconstrained random coil, which is the reference state for our calculations:
$P(r, n) = 4\pi r^2 \left (\dfrac{3}{2\pi n \ell^2} \right )^{3/2} \exp \left [-\dfrac{3}{2} \dfrac{r^2}{n \ell^2} \right ]\label{eq7.3.1}$
The entropy of the loop $S_L$ will reflect the constraints placed by holding the ends of the random coil together, which we describe by saying the ends of the chain must lie within a small distance $\Delta r$ of each other. Since $R_{ee} < \Delta r$ and $\Delta r^2 \ll n \ell^2$, the exponential term in eq. ($\ref{eq7.3.1}$) is $\sim$ 1. Then the probability of finding a random coil configuration with an end-to-end distance within a radius $\Delta r$ is
\begin{align*} P_L (n_L) & \approx \int_{0}^{\Delta r} dr\, 4 \pi r^2 \left (\dfrac{3}{2\pi n_L \ell^2} \right )^{3/2} \[4pt] & = \left (\dfrac{6}{\pi} \right )^{1/2} \left (\dfrac{\Delta r}{\ell} \right )^3 n_L^{-3/2} \[4pt] & \equiv b\, n_L^{-3/2} \end{align*}
In the last line we find that the probability of finding a looped chain decreases as $P_L \propto n_L^{-3/2}$, where $b$ is the proportionality constant that emerges from integration. From the assumptions we made, $b \ll 1$, and $P_L<1$.
To calculate the configurational entropy of the chain, we assume that the polymer (free or looped) can be quantified by $\Omega$ configurational states per segment of the chain. This reflects the fact that our segment model coarse-grains over many internal degrees of freedom of the macromolecule. Then, the entropy of a random coil of n segments is $S_C = k_B \ln \Omega^n$. To calculate the loop entropy, we correct the unrestrained chain entropy to reflect the constraints placed by holding the ends of the random coil together in the loop.
$S_L = S_C + k_B \ln P_L \nonumber$
This expression reflects the fact that the number of configurations available to the constrained chain is taken to be $\Omega_L (n_L) = \Omega^{n_L} P_L (n_L)$, and each of these configurations are assumed to be equally probable ($S_L = k_B \ln \Omega_L$). Since $P_L<1$, the second term is negative, lowering the loop entropy relative to the coil. We find that we can express the loop configurational entropy as
$S_L (n_L) = k_B \left [n_L \ln \Omega + \ln b - \dfrac{3}{2} \ln n_L \right]\nonumber$
Since this expression derives from the random coil, it does not account for excluded volume of the chain. However, regardless of the model used to obtain the loop entropy, we find that we can express it in the same form:
$S_L (n_L) = k_B \left [n_L a - b - c \ln n_L \right]\nonumber$
where $a, b$, and $c$ are constants. For the random coil $c = 1.50$, and for a self-avoiding random walk on a cubic lattice we find that it increases to $c = 1.77$. In 2D, a random coil results in $c = 1.0$, and a SAW gives $c = 1.44$.
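A direct Monte Carlo check of the $n_L^{-3/2}$ scaling is sketched below (Python; the segment length, closure threshold $\Delta r$, and sample sizes are illustrative choices). It generates freely jointed chains and counts the fraction whose end returns to within $\Delta r$ of the first bead, comparing with $b\, n_L^{-3/2}$; the agreement improves as $n_L$ grows and the assumption $\Delta r^2 \ll n_L\ell^2$ becomes better.

```python
import numpy as np

# Monte Carlo estimate of the loop-closure probability P_L(n_L): the fraction of
# freely jointed chains whose end returns to within Delta_r of the first bead.
# Compared with P_L ~ b * n_L^(-3/2), b = sqrt(6/pi) (Delta_r / l)^3, from the text.
rng = np.random.default_rng(1)
l, dr, n_chains = 1.0, 1.0, 300_000

def closure_probability(n_L):
    R = np.zeros((n_chains, 3))
    for _ in range(n_L):                      # add one segment at a time (saves memory)
        phi = rng.uniform(0.0, 2.0 * np.pi, n_chains)
        ct = rng.uniform(-1.0, 1.0, n_chains)
        st = np.sqrt(1.0 - ct**2)
        R += l * np.column_stack((st * np.cos(phi), st * np.sin(phi), ct))
    return np.mean((R**2).sum(axis=1) < dr**2)

b = np.sqrt(6.0 / np.pi) * (dr / l)**3
for n_L in [25, 50, 100, 200]:
    P = closure_probability(n_L)
    print(f"n_L = {n_L:4d}   P_MC = {P:.4f}   b*n_L^(-3/2) = {b * n_L**-1.5:.4f}")
```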
Readings
1. M. Rubinstein and R. H. Colby, Polymer Physics. (Oxford University Press, New York, 2003).
2. K. Dill and S. Bromberg, Molecular Driving Forces: Statistical Thermodynamics in Biology, Chemistry, Physics, and Nanoscience. (Taylor & Francis Group, New York, 2010).
3. C. R. Cantor and P. R. Schimmel, Biophysical Chemistry Part III: The Behavior of Biological Macromolecules. (W. H. Freeman, San Francisco, 1980).
4. R. Phillips, J. Kondev, J. Theriot and H. Garcia, Physical Biology of the Cell, 2nd ed. (Taylor & Francis Group, New York, 2012).
5. P. J. Flory, Principles of Polymer Chemistry. (Cornell University Press, Ithaca, 1953).
Polymer lattice models refer to models that represent chain configurations through the placement of a chain of connected beads onto a lattice. These models are particularly useful for describing the configurational entropy of a polymer and excluded volume effects. However, one can also explicitly enumerate how energetic interactions between beads influences the probability of observing a particular configuration. At a higher level, models can be used to describe protein folding and DNA hybridization.
________________________________________
1. K. Dill and S. Bromberg, Molecular Driving Forces: Statistical Thermodynamics in Biology, Chemistry, Physics, and Nanoscience. (Taylor & Francis Group, New York, 2010); S. F. Sun, Physical Chemistry of Macromolecules: Basic Principles and Issues, Array ed. (J. Wiley, Hoboken, N.J., 2004), Ch. 4.
08: Polymer Lattice Models
Entropy of Single Polymer Chain
Calculate the number of ways of placing a single homopolymer chain with $n$ beads on a lattice. Place beads by describing the number of ways of adding a bead to the end of a growing chain:
A random walk would correspond to the case where we allow the chain to walk back on itself. Then the expression is $\Omega_P = M z^{n - 1}$
Note the mapping of terms in $\Omega_P = M z (z - 1)^{n - 2}$ onto $\Omega_P = \Omega_{trans} \Omega_{rot} \Omega_{conf}$.
$\text{ For } n \to \infty \ \ \ M \gg n \ \ \ \Omega_P \approx M(z - 1)^{n - 1}\nonumber$
$\begin{array} {rcl} {S_p} & = & {k_B \ln \Omega_P} \ {} & = & {k_B ((n - 1) \ln (z - 1) + \ln M)} \end{array} \nonumber$
This expression assumes a dilute polymer solution, in which we neglect excluded volume, except for the preceding segment in the continuous chain.
8.02: Self-Avoiding Walks
To account for excluded volumes, one can enumerate polymer configurations in which no two beads occupy the same site. Such configurations are called self-avoiding walks (SAWs). Theoretically it is predicted that the number of configurations for a random walk on a cubic lattice should scale with the number of beads as $\Omega_p (n) \propto z^n n^{\gamma - 1}$, where $\gamma$ is a constant which is equal to 1 for a random walk. By explicitly evaluating self-avoiding walks (SAWs) on a cubic lattice it can be shown that
$\Omega_p (n) = 0.2 \alpha^n n^{\gamma - 1}\nonumber$
where $\alpha = 4.68$ and $\gamma = 1.16$, and the chain entropy is
$S_p (n) = k_B [n \ln \alpha + (\gamma - 1) \ln n - 1.6].\nonumber$
Comparing this expression with our first result $\Omega_P = Mz(z - 1)^{n - 2}$, we note that for a random walk on a cubic lattice $\alpha = z = 6$; when we exclude only the back step that would place the next bead atop the preceding one, $\alpha = (z - 1) = 5$; and the numerically determined value for the self-avoiding walk is $\alpha = 4.68$.
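The scaling form can be checked directly by brute-force enumeration for short chains. The sketch below (Python; enumeration is only feasible for small $n$, and the prefactor depends on counting conventions, e.g., whether all first-step directions are included) counts SAWs on the cubic lattice and reports the ratio $\Omega_p(n)/[\alpha^n n^{\gamma-1}]$, which should level off at a constant if the scaling form holds.

```python
# Brute-force enumeration of self-avoiding walks (SAWs) on a simple cubic lattice.
# Only feasible for short chains; used to check the scaling Omega_p(n) ~ alpha^n n^(gamma-1).
STEPS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def count_saws(n):
    """Count n-step self-avoiding walks starting from the origin."""
    def extend(pos, visited, steps_left):
        if steps_left == 0:
            return 1
        total = 0
        for dx, dy, dz in STEPS:
            nxt = (pos[0] + dx, pos[1] + dy, pos[2] + dz)
            if nxt not in visited:           # self-avoidance: reject occupied sites
                visited.add(nxt)
                total += extend(nxt, visited, steps_left - 1)
                visited.remove(nxt)
        return total
    return extend((0, 0, 0), {(0, 0, 0)}, n)

alpha, gamma = 4.68, 1.16
for n in range(1, 9):
    omega = count_saws(n)
    ratio = omega / (alpha**n * n**(gamma - 1))
    print(f"n = {n}   SAWs = {omega:8d}   Omega / (alpha^n n^(gamma-1)) = {ratio:.3f}")
```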
_______________________________________
2. C. Vanderzande, Lattice Models of Polymers (Cambridge University Press, Cambridge, UK, 1998).
8.03: Conformational Changes with Temperature
Four bead polymer on a two‐dimensional lattice
Place the polymer on a two-dimensional square lattice ($z = 4$) with $n = 4$ beads (with distinguishable end beads):
Configurational Partition Function
Number of thermally accessible microstates.
\begin{align*} Q & = (q_{conf})^N \[4pt] q_{conf} & = \underbrace{\sum_{i\ states = 1}^9 e^{-E_i/kT}}_{\text{sum over microstates}} \[4pt] &= \underbrace{\sum_{j\ levels = 1}^2 g_j e^{-E_j/kT}}_{\text{sum over energy levels}} \[4pt] & = 2 + 7e^{-\varepsilon /kT} \end{align*}
Probability of Being "Folded"
Fraction of molecules in the folded state
$P_{fold} = \dfrac{g_{fold} e^{-E_{fold}/kT}}{q_{conf}} = \dfrac{2}{2 + 7e^{-\varepsilon /kT}} \nonumber$
Mean End-to-End Distance
$\begin{array} {rcl} {\langle r_{ee} \rangle } & = & {\sum_{i = 1}^{9} \dfrac{r_i e^{-E_i/kT}}{q_{conf}}} \ {} & = & {\dfrac{(1)(2) + (\sqrt{5}) 6e^{-\varepsilon /kT} + 3e^{-\varepsilon /kT}}{q_{conf}}} \ {} & = & {\dfrac{2 + (6\sqrt{5} + 3) e^{-\varepsilon /kT}}{q_{conf}}} \end{array} \nonumber$
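The expressions above are easy to evaluate numerically. The short sketch below (Python; temperature is expressed in units of $\varepsilon/k_B$, and the values chosen are arbitrary) computes $P_{fold}$ and $\langle r_{ee} \rangle$ (in units of the lattice spacing) over a range of temperatures, showing the thermal unfolding of this toy model.

```python
import numpy as np

# Thermal behavior of the four-bead lattice polymer: 2 folded microstates (E = 0)
# and 7 extended microstates (E = eps). Temperatures in units of eps/k_B.
kT = np.array([0.2, 0.5, 1.0, 2.0, 5.0])

boltz = np.exp(-1.0 / kT)                 # exp(-eps / k_B T)
q_conf = 2.0 + 7.0 * boltz
P_fold = 2.0 / q_conf
r_ee = (2.0 + (6.0 * np.sqrt(5.0) + 3.0) * boltz) / q_conf   # in units of the lattice spacing

for T, p, r in zip(kT, P_fold, r_ee):
    print(f"k_BT/eps = {T:4.1f}   P_fold = {p:.3f}   <r_ee> = {r:.3f}")
```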
Also, we can access other thermodynamic quantities:
$F = -k_B T \ln Q \ \ \ \ \ U = \langle E \rangle = k_B T^2 \left (\dfrac{\partial \ln Q}{\partial T} \right )_{V, N} \nonumber$
$S = - \left (\dfrac{\partial F}{\partial T} \right )_{V, N} = k_B \ln Q + k_B T \left (\dfrac{\partial \ln Q}{\partial T} \right )_{V, N} \nonumber$

8.04: Flory-Huggins Model of Polymer Solutions
Let’s begin by defining the variables for the lattice:
• $M$: total number of lattice cells
• $N_P$: number of polymer molecules
• $n$: number of beads per polymer
• $N_S$: number of solvent cells
• $nN_P$ = total number of polymer beads
The total number of lattice sites is then composed of the fraction of sites occupied by polymer beads and the remaining sites, which we consider occupied by solvent:
$M = nN_P + N_S\nonumber$
Volume fractions of solvent and polymer:
$\phi_S = \dfrac{N_S}{M} \ \ \ \ \phi_P = \dfrac{nN_P}{M} \ \ \ \ \phi_S + \phi_P = 1 \nonumber$
The mole fraction of polymer:
$x_P = \dfrac{N_P}{N_S + N_P}\nonumber$
$x_P$ is small even if the volume fraction is high.
Excluded Volume for Single Polymer Chain
Generally, excluded volume is difficult to account for if you don’t want to enumerate configurations explicitly, as in self-avoiding walks. However, there is a mean field approach we can use to account for excluded volume.
A better estimate for chain configurations that partially accounts for excluded volume:
Large $n$:
$\Omega_P \approx \left (\dfrac{z - 1}{M} \right )^{n - 1} \dfrac{M!}{(M - n)!} \nonumber$
Entropy of Multiple Polymer Chains
For $N_P$ chains, we count growth of chains by adding beads one at a time to all growing chains simultaneously.
1) First bead. The number of ways for placing the $1^{\text{st}}$ bead for all chains:
2) Place the second bead on all chains. We assume the solution is dilute and neglect collisions between chains.
3) For placing the $n^{\text{th}}$ bead on $N_P$ growing chains. Here we neglect collisions between site $i$ and sites $>(i+4)$, which is the smallest separation that one can clash on a cubic lattice.
$V^{(n)} = \left (\dfrac{z - 1}{M} \right )^{N_P(n - 1)} \dfrac{(M - N_P)!}{(M - n \cdot N_P)!} \nonumber$
4) Total number of configurations of $N_P$ chains with $n$ beads:
Entropy of Polymer Solution
Entropy of polymer/solvent mixture:
$S_{\text{mix}} = k_B \ln \Omega_P\nonumber$
Calculate entropy of mixing:
The pure polymer has many possible entangled configurations $\Omega_P^0$, and therefore a lot of configurational entropy: $S_{\text{polymer}}^0$. But we can calculate $\Omega_P^0$ just by using the formula for $\Omega_P$ with the number of cells set to the number of polymer beads $M = nN_P$.
$\Omega_P^0 = \left (\dfrac{z - 1}{N_P \cdot n} \right )^{N_P(n - 1)} \dfrac{(N_P \cdot n)!}{N_P!} \nonumber$
$\dfrac{\Omega_P}{\Omega_P^0} = \left (\dfrac{N_P \cdot n}{M} \right )^{N_P (n - 1)} \dfrac{M!}{N_S!(N_P \cdot n)!}$
Since $\Delta S_{\text{mix}} = k_B \ln \dfrac{\Omega_P}{\Omega_P^0}$
$\begin{array} {rcl} {\Delta S_{\text{mix}}} & = & {-k_B N_S \ln \left (\dfrac{N_S}{M} \right) - k_B N_P \ln \left (\dfrac{N_P \cdot n}{M} \right)} \ {} & = & {-Mk_B \left (\phi_S \ln \phi_S + \dfrac{\phi_P}{n} \ln \phi_P \right )} \end{array} \nonumber$
where the volume fractions are:
$\phi_S = \dfrac{N_S}{M} \ \ \ \ \ \ \ \ \ \phi_P = \dfrac{nN_P}{M} = 1 - \phi_S \nonumber$
Note that for $n = 1$, we recover the original lattice model of a fluid.
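To see how chain connectivity suppresses the mixing entropy, the sketch below (Python; the grid of volume fractions is arbitrary) evaluates $\Delta S_{\text{mix}}/Mk_B$ from the expression above for several chain lengths $n$.

```python
import numpy as np

# Flory-Huggins entropy of mixing per lattice site,
#   dS_mix / (M k_B) = -[ phi_S ln(phi_S) + (phi_P / n) ln(phi_P) ],
# evaluated for several chain lengths n. Setting n = 1 recovers the simple lattice fluid.
phi_P = np.linspace(0.01, 0.99, 99)
phi_S = 1.0 - phi_P

for n in [1, 10, 100]:
    dS = -(phi_S * np.log(phi_S) + (phi_P / n) * np.log(phi_P))
    i = dS.argmax()
    print(f"n = {n:3d}   max dS_mix/(M k_B) = {dS[i]:.3f} at phi_P = {phi_P[i]:.2f}")
```

Longer chains contribute less translational entropy per site, so the maximum mixing entropy drops and shifts toward higher polymer volume fraction.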
8.05: Polymer-Solvent Interactions
• Use same strategy as lattice model of a fluid.
• Considering polymer ($P$) and solvent ($S$) cells:
• Number of solvent cell contacts:
$zN_S = 2m_{SS} + m_{SP}\nonumber$
• Number of polymer cell contacts:
• Mean field approximation: Substitute the average number of solvent/polymer contacts.
• Polymers expand in good solvents, collapse in bad solvents, retain Gaussian random coil behavior in neutral solvents (θ solvents).
Good solvents ($\chi < 0.5$): $\sqrt{\langle r^2 \rangle} \sim N^{3/5}$ (chain swells)
Bad solvents ($\chi > 0.5$): $\sqrt{\langle r^2 \rangle} \sim N^{1/3}$ (collapse to a globule)
Theta solvents ($\chi = 0.5$): $\sqrt{\langle r^2 \rangle} \sim N^{1/2} \ell = R_0$ (Gaussian random coil)
An alternative approach to describing macromolecular conformation, which applies to both equilibrium and non-equilibrium phenomena, uses a mechanical description of the forces acting on the chain. Of course, forces are present everywhere in biology. Near equilibrium these exist as local fluctuating forces that induce thermally driven excursions from the free-energy minimum, and biological systems use non-equilibrium force-generating processes driven by external energy sources (such as ATP) in transport, signaling, and many other processes. Examples include the directed motion of molecular motors along actin and microtubules, and the allosteric transmembrane communication of a ligand binding event in GPCRs.
Our focus in this section is on how externally applied forces influence macromolecular conformation, and on the experiments that allow careful application and measurement of forces on single macromolecules. These experiments are performed to understand mechanical properties and stress/strain relationships. They can also be unique reporters of biological function involving the strained molecules.
Single Molecule Force Application Experiments
| Method | Force range (pN) | Displacement (nm) | Loading rate (pN/s) | Notes |
|---|---|---|---|---|
| Optical tweezers | 0.1-100 | 0.1-$10^5$ | 5-10 | Near equilibrium |
| AFM | 10-$10^4$ | 0.5-$10^4$ | 100-1000 | Non-equilibrium! |
| Stretching under flow | 0.1-1000 | 10-$10^5$ | 1-100 | Steady-state force |
| MD simulations | Arb. | < 10 | $10^5$-$10^7$! | |
09: Macromolecular Mechanics
Here we will focus on the stretching and extension behavior of macromolecules. The work done on the system by an external force to extend a chain is
$w = -\int \vec{f}_{ext} \cdot d\vec{x} \nonumber$
Work ($w$) is a scalar, while force ($\vec{f}$) and displacement ($\vec{x}$) are vectors. On extension, the external force is negative, leading to a positive value of w, meaning work was done on the system. Classical mechanics tells us that the force is the negative gradient of the potential one is stretching against $(\vec{f} = -\partial U/ \partial x)$, but we will have to work with free energy and the potential of mean force since the configurational entropy of the chain is important. Since the change in free energy for a process is related to the reversible work needed for that process, we can relate the force along a reversible path to the free energy through
$\vec{f}_{rev} = - \left( \dfrac{\partial G}{\partial x} \right)_{p, T, N} \nonumber$
This describes the reversible process under which the system always remains at equilibrium, although certainly it is uncomfortable relating equilibrium properties ($G$) to nonequilibrium ones such as pulling a protein apart. For an arbitrary process, $ΔG ≤ w$.
Jarzynski Equality
A formal relationship between the free energy difference between two states and the work required to move the system from initial to final state has been proposed. The Jarzynski equality states
$e^{-\Delta G/k_B T} = \langle e^{-w/k_B T_{in}} \rangle_{path} \nonumber$
Here one averages the Boltzmann-weighted work in the quantity at right over all possible paths connecting the initial and final states, setting $T$ to the initial temperature ($T_{in}$), and one obtains the Boltzmann-weighted exponential in the free energy. This holds for irreversible processes! Further, since one can show that $\langle e^{-w/k_BT} \rangle \geq e^{-\langle w \rangle /k_BT}$, we see that the average work done to move the system between two states is related to the free energy through $\langle w \rangle \geq \Delta G$. This reinforces what we know about the macroscopic nature of thermodynamics, but puts an interesting twist on it: Although the average work done to change the system will equal or exceed the free energy difference, for any one microscopic trajectory, the work may be less than the free energy difference. This has been verified by single molecule force/extension experiments.
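The equality is simple to verify numerically for a toy model. The sketch below (Python; an overdamped Brownian particle in a harmonic trap whose center is dragged at constant speed is assumed as the model system, with arbitrary parameters) estimates $\langle w \rangle$ and $\langle e^{-w/k_BT} \rangle$ over many stochastic trajectories. Because the trap shape is unchanged, $\Delta G = 0$, so the Jarzynski average should return $\approx 1$ even though the mean work is positive, and a fraction of individual trajectories do less work than $\Delta G$.

```python
import numpy as np

# Toy numerical check of the Jarzynski equality: an overdamped Brownian particle in a
# harmonic trap U(x, lam) = (k/2)(x - lam)^2 whose center lam is dragged at constant
# speed. The trap shape is unchanged, so Delta G = 0 and <exp(-w/kT)> should equal 1
# even though <w> > 0 for finite-rate pulling. All parameters are illustrative.
rng = np.random.default_rng(2)
k, gamma, kT = 1.0, 1.0, 1.0              # trap stiffness, friction, thermal energy
x_f, t_f, dt = 2.0, 5.0, 2e-3             # pulling distance, pulling time, time step
n_traj = 50_000
n_steps = int(t_f / dt)
dlam = x_f / n_steps

x = rng.normal(0.0, np.sqrt(kT / k), n_traj)   # equilibrium start in the trap at lam = 0
w = np.zeros(n_traj)
lam = 0.0

for _ in range(n_steps):
    # work increment: move the trap with x held fixed, dw = (dU/dlam) dlam = -k (x - lam) dlam
    w += -k * (x - lam) * dlam
    lam += dlam
    # overdamped Langevin step in the instantaneous potential
    x += -(k / gamma) * (x - lam) * dt + np.sqrt(2.0 * kT * dt / gamma) * rng.normal(size=n_traj)

print("<w>/k_BT          =", round(w.mean() / kT, 3))           # > 0: dissipation
print("<exp(-w/k_BT)>    =", round(np.exp(-w / kT).mean(), 3))  # ~ 1 = exp(-DeltaG/k_BT)
print("fraction with w<0 =", round(float((w < 0).mean()), 3))   # paths with w < Delta G (= 0)
```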
Statistical Mechanics of Work
Let’s relate work and the action of a force to changes in statistical thermodynamic variables.1 The internal energy is
$U = \langle E \rangle = \sum_j P_j E_j \nonumber$
and therefore, the change in energy in a thermodynamic process is
$dU = d\langle E \rangle = \sum_j E_j dP_j + \sum_j P_j d E_j \nonumber$
Note the close relationship between this expression and the First Law:
$dU = đw+ đq \nonumber$
We can draw parallels between the two terms in these expressions:
\begin{aligned} &đq_{rev} = TdS \qquad \qquad \, \, \longleftrightarrow \sum_j E_jdP_j \ &dw \cong pdV \, or \, f \, dx \qquad \longleftrightarrow \sum_j P_jdE_j \end{aligned}
Heat is related to the ability to change populations of energetically different states, whereas work is related to the ability to change the energy levels with an external force.
__________________________________
1. T. L. Hill, An Introduction to Statistical Thermodynamics. (Addison-Wesley, Reading, MA, 1960), pp. 11–13, 66–77.

Worm-like Chain
The worm-like chain (WLC) is perhaps the most commonly encountered models of a polymer chain when describing the mechanics and the thermodynamics of macromolecules. This model describes the behavior of a thin flexible rod, and is particularly useful for describing stiff chains with weak curvature, such as double stranded DNA. Its behavior is only dependent on two parameters that describe the rod: $\kappa_b$ its bending stiffness, and $L_C$, the contour length.
Let’s define the variables in this WLC model:
• $s$: the distance separating two points along the contour of the rod
• $\vec{r}(s)$: the position of the chain at contour distance $s$
• $\vec{t} = \dfrac{\partial \vec{r}}{\partial s}$: tangent unit vector
• $\hat{n}$: normal unit vector (the direction in which the chain bends)
• $\dfrac{\partial \vec{t}}{\partial s} = \dfrac{\hat{n}}{R}$: the curvature of the chain, where $R$ is the local radius of curvature
The worm-like chain is characterized by:
• Persistence length, which is defined in terms of tangent vector correlation function:
$g(s) = \langle \vec{t}(0) \cdot \vec{t}(s) \rangle = \exp [-|s|/\ell_p] \label{9.2.1}$
• Bending energy: The energy it takes to bend the tangent vectors of a segment of length s can be expressed as
$U_b = \dfrac{1}{2} \kappa_b \int^L_0 ds \left( \dfrac{\partial \vec{t}}{\partial s} \right)^2$
Bending Energy
Let’s evaluate the bending energy of the WLC, making some simplifying assumptions, useful for fairly rigid rods. If we consider short distances over which the curvature is small, then $\theta \approx s/R$ and
$\dfrac{\partial \vec{t}}{\partial s} \approx \dfrac{d \theta}{ds} = \dfrac{1}{R} \label{9.2.3}$
Then we can express the bending energy in terms of an angle:
$U_b \approx \dfrac{1}{2s} \kappa_b \theta^2$
Note the similarity of this expression to the energy needed to displace a particle bound in a harmonic potential with force constant $k$: $U = \tfrac{1}{2}kx^2$. The bending energy can be used to obtain thermodynamic averages. For instance, we can calculate the variance in the tangent vector angles as a function of $s$ (spherical coordinates):
\begin{align} \langle \theta^2(s) \rangle &= \dfrac{1}{Q_{bend}} \int^{2\pi}_0 d \phi \int^{\pi}_0 d\theta \, \sin \theta \, \theta^2 \, e^{-U_b(\theta )/k_BT} \ &= \dfrac{2sk_BT}{\kappa_b} \end{align}
Here we have used $\sin \theta \approx \theta$. The partition function for the bending of the rod is:
$Q_{bend} = \int^{2\pi}_0 d \phi \int^{\pi}_0 d\theta \, \sin \theta \, e^{-U_b(\theta)/k_BT} \nonumber$
Persistence Length
To describe the persistence length of the WLC, we recognize that Equation \ref{9.2.1} can be written as $g(s) = \langle \cos \theta (s) \rangle$ and expand this for small $θ$:
\begin{align*} g(s) &= \langle \cos \theta (s) \rangle \[4pt] &= \langle 1 - \dfrac{\theta^2 (s)}{2} +... \rangle \[4pt] &\approx 1 - \dfrac{1}{2} \langle \theta^2(s) \rangle \end{align*}
and from Equation \ref{9.2.3} we can write:
$g(s) \approx 1-\dfrac{sk_BT}{\kappa_b} \nonumber$
If we compare this to an expansion of the exponential in Equation \ref{9.2.1}
$g(s) = e^{-|s|/\ell_p} \approx 1-\dfrac{|s|}{\ell_p} \nonumber$
we obtain an expression for the persistence length of the worm-like chain
$\ell_p = \dfrac{\kappa_b}{k_BT} \nonumber$
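The exponential decay of the tangent correlation can be checked with a simple simulation. The sketch below (Python; a discretized chain with segment length $a$ is assumed, and the thermal bend at each joint is drawn in the small-angle approximation, where $P(\theta) \propto \theta\, e^{-\kappa_b \theta^2/2ak_BT}$ is a Rayleigh distribution) builds chains segment by segment and compares the measured $g(s) = \langle \vec{t}(0)\cdot\vec{t}(s) \rangle$ with $e^{-s/\ell_p}$.

```python
import numpy as np

# Discretized worm-like chain: successive tangents differ by a small thermal bend with
# <theta^2> = 2 a k_BT / kappa_b per joint (small-angle / Rayleigh approximation).
# The measured tangent correlation g(s) is compared with exp(-s / l_p), l_p = kappa_b / k_BT.
# Segment length a and l_p / a are illustrative choices.
rng = np.random.default_rng(3)
a, lp, n_seg, n_chains = 1.0, 20.0, 100, 3000
c = lp / a                                   # kappa_b / (a k_BT)

corr = np.zeros(n_seg)
for _ in range(n_chains):
    t = np.array([0.0, 0.0, 1.0])
    tangents = [t]
    for _ in range(n_seg - 1):
        theta = np.sqrt(-2.0 * np.log(1.0 - rng.uniform()) / c)   # Rayleigh-distributed bend
        phi = rng.uniform(0.0, 2.0 * np.pi)
        # build an orthonormal frame around the current tangent
        ref = np.array([1.0, 0.0, 0.0]) if abs(t[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        e1 = ref - np.dot(ref, t) * t
        e1 /= np.linalg.norm(e1)
        e2 = np.cross(t, e1)
        t = np.cos(theta) * t + np.sin(theta) * (np.cos(phi) * e1 + np.sin(phi) * e2)
        tangents.append(t)
    tangents = np.array(tangents)
    corr += tangents @ tangents[0]           # t(0) . t(s) for s = 0 ... (n_seg - 1) a

corr /= n_chains
for k in [0, 5, 10, 20, 40]:
    print(f"s = {k * a:5.1f}   g(s) = {corr[k]:.3f}   exp(-s/lp) = {np.exp(-k * a / lp):.3f}")
```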
End‐to‐End Distance
The end-to-end distance for the WLC is obtained by integrating the tangent vector over one contour length:
$\vec{R} = \int^{L_C}_{0} ds \, \vec{t} (s) \nonumber$
So the variance in the end-to-end distance is determined from the tangent vector autocorrelation function, which we take to have an exponential form:
\begin{aligned} \langle R^2 \rangle &= \langle R \cdot R\rangle \ &= \int^{L_C}_0 ds \int^{L_C}_0 ds' \langle t(s)t(s') \rangle \ &= \int^{L_C}_0 ds \int ^{L_C}_{0} ds' \, e^{-|s-s'|/\ell_p} \end{aligned}
$\langle R^2 \rangle = 2\ell_p L_C - 2\ell_p^2 \left( 1-e^{-L_C/\ell_p} \right) \nonumber$
Let’s examine this expression in two limits:
$\text{rigid:}\quad \qquad \ell_p \gg L_C \qquad \langle R^2 \rangle \approx L_C^2$
$\text{flexible:} \qquad \ell_p \ll L_C \qquad \langle R^2 \rangle \approx 2L_C \ell_p \rightarrow n_e \ell_e^2 \rightarrow \therefore 2\ell_p = \ell_e$
Example $1$: DNA Bending in Nucleosomes
What energy is required to wrap DNA around the histone octamer in the nucleosome? Double stranded DNA is a stiff polymer with a persistence length of $\ell_p \approx 50$ nm, but the nucleosome has a radius of ~4.5 nm.
Solution
From $\ell_p$ and kBT = 4.1 pN nm, we can determine the bending rigidity using:
$\kappa_b = \ell_p k_BT = (50\text{ nm})(4.1 \text{ pN nm}) = 205\text{ pN nm}^2$
Then the energy required to bend dsDNA into one full loop is
\begin{aligned} U_b &\cong \dfrac{\kappa_b \theta^2}{2s} \approx \dfrac{\kappa_b (2\pi)^2}{2(2\pi R)} = \dfrac{\pi \kappa_b}{R} \ &= \dfrac{\pi(205 \text{ pN nm}^2 )}{4.5 \text{ nm}} = 143 \text{ pN nm} \ &= 35k_BT = 15 \text{ kcal (mol loops)}^{-1} \ &\qquad \text{or } 0.15 \text{ kcal basepair}^{-1} \end{aligned}
Continuum Mechanics of a Thin Rod1
The worm-like chain is a model derived from the continuum mechanics of a thin rod. In addition to bending, a thin rod is subject to other distortions: stretch, twist, and writhe.
Let’s summarize the energies required for these deformations:
Deformation variables:
• s: Position along contour of rod
• L0: Unperturbed length of rod
• $\vec{t}$: Tangent vector.
• $d\vec{t}/ds$ : curvature
• Ω: Local twist
The energy for distorting the rod is
$U = U_{st} +U_b+U_{tw} \nonumber$
In the harmonic approximation for the restoring force, we can write these contributions as
$U= \dfrac{1}{2} \int^L_{L_0} \kappa_{st} s ds \, + \dfrac{1}{2} \int^L_{L_0} \kappa_b \left( \dfrac{d\vec{t}}{ds} \right)^2 ds + \dfrac{1}{2} \int^L_{L_0} \kappa_{tw} \Omega^2 ds \nonumber$
The force constants, with representative values for dsDNA, are:
Stretching: $\kappa_{st} = \kappa_{st-entropic} + \kappa_{st-enthalpic}$
$\kappa_{st-entropic} \approx 3k_BT/ \ell_p L_c$
Bending: $\kappa_b$
$\kappa_b \approx 205 \text{ pN nm}^2$
Twisting: $\kappa_{tw}$
$\kappa_{tw} \approx (86 \text{ nm})k_BT = 353 \text{ pN nm}^2$
Writhe
An additional distortion in thin rods is writhe, which refers to coupled twisting and coiling, and is an important factor in DNA supercoiling. Twisting of a rod can induce in-plane looping of the rod, for instance as encountered with trying to coil a garden hose. The writhe number W of a rod refers to the number of complete loops made by the rod. The writhe can be positive or negative depending on whether the rod crosses over itself from right-to-left or left-to-right. The twist number T is the number of Ω = 2π rotations of the rod, and can also be positive or negative.
The linking number L = T+W is conserved in B-form DNA, so that twist can be converted into writhe and vice-versa. Since DNA in cells is naturally negatively supercoiled in nucleosomes, topoisomerases are used to change the linking number by breaking and reforming the phosphodiester backbone after relaxing the twist. Negatively supercoiled DNA can be converted into circular DNA by local bubbling (unwinding into single strands).
_____________________________________
1. D. H. Boal, Mechanics of the Cell, 2nd ed. (Cambridge University Press, Cambridge, UK, 2012).
The Entropic Spring
To extend a polymer requires work. We calculate the reversible work to extend the macromolecule from the difference in free energy of the chain held between the initial and final state. This is naturally related to the free energy of the system as a function of polymer end-to-end distance:
$w_{stretch} = F(r) - F(r_0) = - \int_{r_0}^{r} \vec{f_{rev}} \cdot d \vec{r} \nonumber$
For an ideal chain, the free energy depends only on the entropy of the chain: $F = -TS$. There are fewer configurational states available to the chain as you stretch to larger extension. The number of configurational states available to the system can be obtained by calculating the conformational partition function, $Q_{conf}$. For stretching in one-dimension, the Helmholtz free energy is:
$\begin{array} {rcl} {dF} & = & {-pdV - SdT + f\cdot dx} \ {F} & = & {-k_B T \ln Q_{conf}} \ {S_{conf}} & = & {k_B \ln Q_{conf}} \end{array}\nonumber$
$f = - \left (\dfrac{\partial F}{\partial x} \right )_{V, T, N} = k_B T \dfrac{\partial \ln Q_{conf}}{\partial x} = T \dfrac{\partial S_{conf}}{\partial x} \label{eq9.3.1}$
When you increase the end-to-end distance, the number of configurational states available to the system decreases. This requires an increasingly high force as the extension approaches the contour length. Note that more force is needed to stretch the chain at higher temperature.
Since this is a freely joined chain and all microstates have the same energy, we can equate the conformational partition function of a chain at a particular extension $x$ with the probability density for the end-to-end distances of that chain
$Q_{conf} \to P_{fjc} (r)\nonumber$
Although we are holding the ends of the chain at a fixed separation and stretching with the ends restrained along one direction ($x$), the probability distribution function takes the three-dimensional form to properly account for all chain configurations: $P_{conf} (r) = P_0 e^{-\beta^2 r^2}$ with $\beta^2 = 3/(2n \ell^2)$ and $P_0 = \beta^3/\pi^{3/2}$ is a constant. Then
$\ln P_{conf} (r) = -\beta^2 r^2 + \ln P_0 \nonumber$
The force needed to extend the chain can be calculated from eq. ($\ref{eq9.3.1}$) after substituting $r^2 = x^2 + y^2 + z^2$, which gives
$f = -2\beta^2 k_B Tx = -\kappa_{st} x\nonumber$
So we have a linear relationship between force and displacement, which is classic Hooke’s Law spring with a force constant $\kappa_{st}$ given by
$\kappa_{st} = \dfrac{3k_B T}{n\ell^2} = \dfrac{3k_B T}{\langle r^2 \rangle_0} \nonumber$
Here $\langle r^2 \rangle_0$ refers to the mean square end-to-end distance for the FJC in the absence of any applied forces. Remember: $\langle r^2 \rangle_0 = n \ell^2 = \ell L_C$. In the case that all of the restoring force is due to entropy, then we call this an entropic spring $\kappa_{ES}$.
$\kappa_{ES} = -T \left (\dfrac{\partial^2 S}{\partial x^2} \right )_{N, V, T}\nonumber$
This works for small forces, while the force is reversible. Notice that $\kappa_{ES}$ increases with temperature -- as should be expected for entropic restoring forces.
Example: Stretching DNA1
At low force:
dsDNA $\to \kappa_{st} = 5\ pN/nm$
ssDNA $\to \kappa_{st} = 160\ pN/nm \to \text{more entropy/more force}$
At higher extension you asymptotically approach the contour length.
Force/Extension of a Random Walk Polymer
Let’s derive the force extension behavior for a random walk polymer in one dimension. The end-to-end distance is $r$, the segment length is $\ell$, and the total number of segments is $n$.
For any given $r$, the number of configurations available to the polymer is:
$\Omega = \dfrac{n!}{n_+ ! n_- !}\nonumber$
This follows from recognizing that the extension of a random walk chain in one dimension is related to the difference between the number of segments that step in the positive direction, $n_+$, and those that step in the negative direction, $n_-$. The total number of steps is $n = n_+ + n_-$. Also, the end-to-end distance can be expressed as
$r = (n_+ - n_-) \ell = (2n_+ - n) \ell = (n - 2n_-) \ell \label{eq9.3.2}$
$n_{\pm} = \dfrac{1}{2} \left (n \pm \dfrac{r}{\ell} \right ) \ \ \ \ \ \ \dfrac{\partial n_{\pm}}{\partial r} = \pm \dfrac{1}{2\ell}\nonumber$
Then we can calculate the free energy of the random walk chain that results from the entropy of the chain, i.e., the degeneracy of configurational states at any extension. This looks like an entropy of mixing calculation:
$\begin{array} {rcl} {F} & = & {-k_B T \ln \Omega} \ {} & = & {-k_B T (n \ln n - n_+ \ln n_+ - n_- \ln n_-)} \ {} & = & {nk_B T (\phi_+ \ln \phi_+ + \phi_- \ln \phi_-)} \end{array} \nonumber$
$\phi_{\pm} = \dfrac{n_{\pm}}{n} = \dfrac{1}{2} (1 \pm x)\nonumber$
Here the fractional end-to-end extension of the chain is
$x = \dfrac{r}{L_C}$
Next we can calculate the force needed to extend the polymer as a function of $r$:
$f = -\dfrac{\partial F}{\partial r} \to \dfrac{\partial F}{\partial \phi_{\pm}} \dfrac{\partial \phi_{\pm}}{\partial r} \ \ \ \ \ \ \dfrac{\partial \phi_{\pm}}{\partial r} = \pm \dfrac{1}{2L_C} \nonumber$
Using eq. ($\ref{eq9.3.2}$)
$\begin{array} {rcl} {f} & = & {-nk_B T (\ln \phi_+ - \ln \phi_-) \left (\dfrac{1}{2L_C} \right )} \ {} & = & {-\dfrac{nk_B T}{2L_C} \ln \left (\dfrac{1 + x}{1 - x} \right )} \ {} & = & {-\dfrac{k_B T}{\ell} \dfrac{1}{2} \ln \left (\dfrac{1 + x}{1 - x} \right )} \end{array}\nonumber$
$f = -\dfrac{k_B T}{\ell} \text{tanh}^{-1} (x) \label{eq9.3.4}$
where $I$ used the relationship: $\ln \left (\dfrac{1 + x}{1 - x} \right ) = 2 \text{tanh}^{-1} (x)$. Note, here the forces are scaled in units of $k_B T/\ell$. For small forces $x \ll 1$, $\text{tanh}^{-1} (x) \approx x$ and eq. ($\ref{eq9.3.4}$) gives $f \approx \dfrac{k_B T}{\ell L_C} r$. This gives Hooke’s Law behavior with the entropic force constant expected for a 1D chain. For a 3D chain, we would expect: $f \approx \dfrac{3k_B T}{\ell L_C} r$. The spring constant scales with dimensionality.
The relationship between position, force, and the partition function
Now let's do this a little more carefully. From classical statistical mechanics, the partition function is
$Q = \int \int dr^{3N} dp^{3N} \exp (-H/k_B T)\nonumber$
Where $H$ is the Hamiltonian for the system. The average value for the position of a particle described by the Hamiltonian is
$\langle x \rangle = \dfrac{1}{Q} \int \int dr^3 dp^3 x \exp (-H/k_B T)\nonumber$
If the Hamiltonian takes the form
$H = -f \cdot x \nonumber$
Then
$\langle x \rangle = \dfrac{k_B T}{Q} \left (\dfrac{\partial Q}{\partial f} \right )_{V, T, N} = k_B T \left (\dfrac{\partial \ln Q}{\partial f} \right )_{V, T, N} \nonumber$
This describes the average extension of a chain if a force is applied to the ends.
Force/Extension Behavior for a Freely Jointed Chain
Making use of the expressions above and $Q = q^N$
$q_{conf} = \int \int dr^3 dp^3 e^{-U/kT} e^{\vec{f} \cdot \vec{r}/k_B T} \ \ \ \ \ \ \ \langle r \rangle = Nk_B T \left (\dfrac{\partial \ln q_{conf}}{\partial f} \right )_{U, r, n}\nonumber$
Here we also inserted a general Hamiltonian which accounts for the internal chain interaction potential and the force extending the chain: $H = U - \vec{f} \cdot \vec{r}$. For $N$ freely jointed chains with $n$ segments, we set $U \to 0$, and focus on the force exerted on every segment of the chain.
$\vec{f} \cdot \vec{r} = \sum_{i = 1}^{n} \vec{f} \cdot \vec{\ell_i} = f \ell \sum_{i = 1}^{n} \cos \theta_i\nonumber$
Treating the segments as independent and integrating over all $\theta$, we find that
$q_{conf} (f) = \dfrac{4\pi \text{sinh} \varphi}{\varphi} \nonumber$
$\langle r \rangle = n \ell \left [\text{coth} \varphi - \dfrac{1}{\varphi} \right ]$
where the unitless force parameter is
$\varphi = \dfrac{f \ell}{k_B T}$
As before, the magnitude of the force is expressed relative to $k_B T/\ell$. Note this calculation is for the average extension that results from a fixed force. If we want the force needed for a given average extension, then we need to invert the expression. Note that the functional form of this force-extension curve is different from what we found for the 1D random walk in eq. ($\ref{eq9.3.4}$). We do not expect the same form for these problems, since our random walk example was on a square lattice, and the FJC propagates radially in all directions.
Derivation
For a single polymer chain:
$\begin{array} {rcl} {q} & = & {\int \int dr^3 dp^3 e^{-U/k_B T} e^{f \cdot r/k_B T}} \ {P(r)} & = & {\dfrac{1}{q} e^{-U/k_B T} e^{f \cdot r/k_B T}} \ {\langle r \rangle} & = & {k_B T \left (\dfrac{\partial \ln q}{\partial f} \right )_U} \end{array}\nonumber$
In the case of the Freely Jointed Chain, set $U \to 0$.
$\vec{f} \cdot \vec{r} = \vec{f} \cdot \sum_{i =1}^{n} \vec{\ell_i} = f \ell \sum_{i = 1}^{n} \cos \theta_i \nonumber$
Decoupled segments:
$\begin{array} {rcl} {q} & \approx & {\int dr^3 \exp \left (\sum_i \dfrac{f \ell}{k_B T} \cos \theta_i \right )} \ {} & = & {(\int_{0}^{2\pi} \int_{0}^{\pi} \exp [\varphi \cos \theta] \sin \theta d \theta d \phi)^n} \ {} & = & {\left (\dfrac{4\pi \text{sinh} (\varphi)}{\varphi} \right )^n} \ {\langle r \rangle} & = & {k_B T \dfrac{\partial}{\partial f} \ln q} \ {} & = & {nk_B T \dfrac{\partial}{\partial f} \left [\ln \left \{\dfrac{4\pi \text{sinh} (\varphi)}{\varphi} \right \} \right ] \ \ \ \ \ \ \text{coth} (x) = \dfrac{e^x + e^{-x}}{e^x - e^{-x}}} \ {\langle r \rangle} & = & {n \ell [\text{coth} (\varphi) - \varphi^{-1}]} \ {\text{or } \langle x \rangle = \text{coth} (\varphi) - \varphi^{-1}} & \ & {\text{ The average fractional extension: } \langle x \rangle = \langle r \rangle / L_C} \end{array} \nonumber$
Now let’s look at the behavior of the expression for $\langle x \rangle$ -- also known as the Langevin function.
$\langle r \rangle = n \ell [\text{coth} (\varphi) - \varphi^{-1}]$
Looking at limits:
• Weak force $(\varphi \ll 1): f \ll k_B T/\ell$
Inserting and truncating the expansion: $\text{coth} \varphi = \dfrac{1}{\varphi} + \dfrac{1}{3} \varphi - \dfrac{1}{45} \varphi^3 + \dfrac{2}{945} \varphi^5 + \cdots$, we get
$\begin{array} {rcl} {\langle x \rangle} & = & {\dfrac{\langle r \rangle}{L_C} \approx \dfrac{1}{3} \varphi} \ {\langle r \rangle} & \approx & {\dfrac{1}{3} \dfrac{n \ell^2}{k_B T} f} \ {\text{or } \ \ \ f} & = & {\dfrac{3k_B T}{n \ell^2} \langle r \rangle = \kappa_{ES} \langle r \rangle} \end{array} \nonumber$
Note that this limit has the expected linear relationship between force and displacement, which is governed by the entropic spring constant.
• Strong force ($\varphi \gg 1$). $f \gg k_B T / \ell$ Taking the limit $\text{coth} (x) \to 1$.
$\langle r \rangle \simeq n \ell \left [1 - \dfrac{1}{\varphi} \right ] \ \ \ \ \ \ \ \ \lim_{f \to \infty} \langle r \rangle = \lim_{\varphi \to \infty} \langle r \rangle = L_C \text{ (the contour length)} \nonumber$
$\text{Or } f = \dfrac{k_B T}{\ell} \dfrac{1}{1 - \langle x \rangle} \text{ where } \langle x \rangle = \dfrac{\langle r \rangle}{L_C} \nonumber$
For strong force limit, the force extension behavior scales as, $x \sim 1 - f^{-1}$.
So, what is the work required to extend the chain?
At small forces, we can integrate over the linear force-extension behavior. Under those conditions, to extend from $r$ to $r+\Delta r$, we have
$w_{rev} = \int_0^{\Delta r} \kappa_{ES} r dr = \dfrac{3k_B T}{2n \ell^2} \Delta r^2\nonumber$
Force/Extension of Worm-like Chain
For the worm-like chain model, we found that the variance in the end-to-end distance was
$\langle r^2 \rangle = 2 \ell_p L_C - 2 \ell_p^2 (1 - e^{-L_C/\ell_p}) \label{eq9.3.8}$
where $L_C$ is the contour length, and the persistence length was related to the bending force constant as $\ell_p = \dfrac{\kappa_b}{k_B T}$. The limiting behavior for eq. ($\ref{eq9.3.8}$) is:
$\begin{array} {lclcrclcl} {\text{rigid:}} & \ \ & {\ell_p \gg L_C} & \ \ & {\langle r^2 \rangle} & \propto & {L_C^2} & \ \ & {} \ {\text{flexible:}} & \ \ & {\ell_p \ll L_C} & \ \ & {\langle r^2 \rangle} & \sim & {2L_C \ell_p} & \ \ & {\therefore \text{for WLC}} \ {} & \ \ & {} & \ \ & {} & = & {n_e \ell_e^2} & \ \ & {(2 \ell_p = \ell_e)} \end{array} \nonumber$
Following a similar approach to the FJC above, it is not possible to find an exact solution for the force-extension behavior of the WLC, but it is possible to obtain the force-extension behavior in the weak and strong force limits.
Setting $2\ell_p = \ell_e$, $\varphi = f\ell_e /k_B T$, and using the fractional extension $\langle x \rangle = \dfrac{\langle r \rangle}{L_C}$:
1. Weak force ($\varphi \ll 1$) Expected Hooke’s Law behavior
$f \ell_e \ll k_B T \ \ \ \ \ \ \ \ \ \ f = \dfrac{3k_B T}{\ell_e L_C} \langle r \rangle \longrightarrow \dfrac{f \ell_e}{k_B T} = 3\langle x \rangle \nonumber$
For weak force limit, the force extension behavior scales as, $x \sim f$.
2. Strong force ($\varphi \gg 1$)
$f \ell_e \gg k_B T \ \ \ \ \ \ \ \ \langle r \rangle = L_C \left (1 - \dfrac{1}{2\sqrt{\varphi}} \right ) \longrightarrow \dfrac{f \ell_e}{k_B T} = \dfrac{1}{4(1 - \langle x \rangle)^2} \nonumber$
For strong force limit, the force extension behavior scales as, $x \sim 1 - f^{-1/2}$.
An approximate expression for the combined result (from Bustamante):
$\dfrac{f\ell_p}{kT} = \dfrac{1}{4(1 - \langle x \rangle)^2} - \dfrac{1}{4} + \langle x \rangle$
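The sketch below (Python with SciPy; force is reported in units of $k_BT/\ell_e$, and $\ell = \ell_e = 2\ell_p$ is assumed so that the two models share the same entropic-spring limit) numerically inverts the FJC Langevin-function relation and evaluates the WLC interpolation formula, showing that the two models agree at low extension but that the WLC stiffens much more sharply as $\langle x \rangle \to 1$.

```python
import numpy as np
from scipy.optimize import brentq

# Compare the force-extension relations derived above.
#   FJC:  <x> = coth(phi) - 1/phi,   phi = f l / k_BT        (inverted numerically)
#   WLC:  f l_p / k_BT = 1/[4(1-x)^2] - 1/4 + x               (interpolation formula)
# Setting l = l_e = 2 l_p gives both models the same weak-force (entropic spring) limit,
# so forces are reported in units of k_BT / l_e.

def fjc_extension(phi):
    return 1.0 / np.tanh(phi) - 1.0 / phi

def fjc_force(x):
    """Force (in k_BT/l) needed for fractional extension x of a freely jointed chain."""
    return brentq(lambda phi: fjc_extension(phi) - x, 1e-9, 1e6)

def wlc_force(x):
    """WLC interpolation formula: force in units of k_BT/l_p."""
    return 1.0 / (4.0 * (1.0 - x)**2) - 0.25 + x

for x in [0.1, 0.3, 0.5, 0.7, 0.9, 0.95]:
    f_fjc = fjc_force(x)             # units of k_BT / l = k_BT / l_e
    f_wlc = 2.0 * wlc_force(x)       # convert k_BT/l_p -> k_BT/l_e  (l_e = 2 l_p)
    print(f"<x> = {x:.2f}   f_FJC = {f_fjc:8.2f}   f_WLC = {f_wlc:8.2f}   (k_BT / l_e)")
```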
_____________________________
1. A. M. van Oijen and J. J. Loparo, Single-molecule studies of the replisome, Annu. Rev. Biophys. 39, 429–448 (2010).
• 10.1: Continuum Diffusion
A significant fraction of how molecules move spatially in biophysics is described macroscopically by “diffusion” and microscopically through its counterpart “Brownian motion”. Diffusion refers to the phenomenon by which concentration and temperature gradients spontaneously disappear with time, and the properties of the system become spatially uniform. Brownian motion is also a spontaneous process observed in equilibrium and non-equilibrium systems.
• 10.2: Solving the Diffusion Equation
• 10.3: Steady-State Solutions
10: Diffusion
We are now going to start a new set of topics that involve the dynamics of molecular transport. A significant fraction of how molecules move spatially in biophysics is described macroscopically by “diffusion” and microscopically through its counterpart “Brownian motion”. Diffusion refers to the phenomenon by which concentration and temperature gradients spontaneously disappear with time, and the properties of the system become spatially uniform. As such, diffusion refers to the transport of mass and energy in a nonequilibrium system that leads toward equilibrium. Brownian motion is also a spontaneous process observed in equilibrium and non-equilibrium systems. It refers to the random motion of molecules in fluids that arises from thermal fluctuations of the environment that rapidly randomize the velocity of particles. Much of the molecular transport in biophysics over nanometer distances arises from diffusion.
This can be contrasted with directed motion, which requires the input of energy and is crucial for transporting cargo to targets over micron-scale distances. Here we will start by describing diffusion in continuum systems, and in the next section show how this is related to the Brownian motion of discrete particles.
Fick's First Law
We will describe the time evolution of spatially varying concentration distributions $C(x,t)$ as they evolve toward equilibrium. These are formalized in two laws that were described by Adolf Fick (1855).1 Fick’s first law is the “common sense law” that is in line with everyone’s physical intuition. Molecules on average will tend to diffuse from regions of higher concentration to regions of lower concentration. Therefore we say that the flux of molecules through a surface, $J$, is proportional to the concentration gradient across that surface.
$J = -D \dfrac{\partial C}{\partial x}$
J is more accurately called a flux density, since it has units of amount (moles or number of molecules) per unit area per unit time. The proportionality constant between the flux density J (mol m$^{-2}$ s$^{-1}$) and the concentration gradient (mol m$^{-4}$), which sets the timescale for the process, is the diffusion constant D (m$^2$ s$^{-1}$). The negative sign assures that the flux points in the direction of decreasing concentration. This relationship follows naturally when we look at the two concentration gradients in the figure. Both C and C' have a negative gradient that will lead to a flux in the positive direction. C will give a bigger flux than C' because there is more probability for flow to the right. The gradient disappears, and the concentration distribution becomes constant and time-invariant at equilibrium. Note that, in a general sense, $\partial C/\partial x$ can be considered the leading term in an expansion of C in x.
Fick’s Second Law
Fick’s second law extends the first law by adding an additional constraint based on the conservation of mass. Consider diffusive transport along x in a pipe with cross-sectional area a, and the change in the total number of particles within a disk of thickness $Δx$ over a time period $Δt$.
If we take this disk to be thin enough that the concentration is a constant at any moment in time, then the total number of particles in the slab at that time is obtained from the concentration times the volume:
$N = a C(t) \Delta x \nonumber$
Within the time interval Δt the concentration can change and therefore the total number of particles within the disk changes by an amount
$\Delta N = a\{ C(t+\Delta t)- C(t)\} \Delta x \nonumber$
Now, the change in the number of particles is also dependent on the fluxes of molecules at the two surfaces of the disk. The number of molecules passing through a surface of the disk during $\Delta t$ is $aJ\Delta t$, and therefore the net change in the number of molecules during $\Delta t$ is obtained from the difference of fluxes between the left and right surfaces of the disk:
$\Delta N = -a \, {J(x+\Delta x)-J(x)}\Delta t \nonumber$
Setting these two calculations of ΔN equal to each other, we see that the flux and concentration gradients for the disk are related as
$\{ C(t+\Delta t)-C(t)\} \Delta x = - \{ J(x+\Delta x)-J(x)\} \Delta t \nonumber$
or rewriting this in differential form
$\dfrac{\partial C}{\partial t} = -\dfrac{\partial J}{\partial x}$
This important relationship is known as a continuity expression. Substituting eq. (10.1.1) into this expression leads to Fick’s Second Law
$\dfrac{\partial C}{\partial t} = D \dfrac{\partial^2 C}{\partial x^2}$
This is the diffusion equation in one dimension, and in three dimensions:2
$\dfrac{\partial C}{\partial t} = D \nabla^2 C$
Equation (10.1.4) can be used to solve diffusive transport problems in a variety of problems, choosing the appropriate coordinate system and applying the specific boundary conditions for the problem of interest.
Diffusion from a Point Source
As our first example of how concentration distributions evolve diffusively, we consider the time-dependent concentration profile when the concentration is initially all localized to one point in space, x = 0. The initial condition is
$C(x,t=0) = C_0 \delta (x) \nonumber$
and the solution to eq. (10.1.3) is
$C(x,t) = \dfrac{C_0}{\sqrt{4\pi Dt}}e^{-x^2/4Dt}$
The concentration profile has a Gaussian form which is centered on the origin, ⟨x⟩ = 0, with the mean square displacement broadening with time as:
$\langle x^2 \rangle = 2Dt \nonumber$
Diffusive transport has no preferred direction. Concentration profiles spread evenly in the positive and negative direction, and the highest concentration observed will always be at the origin and have a value $C_{max}=C_0/\sqrt{4\pi Dt}$. Viewing time-dependent concentrations in space reveals that they reach a peak at $t_{max} = x^2/2D$, before decaying as $t^{-1/2}$ (dashed line below).
When we solve for 3D diffusion from a point source:
$C(x,y,z,t=0) = C_0 \delta (x)\delta (y) \delta (z) \nonumber$
If we have an isotropic medium in which D is identical for diffusion in the x, y, and z dimensions,
$C(x,y,z,t) = \dfrac{C_0}{(4\pi Dt)^{3/2}} e^{-r^2/4Dt}$
where $r^2 = x^2+y^2+z^2$. Calculating the mean square displacement from
\begin{aligned} \langle r^2 \rangle &= \dfrac{\int^{\infty}_0dr \, 4\pi r^2 \, r^2\, C(r,t)}{\int^{\infty }_0 dr\, 4\pi r^2\, C(r,t)} \ &= 6Dt \end{aligned}
or in d dimensions, $\langle r^2 \rangle = d (2Dt)$.
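A simple numerical solution of the diffusion equation illustrates these results. The sketch below (Python; an explicit forward-time, centered-space finite-difference scheme is assumed as one standard way to integrate eq. (10.1.3), and the grid, time step, and $D$ are arbitrary choices satisfying the stability condition $D\Delta t/\Delta x^2 < 1/2$) propagates a narrow spike and compares the result to the point-source Gaussian solution, including the $\langle x^2 \rangle = 2Dt$ spreading.

```python
import numpy as np

# Explicit finite-difference (FTCS) integration of Fick's second law in 1D,
#   dC/dt = D d2C/dx2,
# starting from a narrow unit-area spike, compared with the analytic point-source
# solution C(x,t) = C0 / sqrt(4 pi D t) exp(-x^2 / 4Dt) with C0 = 1.
D = 1.0
L, nx = 40.0, 801
x = np.linspace(-L / 2, L / 2, nx)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / D                    # stability: D dt / dx^2 = 0.2 < 0.5

C = np.zeros(nx)
C[nx // 2] = 1.0 / dx                   # unit-area spike approximating C0 * delta(x)

t_final = 2.0
n_steps = int(t_final / dt)
for _ in range(n_steps):
    lap = (np.roll(C, 1) - 2 * C + np.roll(C, -1)) / dx**2
    C = C + D * dt * lap

t = n_steps * dt
C_exact = np.exp(-x**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)
print("time =", round(t, 3))
print("max |C_numeric - C_exact| =", np.abs(C - C_exact).max())
print("<x^2> numeric =", np.sum(x**2 * C) * dx, "   2Dt =", 2 * D * t)
```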
Diffusion Constants
Typical diffusion constants for biologically relevant molecules in water are shown in the graph below, varying from small molecules such as O2 and glucose in the upper left to proteins and viruses in the lower right.
• For a typical globular protein, diffusion coefficients are approximately:
in water $D \sim 10^{-10}\ \text{m}^2/\text{s}$
in cells $D \sim 10^{-12}\ \text{m}^2/\text{s}$
in lipids $D \sim 10^{-14}\ \text{m}^2/\text{s}$
\begin{aligned} \langle r^2 \rangle^{1/2} &= 1 \mu m, \quad t \sim 0.4 \text{sec in cells} \ &= 10 \mu m,\, \, \, t \sim \text{40 sec in cells} \end{aligned}
• Ions in water at room temperature usually have a diffusion coefficient of $0.6 \times 10^{-5}$ to $2 \times 10^{-5}\ \text{cm}^2/\text{s}$.
• Lipids:
• Self-diffusion: $\sim 10^{-12}\ \text{m}^2/\text{s}$
• Tracer molecules in lipid bilayers: $1\text{-}10 \times 10^{-12}\ \text{m}^2/\text{s}$
Anomalous Diffusion
The characteristic of simple diffusive behavior is the linear relationship between the mean square displacement and time. Deviation from this behavior is known as anomalous diffusion, and is characterized by a scaling relationship $\langle r^2 \rangle \sim t^{\nu }$. We refer to ν<1 as sub-diffusive behavior and ν>1 as super-diffusive. Diffusion in crowded environments can result in sub-diffusion.3
Thermodynamic Perspective on Diffusion
Thermodynamically, we can consider the driving force for diffusion as a gradient in the free energy or chemical potential of the system. From this perspective, in the absence of any other interactions, the driving force for reaching uniform spatial concentration is the entropy of mixing. For a mixture with mole fraction xA, we showed
\begin{aligned} \Delta S_{mix} &= -Nk_B(x_A\ln x_A+x_B \ln x_B) \quad x_B = 1-x_A \\ &\approx -N_Ak_B \ln x_A \qquad \qquad \qquad \quad \text{ for } x_A \ll 1 \end{aligned}
We then use $\Delta F = -T\Delta S$ to calculate the chemical potential:
\begin{aligned} \mu_A &= \left( \dfrac{\partial F}{\partial N_A} \right)_{V,T} \\ \mu_A &\approx k_BT \ln x_A \end{aligned}
We see that a concentration gradient means that the mole fraction, and therefore the chemical potential, differs between two positions in the system. At equilibrium $\mu_A(r_1)=\mu_A(r_2)$, which occurs when $x_A(r_1) = x_A(r_2)$.
Thermodynamics does not tell you about rate, only the direction of spontaneous change (although occasionally diffusion is discussed in terms of a time-dependent “entropy production”). The diffusion constant is the proportionality constant between gradients in concentration or chemical potential and the time-dependent flux of particles. The flux density described in Fick’s first law can be related to $μ_i$, the chemical potential for species $i$:
$J_i = \dfrac{-D_iC_i}{k_BT} \dfrac{\partial \mu_i}{\partial r_i} \nonumber$
_______________________________
1. A. Fick, Ueber diffusion, Ann. Phys. 170, 59–86 (1855).
2. This equation assumes that D is a constant, but if it is a function of space: $\partial C/\partial t=\nabla \cdot (D\nabla C)$. In three dimensions, Fick’s First Law and the continuity expression are: $J(r,t) = vC(r,t)-D\nabla C(r,t) \text{ and } dC(r,t)/dt = -\nabla \cdot J(r,t)$ where v is the velocity of the fluid. These expressions emphasize that flux density and velocity are vectors, whereas concentration field is a scalar.
3. J. A. Dix and A. S. Verkman, Crowding effects on diffusion in solutions and cells, Annu. Rev. Biophys. 37, 247–263 (2008).
Solutions to the diffusion equation, such as eqs. (10.1.5) and (10.1.6), are commonly obtained with the use of Fourier transforms. If we define the transformation from real space to reciprocal space as
$\stackrel{\sim}{C} (k,t) = \int^{\infty}_{-\infty} C(x,t)e^{ikx} \, dx \nonumber$
one can express the diffusion equation in 1D as
$\dfrac{d \stackrel{\sim}{C}(k,t)}{dt} = -Dk^2 \stackrel{\sim}{C}(k,t)$
More generally, one finds that the Fourier transform of a linear differential equation in x can be expressed in polynomial form: $\mathcal{F} (\partial^nf/\partial x^n)=(ik)^n\stackrel{\sim}{f} (k)$. This manipulation converts a partial differential equation into an ordinary one, which has the straightforward solution $\stackrel{\sim}{C}(k,t)= \stackrel{\sim}{C}(k,0)\exp (-Dk^2t)$. We do need to express the boundary conditions in reciprocal space, but then this solution can be transformed back to obtain the real-space solution using $C(x,t) = (2\pi )^{-1} \int^{\infty}_{-\infty} \stackrel{\sim}{C} (k,t) e^{-ikx}\,dk$.
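As an illustration of this reciprocal-space route, the sketch below propagates $\stackrel{\sim}{C}(k,t)= \stackrel{\sim}{C}(k,0)\exp(-Dk^2t)$ with a discrete FFT for a narrow initial Gaussian (standing in for a point source) and compares against the analytical Gaussian spreading. The grid, box size, and parameter values are assumed for illustration only.

```python
# Spectral (FFT) sketch of solving the 1D diffusion equation: multiply the
# Fourier transform of the initial profile by exp(-D*k^2*t), transform back,
# and compare with the exact result (variance grows by 2*D*t).
import numpy as np

D, t = 1.0, 0.5                       # arbitrary units (assumed)
L, N = 200.0, 4096                    # periodic box length and grid size (assumed)
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]

sigma0 = 0.5                          # narrow initial Gaussian approximating delta(x)
C0 = np.exp(-x**2 / (2*sigma0**2)) / np.sqrt(2*np.pi*sigma0**2)

k = 2*np.pi*np.fft.fftfreq(N, d=dx)   # wavevectors in the FFT ordering
Ck = np.fft.fft(C0) * np.exp(-D * k**2 * t)
C_num = np.real(np.fft.ifft(Ck))

var = sigma0**2 + 2*D*t               # exact variance after diffusing for time t
C_exact = np.exp(-x**2 / (2*var)) / np.sqrt(2*np.pi*var)
print("max |numerical - exact| =", np.max(np.abs(C_num - C_exact)))
```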
Since eq. (10.2.1) is a linear differential equation, sums of solutions to the diffusion equation are also solutions. We can use this superposition principle to solve problems for complex initial conditions. Similarly, when the diffusion constant is independent of x and t, the general solution to the diffusion equation can also be expressed as a Fourier series. If we separate the time and space variables, so that the form of the solution is $C(x,t)=X(x)T(t)$ we find that we can write
$\dfrac{1}{DT} \dfrac{\partial T}{\partial t} = \dfrac{1}{X} \dfrac{\partial^2 X}{\partial x^2} = -\alpha^2 \nonumber$
where α is a constant. Then $T=e^{-\alpha^2Dt}$ and $X=A\cos \alpha x+B\sin \alpha x$. This leads to the general form:
$C(x,t) = \sum^{\infty}_{n=0} (A_n \cos \alpha_nx+B_n \sin \alpha_n x) e^{-\alpha_n^2Dt}$
Here $A_n$ and $B_n$ are constants determined by the boundary conditions.
Examples
Diffusion across boundary
At time t = 0, the concentration is uniform at a value C0 for x ≥ 0, and zero for x < 0, similar to removing a barrier between two homogeneous media. Using the superposition principle, the solution is obtained by integrating the point source solution, eq. (10.1.5), over all initial point sources $\delta (x-x_0)$ such that $x_0=0 \longrightarrow \infty$. Defining $y^2 = (x-x_0)^2/4Dt$,
$C(x,t) = \dfrac{C_0}{\sqrt{\pi}}\int^{\infty}_{-\dfrac{(x-x_0)}{\sqrt{4Dt}}}dy \, e^{-y^2} = \dfrac{C_0}{2} erfc \left( \dfrac{-(x-x_0)}{\sqrt{4Dt}} \right) \nonumber$
Diffusion into “hole”
A concentration “hole” of width 2a is inserted into a box of length 2L with an initial concentration of $C_0$. Let’s take L = 2a. The concentration profile solution is:
$C(x,t) = C_0 \left[ \left( \dfrac{L-a}{L} \right) -\sum^{\infty}_{n=1} A_n \cos (\alpha_n x) e^{-\alpha^2_nDt} \right] \nonumber$
$A_n = \dfrac{2\sin (\alpha_n a)}{n\pi} \qquad \alpha_n = \dfrac{n \pi}{L} \nonumber$
• Fluorescence Recovery after Photobleaching (FRAP): We can use this solution to describe the diffusion of fluorescently labeled molecules into a photobleached spot. One usually observes the increase of fluorescence with time from this spot. To model the recovery, we integrate the concentration over the initial hole:
\begin{aligned} N_{FRAP} (t) &= \int^{+a}_{-a} C(x,t)\, dx \\ &=C_0 \left[ \dfrac{2a}{L} (L-a)-L\sum^{\infty}_{n=1} A^2_n e^{-\alpha_n^2 Dt} \right] \end{aligned}
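The short sketch below evaluates this Fourier-series recovery curve for the L = 2a geometry used in the text; the diffusion constant, hole width, and number of retained terms are assumed values for illustration. The curve starts at zero (empty hole) and recovers toward the plateau $C_0\,2a(L-a)/L$.

```python
# Sketch: evaluate the FRAP recovery N_FRAP(t) from the series solution above.
import numpy as np

D = 1.0          # um^2/s (assumed)
a = 1.0          # half-width of the bleached hole, um (assumed)
L = 2.0 * a      # half-width of the box, as in the example above
C0 = 1.0
nmax = 2000      # number of Fourier terms retained (assumed)

n = np.arange(1, nmax + 1)
alpha = n * np.pi / L
A = 2.0 * np.sin(alpha * a) / (n * np.pi)

def N_frap(t):
    series = np.sum(A**2 * np.exp(-alpha**2 * D * t))
    return C0 * (2.0 * a * (L - a) / L - L * series)

plateau = C0 * 2.0 * a * (L - a) / L
for t in [0.0, 0.1, 0.5, 2.0, 10.0]:
    print(f"t = {t:5.1f}   N_FRAP = {N_frap(t):.4f}   (plateau = {plateau:.4f})")
```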
Reflecting and Absorbing Boundary Conditions
We will be interested in describing the time-dependent probability distribution for the case in which particles are released at x = 0, subject to encountering an impenetrable wall at $x=x_w$, which can either absorb or reflect particles.
Consider the case of a reflecting wall, where the boundary condition requires that the flux at $x_w$ is zero. This boundary condition and the resulting pile-up near the wall can be described by making use of the fact that any $P(x>x_w,t)$ can be reflected about $x_w$, which is equivalent to removing the boundary and adding a second source term to $P(x,t)$ for particles released at $x=2x_w$:
$P_{refl}(x,t) = P(x,t) + P(2x_w-x,t) \qquad (x<x_w) \nonumber$
This is also known as a wrap-around solution, since any population from $P(x,t)$ that passes the position of the wall is reflected about $x_w$. Similarly, an absorbing wall, $P(x=x_w,t)=0$, means that we remove any population that reached $x_w$, which is obtained from the difference of the two mirrored probability distributions:
$P_{abs}(x,t) = P(x,t)-P(2x_w-x,t) \qquad (x<x_w) \nonumber$
Steady state solutions can be applied when the concentration gradient may vary in space but does not change with time, $\partial C/\partial t = 0$. Under those conditions, the diffusion eq. (10.1.4) simplifies to Laplace’s equation
$\nabla^2 C = 0 \label{10.1.1}$
For certain conditions this can be integrated directly by applying the proper boundary conditions, and then the steady state flux at a target position is obtained from Fick’s first law, Equation 10.1.1.
Diffusion through a Membrane1
The steady-state solution to the diffusion equation in one dimension can be used to describe the diffusion of a small molecule through a cell plasma membrane that resists the diffusion of the molecule.
In this model, the membrane thickness is $h$, and the concentrations of the diffusing small molecule in the fluid on the left and right sides of the membrane are $C_l$ and $C_r$. The membrane interior resists diffusion of the small molecule, which is reflected in the small molecule’s partition coefficient between membrane and fluid:
$K_p = \dfrac{C_{\text{membrane}}}{C_{\text{fluid}}} \nonumber$
$K_p$ can vary between $10^3$ and $10^{-7}$ depending on the nature of the small molecules and membrane composition.
For the steady-state diffusion equation $\partial^2C/\partial x^2=0$, solutions take the form $C(x)=A_1x+A_2$. Applying boundary conditions for the concentration of small molecule in the membrane at the two boundaries, we find
$A_1=\dfrac{K_p(C_r-C_l)}{h} \qquad A_2 = K_pC_l \nonumber$
Then we can write the transmembrane flux density of the small molecule across the membrane as
$J = -D_{mol} \dfrac{\partial C}{\partial x} = \dfrac{K_pD_{mol}}{h}(C_{\ell}-C_r) = \dfrac{K_pD_{mol}\Delta C}{h} \nonumber$
The membrane permeability is equivalent to the volume of small molecule solution that diffuses across a given area of the membrane per unit time, and is defined as
$P_m \equiv \dfrac{J}{\Delta C} = \dfrac{K_pD_{mol}}{h} (m\, s^{-1})$
The membrane resistance to flow is R = 1/Pm, and the rate of transport across the membrane is dn/dt = J A, where $A$ is area.
This linear relationship in eq. (10.3.2) between $P_m$ and $K_p$, also known as the Overton relation, has been verified for thousands of molecules. For small molecules with molecular weight <50, $P_m$ can vary from $10^{1}$ to $10^{-6}\ \mathrm{cm\ s^{-1}}$. It varies considerably even for water across different membrane systems, but its typical value for a phospholipid vesicle is $10^{-3}\ \mathrm{cm\ s^{-1}}$. Some of the highest values (>50 cm s⁻¹) are observed for O2. Cations such as Na+ and K+ have permeabilities of $\sim 5\times 10^{-14}\ \mathrm{cm\ s^{-1}}$, and small peptides have values of $10^{-9}$–$10^{-6}\ \mathrm{cm\ s^{-1}}$.
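A quick back-of-the-envelope sketch of $P_m = K_p D_{mol}/h$ is given below; the partition coefficient, in-membrane diffusion constant, membrane thickness, and concentration difference are all assumed values chosen only to land inside the ranges quoted above.

```python
# Sketch: membrane permeability P_m = K_p * D_mol / h and the resulting
# transport rate dn/dt = P_m * dC * A. All parameter values are assumed.
Kp = 1e-3        # membrane/water partition coefficient (dimensionless, assumed)
Dmol = 1e-6      # diffusion constant inside the membrane, cm^2/s (assumed)
h = 4e-7         # membrane thickness, cm (~4 nm)

Pm = Kp * Dmol / h
print(f"P_m = {Pm:.2e} cm/s")

# transport for a 1 uM concentration difference across 1 um^2 of membrane
dC = 1e-6 * 6.022e23 / 1e3        # molecules per cm^3 corresponding to 1 uM
A = 1e-8                          # cm^2 (1 um^2)
print(f"dn/dt = {Pm * dC * A:.2e} molecules/s")
```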
Diffusion to Capture
What is the flux of a diffusing species onto a spherical surface from a solution with a bulk concentration C0? This problem appears often for diffusion limited reaction rates. To find this, we calculate the steady-state radial concentration profile C(r) around a perfectly absorbing sphere with radius a, i.e. C(a) = 0. At steady state, we solve eq. (10.3.1) by taking the diffusion to depend only on the radial coordinate r and not the angular ones.
$\dfrac{1}{r^2} \dfrac{\partial}{\partial r} \left( r^2 \dfrac{\partial C}{\partial r} \right) =0 \nonumber$
Let’s look for the simplest solution. We begin by assuming that the quantity in parenthesis is a constant and integrate twice to give
$C(r) = -\dfrac{A_1}{r} +A_2$
where $A_1$ and $A_2$ are constants of integration. Now, using the boundary conditions $C(a)=0$ and $C(\infty ) = C_0$ we find:
$C(r) = C_0 \left( 1- \dfrac{a}{r} \right) \nonumber$
Next, we use this expression to calculate the flux of molecules incident on the surface of the sphere (r = a).
$J(a) = -D \left. \dfrac{\partial C}{\partial r} \right|_{r=a} = - \dfrac{DC_0}{a}$
Here J is the flux density in units of (molecules area–1 sec–1) or [(mol/L) area–1 sec–1]. The sign of the flux density is negative reflecting that it is a vector quantity directed toward r = 0. We then calculate the rate of collisions of molecules with the sphere (the flux, j) by multiplying the magnitude of J by the surface area of the sphere (A = 4πa2):
$j = JA = 4 \pi DaC_0$
This shows that the rate constant, which expresses the proportionality between the rate of collisions and the concentration, is $k = 4\pi Da$.
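The sketch below evaluates this diffusion-limited rate constant for an assumed diffusion constant and capture radius and converts it to molar units; the numbers are illustrative assumptions, not values from the text.

```python
# Sketch: diffusion-limited rate constant k = 4*pi*D*a, converted to M^-1 s^-1.
import numpy as np

D = 1e-5 * 1e-4      # 1e-5 cm^2/s converted to m^2/s (assumed)
a = 2e-9             # capture radius, m (~2 nm, assumed)

k_si = 4 * np.pi * D * a                 # m^3 per molecule per second
k_molar = k_si * 6.022e23 * 1e3          # L mol^-1 s^-1
print(f"k = {k_si:.2e} m^3/s per molecule")
print(f"k = {k_molar:.2e} M^-1 s^-1")    # ~1e10 M^-1 s^-1, the classic diffusion limit
```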
Probability of Capture
In an extension of this problem useful to ligand binding simulations, we can ask what the probability is that a molecule released near an absorbing sphere will reach the sphere rather than diffuse away?
Suppose a particle is released near a spherical absorber of radius a at a point $r = b$. What is the probability that the particle will be absorbed at r = a rather than wandering off beyond an outer perimeter at $r = c$?
To solve this problem we solve for the steady-state flux at the surfaces a and c subject to the boundary conditions C(a) = 0, C(b) = C0, and C(c) = 0. That is, the inner and outer surfaces are perfectly absorbing, but the concentration has a maximum value C(b) = C0 at r = b.
We separate the problem into two zones, a-to-b and b-to-c, and apply the general solution eq. (10.3.3) to these zones with the appropriate boundary conditions to yield:
\begin{aligned} &C(r) = \dfrac{C_0}{(1-a/b)} \left( 1-\dfrac{a}{r} \right) \qquad a \leq r \leq b \\ &C(r) = \dfrac{C_0}{(c/b-1)} \left( \dfrac{c}{r}-1 \right) \qquad b \leq r \leq c \end{aligned}
Then the radial flux density is:
\begin{aligned} &J_r(r) = -\dfrac{DC_0}{(1-a/b)} \dfrac{a}{r^2} \qquad a\leq r \leq b \\ & J_r(r) = \dfrac{DC_0}{(c/b-1)} \dfrac{c}{r^2} \qquad \quad b \leq r \leq c \end{aligned}
Calculating the areas of the two absorbing surfaces and multiplying the flux densities by the areas gives the flux. The flux from the spherical shell source to the inner absorber is
$j_{in} = 4\pi DC_0 \dfrac{a}{(1-a/b)} \nonumber$
and the flux from the spherical shell source to the outer absorber is
$j_{out} = 4\pi DC_0 \dfrac{c}{(c/b-1)} \nonumber$
We obtain the probability that a particle released at r = b and absorbed at r = a from the ratio
$P_{capture} = \dfrac{j_{in}}{j_{in}+j_{out}}= \dfrac{a(c-b)}{b(c-a)} \nonumber$
In the limit $c \longrightarrow \infty$, this probability is just a/b. This is the probability of capture for the sphere of radius a immersed in an infinite medium. Note that this probability decreases only as the inverse of the radial distance, $b^{-1}$, rather than with the surface area of the sphere.
_________________________________
1. A. Walter and J. Gutknecht, Permeability of small nonelectrolytes through lipid bilayer membranes, J. Membr.Biol. 90, 207–217 (1986).
Readings
• H. C. Berg, Random Walks in Biology. (Princeton University Press, Princeton, N.J., 1993).
• K. Dill and S. Bromberg, Molecular Driving Forces: Statistical Thermodynamics in Biology, Chemistry, Physics, and Nanoscience. (Taylor & Francis Group, New York, 2010).
Brownian motion refers to the random motions of small particles under thermal excitation in solution first described by Robert Brown (1827),1 who with his microscope observed the random, jittery spatial motion of pollen grains in water. This phenomenon is intrinsically linked with diffusion. Diffusion is the macroscopic realization of the Brownian motion of molecules within concentration gradients. The theoretical basis for this relationship was described by Einstein in 1905,2 and Jean Perrin3 provided the detailed experiments that confirmed his predictions.
Since the motion of any one particle is unique, the Brownian motion must be described statistically. We observe that the mean-squared displacement of a particle averaged over many measurements grows linearly with time, just as with diffusion.
The proportionality factor between mean-squared displacement and time is the diffusion constant in Fick’s Second Law. As for diffusion, the proportionality factor depends on dimensionality. In 1D, if $\langle x^2(t) \rangle /t = 2D$ then in 3D $\langle r^2(t) \rangle /t = 6D$, where D is the diffusion constant.
Brownian motion is a property of molecules at thermal equilibrium. It applies to a larger particle (e.g., a protein) experiencing an imbalance of the many microscopic forces exerted by the many much smaller molecules of the surroundings (e.g., water). The thermal agitation originates from partitioning the kinetic energy of the system on average as $k_BT/2$ per degree of freedom. Free diffusion implies motion which is only limited by kinetic energy.
Brownian motion applies to a specific range of forces and masses where thermal energy (kBT(300 K) = 4.1 pN nm) can have a significant influence on a particle. Let’s look at the average translational kinetic energy:
$\left< \dfrac{mv_x^2}{2} \right> = \dfrac{1}{2}k_BT$
For a ~10 kDa protein with mass ~$10^{-23}$ kg, the root-mean-squared velocity due to thermal energy is $v_{rms} = \langle v_x^2 \rangle^{1/2}$ = 20 m/s. For water at 300 K, $D \sim 10^{-5}\ \mathrm{cm^2/s}$. The same protein has a net displacement in one second of $x_{rms}=\langle x^2 \rangle ^{1/2}=\sqrt{2Dt} \approx 50 \, \mu \text{m}$. The large difference in these values indicates the large number of randomizing collisions that this particle experiences during one second of evolution: $(v_{rms}\cdot 1\,\mathrm{s})/x_{rms} \approx 4\times 10^5$. For the protein, these thermal velocities and displacements are significant on the molecular scale. In comparison, a 1 kg mass with $k_BT$ of energy will have $v_{rms} \sim 10^{-11}$ m/s, and an equally insignificant displacement!
Ergodic Hypothesis
A system is known as ergodic when time average and ensemble averages for a time-dependent variable are equal.
\begin{aligned} \text{Ensemble average: } &\langle x \rangle = \dfrac{1}{N} \sum_i x_i = \int P(x)x \, dx \\ \text{Time-average: } &\overline{x(t)} = \lim_{T \rightarrow \infty} \dfrac{1}{T} \int^T_0 x(t) dt \end{aligned}
In practice, the time average can be calculated using a single particle trajectory by averaging over the displacements observed for all time intervals within the trajectory such that $t = t_{final} - t_{initial}$.
In the case of Brownian motion and diffusion: $\left< |r(t) -r_0|^2 \right> = \overline{|r(t)-r_0|^2}$.
___________________________________
1. R. Brown, "On the Particles Contained in the Pollen of Plants; and On the General Existence of Active Molecules in Organic and Inorganic Bodies" in The Miscellaneous Botanical Works of Robert Brown, edited by J. J. Bennett (R. Hardwicke, London, 1866), Vol. 1, pp. 463-486.
2. A. Einstein, Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen, Ann. Phys. 322, 549–560 (1905).
3. J. Perrin, Brownian Movement and Molecular Reality. (Taylor and Francis, London, 1910).
11: Brownian Motion
We want to describe the correspondence between a microscopic picture for the random walk of particles and macroscopic diffusion of particle concentration gradients. We will describe the statistics for the location of a random walker in one dimension (x), which is allowed to step a distance Δx to the right (+) or left (–) during each time interval Δt. At each time point a step must be taken left or right, and steps to left and right are equally probable.
Let’s begin with a qualitative description of where the walker is after taking n steps. We can relate the position of the system to where it was before taking a step by writing:
$x(n) = x(n-1) \pm \Delta x \nonumber$
This expression can be averaged over many steps:
\begin{aligned} \langle x(n) \rangle &= \langle x(n-1) \pm \Delta x \rangle \\ &= \langle x(n-1) \rangle = \langle x(n-2) \rangle = ... = \langle x(0) \rangle \end{aligned}
Since there is equal probability of moving left or right with each step, the ±Δx term averages to zero, and $\langle x \rangle$ does not change with time. The most probable position for any time will always be the starting point.
Now consider the variance in the displacement:
\begin{aligned} \langle x^2(n)\rangle &= \langle x^2(n-1) \pm 2 \Delta x \, x(n-1)+(\Delta x )^2\rangle \\ &= \langle x^2(n-1) \rangle + (\Delta x)^2 \end{aligned}
In the first line, the middle term averages to zero, and the variance grows by $(\Delta x)^2$ with each step. Repeating this process for each successive step back shows that the mean square displacement grows linearly in the number of steps.
\begin{aligned} &\langle x^2(0) \rangle =0 \\ &\langle x^2(1) \rangle = (\Delta x)^2 \\ &\langle x^2(2) \rangle = 2(\Delta x )^2 \end{aligned}
$\vdots \nonumber$
$\langle x^2 (n)\rangle = n(\Delta x)^2$
Qualitatively, these arguments indicate that the statistics of a random walker should have the same mean and variance as the concentration distribution for diffusion of particles from an initial position.
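A simple Monte Carlo sketch of this argument is shown below: many independent walkers take equally probable $\pm\Delta x$ steps, and the sample mean stays near zero while the mean-square displacement grows as $n(\Delta x)^2$. The number of walkers, steps, and step size are assumed values for illustration.

```python
# Monte Carlo sketch of the unbiased 1D random walk.
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_walkers, dx = 1000, 5000, 1.0       # illustrative (assumed) values

steps = rng.choice([-dx, +dx], size=(n_walkers, n_steps))
x = np.cumsum(steps, axis=1)                   # trajectories, shape (walkers, steps)

for n in [10, 100, 1000]:
    mean = x[:, n-1].mean()
    msd = (x[:, n-1]**2).mean()
    print(f"n = {n:4d}   <x> = {mean:7.3f}   <x^2> = {msd:8.1f}   n*dx^2 = {n*dx**2:.0f}")
```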
Random Walk Step Distribution Function
Now let’s look at this a little more carefully and describe the probability distribution for the position of particles after n steps, which we equate with the number of possible random walk trajectories that can lead to a particular displacement. What is the probability of starting at x0 = 0 and reaching point x after n jumps separated by the time interval Δt?
Similar to our discussion of the random walk polymer, we can relate the displacement of a random jumper to the total number of jumps in the positive direction $n_+$ and in the negative direction $n_-$. If we make n total jumps, then
$n = n_++n_- \qquad \longrightarrow \qquad t=n \Delta t \nonumber$
The total number of steps n is also our proxy for the length of time for a given trajectory, t. The distance between the initial and final position is related to the difference in + and ‒ steps:
$m = n_+-n_- \qquad \longrightarrow \qquad x=m\Delta x \nonumber$
Here m is our proxy for the total displacement x. Note from these definitions that we can express $n_+$ and $n_-$ as
$n_{\pm} = \dfrac{n\pm m}{2}$
The number of different ways of making n jumps with the constraint of n+ positive and n– negative jumps is
$\Omega = \dfrac{n!}{n_+!n_-!} \nonumber$
The probability of observing a particular sequence of n “+” and “–” jumps is $P(n) = (P_+)^{n_+} (P_-)^{n_-} = (1/2)^n$.
The total number of trajectories that are possible with n equally probable “+” and “‒” jumps is $2^n$, so the probability that any one sequence of n steps will end up at position m is given by $\Omega /2^n$ or
\begin{aligned} P(m,n) &= \left( \dfrac{1}{2} \right)^n \dfrac{n!}{n_+!n_-!} \\ &= \left( \dfrac{1}{2} \right)^n \dfrac{n!}{\dfrac{n+m}{2}! \dfrac{n-m}{2}!} \end{aligned}
This is the binomial probability distribution function. Looking at the example below for twenty steps, we see $\langle m\rangle =0$ and for a discrete probability distribution which has a Gaussian envelope.
For very large n, the distribution function becomes continuous. To see this, let’s apply Stirling’s approximation, $n! \approx (n/e)^n \sqrt{2\pi n}$, and after a bit of manipulation we find1
$P(m,n) = \sqrt{\dfrac{2}{\pi n}} e^{-m^2/2n}$
Note, this distribution has an envelope that follows a normal Gaussian distribution for a continuous variable where the variance σ2 is proportional to the number of steps n.
To express this with a time variable, we instead insert n = t/Δt and m = x/Δx in eq. (11.1.3) to obtain the discrete probability distribution function:
$P(x,t) = \sqrt{\dfrac{\Delta t}{2\pi t}} \exp \left[ - \dfrac{\Delta t x^2}{2t(\Delta x)^2} \right] \nonumber$
Note that we can re-write this discrete probability distribution in a form similar to the continuum diffusion solution
$P(x,t) = \sqrt{\dfrac{(\Delta x)^2}{4\pi Dt}}e^{-x^2/4Dt}$
if we equate the variance and diffusion constant as
$D = \dfrac{(\Delta x )^2}{2\Delta t} \nonumber$
Equation (11.1.4) is slightly different because P is a unitless probability for finding the particle between x and x+Δx, rather than a continuous probability density ρ with units of $\mathrm{m^{-1}}$: ρ(x,t) dx = P(x,t). Even so, eq. (11.1.4) suggests that the time-dependent probability distribution function for the random walk obeys a diffusion equation
$\dfrac{\partial P}{\partial t} = \Delta x D \dfrac{\partial^2 P}{\partial x^2} \qquad \qquad or \qquad \qquad \dfrac{\partial \rho}{\partial t} = D \dfrac{\partial^2 \rho}{\partial x^2}$
Three‐Dimensional Random Walk
We can extend this treatment to diffusion from a point source in three dimensions, by using a random walk of n steps of length Δx on a 3D cubic lattice. The steps are divided into those taken in the x, y, and z directions:
$n = n_x+n_y+n_z \nonumber$
and distance of the walker from the origin is obtained from the net displacement along the x, y, and z axes:
$r = (x^2+y^2+z^2)^{1/2} = m\Delta x \qquad \qquad m = \sqrt{m_x^2+m_y^2+m_z^2} \nonumber$
For each time-interval the walker takes a step choosing the positive or negative direction along the x, y, and z axes with equal probability. Since each dimension is independent of the others
$P(r,n) = P(m_x,n_x)P(m_y,n_y)P(m_z,n_z) \nonumber$
Looking at the radial displacement from the origin, we find
$\sigma_x^2 + \sigma_y^2 + \sigma_z^2 = \sigma_r^2 \nonumber$
where
$\sigma_x^2 = \dfrac{(\Delta x )^2t}{\Delta t} \rightarrow 2D_xt \nonumber$
but since each dimension is equally probable $\sigma_r^2 = 3\sigma_x^2$. Then using eq. (11.1.3)
$P(r,t) = \left( \dfrac{3\Delta x^2}{2\pi \sigma_r^2} \right)^{3/2} e^{-3r^2/2\sigma_r^2} \nonumber$
where $\sigma_r^2=6Dt$.
__________________________________________________
1. M. Daune, Molecular Biophysics: Structures in Motion. (Oxford University Press, New York, 1999), Ch. 7.
Working again with the same problem in one dimension, let’s try and write an equation of motion for the random walk probability distribution: $P(x,t)$.
• This is an example of a stochastic process, in which the evolution of a system in time and space has a random variable that needs to be treated statistically.
• As above, the movement of a walker depends only on its current position, and not on any preceding steps. When the system has no memory of where it was earlier, we call it a Markovian system.
• Generally speaking, there are many flavors of a stochastic problem in which you describe the probability of being at a position $x$ at time $t$, and these can be categorized by whether $x$ and $t$ are treated as continuous or discrete variables. The class of problem we are discussing with discrete $x$ and $t$ points is known as a Markov Chain. The case where space is treated discretely and time continuously results in a Master Equation, whereas a Langevin equation or Fokker–Planck equation describes the case of continuous $x$ and $t$.
• To describe the walker's time-dependence, we relate the probability distribution at one point in time, $P(x,t+\Delta t)$, to the probability distribution for the preceding time step, $P(x,t)$, in terms of the probabilities of a walker making a step to the right ($P_+$) or to the left ($P_-$) during the interval $\Delta t$. Note, when $P_+\neq P_-$, there is a stepping bias in the system. If $P_+ + P_-<1$, there is a resistance to stepping, either as a result of an energy barrier or excluded volume on the chain.
• In addition to the loss of probability by stepping away from x to the left or right, we need to account for the steps from adjacent sites that end at $x$.
Then the probability of observing the particle at position $x$ after the interval $\Delta t$ is:
\begin{aligned} P(x,t+\Delta t) &= P(x,t)-P_+\cdot P(x,t) -P_- \cdot P(x,t) +P_+\cdot P(x-\Delta x,t)+P_- \cdot P(x+\Delta x,t) \\[4pt] &= (1-P_+-P_-) \cdot P(x,t)+P_+ \cdot P(x-\Delta x,t) +P_- \cdot P(x+\Delta x,t) \\[4pt] &= P(x,t) + P_+[P(x-\Delta x,t)-P(x,t)]+ P_-[P(x+\Delta x,t)-P(x,t)] \end{aligned}
and the net change probability is
$P(x,t+\Delta t) - P(x,t) = P_+[P(x-\Delta x,t) - P(x,t)]+P_-[P(x+\Delta x,t)-P(x,t)] \nonumber$
We can cast this as a time-derivative if we divide the change of probability by the time-step Δt:
\begin{aligned} \dfrac{\partial P}{\partial t} &= \dfrac{P(x,t+\Delta t)-P(x,t)}{\Delta t} \\[4pt] &= \mathcal{P}_+[P(x-\Delta x,t) - P(x,t)]+\mathcal{P}_-[P(x+\Delta x,t)-P(x,t)] \\[4pt] &= \mathcal{P}_+ \Delta P_-(x,t)+\mathcal{P}_-\Delta P_+(x,t) \end{aligned}
where $\mathcal{P}_{\pm} = P_{\pm} / \Delta t$ are the right and left stepping rates, and $\Delta P_{\pm}(x,t) = P(x \pm \Delta x,t)-P(x,t)$.
We would like to show that this random walk model results in a diffusion equation for the probability density ρ(x,t) we deduced in Equation (11.1.5). To simplify, we assume that the left and right stepping probabilities $P_+=P_-=\frac{1}{2}$, and substitute
$P(x,t) = \rho (x,t) dx \nonumber$
into Equation (11.2.1):
$\dfrac{\partial \rho}{\partial t} = \mathcal{P}[\rho(x-\Delta x,t)-2 \rho (x,t)+\rho (x+\Delta x,t)] \nonumber$
where $\mathcal{P}=1/(2 \Delta t)$. We then expand the probability density terms about $x$ as
$\rho (x \pm \Delta x,t) = \rho (x,t) \pm \dfrac{\partial \rho}{\partial x} \Delta x+\dfrac{1}{2} \dfrac{\partial^2 \rho}{\partial x^2} (\Delta x)^2 \nonumber$
and find that the probability density follows a diffusion equation
$\dfrac{\partial \rho}{\partial t} = D \dfrac{\partial^2 \rho}{\partial x^2} \nonumber$
where $D= (\Delta x)^2/2 \Delta t$.
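The sketch below propagates this discrete, symmetric master equation on a lattice and checks that the variance of the resulting distribution grows as $2Dt$ with $D = (\Delta x)^2/2\Delta t$. The lattice size and step parameters are assumed for illustration.

```python
# Sketch: propagate the symmetric master equation (P_+ = P_- = 1/2) on a lattice
# and compare the variance of P(x,t) with 2*D*t.
import numpy as np

dx, dt = 1.0, 1.0
D = dx**2 / (2 * dt)
nsite = 401
P = np.zeros(nsite)
P[nsite // 2] = 1.0                   # walker starts at the central site

x = (np.arange(nsite) - nsite // 2) * dx
nsteps = 500
for _ in range(nsteps):
    # each step: half the probability hops left, half hops right
    P = 0.5 * np.roll(P, 1) + 0.5 * np.roll(P, -1)

var = np.sum(P * x**2) - np.sum(P * x)**2
print(f"variance after {nsteps} steps: {var:.1f}    2*D*t = {2*D*nsteps*dt:.1f}")
```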
___________________________________
Reading Materials
• A. Nitzan, Chemical Dynamics in Condensed Phases: Relaxation, Transfer and Reactions in Condensed Molecular Systems. (Oxford University Press, New York, 2006).
Fluorescence correlation spectroscopy (FCS) allows one to measure diffusive properties of fluorescent molecules, and is closely related to FRAP. Instead of measuring time-dependent concentration profiles and modeling the kinetics as continuum diffusion, FCS follows the steady state fluctuations in number density of a very dilute fluorescent probe molecule in the small volume observed in a confocal microscope. We measure the fluctuating changes in fluorescence intensity emitted from probe molecules as they diffuse into and out of the focal volume.
• Average concentration of sample: $C_0 \sim 10^{-9}$–$10^{-7}$ M.
• This corresponds to an average of ~0.1-100 molecules in the focal volume, although this number varies with diffusion into and out of the volume.
• The fluctuating fluorescence trajectory is proportional to the time-dependent number density or concentration:
$F(t) \propto C(t) \nonumber$
• How big are the fluctuations? For a Gaussian random process, we expect $\dfrac{\delta N_{rms}}{N} \sim \dfrac{1}{\sqrt{N}}$
• The observed concentration at any point in time can be expressed as time-dependent fluctuations about an average value: $C(t) = \overline{C} + \delta C(t)$.
To describe the experimental observable, we model the time-dependence of δC(t) from the diffusion equation:
$\dfrac{\partial \delta C}{\partial t} = D \nabla^2 \delta C \nonumber$
$\langle \delta C(r,0) \delta C(r',t) \rangle = \dfrac{C_0}{(4\pi Dt)^{3/2}} e^{-(r-r')^2/4Dt} \nonumber$
The concentration fluctuations can be related to the fluorescence intensity fluctuations as
$F(t) = AW(r) C(r,t) \nonumber$
W(r): Spatial optical profile of excitation and detection
A: Other experimental excitation and detection parameters
We now calculate the FCS correlation function for the fluorescence intensity fluctuations, writing $F(t) = \langle F \rangle + \delta F(t)$:
$G(t) = \dfrac{\langle \delta F(0) \delta F(t) \rangle}{\langle F \rangle^2} \nonumber$
For the case of a Gaussian beam with a waist size w0:
$G(t) \sim \dfrac{B}{1+t/\tau_{FCS}} \nonumber$
Where the amplitude is $B = 4\pi A^2I_0^2\overline{\delta C^2_0} w_0^2$, and the correlation time is related to the diffusion constant by:
$\tau_{FCS} = \dfrac{w_0^2}{4D} \nonumber$
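A quick sketch of how this relation is used in practice is given below: given a fitted correlation time and a calibrated beam waist, one back-calculates the diffusion constant. Both numerical values are assumptions chosen only to yield a protein-like D.

```python
# Sketch: extract a diffusion constant from an FCS correlation time,
# tau_FCS = w0^2 / (4*D). Beam waist and correlation time are assumed values.
w0 = 0.25e-6          # beam waist, m (~250 nm confocal spot, assumed)
tau_fcs = 1.5e-4      # fitted correlation time, s (assumed)

D = w0**2 / (4 * tau_fcs)
print(f"D = {D:.2e} m^2/s = {D*1e4:.2e} cm^2/s")
# ~1e-10 m^2/s, in the range expected for a small protein in water
```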
___________________________________
Readings
• P. Schwille and E. Haustein, "Fluorescence Correlation Spectroscopy: An Introduction to its Concepts and Applications" in Biophysics Textbook Online.
11.04: Orientational Diffusion
The concepts we developed for translational diffusion and Brownian motion are readily extended to rotational diffusion. For continuum diffusion, one often assumes that one can separate the particle probability density into a radial and angular part: $P(r, \theta , \phi ) = P(r)P(\theta , \phi )$. Then one can also separate the diffusion equation into two parts, for which the orientational diffusion follows a small-angle diffusion equation
$\dfrac{\partial P(\Omega , t)}{\partial t} = D_{or} \nabla^2 P(\Omega , t) \label{11.4.1}$
where $\Omega$ refers to the spherical coordinates (θ,φ). $D_{or}$ is the orientational diffusion constant with units of $\mathrm{rad^2\ s^{-1}}$. Microscopically, one can consider orientational diffusion as a random walk on the surface of a sphere, with steps being small angular displacements in $θ$ and $φ$. Equation \ref{11.4.1} allows us to obtain the time-dependent probability distribution function $P(Ω,t|Ω_0)$ that describes the distribution of directions $Ω$ at time $t$, given that the vector had the orientation $Ω_0$ at time $t = 0$. This can be expressed as an expansion in spherical harmonics
$P(\Omega ,t | \Omega_0) = \sum^{\infty}_{\ell = 0} \sum^{\ell}_{m=-\ell} c^m_{\ell}(t) [Y_{\ell}^m(\Omega_0)]^* Y_{\ell}^m(\Omega ) \nonumber$
The expansion coefficients are given by
$c_{\ell}^m(t)=\exp[-\ell (\ell+1)D_{or}t] \nonumber$
___________________________________
Readings
• H. C. Berg, Random Walks in Biology. (Princeton University Press, Princeton, N.J., 1993).
• R. Phillips, J. Kondev, J. Theriot and H. Garcia, Physical Biology of the Cell, 2nd ed. (Taylor & Francis Group, New York, 2012).
In this section, we extend the concepts of diffusion and Brownian motion into a regime where the time-evolution is not entirely random, but includes a driving force. We will refer to this class of problems as diffusion in a potential, although it is also referred to as diffusion with drift, diffusion in a velocity or force field, or diffusion in the presence of an external force. We will see that these problems can be related to a biased random walk or to motion of a Brownian particle subject to an internal or external potential. Our discussion below will be confined to problems involving diffusion in one dimension.
The common theme is that we account for transport of particles through a surface in terms of two sources of flux, the diffusive flux and an additional driven contribution that arises from a potential, field, or external force experienced by the particle:
\[J = J_{diff}+J_{U} \]
Here we label the second flux component with U to signify potential. This may be a result of an external force acting on a diffusing system (for instance, electrophoresis and sedimentation), or the bias that results from interactions between diffusing particles. In mass transport through fluid flow the second term is known as the advective flux, JU → Jadv.
12: Diffusion in a Potential
If diffusion occurs within a moving fluid, the time-dependent concentration profiles will be influenced by the local velocity of the fluid, or drift velocity vx. The net advective flux density for the concentration passing through an area per unit time is then
$J_{adv} = v_x C$
So that the total flux according to eq. (12.1) is
$J= - D \dfrac{\partial C}{\partial x} +v_xC$
Now using the continuity expression $\partial C/\partial t = -\partial J/\partial x$, and assuming a constant drift velocity, the diffusion equation becomes1
$\dfrac{\partial C}{\partial t} = D \dfrac{\partial^2 C}{\partial x^2}-v_x \dfrac{\partial C}{\partial x}$
This equation has the same form as the normal diffusion equation when viewed in a frame of reference moving with the fluid. If we shift to a frame moving at $v_x$, we can define the relative displacement
$\overline{x} = x-v_xt \nonumber$
Remember, C is a function of x and t, and expressing eq. (12.1.2) in terms of $\overline{x}$ via the chain rule, we find that we can recast it as the simple diffusion equation:
$\dfrac{\partial C}{\partial t} = D \dfrac{\partial^2 C}{\partial \overline{x}^2}$
Then the solution for diffusion from a point source becomes
$C(\overline{x},t) = \dfrac{1}{\sqrt{4\pi Dt}}e^{-\overline{x}^2/4Dt}$
$C(x,t) = \dfrac{1}{\sqrt{4\pi Dt}} e^{-(x-v_xt)^2/4Dt}$
So the peak of the distribution moves as $\langle x \rangle = v_xt$ and the width grows as $\sigma = [\langle x^2 \rangle - \langle x \rangle^2]^{1/2} = (2Dt)^{1/2}$.
Let’s consider the relative magnitude of the diffusive and drift velocity contributions to the motion of a protein in water. A typical diffusion constant is $10^{-6}\ \mathrm{cm^2/s}$, meaning that the root-mean-square displacement in a one-microsecond time period is 14 nm. If we compare this with the typical velocity of blood in capillaries, v = 0.3 mm/s, in the same microsecond the same protein is pushed ⟨x⟩ = 0.3 nm. For this example, diffusion dominates the transport process on the nanometer scale; however, as the time scale and transport distance increase, the drift term will grow in significance due to the $t^{1/2}$ scaling of diffusive transport.
Péclet Number
The Péclet number Pe is a unitless number used in continuum hydrodynamics to characterize the relative importance of diffusive transport and advective transport processes. Language note:
• Convection: internal currents within fluid
• Advection: mass transport due to convection
We characterize this with a ratio of the rates or, equivalently, the characteristic time scales for transport with these processes:
$P_e =\dfrac{\text{advective flux }(J_{adv})}{\text{diffusive flux }(J_{diff})}\approx \dfrac{\text{diffusion timescale }(t_{diff})}{\text{advection timescale }(t_{adv})} \nonumber$
Limits
• Pe ≪ 1 Diffusion dominated. In this limit, diffusive transport spreads the concentration profile symmetrically about the maximum as illustrated above.
• Pe ≫ 1 Flow dominated. There is effectively no spreading of the concentration profile; it is just carried along with the flow.
If we define a characteristic transport length d and the flow velocity v, then
$t_{adv} \approx \dfrac{d}{v} \nonumber$
Given a diffusion constant D, the diffusive time-scale is taken to be
$t_{diff} \approx \dfrac{d^2}{D} \nonumber$
so that
$P_e = \dfrac{vd}{D} \nonumber$
____________________________________
1. In three dimensions: $\textbf{J} ( \textbf{r},t) = -D \nabla C ( \textbf{r},t)+\textbf{v}C(\textbf{r},t) \text{ and }\dot{C}=\nabla \cdot (D\nabla C)-\nabla \cdot (\textbf{v}C)$.
12.02: Biased Random Walk
The diffusion-with-drift equation can be obtained from a biased random walk problem. To illustrate, we extend the earlier description of a walker on a 1D lattice that can step left or right by a distance $\Delta x$ for every time interval $\Delta t$. However, in this case there is unequal probability of stepping right (+) or left (–) during $\Delta t: P_+ \neq P_-$. Probabilistically speaking, the change in position for a given time interval can be expressed as
\begin{align} \langle x(t+\Delta t) \rangle &= \langle x(t)+\Delta xP_+-\Delta xP_- \rangle \nonumber \\[4pt] &= \langle x(t) \rangle +\Delta x(P_+-P_-) \label{12.2.1} \end{align}
We see that the average position of random walkers depends on the difference in left and right stepping rates. To help link stepping with time, we define rate constants for stepping left or right,
$k_{\pm} = \dfrac{P_{\pm}}{\Delta t}$
with $k_+ \neq k_-$. Then Equation \ref{12.2.1} can be written as
\begin{align} \langle x(t+\Delta t)\rangle &= \langle x(t) \rangle + (k_+-k_-) \Delta t \Delta x \nonumber \\[4pt] &= \langle x(t) \rangle +v_x \Delta t \label{12.2.3} \end{align}
where the drift velocity is related to the difference in hopping rates
$v_x = (k_+-k_-) \Delta x \nonumber$
Iterating Equation \ref{12.2.3} over many steps shows that the mean of the position distribution behaves like traditional linear motion: $\langle x(t)\rangle = x_0 + v_xt$.
What about the variance in the distribution? Calculating the mean-square value of $x$ from Equation \ref{12.2.1} gives
\begin{aligned} \langle x^2(t+\Delta t) \rangle &= \langle x^2(t) \pm 2\Delta x \Delta tk_{\pm}x(t) +(k_++k_-)^2\Delta x^2 \Delta t^2 \rangle \\ &= \langle x^2(t) \rangle +2v_x\Delta t \langle x(t) \rangle +(k_++k_-)\Delta x^2\Delta t \end{aligned}
where we used $(k_+ + k_-)\Delta t = 1$.
Using this to calculate the variance in $x$:
$\sigma^2(t)=(k_+ + k_-)\Delta x^2 t$
and then comparing with $\langle x^2 \rangle = 2Dt$ leads to the conclusion that the breadth of the distribution σ spreads as it would in the absence of a drift velocity, and that the diffusion coefficient for this biased random walk is given by
$D= \dfrac{1}{2} (k_++k_-) \Delta x^2 \nonumber$
When the left and right stepping rates are the same, we recover our earlier result $2D = \Delta x^2/\Delta t$.
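A Monte Carlo sketch of the biased walk is given below. It draws the number of forward steps per walker from the binomial distribution discussed in Section 11.1 and checks that the mean drifts as $v_xt$ while the variance still grows as $2Dt$. The stepping probabilities, step size, and walker count are assumed values for illustration.

```python
# Sketch: biased 1D random walk with P_+ != P_-; verify drift and spreading.
import numpy as np

rng = np.random.default_rng(1)
dx, dt = 1.0, 1.0
P_plus, P_minus = 0.51, 0.49                 # a step is always taken each dt (assumed)
k_plus, k_minus = P_plus / dt, P_minus / dt

n_steps, n_walkers = 2000, 200000
# number of "+" steps per walker is binomially distributed
n_plus = rng.binomial(n_steps, P_plus, size=n_walkers)
x_final = (2 * n_plus - n_steps) * dx        # x = (n_+ - n_-)*dx

t = n_steps * dt
v_x = (k_plus - k_minus) * dx
D = 0.5 * (k_plus + k_minus) * dx**2
print(f"<x>  = {x_final.mean():9.2f}    v_x*t = {v_x*t:9.2f}")
print(f"var  = {x_final.var():9.2f}    2*D*t = {2*D*t:9.2f}")
```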
Fokker–Planck Equation
Diffusion with drift or diffusion in a velocity field is closely related to diffusion of a particle under the influence of an external force f or potential U.
$f(x) = - \dfrac{\partial U}{\partial x} \nonumber$
When random forces on a particle dominate the inertial ones, we can equate the drift velocity and external force through the friction coefficient
\begin{aligned} &\cancel{m\ddot{x}} = f_d +\cancel{f_r(t)} +f_{ext} \\ &f_d = -\zeta v_x \\ &f_{ext} = \zeta v_x \end{aligned}
$f= \zeta v_x$
and therefore the contribution of the force or potential to the total flux is
$J_U = v_xC = \dfrac{f}{\zeta} C = -\dfrac{C}{\zeta} \dfrac{\partial U}{\partial x}$
The Fokker–Planck equation refers to stochastic equations of motion for the continuous probability density $\rho (x,t)$ with units of m−1. The corresponding continuity expression for the probability density is
$\dfrac{\partial \rho}{\partial t} = -\dfrac{\partial j}{\partial x} \nonumber$
where j is the flux, or probability current, with units of $\mathrm{s^{-1}}$, rather than the flux density J ($\mathrm{m^{-2}\ s^{-1}}$) we used for continuum diffusion. If the concentration flux is instead expressed in terms of a probability density, eq. (12.1.3) becomes
$j = -D \dfrac{\partial \rho}{\partial x} + \dfrac{f(x)}{\zeta}\rho$
and the continuity expression is used to obtain the time-evolution of the probability density:
$\dfrac{\partial \rho}{\partial t} = D\dfrac{\partial^2 \rho}{\partial x^2} - \dfrac{\partial}{\partial x} \left[ \dfrac{f(x)}{\zeta} \rho \right]$
This is known as a Fokker–Planck equation.
Smoluchowski Equation
Similarly, we can express diffusion in the presence of an internal interaction potential U(x) using eq. (12.3.2) and the Einstein relation
$\zeta = \dfrac{k_BT}{D}$
Then the total flux with contributions from the diffusive flux and potential flux can be written as
$J=-D\dfrac{\partial C}{\partial x}- \dfrac{DC}{k_BT} \left( \dfrac{\partial U}{\partial x} \right)$
and the corresponding diffusion equation is
$\dfrac{\partial C}{\partial t} = D \left[ \dfrac{\partial^2 C}{\partial x^2} - \dfrac{\partial}{\partial x} \left[ \dfrac{C}{k_BT} \left( \dfrac{\partial U}{\partial x} \right) \right] \right]$
This is known as the Smoluchowski Equation.
Linear Potential
For the case of a linear external potential, we can write the potential in terms of a constant external force $U=-f_{ext}x$. This makes eq. (12.3.7) identical to eq. (12.1.3), if we use eqs. (12.3.1) and (12.3.5) to define the drift velocity as
$v_x = \dfrac{f_{ext}D}{k_BT} \equiv \underset{\sim}{f} D \nonumber$
$J = -D \dfrac{\partial C}{\partial x} + \underset{\sim}{f} DC \nonumber$
Here I defined $\underset{\sim}{f}$ as the constant external force expressed in units of kBT.
The probability distribution that describes the position of particles released at x0 after a time t is
$P(x,t) = \dfrac{1}{\sqrt{4\pi Dt}} \exp \left[ -\dfrac{(x-x_0-\underset{\sim}{f}Dt)^2}{4Dt} \right] \nonumber$
As expected, the mean position of the diffusing particle is given by $\langle x(t) \rangle = x_0 + v_xt$.
To make use of this, let’s calculate the time it takes a monovalent ion to diffuse freely across the width of a membrane (d) under the influence of a linear electrostatic potential of Φ = 0.3V. With U = eΦ
$t = \dfrac{d}{v_x}= \dfrac{k_BTd}{f_{ext}D} = \dfrac{k_BTd^2}{e\Phi D} \nonumber$
Using d = 4 nm, $D = 10^{-5}\ \mathrm{cm^2/s}$, and $e = 1.6\times 10^{-19}$ C, we obtain t = 1.4 ns.
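The arithmetic behind this estimate is reproduced in the short sketch below, using the same values quoted above (with $k_BT$ evaluated at 300 K).

```python
# Sketch: drift time across a membrane, t = kBT * d^2 / (e * Phi * D).
kBT = 1.380649e-23 * 300      # J at 300 K
d = 4e-9                      # membrane width, m
Phi = 0.3                     # V
e = 1.602e-19                 # C
D = 1e-5 * 1e-4               # 1e-5 cm^2/s converted to m^2/s

t = kBT * d**2 / (e * Phi * D)
print(f"t = {t*1e9:.2f} ns")   # ~1.4 ns, as quoted above
```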
Steady‐State Solutions
For steady-state solutions to the Fokker–Planck or Smoluchowski equations, we can make use of a commonly used mathematical manipulation. As an example, let’s work with eq. (12.3.3), re-writing it as
$j = -D \left[ \dfrac{\partial \rho}{\partial x} +\dfrac{\rho}{k_BT} \left( \dfrac{\partial U}{\partial x} \right) \right]$
We can rewrite the quantity in brackets as:
$e^{-U(x) /k_BT} \dfrac{d}{dx} \left[ \rho e^{U(x)/k_BT} \right] \nonumber$
Separating variables, we obtain
$- \dfrac{j}{D} e^{U(x) /k_BT} dx = d\left(\rho e^{U(x)/k_BT}\right) \nonumber$
This is an expression that can be manipulated in various ways and integrated over different boundary conditions.1 For instance, recognizing that j is a constant under steady state conditions, and integrating from x to a boundary b:
\begin{aligned} -\dfrac{j}{D} \int^b_x e^{U(x)/k_BT} dx &= \int^b_x d(\rho e^{U(x)/k_BT}) \\ &= \rho (b) e^{U(b)/k_BT} - \rho (x)e^{U(x)/k_BT} \end{aligned}
This leads one to an important expression for the steady state flux in the diffusive limit:
$j = \dfrac{-D\left[ \rho (b) e^{U(b)/k_BT}-\rho (x) e^{U(x)/k_BT} \right]}{\int^b_x e^{U(x')/k_BT}dx'} \nonumber$
The boundary chosen depends on the problem, for instance b is set to infinity in diffusion to capture problems or set as a fixed boundary for first-passage time problems.
For problems involving an absorbing boundary condition, ρ(b) = 0, and we can solve for the probability density as
$\rho (x) = \dfrac{j}{D} e^{-U(x)/k_BT} \left[ \int^b_x e^{U(x')/k_BT} dx' \right] \nonumber$
If we integrate both sides of this expression over the entire space, the left hand side is just unity, so we can express the steady-state flux as
$j = D \left[ \int^b_0 e^{-U(x)/k_BT} \left[ \int^b_x e^{U(x')/k_BT}dx' \right] dx \right]^{-1} \nonumber$
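This double integral is straightforward to evaluate numerically for a model potential, as in the sketch below. The Gaussian barrier, box size, and grid are illustrative assumptions; the U = 0 case is included as a check against the analytic result $j = 2D/b^2$ for free diffusion to the absorbing boundary.

```python
# Sketch: numerically evaluate the steady-state flux
#   j = D * [ int_0^b e^{-U/kBT} ( int_x^b e^{U/kBT} dx' ) dx ]^{-1}
# for an assumed Gaussian barrier, with the free (U = 0) case as a check.
import numpy as np

D, b, kBT = 1.0, 10.0, 1.0                   # arbitrary units (assumed)
x = np.linspace(0.0, b, 4001)
dxg = x[1] - x[0]

def flux(U):
    boltz_p = np.exp(U / kBT)                # e^{+U/kBT}
    boltz_m = np.exp(-U / kBT)               # e^{-U/kBT}
    # inner integral from x to b for every grid point via a reverse cumulative sum
    inner = (np.cumsum(boltz_p[::-1]) * dxg)[::-1]
    outer = np.sum(boltz_m * inner) * dxg
    return D / outer

U_flat = np.zeros_like(x)
U_barrier = 5.0 * kBT * np.exp(-(x - b/2)**2 / (2 * 1.0**2))  # 5 kBT Gaussian barrier

print(f"free diffusion : j = {flux(U_flat):.4f}   (analytic 2D/b^2 = {2*D/b**2:.4f})")
print(f"5 kBT barrier  : j = {flux(U_barrier):.6f}")
```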
____________________________________________
1. The general three-dimensional expression is $\textbf{J}(\textbf{r},t)= -De^{-U(\textbf{r})/k_BT}\nabla [ e^{U(\textbf{r})/k_BT}\rho (\textbf{r},t) ]$.
Now let’s relate the phenomena of Brownian motion and diffusion to the concept of friction, i.e., the resistance to movement that the particle in the fluid experiences. These concepts were developed by Einstein in the case of microscopic motion under thermal excitation, and macroscopically by George Stokes who was the father of hydrodynamic theory.
13: Friction and the Langevin Equation
Consider the forces acting on a particle as we pull it through a fluid. We pull the particle with an external force $f_{ext}$, which is opposed by a drag force from the fluid, $f_d$. The drag or damping acts as resistance to motion of the particle, which results from trying to move the fluid out of the way.
$f_d = -\zeta v \qquad \qquad \quad \zeta \mathrm{(kg/s)}$
A drag force requires movement, so it is proportional to the velocity of the particle $v=dx/dt=\dot{x}$, and the friction coefficient $\zeta$ is the proportionality constant that describes the magnitude of the damping. Newton’s second law relates the acceleration of this particle to the sum of these forces:
$ma = f_d+f_{ext} \nonumber$
Now microscopically, we also recognize that there are time-dependent random forces that the molecules of the fluid exert on a molecule ($f_r$). So that the specific molecular details of solute–solvent collisions can be averaged over, it is useful to think about a nanoscale solute in water (e.g., biological macromolecules) with dimensions large enough that its position is simultaneously influenced by many solvent molecules, but is also light enough that the constant interactions with the solvent leave an unbalanced force acting on the solute at any moment in time:
$\overline{f}_r(t)=-\sum_i \overline{f}_i(t) \nonumber$
Then Newton’s second law is
$ma = f_d + f_{ext} +f_r(t) \nonumber$
The drag force is present regardless of whether an external force is present, so in the absence of external forces ($f_{ext}=0$) the equation of motion governing the spontaneous fluctuations of this solute is determined from the forces due to drag and the random fluctuations:
$ma = f_d + f_r(t) \label{13.1.1}$
$m\ddot{x} + \zeta \dot{x} - f_r(t) = 0 \label{13.1.2}$
This equation of motion is the Langevin equation. An equation of motion such as this that includes a time-dependent random force is known as “stochastic”.
Inserting a random process into a deterministic equation means that we need to use a statistical approach to solve this equation. We will be looking to describe the average and root-mean-squared position of the particle. First, what can we say about the random force? Although there may be momentary imbalances, on average the perturbations from the solvent on a larger particle will average to zero at equilibrium:
$\langle f_r(t) \rangle = 0$
Equation \ref{13.1.1} seems to imply that the drag force and the random force are independent, but in fact they originate in the same molecular forces. If the molecule of interest is a protein that experiences the fluctuations of many rapidly moving solvent molecules, then the averaged forces due to random fluctuations and the drag forces are related. The fluctuation–dissipation theorem is the general relationship that relates the friction to the correlation function for the random force. In the Markovian limit this is
$\langle f_r(t) f_r(t') \rangle = 2\zeta k_BT \delta (t-t')$
or $\zeta = \dfrac{\langle f_r^2 \rangle }{2k_BT} \nonumber$
Markovian indicates that no correlation exists between the random forces at different times, i.e., for |t‒t′| > 0. More generally, we can recover the friction coefficient from the integral over the correlation function for the random force
$\zeta = \dfrac{1}{2k_BT} \int^{+\infty}_{-\infty} dt \langle f_R(0)f_R(t) \rangle \nonumber$
To describe the time evolution of the position of our protein molecule, we would like to obtain an expression for the mean-square displacement $\langle x^2(t)\rangle$. The position of the molecule can be described by integrating over its time-dependent velocity: $x(t)=\int^t_0 dt' \dot{x}(t')$, so we can express the mean-square displacement in terms of the velocity autocorrelation function
$\langle x^2(t) \rangle = \int^t_0 dt' \int^t_0 dt'' \langle \dot{x}(t') \dot{x}(t'') \rangle$
Our approach to obtaining $\langle x^2(t)\rangle$ starts by multiplying eq. (13.1.2) by x and then ensemble averaging.
$m \langle x \dfrac{d}{dt} \dot{x} \rangle + \zeta \langle x \dot{x} \rangle - \langle xf_r(t) \rangle =0$
From eq. (13.1.3), the last term is zero, and from the chain rule we know
$\dfrac{d}{dt} (x\dot{x}) = x\dfrac{d}{dt} \dot{x} + \dfrac{dx}{dt}\dot{x}$
Therefore, we can write eq. (13.1.6) as
$m \left( \dfrac{d}{dt} \langle x\dot{x} \rangle - \langle \dot{x} \dot{x} \rangle \right) + \zeta \langle x \dot{x} \rangle = 0$
Further, the equipartition theorem states that for each translational degree of freedom the kinetic energy is partitioned as
$\dfrac{1}{2} m \langle \dot{x}^2 \rangle = \dfrac{k_BT}{2}$
So, $m \dfrac{d}{dt} \langle x\dot{x} \rangle + \zeta \langle x \dot{x} \rangle = k_BT$
Here we are describing motion in 1D, but when fluctuations and displacement are included for 3D motion, then we switch x → r and kBT→3kBT. Integrating eq. (13.1.10) twice with respect to time, and using the initial condition x(0) = 0, we obtain
$\langle x^2 \rangle = \dfrac{2k_BT}{\zeta } \left\{ t +\dfrac{m}{\zeta } \left[ \exp \left( -\dfrac{\zeta }{m}t \right) -1 \right] \right\}$
in 3D:
$\langle r^2 \rangle = \dfrac{6k_BT}{\zeta } \left\{ t +\dfrac{m}{\zeta } \left[ \exp \left( -\dfrac{\zeta }{m}t \right) -1 \right] \right\} \nonumber$
To investigate eq. (13.1.11), let’s consider two limiting cases. We see that m/ζ has units of time, and so we define the relaxation time
$\tau_C = m/ \zeta$
and investigate time scale short and long compared to τC:
1) For $t \ll \tau_C$, we can expand the exponential in eq. (13.1.11) and retain the first three terms, which leads to
$\langle x^2 \rangle \approx \dfrac{k_BT}{m}t^2 = \langle v^2 \rangle t^2 \qquad \qquad \quad \text{(short time: inertial)}$
2) For $t \gg \tau_C$, eq. (13.1.11) is dominated by the leading term:
$\langle x^2 \rangle = \dfrac{2k_BT}{\zeta} t \qquad \qquad \quad \text{(long time: diffusive)}$
In the diffusive limit the behavior of the molecule is governed entirely by the fluid, and its mass does not matter. The diffusive limit in a stochastic equation of motion is equivalent to setting m→0.
We see that τC is a time-scale separating motion in the inertial and diffusive limits. It is a correlation time for the randomization of the velocity of the particle due to the random fluctuations of the environment.
For very little friction or short times, the particle moves with traditional deterministic motion, $x_{rms}= v_{rms}\,t$, where the root-mean-square displacement is $x_{rms} = \langle x^2 \rangle^{1/2}$ and $v_{rms}$ comes from the average translational kinetic energy of the particle. For high friction or long times, we see diffusive behavior with $x_{rms}\sim t^{1/2}$. Furthermore, by comparing eq. (13.1.14) to our earlier continuum result, $\langle x^2 \rangle = 2Dt$, we see that the diffusion constant can be related to the friction coefficient by
$D= \dfrac{k_BT}{\zeta} \quad \text{(in 1D)}$
This is the Einstein formula. For 3D problems, we replace kBT with 3kBT in the expressions above and find $D_{3D}=3k_BT /\zeta$.
$\tau_C = \dfrac{m}{\zeta} = \dfrac{mD}{k_BT} \quad \text{(in 1D)} \nonumber$
How long does it take to approach the diffusive regime? Very fast. For a 100 kDa protein with R = 3 nm in water at T = 300 K, we find a characteristic correlation time for randomizing velocities of $\tau_C=3\times 10^{-12}$ s, which corresponds to a distance of about $10^{-2}$ nm traveled before the onset of diffusive behavior.
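The numbers behind this estimate are sketched below, using Stokes-type friction ($\zeta = 6\pi\eta R_h$, discussed in the hydrodynamics chapter) for the friction coefficient; the mass, radius, and viscosity values are the assumed inputs of the estimate.

```python
# Sketch: tau_C = m/zeta = m*D/(kBT) for a ~100 kDa protein (assumed values).
import numpy as np

kBT = 1.380649e-23 * 300                 # J at 300 K
m = 100e3 * 1.66054e-27                  # 100 kDa in kg
eta = 8.9e-4                             # Pa s, water near room temperature
R = 3e-9                                 # hydrodynamic radius, m (assumed)

zeta = 6 * np.pi * eta * R               # Stokes friction coefficient, kg/s
D = kBT / zeta
tau_C = m / zeta
print(f"D     = {D:.2e} m^2/s")
print(f"tau_C = {tau_C:.1e} s")                                   # a few picoseconds
print(f"distance before diffusive onset ~ {np.sqrt(kBT/m)*tau_C*1e9:.3f} nm")
```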
We can find other relationships. Noting the relationship of ⟨x2⟩ to the velocity autocorrelation function in eq. (13.1.5), we find that the particle velocity is described by
$\langle v_x(0)v_x(t) \rangle = \langle v_x^2\rangle e^{-\zeta t /m}= \langle v^2_x \rangle e^{-t/\tau_C} \qquad \qquad \qquad v_x = \dot{x} \nonumber$
which can be integrated over time to obtain the diffusion constant.
$\int^{\infty}_0 \langle v_x(0)v_x(t) \rangle dt = \dfrac{k_BT}{\zeta} = D$
This expression is the Green–Kubo relationship. This is a practical way of analyzing molecular trajectories in simulations or using particle-tracking experiments to quantify diffusion constants or friction coefficients.
13.02: Brownian Dynamics
The Langevin equation for the motion of a Brownian particle can be modified to account for an additional external force, in addition to the drag force and random force. From Newton’s Second Law:
$m \ddot{x} = f_d + f_r(t)+ f_{ext}(t) \nonumber$
where the added force is obtained from the gradient of the potential it experiences:
$f_{ext} = -\dfrac{\partial U}{\partial x}$
With the fluctuation-dissipation relation $\langle f_r(t)f_r(t')\rangle = 2\zeta k_BT \delta (t-t')$, the Langevin equation becomes
$m\ddot{x} + (\partial U/\partial x)+\zeta \dot{x} - \sqrt{2\zeta k_BT } R(t)=0$
Here $R(t)$ refers to a Gaussian distributed sequence of random numbers with $⟨R(t)⟩ = 0$ and $⟨R(t) R(t′)⟩ = δ(t ‒ t′)$.
Brownian dynamics simulations are performed using this equation of motion in the diffusion-dominated, or strong friction limit $|m\ddot{x}|\ll |\zeta \dot{x}|$. Then, we can neglect inertial motion, and set the acceleration of the particle to zero to obtain an expression for the velocity of the particle
$\dot{x} (t) = -\dfrac{1}{\zeta}\dfrac{\partial U}{\partial x} +\sqrt{2k_BT/\zeta}\, R(t) \nonumber$
We then integrate this equation of motion in the presence of random perturbations to determine the dynamics $x(t)$.
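A minimal Brownian-dynamics sketch of this overdamped integration is shown below for a harmonic well $U = \tfrac{1}{2}\kappa x^2$; all parameter values and the choice of potential are assumptions for illustration. As a check, the stationary variance of the trajectory should approach the equipartition value $k_BT/\kappa$.

```python
# Brownian-dynamics sketch: explicit Euler integration of the overdamped
# equation of motion above for a particle in a harmonic well.
import numpy as np

rng = np.random.default_rng(2)
kBT, zeta, kappa = 1.0, 1.0, 2.0       # reduced units (assumed)
dt, n_steps = 1e-3, 200000

x = 0.0
traj = np.empty(n_steps)
for i in range(n_steps):
    force = -kappa * x                                  # -dU/dx
    noise = np.sqrt(2 * kBT * dt / zeta) * rng.standard_normal()
    x += force * dt / zeta + noise                      # overdamped update
    traj[i] = x

print(f"<x^2> from trajectory = {traj[n_steps//10:].var():.3f}")
print(f"kBT/kappa             = {kBT/kappa:.3f}")
```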
Readings
1. R. Zwanzig, Nonequilibrium Statistical Mechanics. (Oxford University Press, New York, 2001).
2. B. J. Berne and R. Pecora, Dynamic Light Scattering: With Applications to Chemistry, Biology, and Physics. (Wiley, New York, 1976).
• Diffusion equations, random walks, and the Langevin equation are useful for describing transport driven by random thermal forces under equilibrium conditions or not far from equilibrium (the linear response regime).
• Fluid dynamics and hydrodynamics refer to continuum approaches that allow us to describe non-equilibrium conditions for transport in fluids. Hydrodynamics describes flow and transport of objects through a fluid experiencing resistance or friction.
14: Hydrodynamics
• Fluids are described through continuum mechanics.
• Stress: Force applied to an object. Stress is force applied over a surface area, a. Force has normal (z) and parallel components (x).
• The stress can be decomposed into the normal component perpendicular to the surface, $\vec{f}_z/a$, and the shear stress parallel to the surface, $\vec{f}_x/a$.
• Strain: The deformation (change in dimension) of object as a result of the stress.
• Solids
• A solid is considered ideal, or Hookean, if its behavior follows a linear relationship between elastic stress and strain, i.e., Hooke’s Law.
• Solids are stiff and will return to their original configuration when stressed, but can’t deform far (without rupture).
• Fluids
• Fluids cannot support a strain and remain at equilibrium. Conservation of momentum dictates that application of a force will induce a flow.
• Fluids resist flow (viscous flow).
• Newtonian fluids follow a linear relation between shear stress and the strain rate.
Viscosity
Viscosity measures the resistance to shear forces. A fluid is placed between two plates of area a separated along z, and one plate is moved relative to the other by applying a shear force along x. At contact, the velocity of the fluid at the interface with either plate is equal to the velocity of the plate as a result of intermolecular interactions: $\vec{v}_x(z=0)=0$. This is known as the no-slip boundary condition. The movement of one plate with respect to the other sets up a velocity gradient along z. This velocity gradient is equal to the strain rate.
The relationship between the shear velocity gradient and the force is
$\vec{f}_x = a \eta \dfrac{d \vec{v}_x}{dz} \nonumber$
where η, the dynamic viscosity (kg m⁻¹ s⁻¹), is the proportionality factor. For water at 25°C, the dynamic viscosity is η = 8.9×10⁻⁴ Pa s.
Stresses in a Dense Particle Fluid
A normal stress is a pressure (force per unit area), and these forces are transmitted through a fluid as a result of the conservation of momentum in an incompressible medium. This force transduction also means that a stress applied in one direction can induce a strain in another, i.e. a stress tensor is needed to describe the proportionality between the stress and strain vectors.
In an anisotropic particulate system, force transmission from one region of the fluid to another results from “force chains” involving streaming motion of particles that repel each other. These force chains are not simply unidirectional, but also branch into networks that bypass unaffected regions of the system.
Adapted from National Science Foundation, “Granular Materials”, June 15, 2012. Copyright 2012 National Science Foundation. https://www.youtube.com/watch?v=R7g6wdmYB78
14.02: Stokes Law
How is a fluid’s macroscopic resistance to flow related to microscopic friction originating in random forces between the fluid’s molecules? In discussing the Langevin equation, we noted that the friction coefficient $\zeta$ was the proportionality constant between the drag force experienced by an object and its velocity through the fluid: $f_d=-\zeta v$. Since this drag force is equal and opposite to the stress exerted on an object as it moves through a fluid, there is a relationship of the drag force to the fluid viscosity. Specifically, we can show that Einstein’s friction coefficient ζ is related to the dynamic viscosity of the fluid $\eta$, as well as other factors describing the size and shape of the object (but not its mass).
This connection is possible as a result of George Stokes’ description of the fluid velocity field around a sphere moving through a viscous fluid at a constant velocity. He considered a sphere of radius R moving through a fluid with laminar flow: that in which the fluid exhibits smooth parallel velocity profiles without lateral mixing. Under those conditions, and no-slip boundary conditions, one finds that the drag force on a sphere is
$f_d = 6\pi \eta R_hv$
and viscous force per unit area is entirely uniform across the surface of the sphere. This gives us Stokes’ Law
$\zeta = 6\pi \eta R_h$
Here Rh is referred to as the hydrodynamic radius of the sphere, the radius at which one can apply the no-slip boundary condition, but which on a molecular scale may include water that is strongly bound to the molecule. Combining eq. (1) with the Einstein formula for diffusion coefficient, $D=k_BT/\zeta$ gives the Stokes–Einstein relationship for the translation diffusion constant of a sphere1
$D_{trans} = \dfrac{k_BT}{6\pi \eta R_h}$
One can obtain a similar Stokes–Einstein relationship for orientational diffusion of a sphere in a viscous fluid. Relating the orientational diffusion constant and the drag force that arises from resistance to shear, one obtains
$D_{rot} = \dfrac{k_BT}{6V_h\eta } \nonumber$
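As a quick numerical check of these relationships, the following sketch (Python) evaluates $D_{trans}$ and $D_{rot}$ for a spherical particle in water; the hydrodynamic radius is an assumed, protein-sized value rather than a number from the text.

```python
import math

kBT = 4.1e-21          # J, thermal energy near 300 K
eta = 8.9e-4           # Pa s, viscosity of water near room temperature
R_h = 3e-9             # m, assumed hydrodynamic radius (small protein)
V_h = (4.0 / 3.0) * math.pi * R_h**3

D_trans = kBT / (6 * math.pi * eta * R_h)   # translational diffusion constant, m^2/s
D_rot = kBT / (6 * V_h * eta)               # rotational diffusion constant, rad^2/s

print(f"D_trans = {D_trans:.2e} m^2/s")
print(f"D_rot   = {D_rot:.2e} rad^2/s")
print(f"rotational correlation time ~ 1/(6 D_rot) = {1/(6*D_rot)*1e9:.0f} ns")
```

For these values, D_trans is on the order of 10⁻¹⁰ m² s⁻¹ and the rotational correlation time is a few tens of nanoseconds, typical magnitudes for a small globular protein.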
________________________________________
1. B. J. Berne and R. Pecora, Dynamic Light Scattering: With Applications to Chemistry, Biology, and Physics. (Wiley, New York, 1976), pp. 78, 91.
14.03: Laminar and Turbulent Flow
• Laminar flow: Fluid travels in smooth parallel lines without lateral mixing.
• Turbulent flow: Flow velocity field is unstable, with vortices that dissipate kinetic energy of fluid more rapidly than laminar regime.
Reynolds Number
The Reynolds number is a dimensionless number used to indicate whether flow conditions are in the laminar or turbulent regime. It indicates whether the motion of a particle in a fluid is dominated by inertial or viscous forces.1
$\mathcal{R} = \dfrac{inertial\: forces}{viscous \: forces} \nonumber$
When $\mathcal{R}>1$, the particle moves freely, experiencing only weak resistance to its motion by the fluid. If $\mathcal{R}<1$, it is dominated by the resistance and internal forces of the fluid. For the latter case, we can consider the limit m → 0 in the Langevin equation, and find that the velocity of the particle is proportional to the random fluctuations: $v(t)=f_r(t)/\zeta$.
We can also express the Reynolds number in other forms:
• In terms of the fluid velocity flow properties: $\mathcal{R} = \dfrac{v\rho (d \overline{v}/dz)}{\eta (d^2\overline{v}/dz^2)}$
• In terms of the Langevin variables: $\mathcal{R} = f_{in}/f_d$.
Hydrodynamically, for a sphere of radius r moving through a fluid with dynamic viscosity η and density ρ at velocity v,
$\mathcal{R} =\dfrac{rv\rho}{\eta} \nonumber$
Consider an object with radius 1 cm moving at 10 cm/s through water: $\mathcal{R}=10^3$. Now compare to a protein with radius 1 nm moving at 10 m/s: $\mathcal{R}=10^{-2}$.
Drag Force in Hydrodynamics
The drag force on an object is determined by the force required to displace the fluid against the direction of flow. A sphere, rod, or cube with the same mass and surface area will respond differently to flow. Empirically, the drag force on an object can be expressed as
$f_d = \left[ \dfrac{1}{2} \rho C_d v^2 \right] a \nonumber$
This expression takes the form of a pressure (term in brackets) exerted on the cross-sectional area of the object along the direction of flow, a. Cd is the drag coefficient, a dimensionless proportionality constant that depends on the shape of the object. In the case of a sphere of radius r, a = πr²; in the turbulent flow regime ($\mathcal{R} >1000$), Cd = 0.44–0.47. Determination of Cd is somewhat empirical since it depends on $\mathcal{R}$ and the type of flow around the sphere.
The drag coefficient for a sphere in the viscous/laminar/Stokes flow regimes ($\mathcal{R}<1$) is $C_d=24/\mathcal{R}$. This comes from using the Stokes Law for the drag force on a sphere $f_d=6\pi \eta v r$ and the Reynolds number $\mathcal{R}=\rho vd/\eta$.
Reprinted with permission from Bernard de Go Mars, Drag coefficient of a sphere as a function of Reynolds number, CC BY-SA 3.0.
________________________________
1. E. M. Purcell, Life at low Reynolds number, Am. J. Phys. 45, 3–11 (1977).
Passive transport is often synonymous with diffusion, where thermal energy is the only source of motion.
$\langle r(t) \rangle = 0 \qquad \qquad \qquad \langle r^2(t) \rangle^{1/2}=\sqrt{6Dt} \qquad \qquad \qquad r_{rms}\propto \sqrt{t} \nonumber$
In biological systems, diffusive transport may work on a short scale, but it is not effective for long-range transport. Consider:
$\langle r^2 \rangle^{1/2}$ for a small protein moving in water:
~10 nm → ~10⁻⁷ s
~10 μm → ~10⁻¹ s
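These estimates follow from $r_{rms}=\sqrt{6Dt}$; a minimal check (Python), assuming D ≈ 10⁻⁶ cm² s⁻¹ for a small protein in water:

```python
D = 1e-6 * 1e-4               # 1e-6 cm^2/s expressed in m^2/s
for r in (10e-9, 10e-6):      # 10 nm and 10 um
    t = r**2 / (6 * D)        # solve r_rms = sqrt(6 D t) for t
    print(f"r = {r*1e6:7.3f} um  ->  t ~ {t:.1e} s")
```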
Active transport refers to directed motion:
$\langle r(t) \rangle = \langle v \rangle t \qquad \qquad \qquad \qquad r \propto t \nonumber$
This requires an input of energy into the system; however, we must still deal with random thermal fluctuations.
How do you speed up transport?
We will discuss these possibilities:
• Reduce dimensionality: Facilitated diffusion
• Free energy (chemical potential) gradient: Diffusion in a potential
• Directional: Requires input of energy, which drives the switching between two conformational states of the moving particle tied to translation.
15: Passive Transport
One approach that does not require energy input works by recognizing that displacement is faster in systems with reduced dimensionality. Let’s think about the time it takes to diffusively encounter a small fixed target in a large volume, and how this depends on the dimensionality of the search. We will look at the mean first passage time to find a small target with radius b centered in a spherical volume with radius R, where R≫b. If the molecules are initially uniformly distributed within the volume, the average time it takes for them to encounter the target (i.e., MFPT) is1
\begin{aligned} &\langle \tau_{3D} \rangle \simeq \dfrac{R^2}{3D_3} \left( \dfrac{R}{b} \right) \qquad \qquad R \gg b \\ &\langle \tau_{2D} \rangle \simeq \dfrac{R^2}{2D_2} \ln \left( \dfrac{R}{b} \right) \qquad \: \: \: \: \: \: R \gg b \\ &\langle \tau_{1D} \rangle \simeq \dfrac{R^2}{3D_1} \end{aligned}
Here Dn is the diffusion constant in n dimensions (cm² s⁻¹). If we assume that the magnitude of D does not vary much with n, the leading terms in these expressions are about equal, and the big differences are in the last factor
$\left( \dfrac{R}{b} \right) > \ln \left( \dfrac{R}{b} \right) \gg 1$
$\langle \tau_{3D} \rangle > \langle \tau_{2D} \rangle \gg \langle \tau_{1D} \rangle$
Based on the volume that needs searching, there can be a tremendous advantage to lowering the dimensionality.
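To put numbers on this advantage, the following sketch (Python) evaluates the three MFPT expressions above for an assumed search geometry, taking the same diffusion constant in each dimensionality.

```python
import math

D = 1e-10     # m^2/s, assumed diffusion constant (taken equal in 1D, 2D, and 3D)
R = 1e-6      # m, radius of the search volume (assumed ~1 um)
b = 1e-9      # m, radius of the target (assumed ~1 nm)

tau_3D = (R**2 / (3 * D)) * (R / b)
tau_2D = (R**2 / (2 * D)) * math.log(R / b)
tau_1D = R**2 / (3 * D)

print(f"tau_3D ~ {tau_3D:.1e} s")
print(f"tau_2D ~ {tau_2D:.1e} s")
print(f"tau_1D ~ {tau_1D:.1e} s")
```

With these values the 3D search is roughly a thousand times slower than the 1D search, reflecting the extra factor of R/b.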
_______________________________________________
1. O. G. Berg and P. H. von Hippel, Diffusion-controlled macromolecular interactions, Annu. Rev. Biophys. Biophys. Chem. 14, 131-158 (1985); H. C. Berg and E. M. Purcell, Physics of chemoreception, Biophys. J. 20, 193-219 (1977).
15.02: Facilitated Diffusion
Facilitated diffusion is a type of dimensionality reduction that has been used to describe the motion of transcription factors and regulatory proteins looking for their binding target on DNA.1
E.coli Lac Repressor
Experiments by Riggs et al. showed that E. coli Lac repressor finds its binding site about one hundred times faster than expected by 3D diffusion.2 They measured ka = 7×10⁹ M⁻¹ s⁻¹, which is 100–1000 times faster than typical rates. The calculated diffusion-limited association rate from the Smoluchowski equation is ka ≈ 10⁸ M⁻¹ s⁻¹ using estimated values of D ≈ 5×10⁻⁷ cm² s⁻¹ and R ≈ 5×10⁻⁸ cm. Berg and von Hippel theoretically described the possible ways in which nonspecific binding to DNA enabled more efficient one-dimensional motion coupled to three-dimensional transport.3
Many Possibilities for Locating Targets Diffusively: Coupled 1D+3D Diffusion
1. Sliding (1D diffusion along chain as a result of nonspecific interaction)
2. Microhop (local translocation with free diffusion)
3. Macrohop (...to distal segment via free diffusion)
4. Intersegmental transfer at crossing—varies with DNA dynamics
Consider Coupled Sliding and Diffusion: The Steady‐State Solution
The transcription factor diffuses in 1D along DNA with the objective of locating a specific binding site. The association of the protein and DNA at all points is governed by a nonspecific interaction. Sliding requires a balance of nonspecific attractive forces that are neither too strong (or the protein will not move) nor too weak (or it will not stay bound). The nonspecific interaction is governed by an equilibrium constant and exchange rates between the bound and free forms:
$F \overset{k_a}{\underset{k_d}{\rightleftharpoons}} B \qquad \qquad \qquad K = \dfrac{k_a}{k_d} = \dfrac{\overline{\tau}_{1D}}{\overline{\tau}_{3D}} \nonumber$
We can also think of this equilibrium constant in terms of the average times spent diffusing in 1D or 3D. The protein stays bound for a period of time dictated by the dissociation rate kd. It can then diffuse in 3D until reaching a contact with DNA again, at a point which may be short range in distance but widely separated in sequence.
The target for the transcription factor search can be much larger than the physical size of the binding sequence. Since the 1D sliding is the efficient route to finding the binding site, the target size is effectively covered by the mean 1D diffusion length of the protein, that is, the average distance over which the protein will diffuse in 1D before it dissociates. Since one can express the average time that a protein remains bound as $\overline{\tau}_{1D}=k^{-1}_d$, the target will have a DNA contour length of
$R^*=\left( \dfrac{4D_1}{k_d} \right)^{1/2} \nonumber$
If the DNA is treated as an infinitely long cylinder with radius b, and the protein is considered to have a uniform probability of nonspecifically associating with the entire surface of the DNA, then one can obtain the steady-state solution of the diffusion equation, assuming a completely absorbing target. The rate constant for specific binding to the target has been determined as
$k_{bind} = \dfrac{D_1K'}{D_3b} \nonumber$
where K' is the equilibrium constant for nonspecific binding per unit surface area of the cylinder (M⁻¹ cm⁻² or cm). We can express the equilibrium constant per base pair as $K=2\pi \ell bK'$, where $\ell$ is the length of a base pair along the contour of the DNA. The association rate will be given by the product of kbind and the concentration of protein.
_________________________________
1. P. H. von Hippel and O. G. Berg, Facilitated target location in biological systems, J. Biol. Chem. 264 (2), 675–678 (1989).
2. A. D. Riggs, S. Bourgeois and M. Cohn, The lac represser-operator interaction, J. Mol. Biol. 53 (3), 401–417 (1970); Y. M. Wang, R. H. Austin and E. C. Cox, Single molecule measurements of repressor protein 1D diffusion on DNA, Phys. Rev. Lett. 97 (4), 048302 (2006).
3. O. G. Berg, R. B. Winter and P. H. Von Hippel, Diffusion-driven mechanisms of protein translocation on nucleic acids. 1. Models and theory, Biochemistry 20 (24), 6929–6948 (1981).
Consider a series of repetitive 1D and 3D diffusion cycles.1 The search time for a protein to find its target is
$t_s = \sum^k_{i=1} \left( \tau_{1D,i}+\tau_{3D,i} \right) \nonumber$
where k is the number of cycles. If the genome has a length of M bases and the average number of bases scanned per cycle is n, the average number of cycles k̅ = M/n̅, and the average search time can be written as
$\overline{t}_s = \dfrac{M}{\overline{n}} \left( \overline{\tau}_{1D,i}+\overline{\tau}_{3D,i} \right)$
where $\overline{\tau}_{1D}$ and $\overline{\tau}_{3D}$ are the mean times spent in the 1D and 3D modes during one cycle. If we assume that sliding occurs through normal 1D diffusion, then we expect that $\overline{n} \propto \sqrt{D_{1D}\overline{\tau}_{1D}}$, where the diffusion constant is expressed in units of bp²/s. More accurately, it is found that if you execute a random walk with an exponentially weighted distribution of search times:
\begin{aligned} P(\tau_{1D}) &= \overline{\tau}_{1D}^{\: -1}\exp (-\tau_{1D}/\overline{\tau}_{1D}) \\ \overline{n} &= \sqrt{4D_{1D}\overline{\tau}_{1D}} \end{aligned}
$\overline{t}_s = \dfrac{M}{\sqrt{4D_{1D}\overline{\tau}_{1D}}}\left( \overline{\tau}_{1D} +\overline{\tau}_{3D}\right) \nonumber$
Let’s calculate the optimal search time, topt. In the limits that $\overline{\tau}_{1D}$ or $\overline{\tau}_{3D}\rightarrow 0$, you just have pure 3D or pure 1D diffusion, respectively, but these limits give suboptimal search times because a decrease in $\overline{\tau}_{1D}$ or $\overline{\tau}_{3D}$ leads to an increase in the other. To find the minimum search time we solve:
$\dfrac{\partial \overline{t}_s}{\partial \tau_{1D}} = 0 \nonumber$
and find that topt corresponds to the condition
$\overline{\tau}_{1D} = \overline{\tau}_{3D} \nonumber$
Using this in eq. (15.3.1) we have
\begin{aligned} t_{opt} &= \dfrac{2M}{\overline{n}} \overline{\tau}_{3D} = M \sqrt{\dfrac{\overline{\tau}_{3D}}{D_{1D}}} \\ \overline{n}_{opt} &= \sqrt{4D_{1D}\overline{\tau}_{3D}} \end{aligned}
Now let’s find out how much this 1D + 3D search process speeds up over the pure 1D or 3D search.
• 3D only: $\overline{\tau}_{1D}\rightarrow 0 \qquad \therefore \overline{n} \rightarrow \sim 1$ leading to
$\overline{t}_{3D} = M\overline{\tau}_{3D} \nonumber$
Facilitated diffusion speeds up the search relative to pure 3D diffusion by a factor proportional to the average number of bases searched during the 1D sliding.
$\dfrac{\overline{t}_{3D}}{(\overline{t}_s)_{opt}}=\dfrac{\overline{n}}{2} \nonumber$
• 1D only: $\overline{\tau}_{3D}\rightarrow 0 \qquad \therefore \overline{n} \rightarrow M$, and
\begin{aligned} \overline{t}_{1D} &\approx \dfrac{M^2}{4D_{1D}} \\ \dfrac{\overline{t}_{1D}}{(\overline{t}_s)_{opt}} &= \dfrac{M}{4} \sqrt{\dfrac{1}{D_{1D}\overline{\tau}_{1D}}} = \dfrac{M}{\overline{n}} \end{aligned}
Facilitated diffusion speeds up the search over pure 1D diffusion by a factor of M/n̅.
Example: Bacterial Genome
\begin{aligned} M &\approx 5\mathrm{x}10^6 \: \text{bp} \\ \overline{n} &\approx 200- 500 \: \text{bp} \end{aligned}
Optimal facilitated diffusion is ~10² times faster than pure 3D search and ~10⁴ times faster than pure 1D search.
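The speedups quoted here follow directly from the ratios derived above; a short sketch (Python) makes the arithmetic explicit:

```python
M = 5e6                        # genome length (bp)
for n_bar in (200, 500):       # average number of bases scanned per 1D excursion
    speedup_vs_3D = n_bar / 2          # t_3D / (t_s)_opt
    speedup_vs_1D = M / n_bar          # t_1D / (t_s)_opt
    print(f"n_bar = {n_bar}: ~{speedup_vs_3D:.0f}x faster than 3D, "
          f"~{speedup_vs_1D:.0e}x faster than 1D")
```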
Energetics of Diffusion
What determines the diffusion coefficient for sliding and $\overline{\tau}_1$? We need the non-specific protein interaction to be strong enough that it doesn’t dissociate too rapidly, but also weak enough that it can slide rapidly. To analyze this, we use a model in which the protein is diffusing on a modulated energy landscape looking for a low energy binding site.
Model2
• Assume each sequence can have different interaction with the protein.
• Base pairs in binding patch contribute additively and independently to give a binding energy En for each site, n.
• Assume that the variation in the binding energies as a function of site follow Gaussian random statistics, characterized by the average binding energy $\langle E \rangle$ and the surface energy roughness $\sigma$.
• The protein will attempt to move to an adjacent site at a frequency ν = Δτ⁻¹. The rate of jumping is the probability that the attempt is successful times ν, and depends on the energy difference between adjacent sites, ΔE=En±1‒En. The rate is ν if ΔE<0, and ν$\cdot$exp[‒ΔE/kBT] for ΔE>0.
Calculating the mean first passage time to reach a target site at a distance of L base pairs from the original position yields
$\overline{\tau}_{1D} = L^2\Delta \tau \left( 1+\dfrac{1}{2} \left( \dfrac{\sigma}{k_BT} \right)^2 \right)^{-1/2} e^{+7\sigma^2/4(k_BT)^2} \nonumber$
which follows a diffusive form with a diffusion constant
$D_{1D} = \dfrac{L^2}{2\overline{\tau}_{1D}}=\dfrac{1}{2\Delta \tau} \left( 1+\dfrac{1}{2} \left( \dfrac{\sigma}{k_BT} \right)^2 \right)^{1/2} e^{-7\sigma^2/4(k_BT)^2}$
Using this to find conditions for the fastest search time:
$t_{opt} = \dfrac{M}{2} \sqrt{\dfrac{\pi \overline{\tau}_{3D}}{4D_{1D}}} \qquad \qquad \overline{n}_{opt} = \sqrt{\dfrac{16}{\pi}D_{1D}\overline{\tau}_{3D}} \qquad \qquad \overline{\tau}_{1D} = \overline{\tau}_{3D} \nonumber$
Speed vs Stability Paradox
Speed: A fast search requires fast 1D sliding. From eq. (15.3.2), we see that
$D_{1D} \propto \exp \left[ - \left( \dfrac{\sigma}{k_BT} \right)^2 \right]$
With this strong dependence on σ, effective sliding with proper $\overline{n}$ requires
$\sigma < 2k_BT \nonumber$
Stability: On the other hand, the protein needs to remain stably bound at the target site for proper recognition and activity. As an estimate, we require the equilibrium probability of having the protein bound at the target site to be $P_{eq} \approx 0.25$. If E0 is the minimum binding energy among the M sites sampled from the Gaussian distribution of site energies, we can first estimate that
$E_0 \approx - \sigma \sqrt{2 \log M} \nonumber$
which suggests that for adequate binding:
$\sigma > 5 k_BT \nonumber$
Proposed Two‐State Sliding Mechanism
To account for these differences, a model has been proposed:
• While 1D sliding, protein is constantly switching between two states, the search and recognize conformations: $S \rightleftharpoons R$. S binds loosely and allows fast diffusion, whereas R interacts more strongly such that σ increases in the R state.
• These fast conformational transitions must have a rate faster than
$> \dfrac{\overline{n}}{\overline{\tau}_{1D}} \sim 10^4 s^{-1} \nonumber$
• Other Criteria:
\begin{aligned} \langle E_R \rangle &< \langle E_S \rangle \\ \sigma_R&>\sigma_S \end{aligned}
Reprinted from M. Slutsky and L. A. Mirny, Kinetics of protein-DNA interaction: Facilitated target location in sequence-dependent potential, Biophys. J. 87 (6), 4021–4035 (2004), with permission from Elsevier.
Diffusion on rough energy landscape
The observation in eq. (15.3.3), relating the roughness of an energy landscape to an effective diffusion rate, is quite general.3 If we are diffusing over a distance long enough that the corrugation of the energy landscape looks like Gaussian random noise with a standard deviation σ, we expect the effective diffusion coefficient to scale as
$D_{eff} = D_0 \exp \left[ - \left( \dfrac{\sigma}{k_BT} \right)^2 \right]$
where D0 is the diffusion constant in the absence of the energy roughness.
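A short check (Python) shows how strongly this factor suppresses diffusion as the roughness grows, which is why effective sliding requires σ of only a few kBT:

```python
import math

for sigma in (0.5, 1.0, 2.0, 3.0, 5.0):      # roughness in units of kBT
    # suppression factor D_eff/D0 = exp[-(sigma/kBT)^2]
    print(f"sigma = {sigma} kBT  ->  D_eff/D0 = {math.exp(-sigma**2):.1e}")
```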
Single‐Molecule Experiments
To date, there is still no definitive evidence for coupled 1D + 3D transport, although there is now a lot of data showing 1D sliding. These studies used flow to stretch DNA and followed the position of fluorescently labelled proteins as they diffused along the DNA.
Austin: Lac repressor follow-up $\rightarrow$ observed that $D_{1D}$ varies by many orders of magnitude.4
\begin{aligned} D_{1D} &:10^2-10^5 \: \mathrm{nm}^2/\mathrm{s} \\ \overline{n} &\approx 500 \: \mathrm{nm} \end{aligned}
Blainey and Xie: hOGG1 DNA repair protein:5
\begin{aligned} \Delta G^{\dagger}_{slide} &\approx 0.5 \text{ kcal/mol} \approx k_BT \\ D_{1D} &\sim 5\mathrm{x}10^6 \: \mathrm{bp}^2/\mathrm{s} \\ \overline{n} &\approx 440 \: \mathrm{bp} \end{aligned}
Reprinted from A. Tafvizi, F. Huang, J. S. Leith, A. R. Fersht, L. A. Mirny and A. M. van Oijen, Tumor Suppressor p53 Slides on DNA with Low Friction and High Stability, Biophys. J. 95 (1), L01–L03 (2008), with permission from Elsevier.
$D_{1D} \qquad 10^6 - 10^7 \: \mathrm{bp}^2/\mathrm{s} \approx 10^{-1}-10^0 \: \mu \mathrm{m}^2/\mathrm{s} \nonumber$
______________________________________
1. (5)M. Slutsky and L. A. Mirny, Kinetics of protein-DNA interaction: Facilitated target location in sequence-dependent potential, Biophys. J. 87 (6), 4021–4035 (2004); A. Tafvizi, L. A. Mirny and A. M. van Oijen, Dancing on DNA: Kinetic aspects of search processes on DNA, Chemphyschem 12 (8), 1481–1489 (2011).
2. M. Slutsky and L. A. Mirny, Kinetics of protein-DNA interaction: Facilitated target location in sequence-dependent potential, Biophys. J. 87 (6), 4021–4035 (2004).
3. R. Zwanzig, Diffusion in a rough potential, Proc. Natl. Acad. Sci. U. S. A. 85 (7), 2029 (1988).
4. Y. M. Wang, R. H. Austin and E. C. Cox, Single molecule measurements of repressor protein 1D diffusion on DNA, Phys. Rev. Lett. 97 (4), 048302 (2006).
5. P. C. Blainey, A. M. van Oijen, A. Banerjee, G. L. Verdine and X. S. Xie, A base-excision DNA-repair protein finds intrahelical lesion bases by fast sliding in contact with DNA, Proc. Natl. Acad. Sci. U. S. A. 103 (15), 5752 (2006).
In this section we will discuss the kinetics of association of a diffusing particle with a target. What is the rate at which a diffusing molecule reaches its target? These diffusion-to-capture problems show up in many contexts. For instance:
1. Molecule diffusing to fixed target(s). Binding of ligands to enzymes or receptors. Binding of transcription factors to DNA. Here the target may have complex topology or target configurations, but it is fixed relative to a diffusing small molecule $(D_{molec}\gg D_{target} )$. The diffusion may occur in 1, 2, and/or 3 dimensions, depending on the problem.
2. Bimolecular Diffusive Encounter. Diffusion limited chemical reactions. How do two molecules diffuse into proximity and react? Reaction–diffusion equations.
We will consider two approaches to dealing with these problems:
1. Steady-state solutions. The general strategy is to determine the flux of molecules incident on the target from the steady state solution to the diffusion equation with an absorbing boundary condition at the target to account for loss of diffusing molecules once they reach the target. Then the concentration gradient at the target surface can be used to calculate a flux or rate of collisions.
2. Mean-first passage time. This is a time-dependent representation of the rate in which you calculate the average time that it takes for a diffusing object to first reach a target.
Diffusion to Capture by Sphere
What is the rate of encounter of a diffusing species with a spherical target? We can find a steady-state solution by determining the steady-state radial concentration profile $C(r)$. Assume that reaction is immediate on encounter at a radius $a$. This sets the boundary condition, $C(a) = 0$. We also know the bulk concentration $C_0 = C(∞)$. From our earlier discussion, the steady state solution to this problem is
$C(r) = C_0\left( 1- \dfrac{a}{r} \right) \nonumber$
Next, to calculate the rate of collisions with the sphere, we first calculate the flux density of molecules incident on the surface of the sphere ($r = a$):
$J(a) = -D \dfrac{\partial C}{\partial r}\Bigg{|}_{r=a} = - \dfrac{DC_0}{a}$
$J$ is expressed as (molec area⁻¹ s⁻¹) or [(mol/L) area⁻¹ s⁻¹]. We then calculate the flux, or rate of collisions of molecules with the sphere (molec s⁻¹), by multiplying the flux density by the surface area of the sphere (A = 4πa²):
\begin{aligned} j &= \dfrac{dN}{dt} = JA = \left( \dfrac{DC_0}{a} \right) (4\pi a^2) \\ &=4\pi DaC_0 \\ &\equiv kC_0 \end{aligned}
We associate the constant or proportionality between rate of collisions and concentration with the pseudo first-order association rate constant, $k = 4πDa$, which is proportional to the rate of diffusion to the target and the size of the target.
Reaction-Diffusion
The discussion above describes the rate of collisions of solutes with an absorbing sphere, which are applicable if the absorbing sphere is fixed. For problems involving the encounter between two species that are both diffusing in solution $(A+B \rightarrow X)$, you can extend this treatment to the encounter of two types of particles A and B, which are characterized by two bulk concentrations CA and CB, two radii RA and RB, and two diffusion constants DA and DB.
To describe the rate of reaction, we need to calculate the total rate of collisions between A and B molecules. Rather than describing the diffusion of both A and B molecules, it is simpler to fix the frame of reference on B and recognize that we want to describe the diffusion of A with respect to B. In that case, the effective diffusion constant is
$D=D_a+D_b$
Furthermore, we expand our encounter radius to the sum of the radii of the two spheres (RAB = RA + RB). The flux density of A molecules incident on a single B at an encounter radius of RAB is given by eq. (1)
$J_{a\rightarrow b} = \dfrac{DC_A}{R_{AB}} \nonumber$
Here J describes the number of molecules of A incident per unit area at a radius RAB from B molecules per unit time, [molec A] [area of B]⁻¹ s⁻¹. If we treat the motion of B to be uncorrelated with A, then the total rate of collisions between A and B can be obtained from the product of JA→B with the area of a sphere of radius RAB and the total concentration of B:
\begin{aligned} \dfrac{dN_{A \leftrightarrow B}}{dt} &= J_{A\rightarrow B} A_{AB}C_B \\ &=J_{A\rightarrow B}(4\pi R^2_{AB}) C_B \\ &=4\pi DR_{AB}C_AC_B \end{aligned}
The same result is obtained if we begin with the flux density of B incident on A, JB→A, using the same encounter radius and diffusion constant. Now comparing this with expected second order rate law for a bimolecular reaction
$\dfrac{dN_{A \leftrightarrow B}}{dt} = k_aC_AC_B \nonumber$
we see
$k_a = 4\pi (D_A+D_B) R_{AB} \nonumber$
ka is the rate constant for a diffusion-limited reaction (association). It has units of cm³ s⁻¹, which can be converted to (L mol⁻¹ s⁻¹) by multiplying by Avogadro’s number and converting cm³ to L.
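As a sanity check on the magnitude, the sketch below (Python; the diffusion constants and encounter radius are assumed, protein-sized values) evaluates ka and converts it to M⁻¹ s⁻¹:

```python
import math

N_A = 6.022e23                 # Avogadro's number
D_A = D_B = 1e-6               # cm^2/s, assumed diffusion constants
R_AB = 1e-7                    # cm, assumed encounter radius (~1 nm)

k_a_cm3 = 4 * math.pi * (D_A + D_B) * R_AB      # cm^3 s^-1 (per pair of molecules)
k_a_M = k_a_cm3 * N_A / 1000.0                  # L mol^-1 s^-1 (1000 cm^3 per L)
print(f"k_a = {k_a_cm3:.2e} cm^3/s = {k_a_M:.1e} M^-1 s^-1")
```

This reproduces the commonly quoted diffusion-limited magnitude of roughly 10⁹–10¹⁰ M⁻¹ s⁻¹.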
Reactive patches
If you modify these expressions so that only part of the sphere is reactive, then similar results ensue, in which one recovers the same diffusion limited association rate (ka,0) multiplied by an additional factor that depends on the geometry of the surface area that is active: ka=ka,0∙[constant]. For instance if we consider a small circular patch on a sphere that subtends a half angle θ, the geometric factor should scale as sinθ. For small θ, sinθ≈θ. If you have small patches on two spheres, which must diffusively encounter each other, the slowing of the association rate relative to the case with the fully accessible spherical surface area is
$k_a /k_{a,0} = \theta_A\theta_B(\theta_A+\theta_B)/8$
For the association rate of molecules with a sphere of radius R covered with n absorbing spots of radius b:
$k_a = k_{a,0} \left( 1 + \dfrac{\pi R}{nb} \right)^{-1}$
Additional configurations are explored in Berg.
______________________________________
1. D. F. Calef and J. M. Deutch, Diffusion-controlled reactions, Annu. Rev. Phys. Chem. 34 (1), 493-524 (1983).
What if the association is influenced by an additional potential for A-B interactions? Following our earlier discussion for diffusion in a potential, the potential UAB results in an additional contribution to the flux:
$J_U = -\dfrac{D_AC_A}{k_BT} \dfrac{\partial U_{AB}}{\partial r} \nonumber$
So the total flux of A incident on B from normal diffusion Jdiff and the interaction potential JU is
$J_{A \rightarrow B} = -D_A \left[ \dfrac{\partial C_A}{\partial r} + \dfrac{C_A}{k_BT} \dfrac{\partial U_{AB}}{\partial r} \right] \nonumber$
To solve this we make use of a mathematical manipulation commonly used in solving the Smoluchowski equation in which we rewrite the quantity in brackets as
$J_{A \rightarrow B} = -D_A e^{-U_{AB}/k_BT} \dfrac{d\left[ C_Ae^{U_{AB}/k_BT}\right] }{dr}$
Substitute this into the expression for the rate of collisions of A with B:
\begin{aligned} \dfrac{dn_{A \rightarrow B}}{dt} &= A\, J_{A\rightarrow B} \\ &=4\pi r^2 J_{A\rightarrow B} \end{aligned}
where A = 4πr² is the area of a spherical shell of radius r; in steady state this rate is independent of r.
Separate variables and integrate from the surface of the sphere to $r = \infty$ using the boundary conditions: $C(R_B)=0, C(\infty )=C_A$:
$\left( \dfrac{dn_{A \rightarrow B}}{dt} \right) \underbrace{\int^{\infty}_{R_B} e^{U_{AB}/k_BT} \dfrac{dr}{r^2} }_{(R^*)^{-1}} = 4\pi D_A \underbrace{\int^{C_A}_0 d\left[ C_Ae^{U_{AB}/k_BT} \right] }_{C_A}$
Note that the integral on the right is just the bulk concentration of A. The integral on the left has units of inverse distance, and we can write it in terms of the variable R*:
$(R^*)^{-1} = \int^{\infty}_{R_B} e^{U_{AB}/k_BT}r^{-2}dr \nonumber$
Note that when no potential is present, then UAB→ 0, and R* = RB. Therefore R* is an effective encounter distance which accounts for the added influence of the interaction potential, and we can express it in terms of f, a correction factor to the normal encounter radius: R* = f RB. For attractive interactions R* > RB and f >1, and vice versa.1
Returning to eq. (16.2.2), we see that the rate of collisions of A with B is
$\dfrac{dn_{A\rightarrow B}}{dt} = 4\pi D_AR_B^*C_A \nonumber$
As before, if we account for the total number of collisions for two diffusing molecules A and B:
\begin{aligned} \dfrac{dn_{TOT}}{dt} &= J_{A\rightarrow B}A_{AB}C_B \\ &=k_aC_AC_B \\ k_a&=4\pi (D_A+D_B)R_{AB}^* \\ R_{AB}^* &= R_A^* +R_B^* \end{aligned}
Example: Electrostatic potential2
Let’s calculate the form of $R^*_{AB}$ for the case where the interaction is a Coulomb potential.3
$U_{AB}(r) = \dfrac{z_Az_Be^2}{4\pi \epsilon r} = k_BT\dfrac{\ell_B}{r} \nonumber$
where the Bjerrum length is $\ell_B = z_Az_Be^2/(4\pi \epsilon k_BT)$. Then
\begin{aligned} (R_{AB}^*)^{-1} &= \int^{\infty}_{R_{AB}} e^{U_{AB}/k_BT} \dfrac{dr}{r^2} \\ &= \ell_B^{-1} \left[ \mathrm{exp}(\ell_B/R_{AB})-1 \right] \end{aligned}
and
$R_{AB}^* = \ell_B \left( e^{\ell_B/R_{AB}}-1\right)^{-1} \nonumber$
For $\ell_B \ll R_{AB}$, $R_{AB}^* \rightarrow R_{AB}$. For $\ell_B = R_{AB}$, $R_{AB}^* = 0.58R_{AB}$ if the charges have the same sign (repel), or $R_{AB}^* = 1.58R_{AB}$ if they are opposite charges (attract).
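A small sketch (Python) of the effective encounter radius for this Coulomb case reproduces the limiting values quoted above:

```python
import math

def R_star(l_B, R_AB):
    """Effective encounter radius; l_B > 0 for like charges (repulsive),
    l_B < 0 for opposite charges (attractive)."""
    return l_B / (math.exp(l_B / R_AB) - 1.0)

R_AB = 1.0   # work in units of the bare encounter radius
print("repulsive,  l_B =  R_AB     :", round(R_star(+1.0, R_AB), 2))    # ~0.58
print("attractive, l_B = -R_AB     :", round(R_star(-1.0, R_AB), 2))    # ~1.58
print("weak,       l_B = 0.01 R_AB :", round(R_star(0.01, R_AB), 3))    # -> ~1
```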
______________________________________
1. A more general form for the flux, in which the boundary condition at the surface of the sphere CA(R0) is non-zero, for instance when there is an additional chemical reaction on contact, is
$4\pi r^2 J_{A\rightarrow B}=\dfrac{4\pi D_A \left[ C_A(\infty )e^{U_{AB}(\infty )/k_BT} -C_A(R_0)e^{U_{AB}(R_0)/k_BT} \right]}{\int^{\infty }_{R_0} r^{-2}e^{U_{AB}(r)/k_BT} dr } \nonumber$
$C_A(\infty )$ is the bulk concentration of A. For the perfectly absorbing sphere, the concentration of A at the boundary with B, CA(R0)=0. For a homogeneous solution we also assume that the interaction potential at long range $U_{AB}(\infty ) =0$.
2. See also J. I. Steinfeld, Chemical Kinetics and Dynamics, 2nd ed. (Prentice Hall, Upper Saddle River, N.J., 1998), 4.2-4.4.
3. See M. Vijayakumar, K.-Y. Wong, G. Schreiber, A. R. Fersht, A. Szabo and H.-X. Zhou, Electrostatic enhancement of diffusion-controlled protein-protein association: comparison of theory and experiment on barnase and barstar, J. Mol. Biol. 278 (5), 1015-1024 (1998).
Another way of describing diffusion-to-target rates is in terms of first passage times. The mean first passage time (MFPT), ⟨τ⟩, is the average time it takes for a diffusing particle to reach a target position for the first time. The inverse of ⟨τ⟩ gives the rate of the corresponding diffusion-limited reaction. A first passage time approach is particularly relevant to problems in which a description based on time-dependent averages hides intrinsically important behavior of outliers and rare events, particularly in the analysis of single-molecule kinetics.
To describe first passage times, we begin by defining the reaction probability R and the survival probability S. R is a conditional probability function that describes the probability that a molecule starting at a point $x_0=0$ at time t0 will have reached a reaction boundary at x = xf for the first time by time t: R(xf,t|x0,t0). S is just the conditional probability that the molecule has not reached x = xf during that time interval: S(xf,t|x0,t0). Therefore
$R+S=1 \nonumber$
Next, we define F(τ,xf|x0), the first passage probability density. F(τ)dτ is the probability that a molecule passes through x = xf for the first time between times τ and τ+dτ. R, S, and F are only a function of time for a fixed position of the reaction boundary, i.e., they integrate over any spatial variations. To connect F with the survival probability, we recognize that the reaction probability can be obtained by integrating over all possible first passage times for time intervals τ < t. Dropping space variables, recognizing that (t‒t0) = τ, and setting x0 = 0,
$R(t) = \int^t_0 F(\tau )d\tau \nonumber$
This relation implies that the first passage time distribution can be obtained by differentiating S
$F(t) = \dfrac{\partial }{\partial t}R(t) = - \dfrac{\partial }{\partial t} S(t)$
Then the MFPT is obtained by averaging over F(t)
$\langle \tau \rangle = \int^{\infty }_0 \tau F(\tau ) d\tau$
To evaluate these quantities for a particular problem, we seek to relate them to the time-dependent probability density, P(x,t|x0,t0), which is an explicit function of time and space. The connection between P and F is not immediately obvious because evaluating P at x = xf without the proper boundary conditions includes trajectories that have passed through x = xf before returning there again later. The key to relating these is to recognize that the survival probability can be obtained by calculating a diffusion problem with an absorbing boundary condition at x = xf that removes particles once they reach the boundary: P(xf,t|x0) = 0. The resulting probability distribution Pa(x,t|x0,t0) is not conserved but gradually loses probability density with time. Hence, we can see that the survival probability is an integral over the remaining probability density that describes particles that have not yet reached the boundary:
$S(t) = \int^{x_f}_{-\infty }dxP_a(x,t)$
The mean first passage time can be written as
$\langle \tau \rangle = \int^{x_f}_{-\infty }dx \int^{\infty}_{0}dt \: P_a(x,t) \nonumber$
The next important realization is that the first passage time distribution is related to the flux of diffusing particles through xf. Combining eq. (16.3.1) and (16.3.3) shows us
$F(t) = -\int^{x_f}_{-\infty }dx \dfrac{\partial }{\partial t} P_a(x,t)$
Next we make use of the continuity expression for the probability density
$\dfrac{\partial P}{\partial t} = -\dfrac{\partial j}{\partial x} \nonumber$
j is a flux, or probability current, with units of s⁻¹, not the flux density we used for continuum diffusion J (m⁻² s⁻¹). Then eq. (16.3.4) becomes
\begin{aligned} F(t) &= \int^{x_f}_{-\infty}dx \dfrac{\partial }{\partial x} j_a(x,t) \\ &=j_a(x_f,t) \end{aligned}
So the first passage time distribution is equal to the flux distribution for particles crossing the boundary at time t. Furthermore, from eq. (16.3.2), we see that the MFPT is just the inverse of the average flux of particles crossing the absorbing boundary:
$\langle \tau \rangle = \dfrac{1}{\langle j_a(x_f) \rangle }$
In chemical kinetics, $\langle j_a (x_f) \rangle$ is the rate constant from transition state theory.
Calculating the First Passage Time Distribution
To calculate F one needs to solve a Fokker–Planck equation for the equivalent diffusion problem with an absorbing boundary condition. As an example, we can write these expressions explicitly for diffusion from a point source. This problem is solved using the Fourier transform method, applying absorbing boundary conditions at xf to give
$P_a (x,t) = P(x,t)-P(2x_f-x,t) \qquad \qquad (x \leq x_f) \nonumber$
which is expressed in terms of the probability distribution in the absence of absorbing boundary conditions:
$P(x,t) = (4\pi Dt)^{-1/2}\mathrm{exp}\left[ \dfrac{-(x-x_0)^2}{4Dt} \right] \nonumber$
The corresponding first passage time distribution is:
$F(t) = \dfrac{x_f-x_0}{(4\pi Dt^3)^{1/2}} \mathrm{exp}\left[ -\dfrac{(x_f-x_0)^2}{4Dt} \right] \nonumber$
F(t) decays in time as t−3/2, leading to a long tail in the distribution. The mean of this distribution gives the MFPT
$\langle \tau \rangle = x^2_f/2D \nonumber$
and the most probable passage time is xf2/6D. Also, we can use eq. (16.3.3) to obtain the survival probability
$S(t) = \mathrm{erf}\left( \dfrac{x_f}{\sqrt{4Dt}} \right) = \mathrm{erf}\left( \sqrt{\dfrac{\langle \tau \rangle }{2t}} \right) \nonumber$
S(t) depends on the distance of the target and the rms diffusion length over time t. At long times S(t) decays as t−1/2.
It is interesting to calculate the probability that the diffusing particle will reach xf at any time. From eq. (16.3.4), we can see that this probability can be calculated from $\int^{\infty}_0 F(\tau )d\tau$. For the current example, this integral over F gives unity, saying that a random walker in 1D will eventually reach every point on a line. Equivalently, it is guaranteed to return to the origin at some point in time. This observation holds in 1D and 2D, but not 3D.
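These results are straightforward to verify with a stochastic simulation. The sketch below (Python, with arbitrary illustrative parameters) releases many walkers at the origin and records the first time each crosses xf; the recorded passage times can be compared with the most probable time xf²/6D and with the slowly decaying tail of F(t).

```python
import numpy as np

D, x_f, dt = 1.0, 1.0, 1e-3           # arbitrary units
n_walkers, max_steps = 5000, 50_000   # follow each walker up to t = 50

rng = np.random.default_rng(1)
x = np.zeros(n_walkers)
fpt = np.full(n_walkers, np.nan)      # first passage times (NaN = not yet absorbed)
alive = np.ones(n_walkers, dtype=bool)

for step in range(1, max_steps + 1):
    x[alive] += np.sqrt(2 * D * dt) * rng.standard_normal(alive.sum())
    crossed = alive & (x >= x_f)      # walkers reaching the boundary this step
    fpt[crossed] = step * dt
    alive &= ~crossed

fpts = fpt[~np.isnan(fpt)]
print("fraction absorbed by t = 50   :", fpts.size / n_walkers)
print("most probable passage time    :", x_f**2 / (6 * D))
print("median simulated passage time :", np.median(fpts))
```

Even with a generous time cutoff, a noticeable fraction of the walkers has not yet been absorbed, a direct signature of the slowly decaying survival probability discussed above.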
Calculating the MFPT From Steady‐State Flux
From eq. (16.3.6) we see that it is also possible to calculate the MFPT by solving for the flux at an absorbing boundary in a steady state calculation. As a simple example, consider the problem of releasing a particle on the left side of a box, $P(x, 0) = \delta (x-x_0)$, and placing the reaction boundary at the other side of the box, x = b. We solve the steady-state diffusion equation $\partial^2P_a/\partial x^2=0$ with an absorbing boundary at x = b, i.e., $P(b,t)=0$. This problem is equivalent to absorbing every diffusing particle that reaches the right side and immediately releasing it again on the left side.
The steady-state solution is $P_a(x) = \dfrac{2}{b}\left( 1-\dfrac{x}{b} \right) \nonumber$
Then, we can calculate the flux of diffusing particles at x=b:
$j(b) = \left. -D\dfrac{\partial P}{\partial x} \right|_{x=b} = \dfrac{2D}{b^2}\nonumber$
and from the inverse we obtain the MFPT:
$\langle \tau \rangle = \dfrac{1}{j(b)} = \left( \dfrac{b^2}{2D} \right) \nonumber$
MFPT in a Potential
To extend this further, let’s examine a similar 1D problem in which a particle is released at x0 = 0, and diffuses in x toward a reaction boundary at x = b, but this time under the influence of a potential U(x). We will calculate the MFPT for arrival at the boundary. Such a problem could be used to calculate the diffusion of an ion through an ion channel under the influence of the transmembrane electrochemical potential.
From our earlier discussion of diffusion in a potential, the steady state flux is
$j=\dfrac{-D\left[ P(b) e^{U(b)/k_BT}-P(x)e^{U(x)/k_BT} \right]}{\int^b_x e^{U(x')/k_BT}dx'}$
Applying the absorbing boundary condition, P(b) = 0, the steady state probability density is
$P_a(x) = \dfrac{j}{D}e^{-U(x)/k_BT} \int^b_x e^{U(x')/k_BT} dx'$
Now integrating both sides over the entire box, the left side is unity, so we obtain an expression for the flux
$\dfrac{1}{j} = \dfrac{1}{D} \int^b_0 e^{-U(x)/k_BT}\left[ \int^b_x e^{U(x')/k_BT} dx' \right] dx$
But j−1 is just the MFPT, so this expression gives us ⟨τ⟩. Note that if we set U to be a constant in eq. (16.3.8), that we recover the expressions for ⟨τ⟩, j, and Pa in the preceding example.
Diffusion in a linear potential
For the case of a linear external potential, we can write the potential in terms of a constant external force $U=-fx$. Solving this with the steady state solution, we substitute U into eq. (16.3.8) and obtain
$\langle \tau \rangle = \dfrac{1}{j} = \dfrac{1}{D\underset{\sim}{f }^2} \left[ e^{-\underset{\sim}{f }b}-1+\underset{\sim}{f }b \right]$
where $\underset{\sim}{f}=f/k_BT$ is the force expressed in units of thermal energy. Substituting into eq. (16.3.7) gives the steady state probability density
$P(x) = \dfrac{\underset{\sim}{f }\left( 1-e^{-\underset{\sim}{f }(b-x)} \right) }{e^{-\underset{\sim}{f }b}-1+\underset{\sim}{f }b} \nonumber$
Now let’s compare these results from calculations using the first passage time distribution. This requires solving the diffusion equation in the presence of the external potential. In the case of a linear potential, we can solve this by expressing the constant force as a drift velocity
$v_x = \dfrac{f}{\zeta} = \dfrac{fD}{k_BT} = \underset{\sim}{f }D \nonumber$
Then the solution is obtained from our earlier example of diffusion with drift:
$P(x,t) = \dfrac{1}{\sqrt{4\pi Dt}}\mathrm{exp} \left[ -\dfrac{(x-\underset{\sim}{f }Dt)^2}{4Dt} \right] \nonumber$
The corresponding first passage time distribution is
$F(t) = \dfrac{b}{\sqrt{4\pi Dt^3}}\mathrm{exp}\left[ -\dfrac{(b-\underset{\sim}{f }Dt)^2}{4Dt} \right] \nonumber$
and the MFPT is given by eq. (16.3.9).
__________________________________
1. A. Nitzan, Chemical Dynamics in Condensed Phases: Relaxation, Transfer and Reactions in Condensed Molecular Systems. (Oxford University Press, New York, 2006); S. Iyer-Biswas and A. Zilman, First-Passage Processes in Cellular Biology, Adv. Chem. Phys. 160, 261–306 (2016).
2. H. C. Berg, Random Walks in Biology. (Princeton University Press, Princeton, N.J., 1993).
Many proteins act as molecular motors using an energy source to move themselves or cargo in space. They create directed motion by coupling energy use to conformational change.
Motor Classes
Translational
• Cytoskeletal motors that step along filaments (actin, microtubules)
• Helicase translocation along DNA
Rotary
• ATP synthase
• Flagellar motors
Polymerization
• Cell motility
Translocation
• DNA packaging in viral capsids
• Transport of polypeptides across membranes
Translational Motors
Processivity
• Some motors stay on fixed track for numerous cycles
• Others bind/unbind often—mixing stepping and diffusion
Cytoskeletal motors
• Used to move vesicles and displace one filament relative to another
• Move along filaments—tracks have polarity (±)
• Steps of fixed size
Classes
• Dynein moves on microtubules (+ → ‒)
• Kinesin moves on microtubules (mostly ‒ → +)
• Myosin moves on actin
Molecular Motors
We can make a number of observations about common properties of translational and rotational motor proteins.
Molecular motors are cyclical
• They are “processive” involving discrete stepping motion
• Multiple cycles lead to directional linear or rotary motion
Molecular motors require an external energy source
• Commonly this energy comes from ATP hydrolysis
• ~50 kJ/mol or ~20 kBT or ~80 pN·nm
• ATP consumption correlated with stepping
• Or from proton transfer across a transmembrane proton gradient
Protein motion is strongly influenced by thermal fluctuations and Brownian motion
• Molecular motors work at energies close to kBT
• Short range motions are diffusive—dominated by collisions
• Inertial motion does not apply
17.02: Passive vs Active Transport
Directed motion of molecules in a statistically deterministic manner (i.e., x̅(t) = v̅t) in a thermally fluctuating environment cannot happen spontaneously. It requires a free energy source, which may come from chemical bonds, charge transfer, and electrochemical gradients. From one perspective, displacing a particle requires work, and the force behind this work originates in free energy gradients along the direction of propagation
$w =-\int_{path}f dx \qquad \qquad f_{rev} = \dfrac{\partial G}{\partial x} \nonumber$
An example of this is steady-state diffusion driven by a spatial difference in chemical potential, for instance the diffusion of ions through a membrane channel driven by a transmembrane potential. This problem is one of passive transport. Although an active input of energy was required to generate the transmembrane potential and the net motion of the ion is directional, the ion itself is a passive participant in this process. Such processes can be modeled as diffusion within a potential.
Active transport refers to the direct input of energy into driving the moving object in a directional manner. At a molecular scale, even with this input of energy, fluctuations and Brownian motion remain very important.
Even so, there are multiple ways in which to conceive of directed motion. Step-wise processive motion can also be viewed as a series of states along a free energy or chemical potential gradient. Consider this energy landscape:
Under steady state conditions, detailed balance dictates that the ratio of rates for passing forward or reverse over a barrier is dictated by the free energy difference between the initial and final states:
$\dfrac{k_+}{k_-} = e^{-\Delta G/k_BT} \nonumber$
and thus the active driving force for this downhill process is
$f \approx -\dfrac{\Delta G}{\Delta x} = \dfrac{k_BT}{\Delta x } \ln{\dfrac{k_+}{k_-} } \nonumber$
This perspective is intimately linked with a biased random walk model when we remember that
$\dfrac{k_+}{k_-} =\dfrac{P_+}{P_-} \nonumber$
If our free energy is the combination of a chemical process ($\Delta G_0$) and an external force, then we can write
$\dfrac{k_+}{k_-} = \mathrm{exp}[-(\Delta G_0 +f\Delta x)/k_BT] \nonumber$
Feynman’s Brownian Ratchet
Feynman used a thought experiment to show you cannot get work from thermal noise.1 Assume you want to use the thermal kinetic energy from the molecules in a gas, and decide to use the collisions of these molecules with a vane to rotate an axle. The direction or rotation will be based on the velocity of the molecules hitting the vane, so to assure that this rotation proceeds only one way, we use a ratchet with a pawl and spring to catch the ratchet when it advances in one direction.
This is the concept of rectified Brownian motion.
At a microscopic level, this reasoning does not hold, because the energy used to rotate the ratchet must be enough to lift the pawl against the force of the spring. If we match the thermal kinetic energy of the gas, $\frac{1}{2}k_BT =\frac{1}{2}m\langle v^2_x \rangle$, to the energy needed to raise the pawl, $U=\frac{1}{2}\kappa x^2$, we find that the pawl will also be undergoing fluctuations in x with similar statistics to the bombardment of the vane: $\kappa = k_BT/\langle x^2\rangle$. Therefore, the ratchet will instead thermally diffuse back and forth as a random walk. Further, Feynman showed that if you embed the vane and ratchet in reservoirs of temperature T1 and T2, respectively, then the ratchet will advance as desired if T1 > T2, but will move in reverse if T1 < T2. Thus, one cannot extract useful work from thermal fluctuations alone. You need some input of energy—any source of free energy.
_________________________________
The Brownian ratchet refers to a class of models for directed transport using Brownian motion that is rectified through the input of energy. For a diffusing particle, the energy is used to switch between two states that differ in their diffusive transport processes. This behavior results in biased diffusion. It is broadly applied for processive molecular motors stepping between discrete states, and it therefore particularly useful for understanding translational and rotational motor proteins.
One common observation we find is that directed motion requires the object to switch between two states that are coupled to its motion, and for which the exchange is driven by input energy. Switching between states results in biased diffusion. The interpretation of real systems within the context of this model can vary. Some people consider this cycle as deterministic, whereas others consider it quite random and noisy; in either case, Brownian motion is exploited to an advantage in moving the particle.
We will consider an example relevant to the ATP-fueled stepping of cytoskeletal motors along a filament. The motor cycles between two states: (1) a bound state (B), in which the protein binds to a particular site on the filament when it has ATP bound, and (2) a free state (F), in which the protein freely diffuses along the filament upon ATP hydrolysis and release of ADP + Pi. The bound state is described by a periodic, spatially asymmetric energy profile $U_B(x)$, for which the protein localizes to a particular energy minimum along the filament. Key characteristics of this potential are a series of sites separated by a barrier $ΔU > k_BT$, and an asymmetry in each well that biases the system toward a local minimum in the direction of travel. In the free state, there are no barriers to motion and the protein diffuses freely. When the free protein binds another ATP, it returns to $U_B(x)$ and relaxes to the nearest energy minimum.
Let’s investigate the factors governing the motion of the particle in this Brownian ratchet, using the perspective of a biased random walk. The important parameters for our model are:
• The distance between adjacent binding sites is $Δx$.
• The position of the forward barrier relative to the binding site is $x_f$. A barrier for reverse diffusion is at $–x_r$, so that
$x_f+x_r = \Delta x$
The asymmetry of $U_B$ is described by
$\alpha =(x_f-x_r)/\Delta x$
• The average times that the motor stays free or bound are $\tau_F$ and $\tau_B$, respectively. Therefore, the average time per bind/release cycle is
$\Delta t = \tau_F+\tau_B \nonumber$
• We define a diffusion length $\ell$ which is dependent on the time that the protein is free
$\ell_0(\tau_F)=\sqrt{4D\tau_F} \nonumber$
Conditions For Efficient Transport
Let’s consider the conditions to maximize the velocity of the Brownian ratchet.
1. While in $F$: the optimal period to be diffusing freely is governed by two opposing concerns. We want the particle to be free long enough to diffuse past the forward barrier, but not so long that it diffuses past the reverse barrier. Thus we would like the diffusion length to lie between the distances to these barriers:
$\ell_0=\sqrt{4D\tau_F} \nonumber$
$x_r > \ell_0 > x_f \nonumber$
Using the average value as a target:
\begin{aligned} \ell_0 &\approx \dfrac{x_r+x_f}{2}= \dfrac{\Delta x}{2} \\ \tau_F &\approx \dfrac{\Delta x^2}{16D} \end{aligned}
2. While in B: After binding ATP, we would like the particle to stay with ATP bound long enough to relax to the minimum of the asymmetric energy landscape. Competing with this consideration, we do not want it to stay bound any longer than necessary if speed is the issue.
We can calculate the time needed to relax from the barrier at xr forward to the potential minimum, if we know the drift velocity vd of this particle under the influence of the potential.
$\tau_B \approx x_r/ \nu_d \nonumber$
The drift velocity is related to the force on the particle through the friction coefficient, $\nu_d = f/\zeta$, and we can obtain the magnitude of the force from the slope of the potential:
$|f| = \dfrac{\Delta U}{x_r} \nonumber$
So the drift velocity is $\nu_d = \dfrac{fD}{k_BT}= \dfrac{\Delta UD}{x_rk_BT}$ and the optimal bound time is
$\tau_B \approx \dfrac{x_r^2k_BT}{\Delta UD} \nonumber$
Now let’s look at this a bit more carefully. We can now calculate the probability of diffusing forward over the barrier during the free interval by integrating over the fraction of the population that has diffused beyond xf during τF. Using the diffusive probability distribution with x0→0,
\begin{aligned} P_+ &= \dfrac{1}{\sqrt{4\pi D\tau_F}} \int_{x_f}^{\infty} e^{-x^2/4D\tau_F} dx \\ &=\dfrac{1}{2} erfc\left( \dfrac{x_f}{\ell_0} \right) \end{aligned}
Similarly, the probability for diffusing backward over the barrier at x = ‒xr is
$P_- = \dfrac{1}{2} erfc \left( \dfrac{x_r}{\ell_0} \right) \nonumber$
Now we can determine the average velocity of the protein by calculating the average displacement in a given time step. The average displacement is the difference in probability for taking a forward versus a reverse step, times the step size. This displacement occurs during the time interval Δt. Therefore,
\begin{aligned} \nu &= \dfrac{\Delta P \Delta x}{\Delta t} \\ &=\dfrac{(P_+-P_-)\Delta x}{(\tau_B + \tau_F)} \\ &=\dfrac{\Delta x}{2\Delta t} \left[ erf\left( \dfrac{x_r}{\ell_0 (\tau_F)}\right) - erf \left( \dfrac{x_f}{\ell_0 (\tau_F)} \right) \right] \end{aligned}
It is clear from this expression that the velocity is zero when the asymmetry of the potential is zero. For asymmetric potentials, P+ and P are dependent on τF, with one rising in time faster than the other. As a result, the velocity, which depends on the difference of these, reaches a maximum in the vicinity of $\tau_F=x^2_f/D$.
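The following sketch (Python; all parameter values are assumed for illustration, not taken from the text) evaluates the velocity expression above as a function of the free interval τF and locates the optimum, which falls near xf²/D for these choices:

```python
import numpy as np
from scipy.special import erf

D = 1e-13                         # m^2/s, assumed diffusion constant of the free motor
dx = 8e-9                         # m, assumed spacing between binding sites
x_f, x_r = 0.2 * dx, 0.8 * dx     # asymmetric well: forward barrier closer than reverse
tau_B = 1e-5                      # s, assumed time spent in the bound state per cycle

tau_F = np.logspace(-7, -3, 400)              # scan the free-diffusion interval (s)
l0 = np.sqrt(4 * D * tau_F)                   # diffusion length while free
v = dx / (2 * (tau_B + tau_F)) * (erf(x_r / l0) - erf(x_f / l0))

i = np.argmax(v)
print(f"optimal tau_F ~ {tau_F[i]:.1e} s   (compare x_f^2/D = {x_f**2/D:.1e} s)")
print(f"maximum velocity ~ {v[i]*1e6:.1f} um/s")
```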
So how does ATP hydrolysis influence the free energy gradient? Here the driving free energy gradient is
$\dfrac{\Delta G_{Hyd.}}{\Delta x} \nonumber$
and the hydrolysis free energy effectively lowers the forward barrier, biasing the stepping rates:
$k_+ = A_+e^{-(\Delta G_{barrier}-\Delta G_{hydrolysis})/kT}$
$k_- = A_-e^{-(\Delta G_{barrier})/kT}$
$\nu = (k_+-k_-) \Delta x$
___________________________________________
K. Dill and S. Bromberg, Molecular Driving Forces: Statistical Thermodynamics in Biology, Chemistry, Physics, and Nanoscience. (Taylor & Francis Group, New York, 2010); R. Phillips, J. Kondev, J. Theriot and H. Garcia, Physical Biology of the Cell, 2nd ed. (Taylor & Francis Group, New York, 2012).
Polymerization and translocation ratchets refer to processes that result in directional displacements of a polymer or oligomer chain rather than a specific protein. The models for these ratchets also involve rectified Brownian motion, in which a binding unit is added to a diffusing chain to bias the diffusive motion in a desired direction. Once the displacement reaches a certain diffusion length, a monomer or binding protein can add to the chain, locking in the forward diffusion of the chain. In this case, it is the binding or attachment of protein units that consumes energy, typically in the form of ATP or GTP hydrolysis.
Translocation Ratchet1
Protein translocation across cell membranes is a ubiquitous process for transporting polypeptide chains across bacterial and organelle membranes through channels with the help of chaperone proteins on the inner side of the membrane. The translocation ratchet refers to a model in which the transport of the chain occurs through Brownian motion which is rectified by the binding of proteins to the chain on one side of the pore as it is displaced. Once the chain diffuses through the pore for a distance Δx, a protein can bind to the chain, stopping backward diffusion. At each step, energy is required to drive the binding of the chaperone protein.
The translocation ratchet refers to a continuum model for the diffusion of the chain. It is possible to map this diffusion problem onto a Smoluchowski equation, but it would be hard to solve for the probability density. Since we are only interested in describing the average velocity of the chain under steady-state conditions, it is easier to solve for the steady-state chain flux across the pore:
$J(x) = -D \left( \dfrac{\partial P}{\partial x} + \dfrac{f}{k_BT}P \right)$
where f is the force acting against the chain displacement. Steady state behavior corresponds to $\partial P/\partial t =0$ , so from the continuity equation
$\dfrac{\partial P}{\partial t} = -\dfrac{\partial J}{\partial x} \nonumber$
we know that $\partial J/\partial x = 0$. Therefore J is a constant. To find P, we want to solve
$\dfrac{\partial P}{\partial x} + \dfrac{f}{k_BT}P+\dfrac{J}{D} = 0 \nonumber$
for which the general solution is $P=A_1e^{-fx/k_BT}+A_2$. We find the integration constants using the boundary condition $P(\Delta x,t )=0$, which reflects that a protein will immediately and irreversibly bind once the diffusing chain reaches an extension $\Delta x$. (No back-stepping is allowed.) And we use the conservation statement:
$\int_0^{\Delta x} dx P(x) = 1 \nonumber$
which says that the displacement of the chain since the last binding event must lie within the interval 0 to Δx. The steady-state probability distribution with these two boundary conditions is
$P(x) = \dfrac{ \underset{ \sim }{f} \left[ \exp \left( \underset{ \sim }{f} (1-x/\Delta x) \right) -1 \right] }{\Delta x\left( 1+ \underset{ \sim }{f} - e^{ \underset{ \sim }{f} } \right) }$
$\underset{ \sim }{f} = \dfrac{f \Delta x}{k_BT} \nonumber$
$\underset{\sim}{f}$ is a dimensionless constant that expresses the load force in units of kBT opposing ratchet displacement by Δx.
Substituting eq. (17.4.2) into eq. (17.4.1) allows us to solve for J.
$J(x) =\dfrac{-D\underset{ \sim }{f}^2 }{\Delta x^2\left( 1+ \underset{ \sim }{f}-e^{\underset{ \sim }{f}} \right) } \left( 1-2\exp \left[ \underset{ \sim }{f} \left( \dfrac{x}{\Delta x} -1 \right) \right] \right) \nonumber$
Now, the average velocity can be determined from $\langle \nu \rangle = J\Delta x$. Evaluating the flux at x = Δx:
$\langle \nu \rangle =\dfrac{2D}{\Delta x} \left[ \dfrac{\underset{ \sim }{f}^2/2}{e^{\underset{ \sim }{f}}-\underset{ \sim }{f}-1} \right] \nonumber$
Now look at the low-force limit $f \rightarrow 0$. Expanding $e^{\underset{ \sim }{f}}\approx 1+\underset{ \sim }{f}+\underset{ \sim }{f}^2/2$:
$\langle \nu \rangle \rightarrow \dfrac{2D}{\Delta x} = v_{max} \nonumber$
Note that this is the maximum velocity for an ideal ratchet, and it follows the expected behavior for purely diffusive motion.
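As a quick numerical check (values of D and Δx are arbitrary assumptions), the sketch below evaluates the ideal-ratchet velocity as a function of the reduced load f̃ = fΔx/kBT and confirms the 2D/Δx limit at zero force.

```python
import numpy as np

D = 1e-13      # chain diffusion constant, m^2/s (assumed)
dx = 2e-9      # spacing between chaperone binding sites, m (assumed)

def v_ideal(f_tilde):
    """Ideal translocation-ratchet velocity vs reduced load f~ = f*dx/kBT."""
    if f_tilde < 1e-6:
        return 2 * D / dx                      # low-force limit, 2D/dx
    return (2 * D / dx) * (f_tilde**2 / 2) / (np.exp(f_tilde) - f_tilde - 1)

for ft in (0.0, 0.1, 1.0, 5.0, 10.0):
    print(f"f~ = {ft:5.1f}  ->  <v> = {v_ideal(ft):.3e} m/s")
```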
Now consider the case in which the probability of protein binding is governed by an equilibrium between free and bound forms:
$F \overset{k_a}{\underset{k_d} \rightleftharpoons} B \qquad \qquad K= \dfrac{k_a}{k_d} = \dfrac{\tau_B}{\tau_F} \nonumber$
Here ka refers to the effective quasi-first-order rate constant for binding at a chaperone concentration [chap]: $k_a = k_a' [ chap ]$.
Fast kinetics approximation
\begin{aligned} &\langle \nu \rangle = \dfrac{2D}{\Delta x} \left[ \dfrac{\underset{ \sim }{f}^2/2}{\dfrac{e^{\underset{ \sim }{f}}-1}{1-K(e^{\underset{ \sim }{f}}-1)}-\underset{ \sim }{f}} \right] \ &\langle \nu \rangle_{max} = \dfrac{2D}{\Delta x} \left( \dfrac{1}{1+2K} \right) \end{aligned}
Stall Load
$f_0 = \dfrac{k_BT}{\Delta x} \ln \left( 1+ \dfrac{1}{K} \right) \nonumber$
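The fast-kinetics result and the stall load can be checked in the same way. In the sketch below (illustrative only; K, D, and Δx are assumed values), the velocity falls to zero as the reduced load approaches f̃0 = ln(1 + 1/K).

```python
import numpy as np

D, dx = 1e-13, 2e-9      # assumed diffusion constant (m^2/s) and step size (m)
K = 0.2                  # assumed value of the dimensionless equilibrium ratio K

def v_fast(ft, K):
    """Translocation velocity at reduced load ft with finite binding equilibrium K."""
    denom = (np.exp(ft) - 1) / (1 - K * (np.exp(ft) - 1)) - ft
    return (2 * D / dx) * (ft**2 / 2) / denom

ft_stall = np.log(1 + 1 / K)                 # reduced stall load
for ft in np.linspace(0.1, 0.99 * ft_stall, 5):
    print(f"f~ = {ft:.2f}   <v> = {v_fast(ft, K):.3e} m/s")
print(f"velocity vanishes at the stall load f~ = ln(1 + 1/K) = {ft_stall:.2f}")
```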
________________________________
1. C. S. Peskin, G. M. Odell and G. F. Oster, Cellular motions and thermal fluctuations: the Brownian ratchet, Biophys. J. 65 (1), 316–324 (1993).
It is often observed in molecular biology that nanoscale structures with sophisticated architectures assemble spontaneously, without the input of external energy. The behavior is therefore governed by physical principles that we can describe with thermodynamics and statistical mechanics. Examples include:
• Protein and RNA folding
• DNA hybridization
• Assembly of protein complexes and viral capsids
• Micelle and vesicle formation
Although each of these processes has distinct characteristics, they can be broadly described as self-assembly processes.
A characteristic of self-assembly is that it appears thermodynamically and kinetically as a simple “two-state transition”, even if thousands of atomic degrees of freedom are involved. That is, as one changes thermodynamic control variables such as temperature, one experimentally observes an assembled state and a disassembled state, but rarely an intermediate, partially assembled state. Furthermore, small changes in these thermodynamic variables can lead to dramatic changes, i.e., melting of DNA or proteins over a few degrees. This binary or switch-like behavior is very different from the smoothly varying unfolding curves we derived for simple lattice models of polymers.
Phase transitions and phase equilibria are related phenomena described by the presence (or coexistence) of two states. These manifest themselves as a large change in the macroscopic properties of the system with only small changes in temperature or other thermodynamic variables. Heating liquid water from 99 °C to 101 °C has a profound effect on the density, which a 2° change at 25 °C would not have.
Such a “first-order” phase transition arises from a discontinuity in the free energy as a function of an intensive thermodynamic variable.1 The thermodynamic description of two-state behavior governing a phase transition is illustrated below for the equilibrium between phases A and B. The free-energy profile is plotted as a function of an order parameter, a variable that distinguishes the physical characteristics relevant to the change of phase. For instance, for a liquid–gas phase transition, the volume or density are order parameters that change dramatically. As the temperature is increased, the free energy of each state, characterized by its free-energy minimum (Gi), decreases smoothly and continuously. However, state B decreases more rapidly than state A. While state A is the global free-energy minimum at low temperature, state B becomes the global minimum at high temperature. The phases are at equilibrium with each other at the temperature where GA = GB.
The presence of a phase transition is dependent on all molecules of the system changing state together, or cooperatively. In a first-order phase transition, this change is infinitely sharp or discontinuous, but the helix–coil transition and related cooperative phenomena can be continuous. Cooperativity is a term that can refer both to macroscopic phenomena and to a molecular scale. We use it to refer to many degrees of freedom changing concertedly. The size or number of particles or molecules participating in a cooperative process is the cooperative unit. In the case of a liquid–gas-phase transition, the cooperative unit is the macroscopic sample, whereas for protein folding it may involve most of the molecule.
What underlies cooperativity? We find that the free energy of the system is not simply additive in the parts. The energy of a particular configurational state depends on the configuration of its neighbors. For instance, the presence of one contact or molecular interaction increases or decreases the propensity for a second contact or interaction. We refer to this as positive or negative cooperativity. Beyond self-assembly, cooperativity plays a role in the binding of multiple ligands and allostery. Here we want to discuss the basic concepts relevant to cooperativity and its relationship to two-state behavior.
Based on observations we have previously made in other contexts, we can expect that cooperative behavior must involve competing thermodynamic effects. Structure is formed at the expense of a large loss of entropy, but the numerous favorable contacts that are formed lower the enthalpy even more. The free-energy change may be small, but this results from two opposing effects of large magnitude and opposite sign (H vs. TS). A small tweak in temperature can completely change the system.
________________________________________________
1. A first-order transition is described by a discontinuity in the first derivatives of G, i.e., in S or V (∂G/∂T or ∂G/∂P). A second-order transition is one in which two phases merge into one at a critical point and is described by a discontinuity in the heat capacity or expansivity/compressibility of the system (∂S/∂T, ∂S/∂P, ∂V/∂T, or ∂V/∂P).
18: Cooperativity
Cooperativity plays an important role in the description of the helix–coil transition, which refers to the reversible transition of macromolecules between coil and extended helical structures. This phenomenon was observed by Paul Doty in the 1950s for the conversion of polypeptides between a coil and α-helical form,2 and for the melting and hybridization of DNA.3 Bruno Zimm developed a statistical theory with J. Bragg that described the helix–coil transition, which forms the basis of our discussion.4
One of the observations that motivated this work is shown in the figure below. The fraction of helical structure observed in the polypeptide poly-benzylglutamate showed a temperature-dependent melting behavior in which the steepness of the transition increased with polymer chain length. This length dependence indicates a higher probability of forming helices when more residues are present, and that the linkages do not act independently. This suggests a two-step mechanism. The rate-limiting step of forming an $α$ helix is the nucleation of a single hydrogen-bonded $i → i+4$ loop. Once this occurs, the addition of further hydrogen bonds to extend this helix is much easier and occurs in rapid succession.
To model this behavior, we imagine that the polypeptide consists of a chain of segments that can take on two configurations, H or C.
H: helix (decreases entropy but also lowers enthalpy)
C: coil (raises entropy)
We specify the conformational state through a sequence, e.g.,
...HCHHHHCCCCHHH...
Remember not to take this too literally, and be flexible in the interpretation of your model. Although this model was derived with $α$-helix formation in polypeptides in mind, in a more general sense $H$ and $C$ do not necessarily refer explicitly to residues of a sequence, but rather to independently interacting regions.
If there are $n$ segments, these can be divided into $n_H$ helical and $n_C$ coil segments.
$n_H + n_C = n \nonumber$
The segments need not correspond directly to amino acids, but to structurally and energetically distinct regions. Our goal will be to calculate the fractional helicity of this system, $\theta_H$, as a function of temperature by evaluating the conformational partition function, qconf, as an explicit summation over microstates i, Boltzmann weighted by the microstate energy Ei:
$q_{\mathrm{conf}} (n) = \sum_{i\, \mathrm{config.}} e^{-E_i/k_BT}$
Non‐cooperative Model
We start our analysis by discussing a non-cooperative model. We assume:
• Each segment can switch conformation between H and C independently of the others.
• The formation of H from C lowers the configurational energy by $\Delta \epsilon .\, \Delta \epsilon = E_H - E_C$ is a free-energy change per residue, where $\Delta \epsilon < 0$. We will take the coil state to be the reference energy EC = 0.
• Therefore the energy of the system is determined from the number of H residues present, not the specific sequence of H and C segments.
$E_i = E(n_H) = n_H\Delta \epsilon \nonumber$
Then, we can calculate qconf using g(n,nH), the degeneracy of distinguishable states for a polymer of length n with nH helical segments. The conformational partition function is obtained by
$q_{\mathrm{conf}}(n) = \sum_{n_H=0}^n g(n,n_H) e^{-n_H \Delta \epsilon / k_BT}$
In evaluating the partition functions in helix–coil transition models, it is particularly useful to define a “statistical weight” for the helical configuration. It describes the influence of having an H on the probability of observing a particular configuration at kBT:
$s = e^{-\Delta \epsilon / k_BT}$
For the present model, we can think of s as an equilibrium constant for the process of adding a helical residue to a sequence:
$s = \dfrac{P(n_H+1)}{P(n_H)} \nonumber$
This equilibrium constant is related to the free energy change for adding a helical residue to the growing chain. Then we can write eq. (18.1.2) as
$q_{\mathrm{conf}}(n) = \sum_{n_H=0}^n g(n,n_H) s^{n_H} \nonumber$
Since there are only two possible configurations (H and C), the degeneracy of configurations with $n_H$ helical segments in a chain of length n is given by the binomial coefficients:
$g(n,n_H) = \dfrac{n!}{n_H!n_C!} = \begin{pmatrix} n \ n_H \end{pmatrix}$
since $n_C=n-n_H$. Then using the binomial theorem, we obtain
$q_{\mathrm{conf}}(n) = (1+s)^n$
Also, the probability of a chain with n segments having nH helical linkages is
$P(n,n_H) = \dfrac{g(n,n_H)e^{-E(n_H)/k_BT}}{q_{\mathrm{conf}}} = \begin{pmatrix} n \ n_H \end{pmatrix} \dfrac{s^{n_H}}{(1+s)^n}$
Example: n = 4
The conformations available are at right. The molecular conformational partition function is
\begin{aligned} q_{\mathrm{conf}} &= 1+4e^{-\Delta \epsilon /k_BT} +6e^{-2\Delta \epsilon /k_BT} +4e^{-3\Delta \epsilon /k_BT} +e^{-4\Delta \epsilon /k_BT} \ &= 1+4s+6s^2+4s^3+s^4 \ &= (1+s)^4 \end{aligned}
The last step follows from the binomial theorem. From eq. (18.1.6), the probability of having two helical residues in a four-residue sequence is:
$P(4,2) = \dfrac{6s^2}{(1+s)^4} \nonumber$
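This counting is easy to verify by brute force. The short sketch below (not from the text; the value of s is arbitrary) enumerates all 2^n microstates for n = 4 and compares the summed partition function and P(4,2) with the closed-form results.

```python
from itertools import product
from math import comb

s, n = 1.5, 4          # arbitrary statistical weight, chain of 4 segments

# Brute-force sum over all 2^n microstates: each H contributes a factor of s
configs = list(product([0, 1], repeat=n))          # 0 = C, 1 = H
weights = [s ** sum(c) for c in configs]
q_brute = sum(weights)
print(q_brute, (1 + s) ** n)                        # both equal (1+s)^n

# Probability of exactly two helical segments
P_2 = sum(w for c, w in zip(configs, weights) if sum(c) == 2) / q_brute
print(P_2, comb(n, 2) * s**2 / (1 + s) ** n)        # matches 6 s^2 / (1+s)^4
```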
To relate this to an observable quantity, we define the fractional helicity, the average fraction of residues that are in the H form.
$\theta_H = \dfrac{\langle n_H \rangle}{n}$
$\langle n_H \rangle = \sum_{n_H = 0}^{n} n_HP(n,n_H)$
Using this amazing little identity, which we derive below,
$\langle n_H \rangle = \dfrac{s}{q} \dfrac{\partial q}{\partial s}$
You can use eq. (18.1.5) to show:
$\langle n_H \rangle = \dfrac{ns}{1+s}$
and
$\theta_H = \dfrac{s}{1+s}$
This takes the same form as one would expect for the simple chemical equilibrium of a $C \rightleftharpoons H$ molecular reaction. If we define the equilibrium constant KHC = [H]/[C], then the fraction of molecules in the H state is $\theta_H = [H]/([C]+[H]) = K_{HC}/(1+K_{HC})$. In this limit s = KHC.
Below we plot eq. (18.1.11), choosing Δε to be independent of temperature. θH is a smooth and slowly varying function of T and does not show cooperative behavior. Its high temperature limit is θH= 0.5, reflecting the fact that in the absence of barriers, the H and C configurations are equally probable for every residue.
We can look a bit deeper at what is happening with the structures present by plotting the probability distribution function for finding nH helical segments within a chain of length n, eq. (18.1.6), and the associated energy landscape (a potential of mean force):
$F(n,n_H) = -Nk_BT\ln{[P(n,n_H)]} \approx -Nk_BT \ln{[g(n,n_H)s^{n_H}]} \nonumber$
The maximum probability and free-energy minimum is located at full helix content at the lowest temperature, and gradually shifts toward nH/n = 0.5 with increasing temperature. The probability density appears Gaussian, and the corresponding free energy appears parabolic. Using methods similar to those described above, we can show that the relative width of this distribution scales as n−1/2. The presence of a single shifting minimum is referred to as a transition in a one-state system, rather than the two-state behavior expected for phase transitions. Here nH is the order parameter that characterizes the extent of folding of the helix.
Where does eq. (18.1.9) come from? For the moment, we will drop the “conf” and “H” subscripts, mainly to write things more compactly, but also to emphasize the generality of this method to all polynomial expansions. Using eq. (18.1.2), $q= \sum_n gs^n$, and recognizing that g is not a function of s:
\begin{aligned} \dfrac{\partial q}{\partial s} &= \sum_n ngs^{n-1} \ &= \dfrac{1}{s}\sum_nngs^n \end{aligned}
From eq. (18.1.6), $P_n = gs^n/q$, we can write this in terms of the helical segment probability
$\dfrac{1}{q} \dfrac{\partial q}{\partial s} = \dfrac{1}{s}\sum_nnP_n$
Comparing eq. (18.1.13) with eq. (18.1.12), $\boldsymbol{\langle} n \boldsymbol{\rangle} = \sum_nnP_n$, we see that
$\dfrac{s}{q} \dfrac{\partial q}{\partial s} = \langle n \rangle \quad \mathrm{or} \quad \dfrac{\partial \ln{q}}{\partial \ln{s}} = \boldsymbol{\langle} n \boldsymbol{\rangle}$
This method of obtaining averages from derivatives of a polynomial appears regularly in statistical mechanics.5
Cooperative Zimm–Bragg Model
Let’s modify the model to add an element of cooperativity to the segments in the chain. In order to form a helix, you need to nucleate a helical turn; adding adjacent helical segments is then much easier. The probability of forming a turn is relatively low, meaning the free-energy barrier for nucleation of one H in a sequence of C is relatively high: $\Delta G_{nuc}>0$. However, the free-energy change per residue for forming H from C within a helical stretch, $\Delta G_{HC}$, stabilizes the growing helix. Based on these free energies, we define statistical weights:
$s = e^{-\Delta G_{HC}/k_BT} \nonumber$
$\sigma = e^{-\Delta G_{nuc}/k_BT}\nonumber$
s and σ are also known as the Zimm–Bragg parameters. Here, s is the statistical weight to add one helical segment to an existing continuous sequence (or stretch) of H, which we interpret as an equilibrium constant:
$s = \dfrac{[...CHHHHCC...]}{[...CHHHCCC...]}= \dfrac{P_H(n_H+1)}{P_H(n_H)} \nonumber$
σ is the statistical weight for each stretch of H. This is purely to reflect the probability of forming a new helical segment within a stretch of C. The energy benefit of making the helical form is additional:
$\sigma s = \dfrac{[...CCCHCC...]}{[...CCCCCC...]}= \dfrac{P_H(\nu_H+1)}{P_H(\nu_H)} \nonumber$
$\nu$ is the number of helical stretches in a chain. Note that the formation of the first helical segment has a contribution from both the nucleation barrier (σ) and the formation of the first stabilizing interaction (s). The statistical weight for a particular microstate is then $e^{-E_i/k_BT } = s^{n_H}\sigma^{\nu_H}$. Since $\Delta G_{nuc}$ will be large and positive, σ ≪ 1. Also, we take s > 1, and the presence of cooperativity will mainly hinge on σ ≪ s.

Example

A 35-segment chain has $2^{35} = 3.4\times 10^{10}$ possible configurations. This particular microstate has sixteen helical segments (nH = 16) partitioned into three helical stretches (νH = 3):

$CCCCCC \underbrace{HHHHH}_5 CCC \underbrace{H}_1 CCCCCCCC \underbrace{HHHHHHHHHH}_{10}CC \nonumber$

We ignore all Cs since the C state is the ground state and their statistical weight is 1.

$e^{-E_i/k_BT} = s^{n_H}\sigma^{\nu_H} = s^{16}\sigma^3 \nonumber$

Now the partition function involves a sum over all possible helical segments and stretches:

$q_{conf}(n) = \sum_{n_H=0}^n \sum_{\nu_H = 0}^{\nu_{max}} g(n,n_H,\nu_H )s^{n_H}\sigma^{\nu_H}$

Since the all-coil state (nH = 0) is the reference state, it contributes a value of 1 to the partition function (the leading term in the summation). Therefore, the probability of observing the all-coil state is

$P(n,n_H = 0) = q_{conf}^{-1}$

From eq. (18.1.15), the mean number of helical residues is

$\langle n_H \rangle =\dfrac{1}{q_{conf}} \sum_{n_H=0}^n \sum_{\nu_H = 0}^{\nu_{max}} n_H g(n,n_H,\nu_H) s^{n_H}\sigma^{\nu_H} \nonumber$

In these equations, νmax refers to the maximum number of helical stretches possible for a given nH (at most nH, and limited by the requirement that stretches be separated by at least one C).

Zipper model

As a next step, we examine what happens with the simplifying assumption that only one helical stretch is allowed. This is the single-stretch approximation, or the zipper model, in which conversion to a helix proceeds quickly once a single turn has been nucleated. This is reasonable for short chains in which two stretches are unlikely due to steric constraints. For the single-stretch case, we only need to account for νH = 0 and 1. For νH = 0 the system is all coil (nH = 0) and there is only one microstate to count, g(n,0,0) = 1. For a single helical stretch we need to account for the number of ways of positioning a single helical stretch of nH residues on a chain of length n: g(n,nH,1) = n − nH + 1. Then the partition function, eq. (18.1.15), is

$q_{zip}(n) = 1+\sigma \sum_{n_H=1}^n (n-n_H+1)s^{n_H}$

We can evaluate these sums using the relations

\begin{aligned} \sum_{n_H=1}^n s^{n_H} &= \dfrac{s^{n+1}-s}{s-1} \ \sum_{n_H=1}^n n_H s^{n_H} &= \dfrac{s}{(s-1)^2} \left[ ns^{n+1}-(n+1)s^n +1 \right] \end{aligned}

which leads to

$q_{zip}(n) = 1+ \dfrac{\sigma s^2}{(s-1)^2} \left( s^n + \dfrac{n}{s}-(n+1) \right) \nonumber$

Following the general expression in eq. (18.1.6), and counting the degeneracy of ways to place a stretch of nH segments, the probability distribution of helical segments is

$P_H(n,n_H) = \dfrac{(n-n_H+1)\sigma s^{n_H}}{q_{conf}} \qquad \qquad 1\leq n_H \leq n$

This expression does not apply to the case nH = 0, for which we turn to eq. (18.1.16). The helical fraction is obtained from $\theta_H = \frac{s}{n}(\partial \ln {q_{zip}}/\partial s)$:
$\theta_H = \dfrac{\sigma s}{(s-1)^3} \left( \dfrac{ns^{n+2}-(n+2)s^{n+1}+(n+2)s - n}{n \{ 1+\left( \sigma s/(s-1)^2 \right) \left( s^{n+1} + n - (n+1)s \right) \}} \right) \nonumber$
Multiple stretches
Expressions for the full partition function of chains with length n, eq. (18.1.15), can be evaluated for one-dimensional models that account for nearest neighbor interactions (Ising model) using an approach based on a statistical weight matrix, M. You can show that the Zimm–Bragg partition function can be written as a product of matrices of the form
\begin{aligned} q_{conf} (n) &= \begin{pmatrix} 1 &0 \end{pmatrix} \bf{M}^n \begin{pmatrix} 1 \ 1 \end{pmatrix} \ \bf{M} &= \begin{pmatrix} 1&\sigma s \ 1&s \end{pmatrix} \end{aligned}
Each matrix represents possible configurations of two adjoining partners, and M raised to the nth power gives all configurations for a chain of length n. This form also indicates that we can obtain a closed form for qconf from the eigenvalues of M raised to the nth power. If T is the transformation that diagonalizes M, Λ = T‒1MT, then Mn = nT‒1. This approach allows us to write
$q_{conf} = \underset{ \sim }{\lambda}^{-1} \left( \lambda^{n+1}_+ (1-\lambda_-)-\lambda^{n+1}_-(1-\lambda_+)\right) \nonumber$
\mathrm{with} \qquad \begin{aligned} &\lambda_{\pm} = \frac{1}{2} \left( (1+s) \pm \underset{ \sim }{\lambda} \right) \ &\underset{ \sim }{\lambda} = \lambda_+ - \lambda_- = \left( (1-s)^2+4\sigma s\right)^{1/2} \end{aligned}
and the fractional helicity is obtained from
$\theta_H = \dfrac{\langle n_H \rangle}{n}= \dfrac{s}{n} \dfrac{\partial \ln{q_{conf}}}{\partial s}$
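These matrix expressions are straightforward to check numerically: build qconf from the transfer matrix and obtain θH from a finite-difference estimate of ∂ln qconf/∂s. The sketch below (illustrative; the chain length and σ are assumed values) does exactly that.

```python
import numpy as np

def q_conf(s, sigma, n):
    """Zimm-Bragg partition function from the 2x2 statistical weight matrix."""
    M = np.array([[1.0, sigma * s],
                  [1.0, s]])
    Mn = np.linalg.matrix_power(M, n)
    return np.array([1.0, 0.0]) @ Mn @ np.array([1.0, 1.0])

def theta_H(s, sigma, n, ds=1e-6):
    """Fractional helicity, theta_H = (s/n) d ln(q_conf) / ds, by finite difference."""
    dlnq = (np.log(q_conf(s + ds, sigma, n)) - np.log(q_conf(s - ds, sigma, n))) / (2 * ds)
    return s * dlnq / n

n, sigma = 40, 1e-3           # assumed chain length and nucleation parameter
for s in (0.8, 0.95, 1.0, 1.05, 1.2):
    print(f"s = {s:4.2f}   theta_H = {theta_H(s, sigma, n):.3f}")
```

Setting σ = 1 in this sketch reproduces the non-cooperative result θH = s/(1+s), which is a useful sanity check on the matrix construction.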
Simplifying these expressions for the limit of long chains $(n \rightarrow \infty , \lambda^{n+1}_+ \gg \lambda_-^{n+1} )$, one finds
$q_{conf} \approx \left( \dfrac{1+s+\underset{ \sim }{\lambda}}{2} \right)^n \nonumber$
and $\theta_H = s \left( \dfrac{1+\frac{1}{\underset{ \sim }{\lambda}}(s-1+2\sigma )}{1+s+\underset{ \sim }{\lambda}} \right)$
Note that when you set σ =1, you recover the noncooperative expression, eq. (18.1.11). When s→1, θH→0.5.
Below, we examine the transition behavior in the large n limit from eq. (18.1.20) as a function of the cooperativity parameter σ. We note that a sharp transition between an ensemble that is mostly coil to one that is mostly helix occurs near s = 1, the point where these states exist with equal probability. When the $C\rightleftharpoons H$ equilibrium shifts slightly to favor H (s slightly greater than 1), most of the sample quickly converts to helical form. When the equilibrium shifts slightly toward C, most of the sample follows. As σ decreases, the steepness of this transition grows as $(d\theta / ds)_{s=1}=1/(4\sigma^{1/2})$. Therefore, we conclude that highly cooperative transitions will have s ≈ 1 and σ≪ s. In practice for polypeptides, we find that σ/s lies between 5×10–3 and 5×10–5.
Next, we explore the chain-length dependence for finite chains. We find that the cooperativity of this transition, observed through the steepness of the curve at θH = 0.5, increases with n. We also observe that the observed midpoint (θH = 0.5) lies at s > 1, where the single-linkage equilibrium favors the H form. This reflects the constraints on the length of helical stretches available in a given chain.
Temperature Dependence
Now let’s describe the temperature dependence of the cooperative model. The helix–coil transition shows a cooperative melting transition, where heating the sample a few degrees causes a dramatic change from a sample that is primarily in the C form to one that is primarily H. Multiple temperature-dependent factors make this a bit difficult to deal with analytically, therefore we focus on the behavior at the melting temperature Tm, which we define as the point where θH(TM) = 0.5.
Look at the slope of θ at Tm. From chain rule:
$\dfrac{d\theta }{dT} = \dfrac{d\theta}{ds} \cdot \dfrac{ds}{dT} = \dfrac{d\theta}{ds} \cdot s\dfrac{d\ln{s}}{dT} \nonumber$
Since we interpret s as an equilibrium constant for the addition of one helical residue to a stretch, we can write a van’t Hoff relation
$\dfrac{d\ln{s}}{dT} = \dfrac{\Delta H^0_{HC}}{k_BT^2} \nonumber$
Note that this relation assumes that ΔH0 is independent of temperature, which generally is a concern, but we will not worry too much since we are just evaluating this at TM. Next we focus our discussion on the high n limit. From the Zimm–Bragg model:
$\left( \dfrac{d\theta}{ds} \right)_{s=1} = \dfrac{1}{4\sigma^{1/2}} \nonumber$
Then, we set s(Tm) = 1, and combine these results to give the slope of the melting curve at Tm:
$\left( \dfrac{d\theta }{dT} \right)_{T=T_m} = \dfrac{\Delta H^0_{HC}}{4\sigma^{1/2}k_BT^2_m} \nonumber$
The slope of $\theta$ at $T_m$ has units of inverse temperature, so we can also express this as a transition width: $\Delta T_m = (d\theta / dT)^{-1}_{T_m}$.
Keep in mind this van’t Hoff analysis comes with some real limitations when applied to experimental data. It does not account for the finite size of the system, which we have seen shifts s(Tm) to be >1, and the knowledge of parameters at Tm does not necessarily translate to other temperatures. To the extent that you can apply the assumptions, the van’t Hoff expression can also be used to predict the helical fraction as a function of temperature in the vicinity of TM using
$\ln{s} = \dfrac{\Delta H^0_{HC}}{k_B}\left( \dfrac{1}{T_M}-\dfrac{1}{T} \right)$
and assuming that σ is independent of temperature.
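Putting these pieces together gives a model melting curve. The sketch below (assumed values of ΔH0HC, Tm, and σ) combines the van't Hoff expression for s(T) with the long-chain result for θH given above, and illustrates how decreasing σ sharpens the transition.

```python
import numpy as np

R = 1.987e-3            # gas constant, kcal mol^-1 K^-1
dH = -1.0               # Delta H0_HC, kcal mol^-1 residue^-1 (assumed)
Tm = 300.0              # assumed melting temperature, K, where s(Tm) = 1

def theta_long_chain(s, sigma):
    """Large-n Zimm-Bragg fractional helicity."""
    lam = np.sqrt((1 - s) ** 2 + 4 * sigma * s)
    return s * (1 + (s - 1 + 2 * sigma) / lam) / (1 + s + lam)

T = np.linspace(280, 320, 9)
s = np.exp(dH / R * (1 / Tm - 1 / T))       # van't Hoff temperature dependence of s
for sigma in (1e-2, 1e-4):
    print(f"sigma = {sigma:.0e}:", np.round(theta_long_chain(s, sigma), 2))
```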
Below we show the length dependence of the melting temperature. As the length of the chain approaches infinity, the helix–coil transition becomes a step function in temperature. This trend matches the expectations for a phase transition: in the thermodynamic limit, the infinite system will show discontinuous behavior. For finite lengths, the melting temperature Tm is lower than that for the infinite chain (Tm,∞), but approaches this value for n > 300.
Calorimetric parameters for polypeptide chains
The side chain has only a small effect on the helix–coil propagation parameters:
• Alanine-rich peptides (Ac-Y(AEAAKA)8F-NH2, Ac-(AAKAA)kY-NH2): $\Delta H_{HC}^0$ = −0.95 to −1.3 kcal mol−1 residue−1; σ = 0.002
• Poly(L-lysine), Poly(L-glutamate): $\Delta H_{HC}^0$ = −1.1 kcal mol−1 residue−1; σ = 0.0025
• Poly-alanine: $\Delta H_{HC}^0$ = −0.95 kcal mol−1 residue−1; σ = 0.003; s(0 °C) = 1.35
• Alanine oligomers: $\Delta H_{HC}^0$ = −0.85 kcal mol−1 residue−1; ΔS0 = 3 cal mol−1 res−1 K−1
• Various homopolypeptides: $\Delta H_{HC}^0$ ~4 kJ mol−1 residue−1; ΔCp = −32 J mol−1 K−1 res−1
Free‐Energy Landscape
Finally, we investigate the free-energy landscape for the Zimm–Bragg model of the helix–coil transition. The figure below shows the helical probability distribution and corresponding energy landscape for different values of the reduced temperature kBT/Δε for a chain length of n=40 and σ=10-3. Note that P(nH) is calculated from eq. (18.1.18) for all but the all-coil state, which comes from eq. (18.1.16).
The cooperative model shows two-state behavior. At low temperature and high temperature, the system is almost entirely in the all-helix or all-coil configuration, respectively; however, at intermediate temperatures, the distribution of helical configurations can be very broad. The least probable configuration is a chain with only one helical segment.
This behavior looks much closer to the two-state behavior expected from phase-transition behavior. The free energy has minima for nH= 0 and for nH> 1, and the free energy difference between these states shifts with temperature to favor one or the other minimum.
___________________________________________________________________
1. C. R. Cantor and P. R. Schimmel, Biophysical Chemistry Part III: The Behavior of Biological Macromolecules. (W. H. Freeman, San Francisco, 1980), Ch. 20; D. Poland and H. A. Scheraga, Theory of Helix–Coil Transitions in Biopolymers. (Academic Press, New York, 1970).
2. P. Doty, A. M. Holtzer, J. H. Bradbury and E. R. Blout, Polypeptides. II. The configuration of polymers of γ-benzyl-L-glutamate in solution, J. Am. Chem. Soc. 76 (17), 4493–4494 (1954); P. Doty and J. T. Yang, Polypeptides. VII. Poly-γ-benzyl-L-glutamate: the helix–coil transition in solution, J. Am. Chem. Soc. 78 (2), 498–500 (1956).
3. J. Marmur and P. Doty, Heterogeneity in Deoxyribonucleic Acids: I. Dependence on Composition of the Configurational Stability of Deoxyribonucleic Acids, Nature 183 (4673), 1427-1429 (1959).
4. B. H. Zimm and J. K. Bragg, Theory of the phase transition between helix and random coil in polypeptide chains, J. Chem. Phys. 31, 526-535 (1959).
5. K. Dill and S. Bromberg, Molecular Driving Forces: Statistical Thermodynamics in Biology, Chemistry, Physics, and Nanoscience. (Taylor & Francis Group, New York, 2010), Appendix C p. 705.
18.02: Two-State Thermodynamics
Here we describe the basic thermodynamics of two-state systems, which are commonly used for processes such as protein folding, binding, and DNA hybridization. We will work with the example of protein folding, analyzed through the temperature-dependent folded protein content.
$K=\dfrac{k_f}{k_u} = \dfrac{[F]}{[U]} = \dfrac{\phi_F}{1-\phi_F} \nonumber$
where φF is the fraction of protein that is folded, and the fraction that is unfolded is (1 ‒ φF).
\begin{aligned} &\phi_F =\dfrac{K}{K+1} \ &K= e^{-\Delta G^0/RT} \ &\phi_F = \dfrac{1}{1+e^{\Delta G^0/RT}} = \dfrac{1}{1+e^{\Delta H^0/RT} e^{-\Delta S^0/R}} \end{aligned}
Define the melting temperature Tm as the temperature at which φF = 0.5. Then at Tm, $\Delta G^0=0$, or $T_m = \Delta H^0/\Delta S^0$. Characteristic melting curves for Tm = 300 K are below:
We can analyze the slope of curve at Tm using a van’t Hoff analysis:
\begin{aligned} \dfrac{d\phi_F}{dT}&=\dfrac{d\phi_F}{dK} \cdot \dfrac{dK}{dT}=\dfrac{d\phi_F}{dK}\cdot K \dfrac{d\ln{K}}{dT}\ \dfrac{d\ln{K}}{dT} &= \dfrac{\Delta H^0}{RT^2} \ \dfrac{d\phi_F}{dK} &=(1+K)^{-2} \ \left( \dfrac{d\phi_F}{dT} \right)_{T=T_m} &= \dfrac{\Delta H^0}{4RT^2_m} \quad \mathrm{since} \; K=1 \; \mathrm{at} \; T_m \end{aligned}
This analysis assumes that there is no temperature dependence to ΔH, although we know well that it does from our earlier discussion of hydrophobicity. A more realistic two-state model will allow for a change in heat capacity between the U and F states that describes the temperature dependence of the enthalpy and entropy.
$\Delta G^0(T) = \Delta H^0(T_m)-T\Delta S^0(T_m)+ \Delta C_p \left[ T-T_m-T \ln{\dfrac{T}{T_m}} \right] \nonumber$
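As an illustration of what the heat-capacity term does, here is a brief sketch (all thermodynamic parameters are assumed values, roughly typical of a small protein) that evaluates φF(T) with and without the ΔCp correction; the ΔCp term destabilizes the folded state at low temperature (the cold-denaturation trend).

```python
import numpy as np

R = 1.987e-3           # kcal mol^-1 K^-1
Tm = 330.0             # melting temperature, K (assumed)
dH = -80.0             # folding enthalpy at Tm, kcal/mol (assumed)
dS = dH / Tm           # folding entropy at Tm, so that dG(Tm) = 0
dCp = -1.5             # folding heat capacity change, kcal mol^-1 K^-1 (assumed)

def phi_F(T, dCp=0.0):
    """Folded fraction from the two-state model with optional dCp correction."""
    dG = dH - T * dS + dCp * (T - Tm - T * np.log(T / Tm))
    return 1.0 / (1.0 + np.exp(dG / (R * T)))

T = np.linspace(280, 360, 9)
print("T (K):             ", T.astype(int))
print("phi_F (dCp = 0):   ", np.round(phi_F(T), 2))
print("phi_F (with dCp):  ", np.round(phi_F(T, dCp), 2))
```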
Cooperative self-assembly refers to the spontaneous formation of sophisticated structures from many molecular units. Generally, we think of this as involving many molecules (cooperative units), although single- and bi-molecular problems can be wrapped into this description, as in the helix–coil transition. Examples include:
• Peptides and proteins
• Protein folding, binding, and association
• Amyloid fibrilization
• Assembly of multi-protein complexes
• Viral capsid self-assembly
• Nucleic acids
• DNA hybridization, DNA origami
• Folding and association of RNA structures: pseudoknots, ribozymes
• Lipids
• Bilayer structures
• Micelle formation
Although molecular structures can also assemble with the input of energy, the emphasis here is on spontaneous self-assembly in the absence of external input.
19: Self-Assembly
In particular, we will focus on micellar structures formed from a single species of amphiphilic molecule in aqueous solution. These are typically lipids or surfactants that have a charged or polar head group linked to one or more long hydrocarbon chains.
Such amphiphiles assemble into a variety of structures, the result of which depends critically on the concentration, composition, and temperature of the system. For SDS surfactant, micelles are favored. These condense hydrophobic chains into a fluid-like core and present the charged head groups to the water. The formation of micelles is observed above a critical micelle concentration (CMC).
As the surfactant is dissolved, the solution is primarily monomeric at low concentration, but micelles involving 30–100 molecules suddenly appear for concentrations greater than the CMC.
Reprinted from http://swartz-lab.epfl.ch/page-20594-en.html.
To begin investigating this phenomenon, we can start by simplifying the equilibrium to a two-state form:
$nA \rightleftharpoons A_n$
$K_n$ is the equilibrium constant for assembling a micelle with $n$ amphiphiles from solution. $n$ is the called the aggregation number.
$K_n = \dfrac{[A_n]}{[A]^n} = e^{-\Delta G^0_{micelle} / k_BT} \label{1}$
The total number of $A$ molecules present is the sum of the free monomers and those monomers present in micelles:
$C_{TOT} = [A] + n[A_n]$
The fraction of monomers present in micelles:
$\phi_{mi} = \dfrac{n[A_n]}{C_{TOT}} = \dfrac{n[A_n]}{[A]+n[A_n]} = \dfrac{nK_n[A]^{n-1}}{1+nK_n[A]^{n-1}}$
This function has an inflection point at the CMC, for which the steepness of the transition increases with $n$. Setting $φ_{mi} = 0.5$, we obtain the CMC ($c_0$) as
$c_0 = [A]_{cmc} = (nK_n)^{\dfrac{-1}{n-1}}$
Function steepens with aggregation number $n$:
Thus for large n, and cooperative micelle formation:
$\Delta G^0_{micelle} = -RT\ln{c_0}$
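The sharpening of the micelle fraction with aggregation number is easy to see numerically. The sketch below (assumed CMC; illustrative only) solves the mass balance cTOT = [A] + n[An] for the free monomer concentration and evaluates φmi versus total concentration for several n; note that in this two-state model φmi = 0.5 when [A] = c0, which corresponds to cTOT = 2c0.

```python
import numpy as np
from scipy.optimize import brentq

c0 = 1e-3     # assumed critical micelle concentration, mol/L

def phi_micelle(c_tot, n):
    """Fraction of monomers incorporated in micelles at total concentration c_tot."""
    Kn = c0 ** (-(n - 1)) / n                       # from c0 = (n*Kn)^(-1/(n-1))
    # mass balance: c_tot = [A] + n*Kn*[A]^n, solved for the free monomer [A]
    A = brentq(lambda a: a + n * Kn * a ** n - c_tot, 0.0, c_tot)
    return n * Kn * A ** n / c_tot

for n in (2, 10, 100):
    phis = [phi_micelle(x * c0, n) for x in (0.2, 0.5, 1.0, 2.0, 5.0)]
    print(f"n = {n:3d}:", np.round(phis, 3))
```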
Note the similarity of Equation \ref{1} to the results for fractional helicity in the helix-coil transition:
$\dfrac{s^n}{1+s^n}$
This similarity indicates that a cooperative model exists for micelle formation in which the aggregation number reflects the number of cooperative units in the process. Cooperativity can be obtained from models that require surmounting a high nucleation barrier before rapidly adding many more molecules to reach the micelle composition. The simplest description of such a process would proceed in a step-wise growth form (a zipper model) for $n$ copies of monomer $A$ assembling into a single micelle $A_n$.
$nA \rightleftharpoons A_2 +(n-2)A \rightleftharpoons A_3 +(n-3)A \rightleftharpoons ... \rightleftharpoons A_n$
$K_n = \prod_{i=1}^{n-1} K_i \qquad K_i = \dfrac{k_f(i \rightarrow i+1)}{k_r(i+1 \rightarrow i)}$
Examples of how the energy landscape looks as a function of oligomerization number ν are shown below. However, if you remove the short-range correlation, overall we expect the shape of the energy landscape to still be two-state depending on the nucleation mechanism.
This picture is overly simple though, since it is not a one-dimensional chain problem. Rather, we expect that there are equilibria connecting all possible aggregation-number clusters to form larger aggregates. A more appropriate description of the free-energy barrier for nucleating a micelle is similar to classical nucleation theory for forming a liquid droplet from vapor.
____________________________________________________________
D. H. Boal, Mechanics of the Cell, 2nd ed. (Cambridge University Press, Cambridge, UK, 2012), p. 250.
19.02: Classical Nucleation Theory
Let’s summarize the thermodynamic theory for the nucleation of a liquid droplet by the association of molecules from the vapor. The free energy for forming a droplet out of $n$ molecules (which we refer to as monomers) has two contributions: a surface energy term that describes the energy needed to make the droplet interface, and a volume term that describes the cohesive energy of the monomers.
$\Delta G_n = \gamma a - \Delta \epsilon V$
Note the similarity to our discussion of the hydrophobic effect, where $γ$ was just the surface tension of water. $Δε$ is the bulk cohesive energy—a positive number. Since this is a homogeneous cluster, we expect the cluster volume $V$ to be proportional to $n$ and, for a spherical droplet, the surface area a to be proportional to $V^{2/3}$ and thus $n^{2/3}$ (remember our discussion of hydrophobic collapse). To write this in terms of monomer units, we express the total surface area as

$a = \alpha n^{2/3} \nonumber$

where α is an area per monomer (to within a geometric factor), and the volume in terms of the monomer volume $V_0$. Then the free energy is
$\Delta G_n = \gamma \alpha n^{2/3} - \Delta \epsilon V_0 n \label{2}$
and the chemical potential of the droplet as
$\Delta \mu_n = \dfrac{\partial \Delta G_n}{\partial n} = \dfrac{2}{3}\gamma_0 \alpha n ^{-1/3} - \Delta \epsilon V_0 \label{3}$
These competing effects result in a maximum in $ΔG$ versus $n$, which is known as the critical nucleation cluster size $n^{*}$. The free energy at $n^{*}$ is positive and called the nucleation barrier ΔG*. We find $n^{*}$ by setting Equation \ref{3} equal to zero:
$n^* = \left( \dfrac{2\gamma_0 \alpha}{3\Delta \epsilon V_0} \right)^3$
and substituting into Equation \ref{2}
$G^* = \dfrac{4}{27}\dfrac{(\gamma_0 \alpha )^3}{(\Delta \epsilon V_0)^2}$
For nucleation of a liquid droplet from vapor, if fewer than n* monomers associate, there is not enough cohesive energy to allow the growth of a droplet and the nucleus will dissociate. If more than $n^{*}$ monomers associate, the droplet is still unstable, but the direction of spontaneous change will increase the size of the droplet and a liquid phase will grow from the nucleus. The process of micelle formation requires a balance of attractive and repulsive forces that stabilize an aggregate, which can depend on surface and volume terms. Thus the ΔGmicelle has a similar form, but the signs of different factors may be positive or negative.
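A short numerical check of these results (the surface and cohesive energy parameters below are arbitrary, illustrative values) compares the analytic n* and ΔG* with the maximum of ΔGn found by direct evaluation:

```python
import numpy as np

gamma_alpha = 10.0    # gamma_0 * alpha, surface term coefficient, kJ/mol (assumed)
eps_V0 = 2.0          # Delta_epsilon * V0, cohesive energy per monomer, kJ/mol (assumed)

def dG(n):
    """Cluster free energy, dG_n = gamma0*alpha*n^(2/3) - eps*V0*n."""
    return gamma_alpha * n ** (2 / 3) - eps_V0 * n

n_star = (2 * gamma_alpha / (3 * eps_V0)) ** 3          # critical nucleus size
dG_star = (4 / 27) * gamma_alpha ** 3 / eps_V0 ** 2     # nucleation barrier

n = np.arange(1, 200)
i_max = n[np.argmax(dG(n))]
print(f"analytic:  n* = {n_star:.1f},  dG* = {dG_star:.1f} kJ/mol")
print(f"numerical: maximum of dG(n) at n = {i_max}, dG = {dG(i_max):.1f} kJ/mol")
```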
_____________________________________
R. P. Sear, Nucleation: theory and applications to protein solutions and colloidal suspensions, J. Phys.: Condens. Matter 19 (3), 033101 (2007).
Micelles are formed by amphiphiles that want to bury hydrophobic chains and expose charged head groups to water. Since a cavity must be formed for the micelle, the resulting surface tension of the cavity (the hydrophobic effect) results in the system trying to minimize its surface area, and thereby the number of molecules in the micelle. At the same time, the electrostatic repulsion between head groups results in a driving force to increase the surface area per head group. These competing effects result in an optimal micelle size.
We start by defining the chemical potential Δμn, which is the free energy per mole of amphiphilic molecule A to assemble a micelle with n molecules. Instead of using n, we will try to express the size of the micelle in terms of its surface area a and assume that it is spherical. Then, the free energy for forming a cavity for the micelle grows as γa, where γ is the surface tension. The surface area is expressed as an average surface area spanned by the charged headgroup of a monomer unit:
$a_e = a/n$
The repulsion term is hard to predict and depends on many variables. There are the electrostatic repulsions between head groups, but there is also the entropic penalty for forming the micelle that depends on size. As an approximation, we anticipate that the free energy should be inversely proportional to surface area. Then the free energy for forming a micelle with n molecules is
\begin{aligned} \Delta G_n &= \gamma a + \dfrac{x}{a} \ &= \gamma n a_e + \dfrac{x}{na_e} \end{aligned}
where x is a constant.
Taking the chemical potential as the free energy per molecule in the micelle, Δμ = ΔGn/n, differentiating with respect to ae, and setting the derivative to zero, we find that the optimal head-group area, a0, is
$a_0 = \sqrt{\dfrac{x}{\gamma n^2}}$
Solving for x and substituting in eq. (4), we obtain the chemical potential as:
$\Delta \mu = \dfrac{\gamma}{a_e}(a^2_e + a^2_0 ) = 2\gamma a_0 +\dfrac{\gamma}{a_e}(a_e-a_0)^2$
It has a parabolic shape with a minimum at a0.
Next, we can obtain the probability distribution for the micelle size as a function of head group surface area and aggregation number
$P_n = \exp (-n\Delta \mu /k_BT)$
$P_n(a_e) \propto \exp \left( - \dfrac{n\gamma (a_e - a_0)^2}{a_ek_BT} \right)$
The relative populations of micelles are distributed in a Gaussian distribution about a0. The distribution of sizes has a standard deviation (or polydispersity) given by
$\sigma = \sqrt{\dfrac{na_ek_BT}{2\gamma}}$
From a = 4πr2 = nae, we predict that the breadth of the micelle size distribution will scale linearly in the micelle radius, and as the square root of temperature and molecule number.
___________________________________________
K. Dill and S. Bromberg, Molecular Driving Forces: Statistical Thermodynamics in Biology, Chemistry, Physics, and Nanoscience. (Taylor & Francis Group, New York, 2010); J. N. Israelachvili, Intermolecular and Surface Forces, 3rd ed. (Academic Press, Burlington, MA, 2011), Ch. 20.
19.04: Shape of Self-Assembled Amphiphiles
Empirically it is observed that certain features of the molecular structure of amphiphilic molecules and surfactants are correlated with the shape of the larger structures that they self-assemble into. For instance, single long hydrocarbon tails with a sulfo- group (like SDS) tend to aggregate into spherical micelles, whereas phospholipids with two hydrocarbon chains (like DMPC) prefer to form bilayers. Since structure formation is largely governed by the hydrophobic effect, condensing the hydrophobic tails and driving the charged groups to a water interface, this leads to the conclusion that the volume and packing of the hydrophobic tail play a key role in shape. While the molecular volume and the head-group size and charge are fixed, the fluid nature of the hydrocarbon chain allows the molecule to pack into different configurations.
This structural variability is captured by the packing parameter:
$p = \dfrac{V_0}{a_e \ell_0}$
where V0 and $\ell_0$ are the volume and length of the hydrocarbon chain, and ae is the average surface area per charged head group. $V_0 / \ell_0$ is relatively constant at ~0.2 nm2, but the shape of the chain may vary from extended (cylindrical) to compact (conical), which will favor a particular packing.
Empirically it is found that systems with p < ⅓ typically form spherical micelles, cylindrical structures for ⅓ < p < ½, and bilayer structures for ½ < p < 1. Simple geometric arguments can be made to rationalize this observation. Taking a spherical aggregate with radius R and aggregation number n as an example, we expect the ratio of the volume to the surface area to be
$\dfrac{V}{A} = \dfrac{nV_0}{na_e} = \dfrac{R}{3} \quad \rightarrow \quad V_0 = \dfrac{a_eR}{3}$
Substituting into the packing parameter:
$p = \dfrac{V_0}{a_e \ell_0} = \dfrac{R}{3\ell_0}$
Now, even though the exact conformation of the hydrocarbon chain is not known, the radius of the micelle cannot be larger than the extended length of the hydrocarbon tail, i.e., $\ell_0 \geq R$. Therefore
$\therefore p \leq \dfrac{1}{3} \qquad (spheres)$
Similar arguments can be used to explain why extended lipid bilayers have $p \approx 1$ and cylinders form for p ≈ ½. In a more general sense, we note that the packing parameter is related to the curvature of the aggregate surface. As p decreases below one, the aggregate forms an increasingly curved surface. (Thus vesicles are expected to have ½ < p < 1.) It is also possible to have p > 1. In this case, the curvature also increases with increasing p, although the sign of the curvature inverts (from convex to concave). Such conditions result in inverted structures, such as reverse micelles, in which water is confined in a spherical pool in contact with the charged head groups and the hydrocarbon tails project outward into a hydrophobic solvent.
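As a simple illustration of how the packing parameter is used, the sketch below estimates p for a single-chain surfactant using Tanford-type estimates for the tail volume and length (these formulas, and the head-group areas chosen, are assumptions made here for illustration, not values from the text) and classifies the expected aggregate shape.

```python
def packing_parameter(n_carbons, a_e):
    """p = V0/(a_e * l0), with Tanford-type estimates for the hydrocarbon tail."""
    V0 = 27.4 + 26.9 * n_carbons        # tail volume, Angstrom^3 (assumed formula)
    l0 = 1.5 + 1.265 * n_carbons        # extended tail length, Angstrom (assumed formula)
    return V0 / (a_e * l0)

def shape(p):
    if p <= 1 / 3: return "spherical micelle"
    if p <= 1 / 2: return "cylindrical micelle"
    if p <= 1.0:   return "bilayer / vesicle"
    return "inverted structure"

# single C12 tail with a range of assumed head-group areas (Angstrom^2)
for a_e in (70.0, 45.0, 25.0):
    p = packing_parameter(12, a_e)
    print(f"a_e = {a_e:4.0f} A^2  ->  p = {p:.2f}  ({shape(p)})")
```

For a C12 chain these estimates give V0/ℓ0 ≈ 0.21 nm², consistent with the ~0.2 nm² quoted above.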
_____________________________________________________________________
Readings
J. N. Israelachvili, Intermolecular and Surface Forces, 3rd ed. (Academic Press, Burlington, MA, 2011).
• Composed of 50–500 amino acids linked in 1D sequence by the polypeptide backbone
• The physical and chemical properties of the 20 amino acids dictate an intricate and functional 3D structure.
• Folded structure is energetic ground state (Anfinsen)
Many proteins spontaneously refold into native form in vitro with high fidelity and high speed.
Different approaches to studying this phenomenon:
• How does the primary sequence encode the 3D structure?
• Can you predict the 3D fold from a primary sequence?
• Design a polypeptide chain that folds into a known structure.
• What is the mechanism by which a disordered chain rapidly adopts its native structure?
Our emphasis here is mechanistic. What drives this process? The physical properties of the connected pendant chains interacting cooperatively give rise to the structure.
It is said that the primary sequence dictates the three-dimensional structure, but this is not the whole story, and it emphasizes a certain perspective. Certainly we need water, and defined thermodynamic conditions in temperature, pH, and ionic strength. In a sense the protein is the framework and the solvent is the glue. Folded proteins may also not be as structured as crystal structures lead one to believe.
Kinetics and Dynamics
Observed protein folding time scales span many orders of magnitude. Folding times are typically measured in milliseconds, seconds, or minutes; this is the time scale for activated folding across a free-energy barrier. The intrinsic time scale for the underlying diffusive processes that allow conformations to evolve and local contacts to be formed through free diffusion is ps to μs. The folding of small secondary structures happens on 0.1–1 μs for helices and ~1–10 μs for hairpins. The fastest-folding mini-proteins (20–30 residues) fold in ~1 μs.
Cooperativity
What drives this? Some hints:
Levinthal’s paradox1
The folded configuration cannot be found through a purely random search process.
• Assume: 3 states per amino acid linkage, 100 linkages
• $3^{100} = 5 \times 10^{47}$ states
• Sample one state per $10^{-13}$ s
• $\sim 10^{27}$ years to sample them all
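The arithmetic behind this estimate is a one-liner (purely illustrative):

```python
states = 3.0 ** 100          # conformations: 3 states for each of 100 linkages
seconds = states * 1e-13     # sampling one conformation per 10^-13 s
print(f"{states:.1e} states  ->  {seconds / 3.15e7:.1e} years")   # ~5e47 states, ~2e27 years
```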
Two‐state thermodynamics
To all appearances, the system (often) behaves as if there are only two thermodynamic states.
Entropy/Enthalpy
ΔG is a delicate balance of two large opposing energy contributions ΔH and TΔS.
Reprinted with permission from N. T. Southall, K. A. Dill and A. D. J. Haymet, J. Phys. Chem. B 106, 521-533 (2002). Copyright 2002 American Chemical Society. Reprinted from James Chou (2008). http://cmcd.hms.harvard.edu/activiti...1/lecture7.pdf.
Cooperativity underlies these observations
Probability of forming one contact is higher if another contact is formed.
• Zipping
• Hydrophobic collapse
Reprinted from K. A. Dill, K. M. Fiebig and H. S. Chan, Proc. Natl. Acad. Sci. U. S. A. 90,1942-1946 (1993). Copyright 1993 PNAS.
Protein Folding Conceptual Pictures
Traditional pictures rooted in classical thermodynamics and reaction kinetics.
• Postulate particular sequence of events.
• Focus on importance of a certain physical effect.
1. Framework or kinetic zipper
2. Hydrophobic collapse
3. Nucleation–condensation
Framework/Kinetic Zipper Model
• Observation from peptides: secondary structures fold rapidly following nucleation.
• Secondary structure formation precedes tertiary organization.
• Emphasis:
• Hierarchy and pathway
• Focus on backbone, secondary structure
Hydrophobic Collapse
• Observation: protein structure has hydrophobic residues buried in center and hydrophilic groups near surface.
• An extended chain rapidly collapses to bury hydrophobic groups and thereby speeds search for native contacts.
• Collapsed state: molten globule
• Secondary and tertiary structure form together following collapse.
Nucleation–Condensation
Nucleation of tertiary native contacts is important first step, and structure condenses around that.
Some observations so far:
• Importance of collective coordinates
• Big challenge: We don’t know much about the unfolded state.
______________________________________________________
1. C. Levinthal, Are there pathways for protein folding?, J. Chim. Phys. Phys.-Chim. Biol. 65, 44-45 (1968).
20: Protein Folding
Our study of folding mechanism and the statistical mechanical relationship between structure and stability have been guided by models. Of these, simple reductionist models guided the conceptual development from the statistical mechanics side, since full atom simulations were initially intractable. We will focus on the simple models.
• Reductionist Models
• Lattice Models
• Gō Models
• Coarse Grained
• Atomistic
• Force fields
HP Model1
• Chain of beads. Self-avoiding walk on square lattice.
• 2 types of beads: Hydrophobic (H) and polar (P).
• H–H contacts are energetically favorable relative to H–P contacts.
• More H $\rightarrow$ collapse to a compact state, but many collapsed structures.
• More P $\rightarrow$ well solvated, doesn't fold.
• ~1:1 H:P is optimal.
• Can be used to study folding mechanisms using Monte Carlo sampling.
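To make the HP model concrete, here is a minimal sketch (not from the references above; the 9-residue sequence is arbitrary) that enumerates every self-avoiding conformation of a short chain on the 2D square lattice and finds the lowest-energy ("native") conformations by exhaustive search. For longer chains one would replace the enumeration with Monte Carlo sampling.

```python
# Minimal 2D HP lattice model: H-H contacts between non-bonded nearest
# neighbors contribute -1 to the energy; all other contacts contribute 0.
seq = "HPHPPHHPH"                       # arbitrary 9-residue HP sequence
moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def enumerate_walks(n_steps):
    """All self-avoiding walks of n_steps on the square lattice (first bond fixed)."""
    walks = [[(0, 0), (1, 0)]]
    for _ in range(n_steps - 1):
        new = []
        for w in walks:
            x, y = w[-1]
            for dx, dy in moves:
                p = (x + dx, y + dy)
                if p not in w:                      # enforce self-avoidance
                    new.append(w + [p])
        walks = new
    return walks

def energy(walk, seq):
    """Count non-bonded H-H nearest-neighbor contacts (each contributes -1)."""
    pos = {p: i for i, p in enumerate(walk)}
    E = 0
    for i, (x, y) in enumerate(walk):
        for dx, dy in moves:
            j = pos.get((x + dx, y + dy))
            if j is not None and j > i + 1 and seq[i] == "H" and seq[j] == "H":
                E -= 1
    return E

walks = enumerate_walks(len(seq) - 1)
energies = [energy(w, seq) for w in walks]
E0 = min(energies)
print(f"{len(walks)} self-avoiding conformations; ground-state energy = {E0}")
print(f"number of conformations with E = {E0}: {energies.count(E0)}")
```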
Coarse‐Grained Models2
Hierarchy of various models that reduce protein structure to a set of interacting beads.
Gō Models3
Gō models and Gō-like models refer to a class of coarse-grained models in which formation of structure is driven by a minimalist interaction potential that drives the system to its native structure. The folded state must be known.
• Coarse grained
• Original: one bead per AA
• “Off-lattice model”
• Native-state biasing potential
• Multiple forces in single interaction potential
• Need to know folded structure
• Increased simulation speed
• Doesn’t do well metastable intermediates or non-native contacts
_________________________________________
1. K. F. Lau and K. A. Dill, A lattice statistical mechanics model of the conformational and sequence spaces of proteins, Macromolecules 22, 3986-3997 (1989).
2. V. Tozzini, Coarse-grained models for proteins, Curr. Opin. Struct. Biol. 15, 144-150 (2005).
3. Y. Ueda, H. Taketomi and N. Gō, Studies on protein folding, unfolding, and fluctuations by computer simulation. II. A. Three-dimensional lattice model of lysozyme, Biopolymers 17, 1531-1548 (1978).
20.02: Perspectives on Protein Folding Dynamics
These models have helped drive theoretical developments that provide alternate perspectives on how proteins fold:
State Perspective
• Interchange between states with defined configurations
• What are the states, barriers and reaction coordinates?
Statistical Perspective
• Change in global variables
• Configurational entropy
Networks
• Characterize conformational variation and network of connectivity between them.
Reprinted with permission from V. A. Voelz, G. R. Bowman, K. Beauchamp and V. S. Pande, J. Am. Chem. Soc. 132, 1526-1528 (2010). Copyright 2010 American Chemical Society. Reprinted with permission from C. R. Baiz, Y.-S. Lin, C. S. Peng, K. A. Beauchamp, V. A. Voelz, V. S. Pande and A. Tokmakoff, Biophys. J. 106, 1359-1370 (2014). Copyright Elsevier 2014.
The statistical perspective is important. The standard way of talking about folding is in terms of activated processes, in which we describe states that have defined structures and that exchange across barriers along a reaction coordinate, and the emphasis is on molecularly interpreting these states. There is nothing formally wrong with that, except that it is an unsatisfying way of treating problems where one has entropic barriers.
Folding Funnels and Configurational Entropy
Helps with entropic barriers1
Reprinted with permission from K. A. Dill, Protein Sci. 8, 1166-1180 (1999). John Wiley and Sons 1999.
Transition State vs Ensemble Kinetics
Reprinted with permission from K. A. Dill, Protein Sci. 8, 1166-1180 (1999). John Wiley and Sons 1999.
___________________________________________
1. K. A. Dill, Polymer principles and protein folding, Protein Sci. 8, 1166-1180 (1999).
Molecular associations are at the heart of biological processes. Specific functional interactions are present at every level of cellular activity. Some of the most important:
1) Proteins Interacting with Small Molecules and Ions
• Enzyme/substrate interactions and catalysis
• Ligand/receptor binding
• Chemical energy transduction (for instance ATP)
• Signaling (for instance neurotransmitters, cAMP)
• Drug or inhibitor binding
• Antibody binding antigen
• Small molecule and ion transport
• Mb + O2 → MbO2
• Ion channels and transporters
2) Protein–Protein Interactions
• Signaling and regulation networks
• Receptors binding to ligands activate receptors
• GPCRs bind agonist/hormone for transmembrane signal transduction
• Assembly and function of multi-protein complexes
• Replication machinery in replisome consists of multiple proteins including DNA polymerase, DNA ligase, topoisomerase, helicase
• Kinetochore orchestrate interactions of chromatin and the motor proteins that separate sister chromatids during cell division
3) Protein–Nucleic Acid Interactions
• All steps in the central dogma
• Transcription factor binding
• DNA repair machinery
• Ribozymes
In all of these examples, the common thread is a macromolecule, which typically executes a conformational change during the interaction process. Conformational flexibility and entropy changes during binding play an important role in describing these processes.
21: Binding and Association
To begin, we recognize that binding and association processes are bimolecular reactions. Let’s describe the basics of this process. The simplest kinetic scheme for bimolecular association is
$A+B \rightleftharpoons C$
A and B could be any two molecules that interact chemically or physically to result in a final bound state; for instance, an enzyme and its substrate, a ligand and receptor, or two specifically interacting proteins. From a mechanistic point of view, it is helpful to add an intermediate step:
$A+B \rightleftharpoons AB \rightleftharpoons C \nonumber$
Here AB refers to a transient encounter complex, which may be a metastable kinetic intermediate or a transition state. Then the initial step in this scheme reflects the rate at which two molecules diffuse into proximity of their mutual target sites (including proper alignment). The second step is recognition and binding. It reflects the detailed chemical processes needed to form specific contacts, execute conformational rearrangements, or perform activated chemical reactions. We separate these steps here to build a conceptual perspective, but in practice these processes may be intimately intertwined.
Equilibrium Constant
Let’s start by reviewing the basic thermodynamics of bimolecular reactions, such as reaction scheme (21.1.1). The thermodynamics is described in terms of the chemical potential for the molecular species in the system (i = A,B,C)
$\mu_i = \left( \dfrac{\partial G}{\partial N_i} \right)_{p,T,\{ N_{j}\}_{j\neq i}} \nonumber$
where Ni is the number of molecules of species i. The dependence of the chemical potential on the concentration can be expressed as
$\mu_i = \mu_i^0 +RT\ln \dfrac{c_i}{c^0}$
ci is the concentration of reactant i in mol L−1, and the standard state concentration is c0 = 1 mol L−1. So the molar reaction free energy for scheme (1) is
\begin{aligned} \Delta \overline{G} &=\sum_i v_i\mu_i \ &=\mu_C-\mu_A-\mu_B \ &=\Delta \overline{G}^0+RT\ln K \end{aligned}
vi is the stoichiometric coefficient for component i. K is the reaction quotient
$K= \dfrac{(c_C/c^0)}{(c_A/c^0)(c_B/c^0)}$
At equilibrium, $\Delta \overline{G} = 0$, so
$\Delta \overline{G}^0 = -RT\ln K_a$
where the association constant Ka is the value of the reaction quotient under equilibrium conditions. Dropping c0, with the understanding that we must express concentration in M units:
$K_a=\dfrac{c_C}{c_Ac_B}$
Since it is defined as a standard state quantity, Ka is a fundamental constant independent of concentration and pressure or volume, and is only dependent on temperature. The inverse of Ka is Kd the equilibrium constant for the C dissociation reaction $C \rightleftharpoons A+B$.
Concentration and Fraction Bound
Experimentally one controls the total mass $m_{TOT}=m_C+m_A+m_B$, or concentration
$c_{TOT}=c_C+c_A+c_B$
The composition of system can be described by the fraction of concentration due to species i as
\begin{aligned} \theta_i &=\dfrac{c_i}{c_{TOT}}\ \theta_A +\theta_B + \theta_C &=1 \end{aligned}
We can readily relate Ka to θi, but it is practical to set some specific constraint on the composition here. If we constrain the A:B composition to be 1:1, which is enforced either by initially mixing equal mole fractions of A and B, or by preparing the system initially with pure C, then
\begin{aligned} K_{a} &=\frac{4 \theta_{C}}{\left(1-\theta_{C}\right)^{2} c_{T O T}} \qquad \qquad (\theta_A=\theta_B) \ &=\frac{\left(1-2 \theta_{A}\right)}{\theta_{A}^{2} c_{T O T}} \end{aligned}
This expression might be used for mixing equimolar solutions of binding partners, such as complementary DNA oligonucleotides. Using eq. (21.1.6) (with cA = cB) and eq. (21.1.7), we can obtain the composition as a function of the total concentration:
$\begin{array}{l} \theta_{C}=\left(1+\frac{2}{K_{a} c_{T O T}}\right)-\sqrt{\left(1+\frac{2}{K_{a} c_{T O T}}\right)^{2}-1} \ \theta_{A}=\frac{1}{2}\left(1-\theta_{C}\right) \end{array}$
In the case where A=B, applicable to homodimerization or hybridization of self-complementary oligonucleotides, we rewrite scheme (21.1.1) as the association of monomers to form a dimer
$2M \rightleftharpoons D \nonumber$
and find:
\begin{aligned} K_a &=\dfrac{\theta_D}{2(1-\theta_D)^2c_{TOT}} \ &=\dfrac{1-\theta_M}{2\theta_M^2c_{TOT}} \end{aligned}
$\theta_D =1+\dfrac{1}{4c_{TOT}K_a} \left( 1-\sqrt{1+8c_{TOT}K_a} \right)$
$\theta_M = 1-\theta_D$
These expressions for the fraction of monomer and dimer, and the corresponding concentrations of monomer and dimer, are shown below. An increase in the total concentration results in a shift of the equilibrium toward the dimer state. Note that $c_{TOT} = K_a^{-1} = K_d$ at θM = θD = 0.5.
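To make the concentration dependence concrete, here is a minimal numerical sketch in Python (numpy assumed; the value of Ka is an illustrative choice, not from the text) that evaluates the θD expression above for the monomer–dimer equilibrium and confirms that θD = 0.5 when cTOT = Kd:

```python
import numpy as np

def dimer_fraction(c_tot, K_a):
    """Fraction of monomer units in the dimer state, from the theta_D expression above."""
    x = K_a * c_tot
    return 1.0 + (1.0 - np.sqrt(1.0 + 8.0 * x)) / (4.0 * x)

K_a = 1.0e6                               # association constant in M^-1 (illustrative)
for c_tot in np.logspace(-9, -3, 7):      # total monomer concentration in M
    print(f"c_tot = {c_tot:.0e} M   theta_D = {dimer_fraction(c_tot, K_a):.3f}")

# The monomer and dimer fractions cross at theta = 0.5 when c_tot = 1/K_a = K_d
print("theta_D at c_tot = K_d:", dimer_fraction(1.0 / K_a, K_a))
```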
For ligand–receptor binding, the ligand concentration will typically be much greater than that of the receptor, and we are commonly interested in the fraction of receptors that have a ligand bound, θbound. Rewriting our association reaction as
$L+R\rightleftharpoons LR \qquad\qquad K_a= \dfrac{c_{LR}}{c_Lc_R}$
we write the fraction bound as
\begin{aligned} \theta_{bound} &= \dfrac{c_{LR}}{c_R+c_{LR}} \ &= \dfrac{c_LK_a}{1+c_LK_a} \end{aligned}
This is equivalent to a Langmuir adsorption isotherm.
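As a quick sketch (Python with numpy; Ka is an assumed illustrative value), the fraction bound can be evaluated directly from the expression above, with half-saturation at cL = Kd:

```python
import numpy as np

def fraction_bound(c_L, K_a):
    """Fraction of receptors with a ligand bound (Langmuir-type isotherm)."""
    return c_L * K_a / (1.0 + c_L * K_a)

K_a = 1.0e7                                # M^-1, illustrative
for c_L in np.logspace(-10, -4, 7):        # free ligand concentration in M
    print(f"c_L = {c_L:.0e} M   theta_bound = {fraction_bound(c_L, K_a):.3f}")
# Half of the receptors are occupied when c_L = 1/K_a = K_d
```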
Temperature Dependence
The temperature dependence of Ka is governed by eq. (21.1.4) and the fundamental relation
$\Delta G^0(T)=\Delta H^0(T)-T\Delta S^0(T)$
Under the assumption that ΔH0 and ΔS0 are temperature independent, we find
$K_a(T) = exp \left[ -\dfrac{\Delta H_a^0}{RT}+ \dfrac{\Delta S_a^0}{R} \right]$
This allows us to describe the temperature-dependent composition of a system using the expressions above for θi. While eq. (21.1.12) allows you to predict a melting curve for a given set of thermodynamic parameters, it is more difficult to use it to extract those parameters from experiments, because it only relates the value of the equilibrium constant at one temperature to its value at another.
Temperature is often used to thermally dissociate or melt dsDNA or proteins, and the analysis of these experiments requires that we define a reference temperature. In the case of DNA melting, the most common and readily accessible reference temperature is the melting temperature Tm defined as the point where the mole fractions of ssDNA (monomer) and dsDNA (dimer) are equal, θM = θD = 0.5. This definition is practically motivated, since DNA melting curves typically have high and low temperature limits that correspond to pure dimer or pure monomer. Then Tm is commonly associated with the inflection point of the melting curve or the peak of the first derivative of the melting curve. From eq. (21.1.9), we see that the equilibrium constants for the association and dissociation reaction are given by the total concentration of DNA: Ka(Tm) = Kd(Tm)−1 = ctot−1 and ΔGd0(Tm) = ‒RTmlnctot. Furthermore, eq. (21.1.12) implies Tm = ΔH0/ΔS0.
The examples below show the dependence of melting curves on thermodynamic parameters, Tm, and concentration. These examples set a constant value of Tm (ΔH0/ΔS0). The concentration dependence is plotted for ΔH0 = 15 kcal mol−1 and ΔS0 = 50 cal mol−1 K−1.
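A minimal sketch of how such melting curves can be generated numerically (Python with numpy). Here the quoted ΔH0 and ΔS0 are treated as dissociation parameters so that the dimer melts on heating, and the temperature grid and concentrations are assumptions for illustration:

```python
import numpy as np

R = 1.987e-3                         # gas constant, kcal mol^-1 K^-1

def melting_curve(T, dH_d, dS_d, c_tot):
    """Dimer fraction vs temperature for 2M <=> D, using K_a(T) = 1/K_d(T)
    together with the theta_D expression from the monomer-dimer equilibrium."""
    K_d = np.exp(-dH_d / (R * T) + dS_d / R)    # dissociation constant, M
    x = c_tot / K_d                             # K_a * c_tot
    return 1.0 + (1.0 - np.sqrt(1.0 + 8.0 * x)) / (4.0 * x)

T = np.linspace(150.0, 350.0, 9)     # K (illustrative grid)
dH_d, dS_d = 15.0, 0.050             # kcal/mol and kcal/(mol K), values quoted above
for c_tot in (1e-6, 1e-4, 1e-2):     # total strand concentration, M
    print(f"c_tot = {c_tot:.0e} M:", np.round(melting_curve(T, dH_d, dS_d, c_tot), 2))
# The transition midpoint shifts to higher temperature as c_tot increases.
```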
For conformational changes in macromolecules, it is expected that the enthalpy and entropy will be temperature dependent. Drawing from the definition of the heat capacity,
$C_p = \left( \dfrac{\partial H}{\partial T} \right)_{N,P} = T\left( \dfrac{\partial S}{\partial T} \right)_{N,P} \nonumber$
we can describe the temperature dependence of ΔH0 and ΔS0 by integrating from a reference temperature T0 to T. If ΔCp is independent of temperature over a small enough temperature range, then we obtain a linear temperature dependence to the enthalpy and entropy of the form
$\Delta H^0 (T) = \Delta H^0 (T_0) + \Delta C_p[T-T_0]$
$\Delta S^0 (T) = \Delta S^0(T_0) +\Delta C_p \ln \left( \dfrac{T}{T_0} \right)$
These expressions allow us to relate the values of ΔH0, ΔS0, and ΔG0 at temperature T to their values at the reference temperature T0. From these expressions, we obtain a more accurate description of the temperature dependence of the equilibrium constant:
$K_d(T) = \exp \left[ -\dfrac{\Delta H_m^0}{RT} +\dfrac{\Delta S_m^0}{R}-\dfrac{\Delta C_p}{R} \left[ 1-\dfrac{T_m}{T}-\ln \left( \dfrac{T}{T_m} \right) \right] \right]$
where $\Delta H_m^0 = \Delta H^0(T_m)$ and $\Delta S_m^0 = \Delta S^0(T_m)$ are the enthalpy and entropy for the dissociation reaction evaluated at Tm.
Statistical mechanics can be used to calculate Ka on the basis of the partition function. The canonical partition function Q is related to the Helmholtz free energy through
$F= -k_BT\ln Q$
$Q = \sum_{\alpha}e^{-E_{\alpha}/k_BT}$
where the sum is over all microstates (each a particular configuration of the molecular constituents of a macroscopic system), Boltzmann weighted by the energy of that microstate Eα. The chemical potential of molecular species i is given by
$\mu_i = -k_BT \left( \dfrac{\partial \ln Q}{\partial N_i} \right)_{V,T, \{ N_{j \neq i} \} }$
We will assume that we can partition Q into contributions from different molecular components of a reacting system such that
$Q =\prod_iQ_i$
The ability to separate the partition function stems from the assumption that certain degrees of freedom are separable from each other. When two sub-systems are independent of one another, their free energies should add (FTOT = F1 + F2) and therefore their partition functions are separable into products: QTOT = Q1Q2. Generally this separability is a result of being able to write the Hamiltonian as HTOT = H1 + H2, which results in the microstate energy being expressed as a sum of two independent parts: Eα= Eα,1+Eα,2. In addition to separating the different molecular species, it is also very helpful to separate the translational and internal degrees of freedom for each species, Qi = Qi,transQi,int. The entropy of mixing originates from the translational partition function, and therefore will be used to describe concentration dependence.
For Ni non-interacting, indistinguishable molecules, we can relate the canonical and molecular partition function qi for component i as
$Q_i= \dfrac{q_i^{N_i}}{N_i!}$
and using Stirling's approximation we obtain the chemical potential,
$\mu_i = -k_BT\ln \dfrac{q_i}{N_i}$
Following the reasoning in eqs. (2)–(5), we can write the equilibrium constant as
$K_a = \dfrac{N_C}{N_AN_B}V=\dfrac{q_C}{q_Aq_B}V$
This expression reflects that the equilibrium constant is related to the stoichiometrically scaled ratio of molecular partition functions per unit volume $K_a = \prod_i(q_i/V)^{v_i}$. Then the standard binding free energy is determined by eq. (4).
21.03: DNA Hybridization
To illustrate the use of statistical thermodynamics to describe binding, we discuss simple models for the hybridization or melting of DNA. These models are similar in approach to our description of the helix–coil transition. They do not distinguish between the different nucleobases, only considering nucleotides along a chain that are paired (bp) or free (f).
Consider the case of the pairing between self-complementary oligonucleotides.
$S+S \rightleftharpoons D \nonumber$
S refers to any fully dissociated ssDNA and D to any dimer forms that involve two strands which have at least one base pair formed. We can then follow expressions for monomer–dimer equilibria above. The equilibrium constant for the association of single strands is
$K_a = \dfrac{c_D}{c_S^2}$
This equilibrium constant is determined by the concentration-dependent free-energy change for two strands to diffuse into contact and create the first base pair. If all of the molecules present are either monomers or dimers, the total concentration is
$C_{TOT} = c_S + 2c_D$
then the fraction of the DNA strands in the dimer form is
$\theta_D = \dfrac{2c_D}{C_{tot}}$
and eq. (10) leads to
$\theta_D = 1+(4K_aC_{tot})^{-1}-\sqrt{(1+(4K_aC_{tot})^{-1})^2-1}$
We see that at the total concentration that results in a dimer fraction $\theta_D = 0.5$, the association constant is obtained from $K_a=C_{tot}^{-1}$. This is a traditional description of the thermodynamics of a monomer–dimer equilibrium.
We can calculate Ka from the molecular partition functions for the S and D states:
$K_a = \dfrac{q_D}{q_S^2} \nonumber$
Different models for hybridization will vary in the form of these partition functions. For either state, we can separate the partition function into contributions from the conformational degrees of freedom relevant to the base-pairing and hybridization, and other degrees of freedom, qi = qi,confqi,ext. Assuming that the external degrees of freedom will be largely of an entropic nature, we neglect an explicit calculation and factor out the external degrees of freedom by defining the variable γ:
$\gamma = \dfrac{q_{D,ext}C_{tot}}{q_{S,ext}^2}$
then
$\theta_D = 1+\dfrac{q_{S,int}^2}{4\gamma q_{D,int}}-\sqrt{\left( 1+ \dfrac{q_{S,int}^2}{4\gamma q_{D,int}} \right)^2-1}$
Short Oligonucleotides: The Zipper Model
For short oligonucleotide hybridization, a common (and reasonable) approximation is the single-stretch model, which assumes that base-pairing will only occur as a single continuous stretch of base pairs. This is reasonable for short oligomers (n < 20), for which two distinct helical stretches separated by a bubble (loop) are unlikely given the persistence length of dsDNA. The zipper model refers to the single-stretch case with “perfect matching,” in which only pairing between bases in precisely sequence-aligned DNA strands is counted. As a result of these two approximations, the only dissociated base pairs observed in this model appear at the ends of a chain (fraying).
The number of bases in a single strand is n and the number of bases that are paired is nbp. For the dimer, we consider all configurations that have at least one base pair formed. The dimer partition function can be written as
\begin{aligned} q_{D,int}(n) &=\sigma \sum_{n_{bp}=1}^ng(n,n_{bp})s^{n_{bp}} \ &=\sigma \sum_{n_{bp}=1}^n (n-n_{bp} +1)s^{n_{bp}} \end{aligned}
Here g is the number of ways of arranging nbp continuous base pairs on a strand with length n; σ is the statistical weight for nucleating the first base pair; and s is the statistical weight for forming a base pair next to an already-paired segment: $s=e^{-\Delta \varepsilon_{bp}/k_BT}$. Therefore, in the zipper model, the equilibrium constant in eq. (23) between ssDNA and dimers involving at least one intact base pair is Kzip = σs. In the case of homogeneous polynucleotide chains, in which sliding of the registry between chains is allowed, $q_{D,int}(n) =\sigma \sum_{n_{bp}=1}^n (n-n_{bp} +1)^2s^{n_{bp}}$. The sum in eq. (27) can be evaluated exactly, giving
$q_{D,int}(n) = \dfrac{\sigma s}{(s-1)^2}\left[ s^{n+1} -(n+1)s+n \right]$
In the case that s > 1 ( $\Delta \varepsilon_{bp} < 0$ ) and n≫1, qD,int→σsn. Also, the probability distribution of helical segments is
$P_{bp}(n,n_{bp}) = \dfrac{(n-n_{bp}+1)\sigma s^{n_{bp}}}{q_{D,int}}\qquad 1\leq n_{bp} \leq n$
The plot below shows illustrations of the probability density and associated energy landscape for a narrow range of s across the helix–coil transition. These figures illustrate a duplex state that always has a single free-energy minimum characterized by frayed configurations.
In addition to the fraction of molecules that associate to form a dimer, we must also consider the fraction of contacts that successfully form a base pair in the dimer state
$\theta_{bp} = \dfrac{ \langle n_{bp} \rangle }{n} \nonumber$
We can evaluate this using the identity
$\langle n_{bp} \rangle = \dfrac{s}{q_{D,int}} \dfrac{\partial q_{D,int}}{\partial s} \nonumber$
Using eq. (28) we have
$\theta_{bp} = \dfrac{ns^{n+2}-(n+2)s^{n+1}+(n+2)s - n}{n(s-1)(s^{n+1}-s(n+1)+n)} \nonumber$
Similar to the helix–coil transition in polypeptides, θbp shows cooperative behavior with a transition centered at s = 1, which gets steeper with increasing n and decreasing σ.
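The cooperativity of θbp can be checked directly from the zipper-model partition function, eq. (27), by evaluating the sum numerically rather than using the closed form. This is a small sketch in Python (numpy assumed; the values of n, s, and σ are illustrative). Note that σ factors out of this particular ratio:

```python
import numpy as np

def theta_bp(s, n, sigma=1e-3):
    """<n_bp>/n in the dimer state from q_D,int = sigma * sum_k (n-k+1) s^k."""
    k = np.arange(1, n + 1)
    w = sigma * (n - k + 1) * s**k        # statistical weight for k contiguous base pairs
    return (k * w).sum() / (n * w.sum())  # sigma cancels in this ratio

s_values = (0.5, 0.9, 1.0, 1.1, 2.0)
for n in (5, 10, 20, 40):
    print(f"n = {n:2d}:", np.round([theta_bp(s, n) for s in s_values], 3))
# The transition around s = 1 sharpens as n increases.
```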
Finally, we can write the total fraction of nucleobases that participate in a base pair as the product of the fraction of the DNA strands that are associated in a dimer form, and the average fraction of bases of the dimer that are paired.
$\theta_{tot} = \theta_D \theta_{bp} \nonumber$
Returning to our basic two-state scheme, we define the rate constants ka and kd for the association and dissociation reactions:
$A+B \underset{k_{d}}{\stackrel{k_{a}}{\rightleftharpoons}} C$
From detailed balance, which requires that the total flux for the forward and back reactions be equal under equilibrium conditions:
$K_a = \dfrac{k_a}{k_d}$
The units for Ka are M−1, M−1s-1 for ka, and s−1 for kd.
For the case where we explicitly consider the AB encounter complex:
$A+B \underset{k_{-1}}{\stackrel{k_{1}}{\rightleftharpoons}}(A B) \underset{k_{-2}}{\stackrel{k_{2}}{\rightleftharpoons}} C$
Schemes of this sort are referred to as reaction–diffusion problems. Note, this corresponds to the scheme used in Michaelis–Menten kinetics for enzyme catalysis, where AB is an enzyme–substrate complex prior to the catalytic step.
The kinetic equations corresponding to this scheme are often solved with the help of a steady-state approximation (∂[AB]/∂t ≈ 0), leading to
$\dfrac{d[C]}{dt} = k_a[A][B]-k_d[C]$
$k_a = \dfrac{k_1k_2}{(k_{-1}+k_2)} \qquad \quad k_d = \dfrac{k_{-1}k_{-2}}{k_{-1}+k_2}$
Let’s look at the limiting scenarios:
1. Diffusion-controlled reactions refer to the case when reaction or final association is immediate once A and B diffusively encounter one another, i.e., $k_2 \gg k_{-1}$. Then the observed rate of product formation is ka ≈ k1, and we can equate k1 with the diffusion-limited association rate we have already discussed.
2. Pre-Equilibrium. When the reaction is limited by the chemical step, an equilibrium is established by which A and B can associate and dissociate many times prior to reaction, and the AB complex establishes a pre-equilibrium with the unbound partners defined by a nonspecific association constant $K'_a = k_1/k_{-1}$. Then the observed association rate is $k_a = k_2K'_a$.
What if both diffusion and reaction within the encounter complex matter? That is, the two rates are comparable: $k_1 \approx k_2$.
$A+B \stackrel{k_a}{\rightleftharpoons} AB \stackrel{k_{rxn}}{\rightleftharpoons} C$
Now all the rates matter. This can be solved in the same manner that we did for diffusion to capture by a sphere, but with boundary conditions that have finite concentration of reactive species at the critical radius. The steady-state solution gives:
\begin{aligned} k_{eff} &= \dfrac{k_ak_{rxn}}{k_a+k_{rxn}} \ k_{eff}^{-1} &= k_a^{-1}+k_{rxn}^{-1} \end{aligned}
keff is the effective rate of forming the product C. It depends on the association rate ka (or k1) and krxn is an effective forward reaction rate that depends on k2 and k–1.
Competing Factors in Diffusion–Reaction Processes
In diffusion–reaction processes, there are two competing factors that govern the outcome of the binding process. These are another manifestation of the familiar enthalpy–entropy compensation effects we have seen before. There is a competition between enthalpically favorable contacts in the bound state and the favorable entropy for the configurational space available to the unbound partners. Overall, there must be some favorable driving force for the interaction, which can be expressed in terms of a binding potential UAB(R) that favors the bound state. On the other hand, for any one molecule A, the translational configuration space available to the partner B will grow as R2.
We can put these concepts together in a simple model.1 The probability of finding B at a distance R from A is
$P(R)dR = Q^{-1}e^{-U(R)/kT}4\pi R^2dR$
where Q is a normalization constant. Then we can define a free energy along the radial coordinate
\begin{aligned} F(R) &= -k_BT\ln P(R) \ &=U(R)-k_BT\ln R^2+k_BT\ln Q + \textrm{const.} \end{aligned}
Here F(R) applies to a single A-B pair, and therefore the free energy drops continuously as R increases. This corresponds to the infinitely dilute limit, under which circumstance the partners will never bind. However, in practice there is a finite volume and concentration for the two partners. We only need to know the distance to the nearest possible binding partner $\langle R_{AB} \rangle$. We can then put an upper bound on the radii sampled on this free energy surface. In the simplest approximation, we can determine a cut off radius in terms of the volume available to each B, which is the inverse of the B concentration: $\frac{4}{3}\pi r^3_c = [B]^{-1}$. Then, the probability of finding the partners in the bound state is
$P_a = \dfrac{\int_0^{r*} e^{-F(r)/k_BT}4\pi r^2dr}{\int_0^{r_c} e^{-F(r)/k_BT}4\pi r^2dr}$
At a more molecular scale, the rates of molecular association can be related to diffusion on a potential of mean force. g(r) is the radial distribution function that describes the radial variation of the B density about A, and is related to the potential of mean force W(r) through $g(r) = \exp [-W(r)/k_BT]$. Then the association rate obtained from the flux at a radius defined by the association barrier ($r = r^†$) is
$k_a^{-1} = \int_{r^†}^{\infty} dr [4\pi r^2D(r)e^{-W(r)/k_BT}]^{-1}$
Here D(r) is the radial diffusion coefficient that describes the relative diffusion of A and B. The spatial dependence reflects the fact that at small r the molecules do not really diffuse independently of one another.
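A sketch of how this integral can be evaluated numerically (Python with numpy/scipy; the potential W(r), diffusion constant, and barrier radius are illustrative assumptions, and D is taken as constant). With W = 0 the calculation recovers the Smoluchowski result ka = 4πDr†:

```python
import numpy as np
from scipy.integrate import quad

kT = 1.0                                   # work in units of k_B T

def k_assoc(W, D, r_dagger):
    """Association rate from 1/k_a = Int_{r_dagger}^inf dr exp[W(r)/kT]/(4 pi r^2 D)."""
    integrand = lambda r: np.exp(W(r) / kT) / (4.0 * np.pi * r**2 * D)
    inv_k, _ = quad(integrand, r_dagger, np.inf)
    return 1.0 / inv_k

D, a = 1.0, 1.0                            # reduced units
print("W = 0:       ", k_assoc(lambda r: 0.0, D, a), "(expect 4*pi*D*a =", 4 * np.pi * D * a, ")")
# Screened attractive interaction (illustrative form): W(r) = -2 kT (a/r) exp[-(r-a)]
W_attr = lambda r: -2.0 * kT * (a / r) * np.exp(-(r - a))
print("attractive W:", k_assoc(W_attr, D, a))   # enhanced relative to 4*pi*D*a
```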
__________________________________________
1. D. A. Beard and H. Qian, Chemical Biophysics; Quantitative Analysis of Cellular Systems. (Cambridge University Press, Cambridge, UK, 2008).
Association Rate
The diffusion-limited association rate is typically approximated from the expression for the relative diffusion of A and B with an effective diffusion constant D = DA + DB to within a critical encounter radius R0 = RA + RB, as described earlier.
$k_a = 4\pi R_0 f(D_A + D_B)$
Here f is a dimensionless factor that corrects for orientational constraints and interaction potentials (f = 1 for isotropic, noninteracting spheres). One can approximate association rates between two diffusing partners using the Stokes–Einstein expression: $D_A = k_BT/6\pi \eta R_A$. For two identical spheres (i.e., dimerization) in water at T = 300 K, where η ≈ 1 cP = 10−3 kg m−1 s−1,
$k_a = \dfrac{8k_BT}{3\eta} = 6.6 \times 10^9 M^{-1}s^{-1}$
Note that this model predicts that the association rate is not dependent on the size or mass of the object.
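The numerical value quoted above follows from converting the per-pair rate to molar units. A short check in plain Python (standard physical constants):

```python
k_B = 1.381e-23          # J/K
N_A = 6.022e23           # mol^-1
T, eta = 300.0, 1.0e-3   # K and Pa s (1 cP, water)

k_pair = 8.0 * k_B * T / (3.0 * eta)   # per-pair rate, m^3 s^-1
k_a = k_pair * N_A * 1.0e3             # convert m^3 mol^-1 s^-1 to L mol^-1 s^-1 (M^-1 s^-1)
print(f"k_a = {k_a:.2e} M^-1 s^-1")    # ~6.6e9 M^-1 s^-1
```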
For bimolecular reactions, the diffusion may also include those orientational factors that bring two binding sites into proximity. Several studies have investigated these geometric effects.
During diffusive encounter in dilute solution, once two partners collide but do not react, there is a high probability of re-colliding with the same partner before diffusing over a longer range to a new partner. Depending on concentration and the presence of interaction potentials, there may be 5–50 microcollisions with the same partner before encountering a new partner.
Diffusion‐Limited Dissociation Rate
For the limit where associations are weak, k1 and k–1 are fast and in equilibrium, and the dissociation is diffusion limited. Then we can calculate k–1
$A+B \underset{k_{-1}}{\stackrel{k_{1}}{\rightleftharpoons}} AB$
Now we consider boundary conditions for flux moving away from a sphere such that
\begin{aligned} C_B(\infty) &= 0\ C_B(R_0) &=\left( \dfrac{4}{3} \pi R^3_0 \right)^{-1} \end{aligned}
The boundary condition for concentration at the surface of the sphere is written so that the number density is one molecule per sphere.
The steady state distribution of B is found to be
$C_B (r) = \dfrac{3}{4\pi R^2_0 r}$
The dissociation flux at the surface is
$J = -D_B \left( \dfrac{\partial C_B}{\partial r} \right)_{r=R_0} = \dfrac{3D_B}{4\pi R_0^4}$
and the dissociation frequency (the total flux through the surface at R0) is
$J \cdot 4\pi R_0^2 = \dfrac{3D_B}{R_0^2}$
When we also consider the dissociative flux for the other partner in the association reaction,
$k_{-1} = k_d = 3(D_A +D_B) R_0^{-2}$
Written in a more general way for a system that may have an interaction potential
$k_d = \dfrac{4\pi De^{U(R_0)/ k_BT}}{\dfrac{4}{3}\pi R_0^3 \int^{\infty}_{R_0}e^{U(r)/k_BT}r^{-2} dr} = 3DR^*R_0^{-3}$
Note that equilibrium constants do not depend on D for diffusion-limited association/dissociation:
$K_d = \dfrac{k_d}{k_a} = \dfrac{3DR_0^{-2}}{4\pi R_0D} = \dfrac{3}{4\pi R_0^3}$
Note that this is the inverse of the volume of a sphere of radius R0.
______________________________
1. D. Shoup, G. Lipari and A. Szabo, Diffusion-controlled bimolecular reaction rates. The effect of rotational diffusion and orientation constraints, Biophys. J. 36 (3), 697-714 (1981); D. Shoup and A. Szabo, Role of diffusion in ligand binding to macromolecules and cell-bound receptors, Biophys. J. 40 (1), 33-39 (1982).
21.06: Protein Recognition and Binding
Enzyme/Substrate Binding
Lock-and-Key (Emil Fischer)
• Emphasizes shape complementarity
• Substrate typically rigid
• Concepts rooted in initial and final structure
• Does not directly address recognition
But protein-binding reactions typically involve conformational changes. Domain flexibility can give rise to a dramatic increase in binding affinity. A significant conformational change or fluctuation may be needed to allow access to the binding pocket.
For binding a substrate, two models vary in the order of events for conformational change vs. binding event:
1. Induced fit (Daniel Koshland)
2. Conformational selection: A pre-existing equilibrium is established during which the enzyme explores a variety of conformations.
Protein–Protein Interactions
• Appreciation that structure is not the only variable
• Coupled folding and binding
• Fold on contact
• Fly-casting
• Both partners may be flexible
21.07: Forces Guiding Binding
Electrostatics
• Electrostatics play a role at long and short range
• Long-range nonspecific interactions accelerate diffusive encounter
• Short range guides specific contacts
• Electrostatic complementarity
• Electrostatic steering
• van der Waals, π-π stacking
Shape and Geometry
• Shape complementarity
• Orientational registry
• Folding
• Anchoring residues
Hydrogen Bonding
• Short range
• Cross over from electrostatic to more charge transfer with strong HBs (like DNA, protein–DNA binding)
• Important in specificity
Solvation/Desolvation
• To bind, a ligand needs to desolvate the active site
• Bimolecular contacts will displace water
• Water often intimate binding participant (crystallographic waters)
• Hydrophobic patches
• Charge reconfiguration in electrolyte solutions at binding interface
• Electrostatic forces from water
Depletion Forces
• Entropic effect
• Fluctuations that lead to an imbalance of forces that drives particles together
• Crowding/Caging
• Hydrophobicity
• Dewetting and Interfacial Fluctuations
Folding/Conformational Change
• Disorder increases hydrodynamic volume
• Coupled folding and binding
• Fly-casting mechanism
• Partially unfolded partners
• Long-range non-native interaction
• Gradual decrease in free energy
Specificity in Recognition
What determines the ability for a protein to recognize a specific target amongst many partners? To start, let’s run a simple calculation. Take the case that a protein (transcription factor) has to recognize a string of n sequential nucleotides among a total of N bases in a dsDNA.
• Assume that each of the four bases (ATGC) is present with equal probability among the N bases, and that there are no enthalpic differences for binding to a particular base.
• Also, the recognition of a particular base is independent of the other bases in the sequence. (In practice this is a poor assumption).
• The probability of finding a particular n nucleotide sequence amongst all n nucleotide strings is
$\left( \dfrac{1}{4} \right)^n$
• For a particular n nucleotide sequence to be unique among a random sequence of N bases, we need
$\left( \dfrac{1}{4} \right)^n \geq \dfrac{1}{N}$
• Therefore we can say
$n \geq \dfrac{\ln{N}}{\ln{4}}$
Example
For the case that you want to define a unique binding site among N = 65k base pairs:
• A sequence of n = ln (65000)/ln(4) ≈ 8 base pairs should statistically guarantee a unique binding site.
• n = 9 → 262 kbp
This example illustrates that simple statistical considerations and the diversity of base combinations can provide a certain level of specificity in binding, but that other considerations are important for high fidelity binding. These considerations include the energetics of binding, the presence of multiple binding motifs for a base, and base-sequence specific binding motifs.
Energetics of Binding
We also need to think about the strength of interaction. Let’s assume that the transcription factor has a nonspecific binding interaction with DNA that is weak, but a strong interaction for the target sequence. We quantify these through:
∆G1: nonspecific binding
∆G2: specific binding
Next, let’s consider the degeneracy of possible binding sites:
gn: number of nonspecific binding sites = (N – n) or since N ≫ n: (N – n) ≈ N
gs: number of sites that define the specific interaction: n
The probability of having a binding partner bound to a nonspecific sequence is
\begin{aligned} P_{\text {nonsp }} &=\frac{g_{n} e^{-\Delta G_{1} / k T}}{g_{n} e^{-\Delta G_{1} / k T}+g_{s} e^{-\Delta G_{2} / k T}} \ &=\frac{(N-n) e^{-\Delta G_{1} / k T}}{(N-n) e^{-\Delta G_{1} / k T}+n e^{-\Delta G_{2} / k T}} \ &=\frac{1}{1+\frac{n}{N} e^{-\Delta G / k T}} \end{aligned}
where ∆G = ∆G2 – ∆G1. We do not want a high probability of nonspecific binding, so let's minimize Pnonsp. Solving for ΔG, and recognizing that Pnonsp ≪ 1,
$\Delta G \leq -k_BT\ln{\left[ \dfrac{N}{nP_{nonsp}} \right] }$
Suppose we want the probability of nonspecific binding to any region of the DNA to be Pnonsp ≤ 1%. For N = 106 and n = 10, we find
$\Delta G \approx -16k_BT \qquad \textrm{or} \qquad -1.6 k_BT/nucleotide$
which ensures that the partner is specifically bound with probability Psp > 99%.
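The two estimates in this section (the minimum site length and the required specificity gap) are easy to reproduce numerically. A small sketch in Python (numpy assumed; the function names are for illustration only):

```python
import numpy as np

def min_site_length(N):
    """Minimum recognition sequence length n for a statistically unique site among N bases."""
    return int(np.ceil(np.log(N) / np.log(4)))

def max_dG(N, n, P_nonsp):
    """Largest (least negative) dG = dG2 - dG1, in units of k_B T, that keeps the
    nonspecific-binding probability below P_nonsp."""
    return -np.log(N / (n * P_nonsp))

print("n for N = 65,000:", min_site_length(65000))                      # ~8
print("dG/k_BT for N = 1e6, n = 10, P = 0.01:", max_dG(1e6, 10, 0.01))  # ~ -16
```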
______________________________________________________________________________________
Readings
1. G. Schreiber, G. Haran and H. X. Zhou, Fundamental aspects of protein−protein association kinetics, Chem. Rev. 109 (3), 839-860 (2009).
2. D. Shoup, G. Lipari and A. Szabo, Diffusion-controlled bimolecular reaction rates. The effect of rotational diffusion and orientation constraints, Biophys. J. 36 (3), 697-714 (1981).
3. D. Shoup and A. Szabo, Role of diffusion in ligand binding to macromolecules and cell-bound receptors, Biophys. J. 40 (1), 33-39 (1982).
Time-dependent problems in molecular biophysics: How do molecular systems change? How does a molecular system change its microscopic configuration? How are molecules transported? How does a system sample its thermodynamically accessible states?
Two types of descriptions of time-dependent processes:
1. Kinetics: Describes the rates of interconversion between states. This is typically measured by most experiments. It does not directly explain how processes happen, but it can be used to predict the time-dependent behavior of populations from a proposed mechanism.
2. Dynamics: A description of the time-evolving molecular structures involved in a process, with the objective of gaining insight into mechanism. At a molecular level, this information is typically more readily available from dynamical simulations of a model than from experiments.
There is no single way to describe biophysical kinetics and dynamics, so we will survey a few approaches. The emphasis here will be on the description and analysis of time-dependent phenomena, and not on the experimental or computational methods used to obtain the data.
Two common classes of problems:
1. Barrier crossing or activated processes: For a solution phase process, evolution between two or more states separated by a barrier whose energy is $\gg k_BT$. A description of “rare events” when the system rapidly jumps between states. Includes chemical reactions described by transition-state theory. $\rightarrow$ We’ll look at two state problems.
2. Diffusion processes: Transport in the absence of significant enthalpic barriers. Many small barriers on the scale of $k_BT$ lead to “friction”, rapid randomization of momenta, and thereby diffusion.
Now let’s start with some basic definitions of terms we will use often:
Coordinates
Refers to many types of variables that are used to describe the structure or configuration of a system. For instance, this may refer to the positions of atoms in a MD simulation as a function of time {rN,t}, or these Cartesian variables might be transformed onto a set of internal coordinates (such as bond lengths, bond angles, and torsion angles), or these positions may be projected onto a different collective coordinate. Unlike our simple lattice models, the transformation from atomic to collective coordinate is complex when the objective is to calculate a partition function, since the atomic degrees of freedom are all correlated.
Collective coordinate
• A coordinate that reflects a sum/projection over multiple internal variables—from a high-dimensional space to a lower one.
Example: Solvent coordinate in electron transfer. In polar solvation, the position of the electron is governed by the stabilization by the configuration of solvent dipoles. An effective collective coordinate could be the difference in electrostatic potential between the donor and acceptor sites: $q \sim \Phi_A-\Phi_D$.
Example: RMSD variation of structure with coordinates from a reference state.
$R M S D=\sqrt{\frac{1}{n} \sum_{i=1}^{n}\left(\mathbf{r}_{i}-\mathbf{r}_{i}^{0}\right)^{2}}$
where $r$ is the position of an atom in an n atom molecule.
• Sometimes the term “order parameter” is used to describe a collective coordinate. This term originated in the description of changes of symmetry at phase transitions, and is a more specific term than collective variable. While order parameters are collective variables, collective variables are not necessarily order parameters.
Reaction coordinate
• An internal variable that describes the forward progress of a reaction or process.
• Typically an abstract quantity, and not a simple configurational or geometrical coordinate. In making a connection to molecular structure, often the optimal reaction coordinate is not known or cannot be described, and so we talk about a “good reaction coordinate” as a collective variable that is a good approximate description of the progress of the reaction.
Energy Landscape
A structure is characterized by an energy of formation. There are many forms of energy that we will use, including free energy (G, A), internal energy or enthalpy (E, H), interaction potential (U, V), ... so we will have to be careful to define the energy for a problem. Most of the time, though, we are interested in free energy.
The energy landscape is used to express the relative stability of different states, the position and magnitude of barriers between states, and possible configurational entropy of certain states. It is closely related to the free energy of the system, and is often used synonymously with the potential of mean force. The energy landscape expresses how the energy of a system (typically, but it is not limited to, free energy) depends on one or more coordinates of the system. It is often used as a free energy analog of a potential energy surface. For many-particle systems, they can be presented as a reduced dimensional surface by projecting onto one or a few degrees of freedom of interest, by integrating over the remaining degrees of freedom.
“Energy landscapes” represent the free energy (or rather the negative of the logarithm of the probability) along a particular coordinate. Let's remind ourselves of some definitions. The free energy of the system is calculated from the partition function as
$A = -k_BT \ln{Z}$
where Z is the partition function. The free energy is a number that reflects the thermally weighted number of microstates available to the system. The free energy determines the relative probability of occupying two states of the system:
$\dfrac{P_A}{P_B} = e^{-(A_A-A_B)/k_BT}$
The energy landscape is most closely related to a potential of mean force
$F(x) = -k_BT\ln{P(x)}$
P(x) is the probability density that reflects the probability for observing the system at a position x. As such it is equivalent to decomposing the free energy as a function of the coordinate x. Whereas the partition function is evaluated by integrating a Boltzmann weighting over all degrees of freedom, P(x) is obtained by integrating over all degrees of freedom except x.
States
We will use the term “state” in the thermodynamic sense: a distinguishable minimum or basin on the free energy surface. States refer to regions of phase space where the system persists for times long compared to thermal fluctuations, i.e., the regions where there is a high probability of observing the system. One state is distinguished from another kinetically by a time-scale separation: the rate of evolving within a state is faster than the rate of transitions between states.
Configuration
• Can refer to a distinct microstate or a structure that has been averaged over a local energy basin. You average over configurations (integrate over q) to get states (macrostates).
Transition State
• The transition state or transition–state ensemble, often labeled ‡, refers to those barrier configurations that have equal probability of making a transition forward or backward.
• It's not really a “state” by our definition, but a barrier or saddle point along a reaction coordinate.
There are a number of ways of computationally modeling time-dependent processes in molecular biophysics. These methods integrate equations of motion for the molecular degrees of freedom evolving under a classical force–field interaction potential, a quantum mechanical Hamiltonian, or an energy landscape that could be phenomenological or atomistically detailed. Examples include using classical force fields to propagate Newton’s equation of motion, integrating the Schrödinger equation, or integrating the Langevin equation on a potential of mean force. Since our interest is more on the description of computational or experimental data, this will just be a brief overview.
Classical Dynamics from a Potential (Force Field)
An overview of how to integrate Newton’s equation of motion, leaving out many important details. This scheme, often used in MD simulations, is commonly called a Verlet integration.
1. Set initial positions r and velocities v of particles. For equilibrium simulations, the velocities are chosen from a Maxwell–Boltzmann distribution.
2. Take small successive steps in time δt, calculating the velocities and positions of the particles for the following time step.
• At each time step, calculate the force on each particle from the gradient of the potential with respect to r: F(r) = $-\nabla V$(r). The acceleration is a = F/m, where m is the mass of the particle.
• Now propagate the position of each particle n in time from time step i to time step i+1 as rn,i+1 = rn,i + vn,i δt + an,iδt2. This is a good point to save information for the system at a particular time.
• Calculate the new velocity for each particle from vn,i+1 = [rn,i+1rn,i]/δt.
3. Now increment the time step and repeat these steps iteratively. A minimal sketch of this scheme is given below.
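A minimal one-dimensional sketch of this scheme in Python (numpy assumed). The harmonic test potential and all parameter values are illustrative choices, not from the text. Note that the position update with an,iδt² combined with the finite-difference velocity is equivalent to the standard Verlet recursion rn,i+1 = 2rn,i − rn,i−1 + an,iδt²:

```python
import numpy as np

def verlet(r0, v0, mass, force, dt, n_steps):
    """Integrate Newton's equation with the scheme described above (1D, one particle)."""
    r, v = r0, v0
    traj = [r0]
    for _ in range(n_steps):
        a = force(r) / mass               # F(r) = -dV/dr, a = F/m
        r_new = r + v * dt + a * dt**2    # position update
        v = (r_new - r) / dt              # finite-difference velocity for the next step
        r = r_new
        traj.append(r)
    return np.array(traj)

# Illustrative test: harmonic potential V = k r^2 / 2, so F = -k r and omega = sqrt(k/m) = 1
k, m, dt = 1.0, 1.0, 0.01
traj = verlet(r0=1.0, v0=0.0, mass=m, force=lambda r: -k * r, dt=dt, n_steps=2000)
half_period = np.argmin(traj[:400]) * dt       # first minimum of the oscillation
print("estimated period:", 2 * half_period, " expected 2*pi =", 2 * np.pi)
```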
Langevin Dynamics
Building on our discussion of Brownian motion, the Langevin equation is an equation of motion for a particle acting under the influence of a fixed potential U, friction, and a time-dependent random force. Writing it in one dimension:
$ma= f_{\textrm{potential}}+f_{\textrm{friction}}+f_{\textrm{random}}(t)$
$m \frac{\partial^{2} x}{\partial t^{2}}=-\frac{\partial U}{\partial x}-\zeta \frac{\partial x}{\partial t}+f_{r}(t)$
The random force reflects the equilibrium thermal fluctuations acting on the particle, and is the source of the friction on the particle. In the Markovian limit, the friction coefficient ζ and the random force fr(t) are related through a fluctuation–dissipation relationship:
$\langle f_r(t)\rangle = 0$
$\langle f_r(t)f_r(t_0)\rangle = 2ζ k_BT \delta (t-t_0)$
Also, the diffusion constant is D = kBT/ζ, and the time scale for loss of velocity correlations is $\tau_c = \gamma^{-1} = m/\zeta$. The Langevin equation has high and low friction limits. In the low friction limit (ζ → 0), the influence of friction and the random force is minimal, and the behavior is dominated by the inertial motion of the particle. In the high friction limit, the particle's behavior, being dominated by ζ, is diffusive. The limit is defined by any two of the following four linearly related variables: ζ, D, T, and $\langle f_r^2 \rangle$. The high and low friction limits are also referred to as the low and high temperature limits: $\langle f_r^2 \rangle / 2ζ = k_BT$.
Example: Trajectory for a particle on a bistable potential from Langevin dynamics
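Below is a minimal sketch of such a simulation in Python (numpy assumed), using a simple Euler–Maruyama discretization of the Langevin equation on an assumed bistable potential U(x) = Eb(x² − 1)². The barrier height, friction, time step, and trajectory length are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

E_b, kT, m, zeta = 3.0, 1.0, 1.0, 5.0            # barrier ~3 k_BT; reduced units
force = lambda x: -4.0 * E_b * x * (x**2 - 1.0)  # -dU/dx for U = E_b (x^2 - 1)^2

dt, n_steps = 1.0e-3, 500_000
noise = np.sqrt(2.0 * zeta * kT * dt)            # from <f_r(t) f_r(t')> = 2 zeta k_BT delta(t-t')
x, v = -1.0, 0.0
traj = np.empty(n_steps)
for i in range(n_steps):
    v += (force(x) - zeta * v) * dt / m + noise * rng.standard_normal() / m
    x += v * dt
    traj[i] = x

# Count hops between the wells at x = -1 and x = +1 (ignore the barrier-top region)
states = np.sign(traj[np.abs(traj) > 0.5])
print("barrier crossings observed:", np.count_nonzero(np.diff(states)))
```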
We will survey different representation of time-dependent processes using examples from one-dimension.
Trajectories
Watch the continuous time-dependent behavior of one or more particles/molecules in the system.
Time‐dependent structural configurations
A molecular dynamics trajectory will give you the position of all atoms as a function of time {rN,t}. Although there is an enormous amount of information in such a trajectory, the raw data is often overwhelming and not of particularly high value itself. However, it is possible to project this high dimensional information in structural coordinates onto one or more collective variables ξ that forms a more meaningful representation of the dynamics, ξ(t). Alternatively, single molecule experiments can provide a chronological sequence of the states visited by molecule.
State trajectories: Time‐dependent occupation of states
A discretized representation of which state of the system the particle occupies. Requires that you define the boundaries of a state.
Example: A two state trajectory for an equilibrium $A \rightleftharpoons B$, where the time-dependent probability of being in state A is:
$P_{A}(t)=\left\{\begin{array}{ll}1 & \text {if } \xi(t)<\xi^{‡} \ 0 & \text {if } \xi(t)>\xi^{‡}\end{array}\right.$
Time‐Dependent Probability Distributions and Fluxes
With sufficient sampling, one can average over trajectories in order to develop a time-dependent probability distribution $P(ξ,t)$ for the non-equilibrium evolution of an initial state.
State Populations: Kinetics
• Average over states to get time-dependent populations of those states.
$\int_{\text {state A}}P(ξ,t)dξ=P_A(t)$
• Alternatively, one can obtain the same information by analyzing waiting time distributions from state trajectories, as described below.
• The kinetics can be modeled with rate equations (a master equation): $\dot{\mathbf{P}}=\mathbf{k}\mathbf{P}$.
Time‐Correlation Functions
Time-correlation functions are commonly used to characterize trajectories of a fluctuating observable. These are described next
22.04: Analyzing Trajectories
Waiting-Time Distributions
τw: waiting time between arriving in and leaving a state
P(k, t): probability of making k jumps during a time interval t (P(0, t) is the survival probability)
Pw(τw): probability of waiting a time τw between jumps, i.e., the waiting-time (first-passage time) distribution
Let’s relate these...
Assume independent events, with no memory of the history of the trajectory.
Flux out of the state: $\dfrac{dP_R}{dt}$. Let J be the probability of a jump during a short interval ∆t, where ∆t is small enough that J $\ll$ 1, but long enough to lose memory of earlier configurations.
The probability of seeing k jumps during a time interval t, where t is divided into N intervals of width Δt (t = N∆t) is given by the binomial distribution
$P(k,N)=\dfrac{N!}{k!(N-k)!}J^k(1-J)^{N-k}$
Here N≫k. Define rate λ in terms of the average number of jumps per unit time
$\lambda = \dfrac{\langle k \rangle}{t}=\dfrac{1}{\langle \tau_W \rangle} \nonumber$
$J=\lambda \Delta t \rightarrow J=\dfrac{\lambda t}{N} \nonumber$
Substituting this into eq. (22.4.1), for N ≫ k we recognize
$(1-J)^{N-k} \approx (1-J)^N = \left( 1-\dfrac{\lambda t}{N} \right)^N \approx e^{-\lambda t} \nonumber$
The last step is exact for lim N → ∞.
This is the Poisson distribution for the number of jumps in time t:
$P(k,t) = \dfrac{(\lambda t)^k}{k!}e^{-\lambda t} \nonumber$
Its mean and variance are $\langle k \rangle = \lambda t$ and $\langle \delta k^2 \rangle = \lambda t$, so the relative fluctuations are $\langle \delta k^2 \rangle^{1/2}/\langle k \rangle = (\lambda t)^{-1/2}$.
OK, now what about Pw the waiting time distribution?
Consider the probability of not jumping during time t:
$P_k(0,t) = e^{-\lambda t} \nonumber$
As you wait longer and longer, the probability that you stay in the initial state drops exponentially. Note that Pk(0, t) is related to Pw by integration over distribution of waiting times.
$\int_{t}^{\infty}P_w(t')dt'=P(0,t)=e^{-\lambda t} \nonumber$
$\int_{t}^{\infty}P_wdt \rightarrow \text {probability of staying for t} \nonumber$
$\int_{0}^{t}P_wdt \rightarrow \text {probability of jumping within t} \nonumber$
Probability of jumping between t and t+∆t:
Probability of no decay for time <t decay on last
\begin{aligned} P_{w}(t) \Delta t &=\overbrace{\left(1-\langle k\rangle \Delta t_{1}\right)} \overbrace{\left(1-\langle k\rangle \Delta t_{2}\right) \ldots\left(1-\langle k\rangle \Delta t_{N}\right)} \overbrace{k \Delta t} \ &=(1-\langle k\rangle \Delta t)^{N} k \Delta t \approx k e^{-k t} \Delta t \end{aligned} \nonumber
$P_{w}(t) =\lambda e^{-\lambda t} \nonumber$
$\langle\tau_{w}\rangle =\int_{0}^{\infty} t\, P_{w}(t)\, dt = \dfrac{1}{\lambda} \qquad\qquad \langle\tau_{w}^{2}\rangle-\langle\tau_{w}\rangle^{2} = \dfrac{1}{\lambda^{2}} \nonumber$
The average waiting time is the lifetime, 1/λ.
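These relations are easy to verify by simulation. A small sketch in Python (numpy assumed; the rate λ, window length, and sample size are illustrative): exponential waiting times are drawn, the mean and variance are compared to 1/λ and 1/λ², and the jump counts in fixed windows show Poisson statistics with relative fluctuations (λt)−1/2:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 2.0                                          # jump rate, 1/<tau_w>
tau = rng.exponential(1.0 / lam, size=100_000)     # waiting times, P_w = lam * exp(-lam t)

print("<tau_w>    =", tau.mean(), "  expected", 1 / lam)
print("var(tau_w) =", tau.var(), "  expected", 1 / lam**2)

# Jumps per window of length t: Poisson with mean lam*t
t = 5.0
arrivals = np.cumsum(tau)
n_windows = int(arrivals[-1] // t)
counts, _ = np.histogram(arrivals, bins=np.arange(0.0, (n_windows + 1) * t, t))
print("<k> =", counts.mean(), "  expected", lam * t)
print("relative fluctuation =", counts.std() / counts.mean(), "  expected", (lam * t) ** -0.5)
```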
Reduction of Complex Kinetics from Trajectories
• Integrating over trajectories gives probability densities.
• Need to choose a region of space to integrate over and thereby define states:
• States: Clustered regions of phase space that have high probability or long persistence.
• Markovian states: Spend enough time to forget where you came from.
• Master equation: Coupled first order differential equations for the flow of amplitude between states written in terms of probabilities.
$\dfrac{dP_m}{dt}=\sum_{n}k_{n\rightarrow m}P_n-\sum_{n}k_{m \rightarrow n}P_m \nonumber$
$k_{n\rightarrow m}$ is the rate constant for the transition from state n to state m (units: probability/time). In matrix form, $\dot{\mathbf{P}}=\mathbf{k}\mathbf{P}$, where k is the transition rate matrix. With detailed balance and conservation of population, all initial conditions converge on the equilibrium populations.
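As a minimal illustration of propagating a master equation (Python with numpy; the two-state rate constants are assumed values), forward-Euler integration of dP/dt = kP relaxes any initial condition to the equilibrium populations dictated by detailed balance:

```python
import numpy as np

k_AB, k_BA = 2.0, 0.5                      # A->B and B->A rate constants (assumed), 1/time
K = np.array([[-k_AB,  k_BA],
              [ k_AB, -k_BA]])             # transition rate matrix; columns sum to zero

P = np.array([1.0, 0.0])                   # all population initially in state A
dt = 1.0e-3
for _ in range(10_000):                    # propagate to t = 10 >> relaxation time 1/(k_AB + k_BA)
    P = P + K @ P * dt                     # forward-Euler step of dP/dt = K P

P_eq = np.array([k_BA, k_AB]) / (k_AB + k_BA)
print("P(t = 10) =", P, "  P_eq =", P_eq)
```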
Time‐Correlation Functions
Time-correlation functions are commonly used to characterize the dynamics of a random (or stochastic) process. If we observe the behavior of an internal variable A describing the behavior of one molecule at thermal equilibrium, it may be subject to microscopic fluctuations.
Although there may seem to be little information in this noisy trajectory, the dynamics are not entirely random, since they are a consequence of time-dependent interactions with the environment. We can provide a statistical description of the characteristic time scales and amplitudes of these changes by comparing the value of A at time t with the value of A at a later time t'. We define a time-correlation function as the product of these values averaged over an equilibrium ensemble:
$C_{AA}(t-t') \equiv \langle A(t)A(t')\rangle$
Correlation functions do not depend on the absolute point of observation (t and t’), but rather the time interval between observations (for stationary random processes). So, we can define the time interval $\tau \equiv t-t'$, and express our function as $C_{AA}(\tau)$.
We can see that when we evaluate CAA at τ=0, we obtain the mean square value of $A , \langle A^2 \rangle$. At long times, as thermal fluctuations act to randomize the system, the values of A become uncorrelated: $\lim_{\tau\to\infty} C_{AA}(\tau)=\langle A\rangle ^2$. It is therefore common to redefine the correlation function in terms of the deviation from average
$\delta A=A-\langle A\rangle$
$C_{\delta A\delta A}(t)=\langle \delta A(t)\delta A(0) \rangle = C_{AA}(t)-\langle A\rangle ^2$
Then $C_{\delta A \delta A}(0)$ gives the variance for the random process, and the correlation function decays to zero as τ → ∞. The characteristic time scale for this relaxation is the correlation time, $\tau_c$, which we can obtain from
$\tau_c = \dfrac{1}{\langle \delta A^2 \rangle } \int_0^{\infty}dt \langle \delta A(t) \delta A(0)\rangle$
The classical correlation function can be obtained from an equilibrium probability distribution as
$C_{AA}(t-t')=\int \mathrm{d}p \int \mathrm{d}q A(p,q;t) A(p,q;t') P_{eq}(p,q)$
In practice, correlation function are more commonly obtained from trajectories by calculating it as a time average
$C_{A A}(\tau)=\overline{A(\tau) A(0)}=\lim _{T \rightarrow \infty} \frac{1}{T} \int_{0}^{T} d t^{\prime} A_{i}\left(\tau+t^{\prime}\right) A_{i}\left(t^{\prime}\right)$
If the time-average value of C is to be equal to the equilibrium ensemble average value of C, we say the system is ergodic.
Example: Velocity Autocorrelation Function for Gas
A dilute gas of molecules has a Maxwell–Boltzmann distribution of velocities, for which we will focus on the velocity component along the $\hat{x}$ direction, vx. We know that the average velocity is $\langle v_x \rangle=0$. The velocity correlation function is
$C_{v_xv_x}(\tau)=\langle v_x(\tau)v_x(0)\rangle \nonumber$
The average translational energy is $\frac{1}{2}m\langle v_x^2 \rangle = k_BT/2$, so
$C_{v_xv_x}(0)=\langle v_x^2(0) \rangle = \dfrac{k_BT}{m} \nonumber$
For time scales that are short compared to the average collision time between molecules, the velocity of any given molecule remains constant and unchanged, so the correlation function for the velocity is also unchanged at kBT/m. This non-interacting regime corresponds to the behavior of an ideal gas.
For any real gas, there will be collisions that randomize the direction and speed of the molecules, so that any molecule over a long enough time will sample the various velocities within the Maxwell–Boltzmann distribution. From the trajectory of x-velocities for a given molecule we can calculate $C_{v_xv_x}(\tau)$ using time averaging. The correlation function will decay with a correlation time τc, which is related to the mean time between collisions. After enough collisions, the correlation with the initial velocity is lost and $C_{v_xv_x}(\tau)$ approaches $\langle v_x \rangle^2 = 0$. Finally, we can determine the diffusion constant for the gas, which relates the time and mean square displacement of the molecules: $\langle x^2(t)\rangle = 2D_xt$. From $D_x= \int_0^{\infty}dt\langle v_x(t)v_x(0)\rangle$ we have $D_x = k_BT\tau_c/m$. In viscous fluids $\tau_c/m$ is called the mobility.
Calculating a Correlation Function from a Trajectory
We can evaluate eq. (22.5.6) for a discrete and finite trajectory in which we are given a series of N observations of the dynamical variable A at equally separated time points ti. The separation between time points is ti+1‒ ti = δt, and the length of the trajectory is T=N δt. Then we have
$C_{AA} = \dfrac{1}{T} \sum_{i,j=1}^{N} \delta t A(t_i) A(t_j) = \dfrac{1}{N}\sum_{i,j=1}^NA_iA_j$
where $A_i=A(t_i)$. To make this more useful we want to express it as the time interval between points $\tau = t_j-t_i = (j-i)\delta t$, and average over all possible pairwise products of A separated by τ. Defining a new count integer n=j-i, we can express the delay as $\tau = n\delta t$. For a finite data set there are a different number of observations to average over at each time interval (n). We have the most pairwise products—N to be precise—when the time points are equal (ti = tj). We only have one data pair for the maximum delay τ = T. Therefore, the number of pairwise products for a given delay τ is N ‒ n. So we can write eq. (22.5.7) as
$C_{AA} (\tau) = C(n) = \dfrac{1}{N-n}\sum_{i=1}^{N-n}A_{i+n}A_i$
Note that this expression will only be calculated for positive values of n, for which tj≥ ti.
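A direct implementation of eq. (22.5.8) in Python (numpy assumed). The synthetic trajectory used to test it is an assumed exponentially correlated random process, so the correlation time recovered via eq. (22.5.4) (in discrete form) can be compared with the input value:

```python
import numpy as np

def correlation_function(A, n_max):
    """C(n) = (1/(N-n)) sum_i A_{i+n} A_i, eq. (22.5.8), for delays n = 0 ... n_max-1."""
    A = np.asarray(A, dtype=float)
    N = len(A)
    return np.array([np.mean(A[n:] * A[:N - n]) for n in range(n_max)])

# Synthetic test trajectory: exponentially correlated noise with correlation time tau_c
rng = np.random.default_rng(2)
N, tau_c = 50_000, 20.0                    # tau_c in units of the sampling interval
f = np.exp(-1.0 / tau_c)
x = np.empty(N)
x[0] = rng.standard_normal()
for i in range(1, N):
    x[i] = f * x[i - 1] + np.sqrt(1.0 - f**2) * rng.standard_normal()

C = correlation_function(x - x.mean(), n_max=100)   # correlation of the fluctuations delta A
print("C(0) (variance):", C[0])
print("correlation time (discrete sum of C/C(0)):", C.sum() / C[0], " input:", tau_c)
```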
As an example, consider the following calculation for fluctuations in fluorescence intensity in an FCS experiment. This trajectory consists of 32000 consecutive measurements separated by 44 μs, and is plotted as a deviation from the mean, δA(t) = A(t) − ⟨A⟩.
The correlation function obtained from eq. (22.5.8) is shown below.
We can see that the decay of the correlation function is observed for sub-ms time delays. From eq. (22.5.4) we find that the correlation time is τC = 890 μs.
"Rare but important events"
The rates of chemical reaction are obtained by calculating the forward flux of reactant molecules passing over the transition state, i.e. the time rate of change of concentration, population, or probability for reactants passing over the transition state.
$\langle J^‡_f \rangle = dP^‡_R/dt$
• 23.1: Transition State Theory
Transition state theory is an equilibrium formulation of chemical reaction rates that originally comes from classical gas-phase reaction kinetics.
• 23.2: Kramers’ Theory
In transition state theory, the motion of the reactant over the transition state was treated as a free translational degree of freedom. This ballistic or inertial motion is not representative of dynamics in soft matter at room temperature. Kramers' theory is the leading approach to describe diffusive barrier crossing. It accounts for friction and thermal agitation that reduce the fraction of successful barrier crossings.
23: Barrier Crossing and Activated Processes
Transition state theory is an equilibrium formulation of chemical reaction rates that originally comes from classical gas-phase reaction kinetics. We’ll consider a two-state system of reactant R and product P separated by a barrier ≫kBT:
$R \underset{k_{r}}{\stackrel{k_{f}}{\rightleftharpoons}} P \nonumber$
which we obtain by projecting the free energy of the system onto a reaction coordinate ξ (a slow coordinate) by integrating over all the other degrees of freedom. There is a time-scale separation between the fluctuations in a state and the rare exchange events. All memory of a trajectory is lost on entering a state following a transition.
Our goal is to describe the rates of crossing the transition state for the forward and reverse reactions. At thermal equilibrium, the rate constants for the forward and reverse reaction, $k_f$ and $k_r$, are related to the equilibrium constant and the activation barriers as
$K_{e q}=\frac{[P]}{[R]}=\frac{P_{P, e q}}{P_{R, e q}}=\frac{k_{f}}{k_{r}}=\exp \left(-\frac{\left(E_{a}^{f}-E_{a}^{r}\right)}{k_{B} T}\right) \nonumber$
$E^f_a$, $E^r_a$ are the activation free energies for the forward and reverse reactions, which are related to the reaction free energy through $E^f_a - E^r_a = \Delta G^0_{rxn}$. Pi refers to the population or probability of occupying the reactant or product state.
The primary assumptions of TST is that the transition state is well represented by an activated complex $RP^‡$ that acts as an intermediate for the reaction from R to P, that all species are in thermal equilibrium, and that the flux across the barrier is proportional to the population of the activated complex.
$R \rightleftharpoons RP^‡ \rightleftharpoons P \nonumber$
Then, the steady state population of the activated complex can be determined by an equilibrium constant that we can express in terms of the molecular partition functions.
Let’s focus on the rate of the forward reaction considering only the equilibrium
$R \rightleftharpoons RP^‡ \nonumber$
We relate the population of reactants within the reactant well to the population of the activated complex through an equilibrium constant
$K^‡_{eq} = \dfrac{[RP^‡]}{[R]} \nonumber$
which we will evaluate using partition functions for the reactant and activated complex
$K^‡_{eq} =\dfrac{q^‡/V}{q_R/V}e^{-E^f_a/k_BT} \nonumber$
Then we write the forward flux in eq. (23.1) proportional to the population of activated complex
\begin{aligned}\langle J^‡\rangle &= v[RP^‡]\ &= vK^‡_{eq}[R] \end{aligned} \nonumber
Here ν is the reaction frequency, which is the inverse of the transition state lifetime $\tau_{mol}$. $v^{-1}$ or $\tau_{mol}$ reflects the time it takes to cross the transition state region.
To evaluate ν, we will treat motion along the reaction coordinate ξ at the barrier as a translational degree of freedom. When the reactants gain enough energy ($E^f_a$), they will move with a constant forward velocity $v_f$ through a transition state region that has a width $\ell$. (The exact definition of $\ell$ will not matter too much).
$\tau_{mol} = \dfrac{\ell}{v_f} \nonumber$
Then we can write the average flux of population across the transition state in the forward direction
\begin{aligned} \langle J^‡\rangle &= K^‡_{eq} [R] \dfrac{v_f}{\ell}\&= \dfrac{q^‡}{q_R}e^{-E^f_a/k_BT}[R] \frac{1}{\ell} \sqrt{\dfrac{k_BT}{2\pi m}} \end{aligned}
where $v_f$ is obtained from a one-dimensional Maxwell–Boltzmann distribution.
For a multidimensional problem, we want to factor the slow coordinate, i.e., the reaction coordinate (ξ), out of the partition function.
$q^‡ = q_ξ q^{'‡} \nonumber$
$q^{'‡}$ contains all degrees of freedom except the reaction coordinate. Next, we calculate $q_ξ$ by treating it as translational motion:
$q_ξ (trans) = \displaystyle\int\limits_0^{\ell} dξ\,e^{-E_{trans}/k_BT} = \sqrt{\dfrac{2\pi m k_B T}{h^2}}\,\ell$
Substituting (23.1.2) into (23.1.1):
$\left\langle J_{f}^{‡}\right\rangle=\frac{k_{B} T}{h} \frac{q^{\prime ‡}}{q_{R}} e^{-E_{a}^{f} / k_{B} T}[R] \nonumber$
We recognize that the factor $v = k_BT/h$ is a frequency whose inverse gives an absolute lower bound on the crossing time of $\sim 10^{-13}$ seconds. Using the speed of sound in condensed matter, this is roughly the time needed to propagate 1–5 Å. Then we can write
$\left\langle J_{f}^{‡}\right\rangle= k_f[R] \nonumber$
where the forward rate constant is
$k_f = Ae^{-E^f_a/k_BT}$
and the pre-exponential factor is
$A = v\dfrac{q^{'‡}}{q_R} \nonumber$
$A$ determines the time that it takes to cross the transition state in the absence of a barrier ($E_a \to 0$). $k_f$ is also referred to as $k_{TST}$.
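As a rough numerical sketch (not part of the original notes; the barrier heights below are hypothetical, and $q^{'‡}/q_R \approx 1$ is assumed so that $A \approx k_BT/h$), the magnitudes involved can be checked with a few lines of Python:

import math

kB = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J*s
T = 300.0            # assumed room temperature, K

A = kB * T / h       # prefactor k_B*T/h, in s^-1
print(f"k_B*T/h = {A:.2e} s^-1, so 1/A = {1/A:.1e} s")   # ~6.2e12 s^-1, ~1.6e-13 s

# Arrhenius scaling: each additional k_B*T of barrier lowers the rate by a factor of e
for n in (5, 10, 20):                  # hypothetical barrier heights, in units of k_B*T
    k_tst = A * math.exp(-n)
    print(f"Ea = {n:2d} k_BT -> k_TST ~ {k_tst:.2e} s^-1")

The first printout reproduces the ~$10^{-13}$ s crossing time quoted above, and the loop illustrates how strongly the rate depends on the barrier height.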
To make a thermodynamic connection, we can express eq. (4) in the Eyring form
$k_{f}=v e^{\Delta S^{‡} / k_{B}} e^{-\Delta E_{f}^{‡} / k_{B} T} \nonumber$
where the transition state entropy is
$\Delta S^‡ = k_B\ln{\dfrac{q^{'‡}}{q_R}} \nonumber$
$\Delta S^‡$ reflects the reduction (expressed as a ratio of partition functions) in the number of accessible microstates in going from the reactant well to the transition state. For biophysical phenomena, the entropic factors are important, if not dominant!
Also note that implicit in TST is a dynamical picture in which every trajectory that arrives at the transition state with forward velocity results in a crossing. TST therefore gives an upper bound on the true rate, since real trajectories include failed attempts to cross. This is often accounted for by including a transmission coefficient κ < 1 in $k_{TST}$: $k_f=\kappa k_{TST}$. Kramers' theory provides a physical basis for understanding κ.
In our treatment the motion of the reactant over the transition state was treated as a free translational degree of freedom. This ballistic or inertial motion is not representative of dynamics in soft matter at room temperature. Kramers’ theory is the leading approach to describe diffusive barrier crossing. It accounts for friction and thermal agitation that reduce the fraction of successful barrier crossings. Again, the rates are obtained from the flux over the barrier along the reaction coordinate, Equation (23.1).
One approach is to treat diffusive crossing over the barrier in a potential using the Smoluchowski equation. The diffusive flux under the influence of a potential has two contributions:
1. Concentration gradient $dC/dξ$. Proportional to diffusion coefficient, $D$.
2. Force from gradient of potential.
$J(\xi)=-D \frac{d C(\xi)}{d \xi}-\frac{C(\xi)}{\zeta} \frac{d U(\xi)}{d \xi} \nonumber$
As discussed earlier $ζ$ is the friction coefficient and in one dimension:
$\zeta=\frac{k_{B} T}{D} \nonumber$
Written in terms of a probability density $P$
\begin{align*} J &=D\left(-\frac{P}{k_{B} T} \frac{d U}{d \xi}-\frac{d P}{d \xi}\right) \\ &=-D e^{-U / k_{B} T} \frac{d}{d \xi}\left(P e^{U / k_{B} T}\right) \end{align*}
or
$J e^{U / k_{B} T}=-D \frac{d}{d \xi} P e^{U / k_{B} T}$
Here we have assumed that $D$ and $ζ$ are not functions of $ξ$.
The next important assumption of Kramers’ theory is that we can solve for the diffusive flux using the steady-state approximation. This allows us to set $J$ = constant and integrate along $ξ$ over the barrier.
\begin{align*} J \int_{a}^{b} e^{U / k_{B} T} d \xi &=-D \int_{a}^{b} d P\, e^{U / k_{B} T} \\[4pt] J \int_{a}^{b} e^{U(\xi) / k_{B} T} d \xi &=D\left\{P_{R} e^{U_{R} / k_{B} T}-P_{P} e^{U_{P} / k_{B} T}\right\} \end{align*}
$P_i$ are the probabilities of occupying the $R$ or $P$ state, and $U_i$ are the energies of the $R$ and $P$ states. The right-hand side of this equation describes the net flux across the barrier.
Let’s consider only flux from $R\longrightarrow P$: $J_{R\longrightarrow P}$, which we do by setting $P_P\longrightarrow 0$. This is just a barrier escape problem. Also as a reference point, we set $U_R(ξ_R) = 0$.
$J_{R \rightarrow P}=\frac{D P_{R}}{\int_{a}^{b} e^{U(\xi) / k_{B} T} d \xi} \label{23.2.2}$
The flux is linearly proportional to the diffusion coefficient and the probability of being in the reactant state. The flux is reduced by a factor that describes the energetic barrier to be overcome. Now let’s evaluate with a specific form of the potential. The simplest form is to model $U(ξ)$ with parabolas. The reactant well is given by
$U_{R}=\frac{1}{2} m \omega_{R}^{2}\left(\xi-\xi_{R}\right)^{2} \label{23.2.3}$
and we set $\xi_R \longrightarrow 0$. The barrier is modeled by an inverted parabola centered at the transition state with a barrier height for the forward reaction $E_f$ and a width given by the barrier frequency $ω_{bar}$:
$U_{\mathrm{bar}}=E_{f}-\frac{1}{2} m \omega_{\mathrm{bar}}^{2}\left(\xi-\xi^{‡}\right)^{2} \nonumber$
In essence this is treating the evolution of the probability distribution as the motion of a fictitious particle with mass $m$.
First we evaluate the denominator in Equation \ref{23.2.2}. The integrand $e^{U_{bar}/k_BT}$ is sharply peaked at $\xi^‡$, so extending the limits of the integral to $\pm\infty$ does not affect things much.
$\int_{a}^{b} e^{U_{b a r} / k_{B} T} d \xi \approx e^{E_{f} / k_{B} T}\int_{-\infty}^{+\infty} d \xi \exp \left[-\frac{m \omega_{\mathrm{bar}}^{2}\left(\xi-\xi^{‡}\right)^{2}}{2 k_{B} T}\right]=e^{E_{f} / k_{B} T}\sqrt{\frac{2 \pi k_{B} T}{m \omega_{\mathrm{bar}}^{2}}} \nonumber$
Then Equation \ref{23.2.2} becomes
$J_{R \rightarrow P}=\omega_{\text {bar }} D \sqrt{\frac{m}{2 \pi k_{B} T}} e^{-E_{f} / k_{B} T} P_{R} \label{23.2.4}$
Next, let’s evaluate $P_R$, the probability density at the bottom of the reactant well. For the parabolic well in Equation \ref{23.2.3}, the equilibrium distribution along $ξ$ is the Gaussian $P_{R}(\xi) \propto e^{-U_{R} / k_{B} T}$. Normalized so that $\int_{-\infty}^{+\infty} P_{R}(\xi)\, d \xi=1$, it reads
$P_{R}(\xi)=\omega_{R} \sqrt{\frac{m}{2 \pi k_{B} T}} \exp \left[-\frac{1}{2} m \omega_{R}^{2}\left(\xi-\xi_{R}\right)^{2} / k_{B} T\right] \nonumber$
so that at the well minimum
$P_{R} \approx \omega_{R} \sqrt{\frac{m}{2 \pi k_{B} T}} \nonumber$
Substituting this into Equation \ref{23.2.4} we have
$J_{R \rightarrow P}=\omega_{R} \omega_{b a r} D\left(\frac{m}{2 \pi k_{B} T}\right) e^{-E_{f} / k_{B} T} \nonumber$
Using the Einstein relation $D = k_BT/\zeta$, we find that the forward flux scales inversely with friction (or viscosity).
$J_{R \rightarrow P}=\frac{\omega_{R}}{2 \pi} \frac{\omega_{b a r}}{\zeta} e^{-E_{f} / k_{B} T}$
Also, the factor of $m$ disappears when the problem is expressed in mass-weighted coordinates, i.e., when $\zeta$ is taken as the friction per unit mass ($\gamma = \zeta/m$). Note the similarity of the expression above to transition state theory. If we associate the period of the particle in the reactant well with the barrier crossing frequency,
$\frac{\omega_{R}}{2 \pi} \Rightarrow v=\frac{k_{B} T}{h} \nonumber$
then we can also identify an expression for the transmission coefficient in this model:
$k_{diff}=\kappa_{diff} k_{TST} \nonumber$
$\kappa_{diff}=\dfrac{\omega_{bar}}{\zeta}\ll 1 \nonumber$
This is the reaction rate in the strong damping, or diffusive, limit. Hendrik Kramers actually solved a more general problem based on the Fokker–Planck Equation that described intermediate to strong damping. The reaction rate was described as
$k_{Kr}=\kappa_{Kr} k_{TST} \nonumber$
$\kappa_{K r}=\frac{1}{\omega_{b a r}}\left(-\frac{\zeta}{2}+\sqrt{\frac{\zeta^{2}}{4}+\omega_{b a r}^{2}}\right) \nonumber$
$\zeta=\frac{1}{m k_{B} T} \int_{0}^{\infty} d t\langle\xi(0) \xi(t)\rangle \nonumber$ where $\xi(t)$ in this expression denotes the fluctuating random force of the Langevin equation, not the reaction coordinate.
This shows a crossover in behavior between the strong damping (or diffusive) behavior described above and an intermediate damping regime:
• Strong damping/friction: $\zeta \longrightarrow \infty$ $\kappa_{Kr}\longrightarrow\dfrac{\omega_{bar}}{\zeta} \nonumber$
• Intermediate damping: $\zeta \ll 2\omega_{bar}$ $\kappa_{Kr}\longrightarrow 1$ and $k_{Kr}\longrightarrow k_{TST} \nonumber$
In the weak friction limit, Kramers argued that the reaction rate scaled as
$k_{\text {weak }} \sim \zeta k_{T S T} \nonumber$
That is, if you had no friction at all, the particle would just move back and forth between the reactant and product state without committing to a particular well. You need some dissipation to relax irreversibly into the product well. On the basis of this we expect an optimal friction that maximizes $κ$, which balances the need for some dissipation but without so much that barrier crossing is exceedingly rare. This “Kramers turnover” is captured by the interpolation formula
$\kappa^{-1}=\kappa_{K r}^{-1}+\kappa_{\text {weak }}^{-1} \nonumber$
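A minimal numerical sketch of this turnover (my own illustration; since the notes give only the proportionality $k_{weak} \sim \zeta k_{TST}$, the weak-friction branch is assumed here to be $\kappa_{weak} = \zeta/\omega_{bar}$):

import numpy as np

omega_bar = 1.0                              # barrier frequency (arbitrary units)
zeta = np.logspace(-3, 3, 13)                # friction, spanning weak to strong damping
kappa_Kr = (-zeta / 2 + np.sqrt(zeta**2 / 4 + omega_bar**2)) / omega_bar
kappa_weak = zeta / omega_bar                # assumed form of the weak-friction limit
kappa = 1.0 / (1.0 / kappa_Kr + 1.0 / kappa_weak)   # interpolation formula above

for z, k in zip(zeta, kappa):
    print(f"zeta/omega_bar = {z:9.3f}   kappa = {k:.3f}")
# kappa is small in both the weak- and strong-friction limits and peaks near zeta ~ omega_bar.

The printout shows the transmission coefficient rising with friction at first (dissipation is needed to trap trajectories in the product well) and then falling as ω_bar/ζ in the diffusive limit.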
• 1.1: Open Access Readings
• 1.2: Introduction to Fermentation and Microbes
• 1.3: Fermentation Paper
• 1.4: Basic Metabolic Pathways
• 1.5: Intro to Microbial Metabolism
• 1.6: Vinegar and Acetic Acid Fermentation
• 1.7: Carbohydrates
• 1.8: Fermented Vegetables
• 1.9: Cheese Production
Cheese making is essentially a dehydration process in which milk casein, fat and minerals are concentrated 6 to 12-fold, depending on the variety. The basic steps common to most varieties are acidification, coagulation, dehydration, and salting.
• 1.10: Yeast Metabolism
Yeasts are ubiquitous unicellular fungi widespread in natural environments. Yeast have a broad set of carbon sources (e.g., polyols, alcohols, organic acids and amino acids) that they can metabolize but they prefer sugars. Yeast are capable of metabolizing hexoses (glucose, fructose, galactose or mannose) and disaccharides (maltose or sucrose) as well as compounds with two carbons (ethanol or acetate).
• 1.11: Yogurt
Yogurt has been around for several millennia. Today, the FDA defines yogurt as a milk product fermented by two lactic acid–producing bacterial strains: Lactobacillus bulgaricus and Streptococcus thermophilus.
• 1.12: Bread
Bread is a staple food in many cultures. The key ingredients are a grain starch, water, and a leavening agent. Some breads, such as tortillas or naan, are made without leavening agents; these are flat breads.
• 1.13: Beer
• 1.14: Cider
Cider is a drink made from apples. In the US, cider can refer to apple juice or the fermented, alcoholic version. This section will focus on the fermented, alcoholic drink.
• 1.15: Wine
Wine is defined as the fermented juice of a fruit. Wines have been produced from all kinds of plant materials and fruits. However, the most classic version is made from grapes.
• 1.16: Distilled Spirits
Distilled spirits are all alcoholic beverages in which the concentration of ethanol has been increased above that of the original fermented mixture by a method called distillation.
01: Modules
Open Access Resources for Fermentation Course by Topic
Basic Metabolism
1. Structure and Reactivity, Reactivity 1
MP. Metabolic Pathways
GL. Mechanisms of Glycolysis
TC. Mechanisms of the TCA Cycle
FA. Mechanisms of Fatty Acid Metabolism
2. Lumen Learning Glucose Metabolism
Microbial Metabolism
1. Lumen Learning, Microbiology, Metabolic Biological Pathways
2. P. Jurtshuk, Chapter 4: Bacterial Metabolism, in Medical Microbiology, S. Baron, Ed., 4th Edition, Galveston (TX): University of Texas Medical Branch at Galveston; 1996.
Acetic Acid Bacteria and Vinegar
1. Mamlouk and Gullo, Acetic Acid Bacteria, Indian J. Microbiology, 2013, 53(4), 377-384
2. Mas, et. al., Acetic Acid Bacteria and the Production and Quality of Wine Vinegar, Scientific World Journal, 2014, 2014, 1-6.
3. Christopher Anthony, Quinoprotein Catalyzed Reactions, Biochem J., 1996, 320, 697-711
4. Gómez-Manzo, et. al., The Oxidative Fermentation of Ethanol, Int J Mol Sci. 2015, 16(1), 1293–1311
Carbohydrates
1. Saylor, Ch 16. Carbohydrates
2. Khan Academy, Carbohydrates
Fermented Vegetables
1. Pérez-Díaz IM, Breidt F, Buescher RW, Arroyo-Lopez FN, Jimenez-Diaz R, Bautista-Gallego J, Garrido-Fernandez A, Yoon S, Johanningsmeier SD. 2014. Chapter 51: Fermented and Acidified Vegetables In: Pouch Downes F, Ito KA, editors. Compendium of Methods for the Microbiological Examination of Foods, 5th Ed. American Public Health Association.
2. Franco W, Johanningsmeier SD, Lu J, Demo J, Wilson E, Moeller L. 2016 Chapter 7: Cucumber fermentation In: Paramithiotis, S., Editor. Lactic Acid Fermentation of Fruits and Vegetables. Boca Raton, FL: CRC Press. pp 107-155.
3. Fleming HP, McFeeters RF. 1985. Residual sugars and fermentation products in raw and finished commercial sauerkraut In 1984 Sauerkraut Seminar, N. Y. State Agric. Expt. Sta. Special Report No. 56:25-29.
4. Johanningsmeier, et. al. Chemical and Sensory Properties of Sauerkraut J. Food Sci., 2005, 70(5), 343-349.
Cheese
1. University of Guelph, Cheese Making Technology eBook
Cheese - the short version
Cheese Families
Cultures
Milk Structures & Coagulation Processes
2. Simon Cotton, Education in Chemistry, Royal Society of Chemistry, Really Cheesy Chemistry
3. Propionic Acid: H. Hettinga and G. W. Reinbold, The Propionic Acid Bacteria: A Review, Journal of Milk and Food Technology, 1972, 35(6), 358-372.
4. H. Falentin, S. Deutsch, et. al. Propionic Acid Fermentation, PLOS One, 2010 https://doi.org/10.1371/journal.pone.0011748
Yogurt
1. A Zourari, Jp Accolas, Mj Desmazeaud. Metabolism and biochemical characteristics of yogurt bacteria. A review. Le Lait, INRA Editions, 1992, 72 (1), pp.1-34.
Bread
1. Brewer’s Journal, Science/Maillard Reaction
2. Struyf, et. al. Bread Dough and Baker's Yeast: An Uplifting Synergy, Comprehensive Reviews in Food Science and Food Safety, 2017, 16, 850-867.
3. Guy Crosby, The Cooking Science Guy, Explaining Gluten
Beer
1. John Palmer, How to Brew 1st Edition
2. Bokulich and Bamforth, Microbiology of Malting and Brewing, Microbiol Mol Biol Rev. 2013, 77(2), 157–172.
3. Holt, et. al, The Molecular Biology of Fruity and Floral Aromas in Beer and Other Alcoholic Beverages, FEMS Microbiology Reviews, 2019, 43, 193–222
4. Craft Beer.com Beer Styles Study Guide (also available as .pdf download on their site)
Cider
1. Andrew Lea, The Science of Cidermaking
2. Cousin, et. al., Microorganisms in Fermented Apple Beverages: Current Knowledge and Future Directions Microorganisms, 2017, 5(3), 39.
3. Cox and Henick-Kling, Chemiosmotic Energy from Malolactic Fermentation, J. Bacteriol. 1989, 5750-5752
Wine
1. A list of varietals (and pronunciations) is available from J. Henderson, Santa Rosa Junior College.
2. The Wine Spectator has an article by J. Laube and J. Molesworth on Varietal Characteristics.
3. Niculescu, Paun, and Ionete, The Evolution of Polyphenols from Must to Wine, In Grapes and Wine, A. M. Jordão, Ed., 2018, InTechOpen.
4. Garrido & Borges, Wine and Grape Polyphenols, Food Research International, 2013, 54, 1844–1858
5. Chantal Ghanam, Study of the Impact of Oenological Processes on the Phenolic Composition of Wines, Thesis, Université de Toulouse.
6. Casassa, Flavonoid Phenolics in Red Winemaking In Grapes and Wine, A. M. Jordão, Ed., 2018, InTechOpen.
7. Dangles & Fenger, The Chemical Reactivity of Anthocyanins, Molecules, 2018, 23(8), 1970-1993.
8. He, et. al., Anthocyanins and Their Variation in Red Wines, Molecules, 2012, 17(2), 1483-1519.
9. Goold, et. al. Yeast's balancing act between ethanol and glycerol production in low-alcohol wines, Microbial Biotechnology 2017, 10(2), 1-15.
10. J. Harbertson, A Guide to the Fining of Wine, Washington State University
11. E.J. Bartowsky, Bacterial Spoilage of Wine, Letters in Applied Microbiology, 2009, 48, 149–156.
12. Belda, et. al., Microbial Contribution to Wine Aroma, Molecules 2017, 22(2), 189
Distilled Spirits
1. Artisanal Distilling, A Guide for Small Distilleries, Kris Berglund
2. Coldea, Mudura & Socaciu, Chapter 6: Advances in Distilled Beverages Authenticity and Quality Testing, In Ideas and Applications Toward Sample Preparation for Food and Beverage Analysis, M. Stauffer, Ed., IntechOpen, 2017.
3. N. Spaho, Ch 6: Distillation Techniques in the Fruit Spirits Production, In Distillation – Innovative Applications and Modeling, M. Mendes, Ed., IntechOpen, 2017.
4. S. Canas, Phenolic Composition and Related Properties of Aged Wine Spirits: Influence of Barrel Characteristics. A Review, Beverages, 2017, 3(4), 55-77.
Fermentation
Exercise \(1\)
• Define Fermentation:
• List as many uses of fermentation in modern food production as you can:
• Compare your list to Wikipedia List of Fermented Foods. Were there any surprises?
Fermentation Microbes
Exercise \(2\)
• Define prokaryotes and eukaryotes:
• Define gram positive vs gram negative bacteria:
• Define filamentous fungus vs yeast:
We will be talking about several fermentation microbes this semester. Review this complete list of microbes used in fermentation of food.
Exercise \(3\)
This is a sampling of key species. Define each as a prokaryote/bacterium or eukaryote/yeast/fungus
• Pseudomonas
• Candida albicans
• Saccharomyces
• Brettanomyces
• Lactobacillus
• Leuconostoc
• Lactococcus
• Streptococcus
• Penicillium
• Tetragenococcus
• Staphylococcus
• Gluconacetobacter
• Acetobacter
• Brachybacterium
1.03: Fermentation Paper
Topics in Biochemistry: Fermentation
Fermentation Paper
Step 1: Choose a Topic
You will write a research paper explaining the production of a fermented product not discussed in class or expanding on a covered topic. There must be significant chemistry/biochemistry in your paper. Additionally, there will be a comparison of the use or production in the US vs another country.
Potential Topics for Review Article on Fermentation:
• Meat preservation
• Bletting of fruit (beyond ripening)
• Olive Fermentation (effects on oleuropein)
• Kimchee
• Tempeh
• Shalgam juice, hardaliye, or boza (Turkish fermented vegetable and grain beverages)
• Injera (organisms, fermentation, and carbohydrates in t'eff)
• Miso and Soy
• Distilled alcoholic beverages
• Impacts of Nitrogen/nutrients on fermentation in a specific product
• Impacts of pH on fermentation in beer or wine
• Effect of local water chemistry on brewing or distilling
• Tannin and polyphenolics in beer production
• Megasphaera cerevisiae effects on beer production (H2S formation)
• Hop content on flavor profiles
• Sulfur compounds in beers (production, regulation, flavor profiles)
• 'Head' or foam on beers
• Wheat ales
• Barley wines
• Cask conditioning of beers
• Production of two short branched-chain fatty acids, 2-methylbutanoic acid and 3-methylbutanoic acid, imparting the “cheesy/sweaty” notes in many cheeses.
• Propionic acid fermentation and the distinctive flavor of Swiss cheese
• Mold Fermentations (e.g. roquefort cheese)
• Buttermilk
• Microbe variability in flavors for a specific fermented product
• Lactic Acid Bacteria and the undesirable flavor products in cider such as 'piqûre acroléique’
• Phenolic variation in wine varietals and flavor profiles
• Impact of oxygen on wine (what happens to chemical profile after you open the bottle?)
• Effects of chemical aging on wine
• Champagne and sparkling wines
• Wine (broad topic -- will need a narrower focus)
• Tej: ethiopian honey wine
• Sulfur compounds in wine (production, regulation, flavor profiles)
• Malolactic fermentation in wine. This secondary fermentation process is standard for most red wine production and common for some white grape varieties such as Chardonnay, where it can impart a "buttery" flavor from diacetyl, a byproduct of the reaction.
• Use of additives in wine. Ascorbic Acid, lysozyme, fumaric acid, sorbic acid, DMDC, tannins, gum arabic, colors. How do these impact chemistry and flavor?
• Biological aging of wines. Sherry. Use of 'flor'. Chemical byproducts and pathways involved.
• Astringency. Astringency is an important factor in the sensory perception of beers, ciders, and wines. What compounds are responsible for this sensation and how do they interact with tastebuds on a molecular level?
• Sake
• Tea
• Chocolate
• Coffee
• Kombucha
• Bulk chemical production
• Pharmaceuticals
• Wood-Ljungdahl pathway for biofuel production
• ABE fermentation
• Enzymes needed for Gluten free bread
• FODMAPs (fermentable oligosaccharides, disaccharides, monosaccharides, and polyols) as a cause of IBS and gluten sensitivity -- diets, solutions?
• Propose your own topic
Confirm your topic for your research paper that includes these three key ideas:
1. Thesis statement (Purdue Online Writing Lab Tips for Writing a Thesis Statement)
2. Biochemistry/chemistry content
3. Cultural Comparison
Step 2: Outline the Paper
Write a 1-2 page outline of the literature on your topic. It should be in a typical bulleted or numbered form. See Purdue's Online Writing Lab for more details about writing an outline. This outline should contain an introduction and sufficient background biochemical pathway information, key experimental results, topics for discussion (applications/uses, variations), and a possible direction for cultural comparison essay.
Step 3: Annotated Bibliography
List in your bibliography at least 15 references, 10 of which must be primary references. For each reference, cite it in the appropriate format and write a 2-3 sentence summary of each reference.
Step 4: Literature Review
Complete the background and literature review of your fermentation topic. This section should cover the biochemical pathways involved in your topic. This should be a minimum of five pages.
• Include drawings with structures (in ChemDraw) not clipped from a literature article.
Step 5: Applications Section
This section of the paper should address the applications or uses of your fermentation topic. It should be a complete story with current uses and modifications. This section of the paper should be at least 2-3 pages long.
Some possible topics to cover:
• What food or industrial applications are you exploring?
• Why are people interested in this topic?
• How is this technique or process or food used in US culture?
• What are current concerns/problems with the process?
• How are people attempting to improve this process?
• Is climate change going to affect production?
• Quality control issues?
• Regulatory issues?
• Are there different types of related fermentation products or processes?
Step 6: Cultural Comparisons
Outline or draft of the cultural comparison of your topic.
This last section should be 2-3 pages that look at cultural differences in either the production, process, or use of your topic. This could include cultural differences in consumption or different regulatory processes or production. Compare and contrast differences between at least two countries or cultures. Please use citations to support your ideas.
Step 7: Final Paper
This is your final Fermentation Paper.
There should be three parts:
1. Literature Review (with edits incorporated).
2. Application Section (with edits incorporated).
3. Cultural comparison of your topic (with edits and insights from Amsterdam and Belgium incorporated).
Basic Metabolism Overview
Exercise $1$
• Glycolysis (cytosol) and TCA cycle (mitochondria) convert glucose to high energy molecules: ________ and ___________ and __________.
This is just the beginning of energy production. NADH and FADH2 can be converted to more ATP. Oxidative phosphorylation is a metabolic pathway that transfers energy from NADH to the synthesis of ATP in the mitochondria.
• NADH oxidation occurs over many steps. Why don’t cells do this reaction directly? (Hint: This is a hydride reaction!)
Cellular Locations
Electrons stored in the form of the reduced coenzymes, NADH or FADH2, are passed through a chain of proteins and coenzymes to reduce O2 – the terminal electron acceptor – into H2O.
Exercise $2$
• NADH is formed at what point in metabolism: ___________.
• The TCA cycle occurs in __________.
• This electron transfer of oxidative phosphorylation occurs in ________________.
ATP production
The energy released by electrons flowing through this electron transport chain is used to transport protons to generate a pH gradient across the membrane.
Exercise $3$
• The phosphorylation of ADP to form ATP is [endothermic or exothermic].
• Protons flow back across the membrane to restore equilibrium. This process is [diffusion or active transport ] and can drive a reaction.
Basic Metabolism: Glycolysis
Glucose is metabolized to produce energy (ATP) for the cell with the release of CO2 and H2O as byproducts. Glycolysis is a series of enzyme-catalyzed reactions that break glucose into 2 equivalents of pyruvate. This process (summarized below) is also called the Embden-Meyerhof pathway.
Exercise $4$
• How many ATP are produced in this process? Keep in mind that everything is doubled after the 6-carbon glucose is cleaved into two 3-carbon units.
• How many ATP are consumed?
• Glycolysis results in the net formation of:
• ______ NADH
• ______ ATP
• ______ H2O
• Is glycolysis an uphill or downhill process? (i.e. exothermic or endothermic?)
Assume all reactions take place within an enzyme.
Glucose is first phosphorylated at the hydroxyl group on C6 by reaction with ATP.
Exercise $5$
• Propose a mechanism for this reaction.
• ATP is not that reactive on its own. Why?
• Explain why a phosphate ester is a good electrophile when the Mg+2 is around.
Glucose-6-phosphate is isomerized to fructose-6-phosphate in the next step. The glucose-fructose interconversion is a multistep process whose details are not yet fully understood.
It begins with opening of the hemiacetal to an open-chain aldehyde.
Exercise $6$
• Propose a mechanism for this reaction.
The open-chain aldehyde undergoes keto-enol tautomerization to the enediol which is further tautomerized to a different keto form.
Exercise $7$
• Looking at the structures of the sugars, propose a mechanism for this reaction.
Cyclization of the open-chain hydroxy ketone gives fructose (hemiacetal).
Exercise $8$
• Show a mechanism.
• Predict the product.
Fructose-6-phosphate is then converted to fructose 1,6-bisphosphate which is subsequently cleaved into two three-carbon compounds through a retro-aldol.
Review: aldol reaction
Exercise $9$
• On the aldol reaction above,
1. Put a circle around the nucleophile
2. Put a box around the electrophile in your starting materials
3. Highlight the bond that is formed (broken in the retro reaction)
Retro-Aldol
If the reaction is driven to starting materials (retro-aldol), then the reaction will favor the starting materials.
Exercise $10$
• Draw the mechanism for the retro-aldol when starting with fructose 1,6-bisphosphate.
• Predict the two products of this retro aldol reaction.
This cleavage actually proceeds through an imine. Fructose 1,6-bisphosphate first reacts with the amino group of a lysine residue of the enzyme.
Exercise $11$
• Draw a mechanism for the formation of the imine.
The imine can then do a ‘retro-Stork enamine’ reaction (similar to the retro-aldol).
Review: Stork enamine reaction (an aldol with the enamine replacing the enolate anion as the nucleophile).
Exercise $12$
• On the enamine reaction above,
1. Put a circle around the nucleophile
2. Put a box around the electrophile in your starting materials
3. Highlight the bond that is formed (broken in the retro reaction)
Retro-Stork enamine
If the reaction is driven to starting materials (retro-Stork enamine), then the reaction will favor the enamine and aldol starting materials.
Exercise $13$
• Predict the two products formed.
The products of the retro-Stork enamine are the enamine of dihydroxyacetone phosphate and glyceraldehyde 3-phosphate (shown below).
Exercise $14$
• Propose a mechanism for the conversion of the enamine of dihydroxyacetone phosphate to a second molecule of glyceraldehyde 3-phosphate.
Glyceraldehyde 3-phosphate is oxidized and phosphorylated to 1,3-bisphosphoglycerate.
Exercise $15$
• Show the mechanisms for this transformation.
• What is the functional group formed in 1,3-bisphosphoglycerate?
• Predict the reactivity of this carbonyl.
Phosphoglycerate kinase catalyzes the transfer of a phosphoryl group from 1,3-bisphosphoglycerate to ADP forming ATP and 3-phosphoglycerate.
Exercise $16$
• Propose a mechanism for this transformation.
3-Phosphoglycerate is converted to phosphoenolpyruvate (PEP) by transfer of the phosphate to the 2-position (giving 2-phosphoglycerate) followed by dehydration.
In the last step of the metabolic breakdown of sugars (glycolysis), an enol phosphate is converted to pyruvic acid (shown below). The pyruvic acid is then converted to acetyl CoA, which is the beginning of the TCA cycle.
Exercise $17$
• Draw a mechanism for the conversion of the enol phosphate to pyruvic acid.
• What drives this reaction? (ie what factors make this reaction energetically favorable?)
Basic Metabolism: TCA Cycle
Hans Krebs and Fritz Lipmann shared the Nobel Prize for Physiology and Medicine in 1953 for their work on elucidating the Krebs cycle and coenzyme A. The Krebs Cycle [or tricarboxylic acid (TCA) or citric acid cycle] plays a central role in the metabolism of glucose to produce energy (ATP). The TCA cycle results ultimately in the oxidation of acetic acid to two molecules of carbon dioxide.
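For reference (a commonly quoted summary, not one of the worksheet questions), the net transformation per acetyl group entering the cycle is often written as:
Acetyl-CoA + 3 NAD+ + FAD + GDP + Pi + 2 H2O → 2 CO2 + 3 NADH + 3 H+ + FADH2 + GTP + CoA-SH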
Pyruvate (end product of glycolysis) must be converted to acetyl CoA to enter the TCA cycle.
This process begins with the formation of a thiol ester from pyruvate.
Exercise $18$
• Draw reaction mechanisms for the steps shown below.
• In his experiments that led to the elucidation of the TCA cycle, Hans Krebs added malonate (shown below) to extracts of pigeon flight muscle. The malonate could not be used as a substrate to replace pyruvate in the pathway above. Why can’t malonate be used? (Think of the carbonyl hill).
At this point, co-enzyme A reacts with the thiol ester (formed in question on previous page) to form acetyl CoA (shown below). To help keep track of the sulfurs, one is in a box and one is in a circle.
Exercise $19$
• Draw the mechanism for this reaction.
• The thiol ester formed in the last step of the reaction above is an ‘activated carbonyl’ (i.e. a better electrophile). Explain why the thiol ester is a better electrophile than the carboxylate anion.
• In an equivalent organic chemistry reaction, what would you use as an ‘activated carbonyl’?
In the next step Acetyl CoA reacts with oxaloacetate to form citryl CoA.
Exercise $20$
• Propose a mechanism for this reaction.
• In a similar reaction in organic chemistry, what would be the product for the reaction below? What type of reaction is this?
Citryl CoA is then hydrolyzed to citrate.
Exercise $21$
• Propose a reaction mechanism for this reaction.
Citrate is converted to isocitrate through two steps.
Exercise $22$
• Label all chiral centers with R or S.
• What changed in the conversion of citrate to isocitrate?
Isocitrate is oxidized to oxalosuccinate with NAD+.
Exercise $23$
• Draw the mechanism (and the other product) for this reaction.
Oxalosuccinate loses CO2 to give α-ketoglutarate, which is transformed to succinyl CoA in a multistep process analogous to the transformation of pyruvate to acetyl CoA that we saw in the first step.
Exercise $24$
• Draw the transformation starting with the reaction with TPP ylide.
Succinyl CoA is hydrolyzed to succinate and is coupled with the phosphorylation of guanosine diphosphate (GDP) to give guanosine triphosphate (GTP).
Exercise $25$
• Draw the mechanism for this reaction.
Basic Metabolism: Oxidative Phosphorylation
Electron Transfer in Complex I
Complex I is located in the inner mitochondrial membrane in eukaryotes. The electrons from NADH (produced in the TCA cycle) begin to be shuttled through small steps to capture the energy.
This section will examine the mechanisms of electron transfer by the peripheral domain, proton transfer by the membrane domain and how their coupling can drive proton transport.
The net reaction of Complex I is the oxidation of NADH and the reduction of ubiquinone.
Net reaction:
$\ce{NADH + H^+ + UQ \rightarrow NAD^+ + UQH2}$
Exercise $26$
• How many protons are moved across the membrane for each cycle of Complex I?
• Is this active transport or passive diffusion?
• If this is active, what is fueling this transport?
• Is this with or against the concentration gradient? (i.e. antiporter or synporter?)
Complex II: Overview
Complex II (aka succinate dehydrogenase from the TCA cycle) oxidizes succinate (O2CCH2CH2CO2) to fumarate (trans-O2CCH=CHCO2).
Complex II also has a cascade of electron transfers. When succinate is converted to fumarate, the electrons are passed through a new cascade to eventually reduce UQ (just like Complex I!)
$\ce{succinate \rightarrow fumarate + 2H+ + 2e-}$
$\ce{UQ + 2H+ + 2e- \rightarrow UQH2}$
Exercise $27$
• Write the net reaction for the work of Complex II.
• The reaction catalyzed in Complex II has a very small ΔG°. Is it sufficient to power an antiporter channel?
Complex III: Overview
Complex III (sometimes called cytochrome bc1 complex) has two main substrates: cytochrome c and UQH2. The structure of this complex was determined by Johann Deisenhofer (Nobel Prize for a photosynthetic reaction center – we will see this soon).
The role of complex III is to transfer the electrons from UQH2 to cytochrome c.
Exercise $28$
• Complete the equation for the redox reactions of complex III.
___ UQH2 + 1 UQ + 2 H+ + ___ cyt c+3 $\ce{\rightarrow}$ ___ UQH2 + ___ UQ + 4 H+ + ___ cyt c+2
• There are two H+ coming from the mitochondrial matrix but _____ H+ are transported into the inter-membrane space
Complex III to Complex IV: Cytochrome C as a mobile carrier
Exercise $29$
• Circle the mobile electron carriers in the picture above.
Complex IV Overview
Another complex whose goal is to move electrons and protons! This is the big step, since it is the main site for dioxygen utilization in all aerobic organisms. The structure of complex IV is shown in the left figure and to the right in a diagram taken from the KEGG pathways (with permission).
Exercise $30$
• Complete the net equation for the redox reactions of complex IV.
___ cyt c+2 + 1 O2 + 8 H+ $\ce{\rightarrow}$ ___ H2O + 4 H+ + ___ cyt c+3
• How many protons are being “pumped” into the intermembrane space? _________
• How many electrons are needed to balance this equation? _________
• What are the initial and final “mobile” carriers of electrons?
Complex V: ATP Synthase
Neglecting Complex II, the overall reaction of the mitochondrial chain, per 2e transferred, can be written as:
$\ce{NADH + H+ + ½ O2 + 10 H+("in") \rightarrow NAD+ + H2O + 10 H+("out")}$, $E° = +1.135$ V
Exercise $31$
Each two e (from 1 NADH molecule) through the electron transport chain results in the net transfer of 10 protons across the membrane:
• Complex I: ________ H+
• Complex III: ________ H+
• Complex IV: ________ H+
Protons will diffuse from an area of high proton concentration to an area of lower proton concentration. Peter Mitchell received the Nobel Prize in 1978 for his proposal that an electrochemical concentration gradient of protons across a membrane could be harnessed to make ATP. The proton gradient created by the electron transport chain provides enough energy to synthesize about 2.5 molecules of ATP through a process called chemiosmosis.
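The ~2.5 figure follows directly from this proton bookkeeping if one assumes, as is commonly cited, that roughly 4 H+ must flow back per ATP (about 3 through ATP synthase plus ~1 for transport of ADP and phosphate into the matrix):
$\dfrac{10 \text{ H}^+ \text{ pumped per NADH}}{\sim 4 \text{ H}^+ \text{ consumed per ATP}} \approx 2.5 \text{ ATP per NADH} \nonumber$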
Exercise $32$
• This proton flow is driven by two forces (fill in the blanks):
1. Diffusion force caused by a concentration gradient. All particles tend to move from __________ concentration to __________ concentration.
2. Electrostatic force caused by an electrical potential gradient. An electrical gradient is a consequence of charge separation. Protons will be attracted to ___________.
ATP synthase is an important enzyme that utilizes the proton gradient to drive the synthesis of ATP.
Electric Potential Drives Motor
The rotor is not locked in a fixed position in the center of the bilayer and the rotor sites switch between the empty and the ion bound states. When driving ATP synthesis, an ion arrives from the periplasm and binds at an empty rotor site.
The positive stator charge (Arg227) plays a fundamental role in the function of the F0 motor.
Exercise $33$
• What is the charge of the empty binding sites:
1. when no ion is bound?
2. when a Na+/H+ ion is binding?
• When an ion enters the rotor site from the stator channel, the net charge is reduced, thus [ increasing /decreasing ] the attraction to the stator. Now the rotor is able to move through the hydrophobic part of the stator, while the arginine attracts the next empty rotor site.
• The empty site (charge = ________) is electrostatically attracted by the stator (charge = ________) and guided into the next slot.
• The rotor site is occupied until it reaches the stator from the opposite side, where it encounters the positive stator charge, causing dissociation of the ion. Why? Consider diffusion gradients and charges.
Electrical Power Fuels Rotary ATP Synthase
Exercise $34$
• Fill in the blanks on the following summary of ATP Synthase:
During ATP synthesis, the ____________ gradient fuels the membrane-embedded F0 motor to rotate the central stalk. This rotation causes sequential binding changes at the peripheral F1 domain so that one catalytic site binds ________ and phosphate, the second makes tightly bound ATP, and the third step ____________.
In anaerobically growing bacteria, when the respiratory enzymes are not active, the F1 motor can hydrolyze ATP.
• Which direction will the pump turn in these conditions?
• What will happen to the F0 motor? And the H+ gradient?
Sources
Dimroth, Operation of the F0 motor of the ATP synthase, Biochimica et Biophysica Acta (BBA) - Bioenergetics, 2000, 1458, 374-386.
Microbial Metabolism: Bacterial Pathways
Oxygen (O2) is essential for organisms growing by aerobic respiration (previous worksheet). Many organisms are unable to carry out aerobic respiration because of one or more of the following circumstances:
1. The cell lacks a sufficient amount of any final electron acceptor (such as O2) to carry out cellular respiration.
2. The cell lacks genes to make appropriate complexes and electron carriers in the electron transport system (oxidative phosphorylation).
3. The cell lacks genes to make one or more enzymes in the TCA cycle.
Fermentation usually refers to anaerobic processes in which organisms do not use molecular oxygen in respiration. Some microbes are facultative fermenters; they contain all the genes required to use either aerobic or anaerobic respiration pathways and they will use aerobic respiration unless there is no oxygen available. However, many prokaryotes are permanently incapable of respiration, even in the presence of oxygen because they lack enzymes or complexes to complete either TCA cycle or electron transport. These are obligate anaerobes.
Lactic Acid Fermentation
One important fermentation process is lactic acid fermentation. This process is common in lactobacilli bacteria (and many others). If respiration does not occur through oxidative phosphorylation, NADH must be re-oxidized to NAD+ for reuse in glycolysis through the EMP pathway (covered earlier).
Exercise \(1\)
• How many NAD+ are created in glycolysis? _________
• Draw the arrows for the glycolysis reaction of NAD+ to NADH.
Exercise \(2\)
NAD+ is a catalyst in these reactions.
• Catalysts must change the activation energy. Normally the barrier to breaking a C-H bond is very high. How does this enzyme-NAD+ complex achieve that?
• Catalysts must be regenerated. When aerobic respiration is not available, NADH is converted back to NAD+ by reacting with pyruvate (see below).
Facultative microbes, particularly bacteria, often use pyruvate as a final electron acceptor.
• Draw a curved arrow mechanism and predict products for this reaction.
Lactic acid fermentation regenerates NAD+ but does not directly produce additional ATP.
Exercise \(3\)
• Thus, organisms carrying out fermentation produce a maximum of _____ ATP molecules per glucose during glycolysis.
• When there is sufficient O2, facultative microbes will preferentially switch to cellular respiration for glucose metabolism. Explain why.
• Bacteria creating lactic acid as a side product create a ____________ [acidic / basic ] environment.
• The acidity of lactic acid impedes biological processes. This can be beneficial to the fermenting organism as it drives out competitors. Humans discovered that foods prepared with lactic acid fermentation will have a longer shelf-life. Explain in your own words.
Homolactic vs Heterolactic Fermentation
When lactic acid is the only fermentation product, the process is said to be homolactic fermentation; such is the case for Lactobacillus delbrueckii and S. thermophiles used in yogurt production.
However, many bacteria perform heterolactic fermentation, using the pentose phosphate pathway to produce a mixture of lactic acid and ethanol. More detail on this pathway follows. One important heterolactic fermenter is Leuconostoc mesenteroides, which is used for souring vegetables like cucumbers and cabbage, producing pickles and sauerkraut, respectively.
Pentose Phosphate Pathway
The pentose phosphate pathway has three primary roles in metabolism (human and prokaryotic).
1. Production of ribose 5-phosphate (R5P) for synthesis of nucleotides and nucleic acids.
2. Production of erythrose 4-phosphate (E4P) for synthesis of aromatic amino acids.
3. The PPP creates NADPH (up to 60% of NADPH production comes from this pathway).
There are two phases to these pathways: oxidative phase and non-oxidative phase.
Exercise \(5\)
• Add curved arrows and missing biological reagents to this schematic for the oxidative phase of pentose phosphate pathway
Ribulose-5-phosphate (the product of the oxidative stage) is the precursor to the sugar that makes up DNA and RNA.
Exercise \(6\)
• How many NADPH are produced for each glucose in this phase of the pathway?
In the non-oxidative phase, there are different options that depend on the cell’s needs. The ribose-5-phosphate from step 3 is combined with another molecule of ribose-5-phosphate to make one 10-carbon molecule. Excess ribose-5-phosphate, which may not be needed for nucleotide biosynthesis, is converted into other sugars that can be used by the cell for metabolism.
Ribulose-5-phosphate (the product of the oxidative stage) is the precursor to the sugar that makes up DNA and RNA.
Exercise \(7\)
• Propose a mechanism for this conversion to the cyclic ribose-5-phosphate (three steps!).
Of interest for heterolactic fermentation, ribose-5-phosphate is converted to glyceraldehyde-3-phosphate, which enters the glycolysis pathway to be converted to pyruvate and then lactic acid.
The first step is a simple epimerization alpha to the carbonyl to convert ribulose-5-phosphate to xylulose-5-phosphate.
Exercise \(8\)
Propose a mechanism for this interconversion.
The second step is a reaction of xylulose-5-phosphate with ribose-5-phosphate to form a 7-carbon sugar and glyceraldehyde-3-phosphate.
Exercise \(9\)
Draw curved arrows for this mechanism.
The glyceraldehyde-3-phosphate is then converted to lactic acid. This is a repeat of glycolysis and homolactic acid fermentation.
Exercise \(10\)
Draw out the pathway to convert glyceraldehyde-3-phosphate to lactic acid.
The next steps follow a similar pathway to produce sugars of other lengths and more glyceraldehyde-3-phosphate.
In heterolactic fermentation, xylulose-5-phosphate can also be converted directly to glyceraldehyde-3-phosphate and acetyl phosphate.
Exercise \(11\)
Draw curved arrows for this mechanism.
Acetyl phosphate can then be converted to ethanol. Suggest some steps for this conversion (HINT: Look at the ethanol fermentation pathway in yeast).
Entner-Doudoroff (ED) Glycolytic Pathway
Some bacteria often utilize the Entner-Doudoroff (ED) Glycolytic Pathway rather than the classic glycolysis pathway.
Exercise \(12\)
• How does the ED pathway differ from classic Embden-Meyerhof glycolysis pathway? Be specific.
• How does the ED pathway differ from pentose phosphate pathway? Be specific.
• The Entner–Doudoroff pathway also has a net yield of ________ ATP for every glucose molecule processed, as well as 1 NADH and 1 NADPH.
• Embden-Meyerhof glycolysis has a net yield of _________ ATP and ______ NADH for every glucose molecule processed.
Sources
du Toit, Englebrecht, Lerm, & Krieger-Weber, Lactobacillus: The Next Generation of Malolactic Acid Fermentation Starter Cultures, Food Bioprocess. Technol. 2011, 4, 876-906.
Vinegar Production
Acetic Acid Fermentation
The first description of microbial vinegar fermentation was made by Pasteur in 1862. He recognized that vinegar was produced by a living organism.
Overview of Acetic Acid Metabolism
Acetic acid bacteria (AAB), such as those of the genus Acetobacter, are a group of Gram-negative bacteria which oxidize sugars or ethanol and produce acetic acid during fermentation. There are several different genera in the family Acetobacteraceae. AAB are found in sugary, alcoholic and acidic niches such as fruits, flowers and particularly fermented beverages. Given sufficient oxygen, these bacteria produce acetic acid (vinegar) from ethanol.
Several species of acetic acid bacteria are used in industry for production of certain foods and chemicals. Commonly used feeds include apple cider, wine and fermented grain mashes. AAB are also involved in the production of other foods such as cocoa powder and kombucha. However, they can also be considered spoilage organisms.
Exercise $1$
List 2-3 places/times that acetic acid bacteria would be considered spoilage organisms.
Location of Ethanol Oxidations
AAB make acetic acid by two successive catalytic reactions of the alcohol dehydrogenase (ADH) and a membrane-bound aldehyde dehydrogenase (ALDH) that are bound to the periplasmic side of the cytoplasmic membrane.
Ethanol, acetaldehyde, and acetic acid can be quite toxic for living organisms. However, AAB are able to live in both alcoholic and acid media because of a few adaptations.
1. Location of the alcohol dehydrogenases. Are the toxic compounds ever entering the cell cytoplasm?
2. The ALDH and ADH form one complex in many AAB species, so acetaldehyde is never released.
3. Acetobacter have H+ pumps that actively remove protons from the cells.
4. There are changes in the composition of membrane phospholipids to help maintain membrane fluidity at low pH.
5. Many cellular proteins show increased negative surface charge that stabilizes them at low pH.
Location of Acetic Acid Metabolism with PQQ
AAB are able to oxidize ethanol to acetic acid using membrane-bound ADH and ALDH complexes with a PQQ cofactor.
This enzyme is capable of oxidizing a few primary alcohols (C2 to C6) but not methanol or secondary alcohols.
PQQ Reaction Mechanisms:
Exercise $2$
Add a curved arrow mechanism for the oxidation of ethanol to acetaldehyde using this PQQ cofactor.
How many electrons are transferred from the ethanol molecule to the PQQ in this step?
PQQ Reaction Mechanisms
Exercise $3$
In the second step, acetaldehyde forms a hydrate. Show the mechanism for this step.
The acetaldehyde hydrate then reacts with another PQQ to form acetic acid. Propose a curved arrow mechanism for this transformation.
• How many electrons are transferred from the acetaldehyde hydrate molecule to the PQQ molecule?
• How many total electrons are involved in this two-step transformation?
PQQ tied to Electron Transport Process
The electrons are then transferred to ubiquinone (UQ), which is tightly linked to the respiratory chain (oxidative phosphorylation).
Exercise $4$
• For every EtOH molecule that is oxidized twice to Acetic Acid:
1. How many electrons move through the electron transport chain?
2. How oxygen atoms are converted to water?
3. How many protons are pumped across the membrane?
4. Assuming that approximately 3-4 protons yield 1 ATP, how many ATP produced?
• Why is this process considered to require oxygen? i.e. Why is this organism an obligate aerobe?
• What is the purpose of converting ethanol to acetic acid for these bacteria?
Acetic Acid Assimilation
Some Acetobacter and Gluconacetobacter strains can metabolize acetic acid to carbon dioxide and water using Krebs cycle enzymes. In vinegar, for instance, Acetobacter species exhibit a biphasic growth curve, where the first phase corresponds to EtOH oxidation with AcOH production. The second spike in growth is due to ‘acetic acid assimilation’, wherein the bacteria move the ethanol and/or acetic acid into the cytoplasm and metabolize it using the TCA cycle and oxidative phosphorylation.
Exercise $5$
• What is advantage of using acetic acid assimilation?
• Why do the bacteria not use this pathway from the beginning?
• In vinegar fermentation, producers attempt to prevent this process. Explain why.
Mechanisms of NAD+ Driven Dehydrogenases in Acetobacter
The overall chemical reaction facilitated by these bacteria is:
$\ce{C2H5OH + O2 → CH3CHO → CH3COOH + H2O} \nonumber$
Exercise $6$
Propose a mechanism for the conversion of ethanol to acetaldehyde (reverse of the reduction done by yeast) utilizing NAD+.
In the second step, acetaldehyde forms a hydrate which is then converted to acetic acid.
Exercise $7$
Propose a mechanism for the conversion of acetaldehyde to acetic acid utilizing NAD+.
In the third step, acetic acid is converted to acetyl CoA for use in the TCA Cycle.
Exercise $8$
Propose the missing biological ‘reagents’ for this conversion.
Sources
1. Christopher Anthony, Quinoprotein Catalyzed Reactions, Biochem J., 1996, 320, 697-711
2. Gómez-Manzo, et. al., The Oxidative Fermentation of Ethanol, Int J Mol Sci. 2015, 16(1), 1293–1311.
3. Mamlouk and Gullo, Acetic Acid Bacteria: Physiology and Carbon Sources Oxidation, Indian J. Microbiology, 2013, 53(4) 377-384.
4. Mas, et. al., Acetic Acid Bacteria and the Production and Quality of Wine Vinegar, Scientific World Journal, 2014, 2014, 1-6.
Fermentation Carbohydrates
Carbohydrates are the most abundant biomolecules on earth and are widely used by organisms for structural and energy-storage purposes. In particular, glucose is the most commonly used monosaccharide, which is why all of the pathways that we have covered start with glucose.
However, many microorganisms are able to utilize more complex carbohydrates for energy.
Let’s look at the structures of different carbohydrates and their use in microbial metabolism.
Monosaccharides
Monosaccharides are the building blocks (monomers) for the synthesis of polymers. These sugars are classified by the length of the chain and the position of the carbonyl.
Exercise $1$
Glucose and Ribose are shown below.
• They are both aldoses because the carbonyl is a [ketone/ aldehyde].
• One is a hexose and one is a pentose. Label each.
Exercise $2$
Glyceraldehyde and dihydroxyacetone are shown below.
• One is an aldose and one is a ketose. Label each.
• They are both ______________oses
Monosaccharides of four or more carbon atoms are typically more stable when they adopt cyclic, or ring, structures. This cyclization is a nucleophilic addition that results in a hemiacetal.
Exercise $3$
• Draw arrows for this forward reaction.
• Draw arrows for the reaction back to the straight chain.
Stereochemistry of Cyclic Sugars
The hemiacetal carbon formed when the sugar cyclizes is a new chiral center. Two possible orientations can be formed.
Exercise $4$
• Circle the new chiral center on the two possible isomers ($\alpha$-glucose and $\beta$-glucose) below. This is called the anomeric carbon.
Disaccharides
Disaccharides are carbohydrates composed of two monosaccharide units that are joined by a carbon–oxygen-carbon linkage known as a glycosidic linkage.
Three common disaccharides are the grain sugar maltose, made of two glucose molecules; the milk sugar lactose, made of a galactose and a glucose molecule; and the table sugar sucrose, made of a glucose and a fructose molecule.
Exercise $5$
• Circle the disaccharide linkage in each of these disaccharides from the table below.
Maltose Lactose Sucrose
There are different types of glycosidic linkages. They are characterized by the numbering of the alcohols that are linked in the ether and by the anomer of the sugar.
Exercise $6$
• For the maltose shown here, the sugars are [ $\alpha / \beta$ ] anomers. Circle one.
• Which alcohols are linked? Numbering proceeds around the ring starting with the anomeric carbon.
Thus, this is alpha-1,4-maltose.
Exercise $7$
• Draw $\beta$-1,4-maltose, where the glucose on the right has isomerized.
• What type of glycosidic linkage is present in this lactose isomer?
• What type of glycosidic linkage is present in sucrose?
The human body is unable to metabolize maltose or any other disaccharide directly from the diet because the molecules are too large to pass through the cell membranes of the intestinal wall. Therefore, an ingested disaccharide must first be broken down by hydrolysis into its two constituent monosaccharide units. In the body, such hydrolysis reactions are catalyzed by enzymes such as maltase or lactase.
** This will be important in upcoming discussions of beer, cheese, and yogurt production!
Polysaccharides
Polysaccharides are very large polymers composed of hundreds to thousands of monosaccharides. These structures are used for energy storage or, in the case of cellulose, structural components. Starch is a mixture of two polysaccharides and is an important component of grains (wheat, rice, barley, etc.). This will again be important in bread and beer fermentations. These two polymers are amylose and amylopectin.
Amylose is a straight chain polysaccharide (shown below).
Exercise $8$
• What monomer is present?
• What type of linkage is present?
• This structure becomes a spiral. What IMF and geometry effects might cause the structure to take on this shape?
• Amylose is not water soluble even though the monosaccharides are soluble. Suggest a reason why this is not soluble in water. Consider how easily the water might be able to interact with the spiral structure.
• Amylase cleaves only internal alpha (1-4) glycosidic bonds. Which disaccharide will be formed when amylose is hydrolyzed by amylase?
Amylopectin is a branched-chain polysaccharide. (cartoon shown below)
Exercise $9$
• What monomer makes up this polymer?
• There are two types of linkage present. What are they?
• Amylopectin is water-soluble but amylose is not. Show how water might interact with amylopectin above (IMF) so that it will dissolve.
• The branched linkages are hydrolyzed by isoamylase, while the 1-4 linkages are hydrolyzed by amylase. What is the disaccharide produced?
• Amylopectin is more easily digested than amylose. This is due to the packing of amylose. Explain.
HW questions:
Exercise $10$
Another Polysaccharide: Cellulose
• Draw cellulose.
• How does it differ from amylose?
• Cellulose is not digestible by mammals (unless they have symbiotic bacteria in their gut). Why?
Pickled Vegetables Production
Vegetables may be preserved by fermentation or acidification. The most common commercial fermented vegetables include cucumbers, cabbage, and olives, but there are many other vegetables that have been used.
Definitions:
• Fermented Vegetables: vegetables that have been preserved with acid-producing microorganisms (additional acid may or may not be added to the process)
• Acidified Vegetables: vegetables that have been preserved the direct addition of acid
• Pickles: generic term that refers to either fermented or acidified vegetables but usually refers to the use of acetic acid as the primary acidifying agent
Typical Process for Vegetable Fermentation:
Vegetable Carbohydrates
Carbohydrates in Vegetables: Simple sugars
Fresh cabbage contains about 4-8% fermentable sugars: glucose, fructose, and sucrose. Cucumbers have much lower amounts of these fermentable sugars.
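A useful point of reference for the exercise below: homolactic fermentation splits each hexose into two lactic acid molecules, so on a mass basis roughly 1 g of fermentable sugar can yield up to about 1 g of lactic acid.

$\underset{180\ \text{g/mol}}{\text{C}_6\text{H}_{12}\text{O}_6} \longrightarrow 2\ \underset{90\ \text{g/mol}}{\text{C}_3\text{H}_6\text{O}_3}$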
Exercise $1$
• Draw these three fermentable sugars.
• Cabbage fermentations reach a much lower pH than cucumber pickles, making sauerkraut more sour than other fermented vegetables. Explain this observation.
There are many complex polysaccharides in vegetables that are not fermentable or easily metabolized. This is often called fiber.
Carbohydrates in Vegetables: Cellulose
Cellulose is a linear chain of thousands of linked D-glucose units.
Exercise $2$
What type of linkages are used in this polysaccharide? Circle the correct designations.
• $\alpha \text{ or } \beta$
• 1-2 1-3 1-4 1-5 1-6 2-4
Carbohydrates in Vegetables: Pectins
Pectin is a polysaccharide made from a mixture of monosaccharides. While many distinct polysaccharides have been identified and characterized within this 'pectic polysaccharide' family, most contain stretches of linear chains of $\alpha$-(1–4)-linked D-galacturonic acid.
Exercise $3$
• Draw a linear chain of $\alpha$-(1–4)-linked D-galacturonic acid.
Brining
Yeast and many other microorganisms are usually present on the surface of raw vegetables. Shredded cabbage or other suitable vegetables are placed in a jar, and salt, either as a solid or as a brine solution, is added so that the vegetable is fully submerged. Mechanical pressure is applied to the cabbage to expel the juice, which contains fermentable sugars and other nutrients suitable for microbial activity.
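For a rough quantitative feel, here is a minimal Python sketch of the salting arithmetic. The 2.25% dry-salt target and 5% brine strength are illustrative assumptions, not values taken from this chapter.

```python
# Rough salting arithmetic for vegetable fermentation.
# The 2.25% dry-salt target and 5% brine strength are illustrative
# assumptions, not prescriptions from this chapter.

def salt_for_dry_salting(vegetable_g: float, target_pct: float = 2.25) -> float:
    """Grams of NaCl to mix with shredded vegetable for a given % by weight."""
    return vegetable_g * target_pct / 100.0

def salt_for_brine(water_g: float, target_pct: float = 5.0) -> float:
    """Grams of NaCl to dissolve in water to make a brine of target % (w/w)."""
    # percent (w/w) = salt / (salt + water) * 100, solved for salt
    return water_g * target_pct / (100.0 - target_pct)

print(f"{salt_for_dry_salting(1000):.1f} g salt per 1 kg shredded cabbage")
print(f"{salt_for_brine(1000):.1f} g salt per 1 kg water for a 5% (w/w) brine")
```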
Salt, primarily NaCl, serves several major roles in the preservation of fermented vegetables:
1. High salt concentration limits the growth of many spoilage organisms
2. Salt helps rupture the membranes, releasing the fermentable sugars into the solution for the bacteria
3. Salt contributes to the flavor of the final pickle
Exercise $4$
In addition, the salt can prevent the pectinolytic or cellulolytic enzymes from working.
• How might salt impact an enzyme on a molecular level? Consider IMF.
• Why would you want to prevent the pectinolytic and cellulolytic enzymes from working? Consider texture of pickled cucumbers.
Fermentation Process
Fermentation organisms
The fermentation of vegetables usually involves naturally occurring lactic acid bacteria (LAB). This is considered to be a wild fermentation as the LAB bacteria are found naturally on the vegetables. At the start, there are many bacteria that colonize the fresh vegetable; these organisms will compete. As the LAB begin to excrete lactic acid, the pH will decrease, and most other organisms will die.
Exercise $5$
• Review: Outline the pathway for the formation of lactic acid.
• Some producers will add acetic acid to the brine at the start. Suggest a reason why.
The first stage of vegetable fermentation involves anaerobic bacteria, Leuconostoc species, which ferment the sugars into lactic acid.
• This is a heterolactic fermentation. What are the other products produced in this process?
• It is important that the fermenting vegetables stay submerged in the brine/acid solution and the system not be exposed to air for the first week. Explain why.
As the pH drops, the environment becomes too acidic for these bacteria to survive and they die out. In the second stage, Lactobacillus species that are better adapted to acidic environments will begin to flourish. Lactobacillus will continue to anaerobically ferment the remaining sugars into lactic acid until the pH reaches about 3.
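A hedged sketch of why the pH bottoms out near 3: treating the brine as a simple lactic acid solution (pKa about 3.86) and assuming roughly 10 g/L of lactic acid (an illustrative value), the weak-acid equilibrium gives a pH in the mid-2s. The buffering capacity of the vegetable itself keeps real ferments somewhat higher.

```python
import math

# Estimate the pH of a dilute lactic acid solution.
# Assumptions: ~10 g/L lactic acid (an illustrative value) and no other
# buffering species -- real sauerkraut brine is buffered by the vegetable,
# so its measured pH sits higher than this simple estimate.

pKa = 3.86                 # lactic acid
Ka = 10 ** (-pKa)
C = 10.0 / 90.08           # mol/L: 10 g/L divided by 90.08 g/mol

# Solve x^2 / (C - x) = Ka for x = [H+] using the quadratic formula
h = (-Ka + math.sqrt(Ka**2 + 4 * Ka * C)) / 2
print(f"[H+] = {h:.2e} M  ->  pH = {-math.log10(h):.2f}")
```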
Exercise $6$
• This is a homolactic fermentation process. What are the products of this type of fermentation?
Fermentation Pathways and Flavor: Mannitol production
In sauerkraut, Leuconostoc mesenteroides converts the vegetable sugars, typically glucose, to lactic and acetic acids and carbon dioxide. Lc. mesenteroides also uses fructose as an electron acceptor, reducing it to mannitol; this reaction contributes to the replenishment of the cells' NAD+ pool.
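One commonly cited overall stoichiometry for this co-fermentation is shown below; it is written here with water so the atoms balance, although in the cell the chemistry proceeds through phosphorylated intermediates.

$\text{glucose} + 2\ \text{fructose} + \text{H}_2\text{O} \longrightarrow \text{lactic acid} + \text{acetic acid} + \text{CO}_2 + 2\ \text{mannitol}$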
Exercise $7$
• Draw a curved arrow mechanism for this process.
• What is the side product formed? How does that help the bacteria?
Given enough time, Lc. mesenteroides will continue to ferment mannitol to lactic acid.
• Why does this take time?
Fermentation Pathways and Flavor: Mannitol as Contributor to Flavor
Sauerkraut consumption has decreased in the US. In taste comparisons of partially fermented European sauerkraut, American sauerkraut, and fully fermented sauerkraut, most consumers preferred the flavors of the partially fermented European sauerkraut. The primary chemical differences were higher levels of remaining sugars, mannitol, and ethanol (probably from post-processing addition of wine). Mannitol is sweet and has a desirable cooling effect that is often used to mask bitter tastes. However, 'partially fermented' sauerkraut can cause problems in bulk storage: the remaining sugars allow spoilage organisms to thrive and produce gas. Fully fermented sauerkraut has no remaining sugars, so it does not need further processing.
Exercise $8$
• If you were a sauerkraut producer in the US, which process would you use? Defend your answer.
Fermentation Pathways and Flavor: Malolactic Fermentation
Many strains of Lc. mesenteroides and Lactobacillus can ferment malic acid (naturally found in vegetables) to lactic acid. The malolactic fermentation (MLF) involves the conversion of malic acid into lactic acid and carbon dioxide. Some LAB convert malic acid in a single step, while others use a multi-step pathway that includes intermediates from the TCA cycle.
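The net reaction is a simple decarboxylation:

$\underset{\text{malic acid}}{\text{C}_4\text{H}_6\text{O}_5} \longrightarrow \underset{\text{lactic acid}}{\text{C}_3\text{H}_6\text{O}_3} + \text{CO}_2$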
Exercise $9$
• Complete the steps in this biochemical pathway to convert malic acid to lactic acid.
• In cucumber fermentation, this is a problem because the ____________ production causes gas bubbles in the cucumbers, softens the pickle, and creates ‘bloaters and floaters’.
• 'Heaving' is a rapid increase in sauerkraut volume resulting in gas entrapment within the sauerkraut and a rise in the brine level in the tank. This is a problem in industrial sauerkraut production and is probably due to malolactic fermentation. Explain.
• Suggest a method for reducing malolactic fermentation in pickle and sauerkraut production.
• On the other hand, malic acid has a harsher and more aggressive flavor than lactic acid. High levels of malic acid decrease the flavor ratings of sauerkraut. How much should the MLF be suppressed?
Sauerkraut Flavor Profiles
Sulfur compounds
Sulfur aromas and flavors are strongly associated with cruciferous vegetables such as cabbage, radishes, kale, and broccoli. S-Methyl cysteine sulfoxide (SMCSO) naturally occurs in large quantities in fresh cabbage.
Sauerkraut flavors are characterized mostly by salty, sour, and sulfur notes. The sulfur character of sauerkraut can lend both desirable flavors, as well as unfavorable aromas and flavors. This is often dependent upon concentration levels.
Many of the compounds (shown below) found in sauerkraut are derived from the enzymatic degradation of SMCSO.
DMTS and MMTSO2 appear to be the most critical compounds for the sauerkraut sulfur flavor.
Caraway-spiced commercial sauerkraut, known to be less sulfurous and milder in flavor than traditional sauerkraut, was found to contain no DMTS, and its level of DMDS was also lower. Caraway seeds appear to remove the precursor to these molecules, methanethiol.
Exercise $10$
• Propose a method for how caraway seeds might reduce the presence of methanethiol
Post-Fermentation
Spices, wines, and other ingredients may be added to the pickles to augment their flavor.
After fermentation and removal from brine storage, cucumbers may be desalted or rinsed to reduce the salt and acid content.
Exercise $11$
• What are some problems associated with decreasing salt or acid content?
Many pickle and sauerkraut products undergo pasteurization in their glass containers before they are sold.
• Why are these products pasteurized?
• What is the downside to pasteurization for vegetables?
Sources
Fleming HP, McFeeters RF. Residual sugars and fermentation products in raw and finished commercial sauerkraut In Sauerkraut Seminar, 1985, N. Y. State Agric. Expt. Sta. Special Report No. 56:25-29.
Johanningsmeier et al., Chemical and Sensory Properties of Sauerkraut, J. Food Sci., 2005, 70(5), 343-349.
Pérez-Díaz IM, Breidt F, Buescher RW, Arroyo-Lopez FN, Jimenez-Diaz R, Bautista-Gallego J, Garrido-Fernandez A, Yoon S, Johanningsmeier SD. 2014. Chapter 51: Fermented and Acidified Vegetables. In: Pouch Downes F, Ito KA, editors. Compendium of Methods for the Microbiological Examination of Foods, 5th Ed. American Public Health Association. | textbooks/chem/Biological_Chemistry/Fermentation_in_Food_Chemistry/1.08%3A_Fermented_Vegetables.txt |
Cheese Production (University of Guelph, Cheese Production)
Cheese making is essentially a dehydration process in which milk casein, fat and minerals are concentrated 6 to 12-fold, depending on the variety. The basic steps common to most varieties are acidification, coagulation, dehydration, and salting.
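As a back-of-the-envelope illustration of what a 6- to 12-fold concentration implies for yield: if essentially all of the casein and fat end up in the cheese (a simplifying assumption, since some fat and protein fines are lost to the whey), the cheese mass is roughly the milk mass divided by the concentration factor.

```python
# Back-of-the-envelope cheese yield from the concentration factor.
# Assumes the concentrated components (casein, fat, minerals) are retained
# essentially completely -- a simplification, since some are lost to the whey.

for factor in (6, 12):
    yield_kg_per_100_kg_milk = 100.0 / factor
    print(f"{factor:>2}-fold concentration -> ~{yield_kg_per_100_kg_milk:.0f} kg cheese per 100 kg milk")
```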
Cheese Production Process:
Chemical Components of Milk
Milk is primarily water containing four major classes of biological molecules: carbohydrate (lactose), fats, casein phosphoproteins, and whey proteins.
Casein
Caseins are phosphoproteins. These proteins are mostly random coils with little secondary or tertiary structure. They are highly heat stable.
Exercise $1$
• Define primary, secondary and tertiary structure in proteins.
• Circle the phosphorylated serine side chains in a typical repeating unit in $\beta$-casein.
• Caseins have a relatively high [ negative / positive ] charge from phosphates.
Casein exists in the milk as micelles that consist of hundreds of casein molecules coordinated with Ca+2 ions.
• Draw a cartoon of several casein globular proteins forming a micelle with a hydrophobic center and a hydrophilic outer surface. Add the calcium ions.
Casein Aggregation (curd formation)
Although the casein micelle is fairly stable, there are two major ways in which aggregation can be induced. Aggregation is a key step in cheese production.
1. Enzymatic - chymosin (rennet) or other enzymes (important for cheddars and gouda)
During the primary stage, rennet cleaves the Phe(105)-Met(106) linkage of kappa-casein, forming a soluble peptide that diffuses away from the micelle and leaving para-kappa-casein behind.
During the secondary stage, the micelles aggregate. This is due to the loss of steric repulsion of the kappa-casein. Calcium assists coagulation by acting as a bridge between micelles.
During the tertiary stage of coagulation, a gel forms, the milk curd firms, and the liquid separates.
Exercise $2$
• Draw a cartoon of several micelles clumping together. Show the calcium ions acting as bridges.
2. Acid. Acidification causes the casein micelles to destabilize or aggregate. Acid coagulated fresh cheeses may include Cottage cheese, Quark, and Cream cheese.
Exercise $3$
• Considering the number of phosphate groups present, suggest what will happen to the phosphates as the pH drops below 4.6.
• Draw the acid-base reaction that occurs.
• What happens to the Ca+2 ions?
Acid coagulation can be achieved naturally with the starter culture of lactobacillus.
• These bacteria convert lactose to _________________.
Acid curd is more fragile than rennet curd due to the loss of calcium.
• Explain why the loss of calcium makes a more fragile curd.
Whey
Whey proteins include $\beta$ -lactoglobulin, alpha-lactalbumin, bovine serum albumin (BSA), and immunoglobulins (Ig). These proteins have well defined tertiary and quaternary structures. They are soluble in water at lower pH but do not coagulate after proteolysis or acid treatment. When the casein is coagulated with enzymes or acid treatment, there is usually a straining step whereby the water is separated from the curd.
Exercise $4$
• Does the whey stay with the curd or the water?
There is a third process for casein coagulation, heat-acid.
Exercise $5$
In this process, heat causes denaturation of the whey proteins which can interact with the caseins. With the addition of acid, the caseins precipitate with the whey proteins.
• Draw a cartoon of this process.
In heat-acid coagulation, 90% of protein can be recovered. Examples of cheeses made by this method include Paneer, Ricotta and Queso Blanco.
Metabolism of Lactose in Homofermentative Lactobacilli
Overview
Exercise $6$
When lactobacilli are added to milk, the bacterium uses enzymes to produce energy (ATP) from lactose.
• The byproduct of ATP production is __________.
The lactic acid curdles the milk that then separates to form curds, which are used to produce cheese and whey.
We previously covered the pathway for bacteria to convert glucose to lactic acid.
• Recap this pathway and the reason that these bacteria use this process rather than TCA cycle and oxidative phosphorylation.
However, we haven’t talked about how this bacterium can convert lactose to glucose.
• Lactose is a [ monosaccharide / disaccharide / polysaccharide ]. Circle one.
Lactose is hydrolyzed to glucose and $\beta$-galactose.
• Draw the two monosaccharides.
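For reference, the hydrolysis and the subsequent conversion to lactic acid (assuming a homolactic fermentation, as in this section) can be summarized as:

$\underset{\text{lactose}}{\text{C}_{12}\text{H}_{22}\text{O}_{11}} + \text{H}_2\text{O} \xrightarrow{\beta\text{-galactosidase}} \text{glucose} + \text{galactose}$

$\underset{\text{lactose}}{\text{C}_{12}\text{H}_{22}\text{O}_{11}} + \text{H}_2\text{O} \longrightarrow 4\ \underset{\text{lactic acid}}{\text{C}_3\text{H}_6\text{O}_3} \quad \text{(overall, homolactic)}$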
Galactose Metabolism
Glucose can be converted to lactic acid as discussed before. Galactose is converted into glucose 6-phosphate in four steps in the Leloir pathway.
Leloir Pathway Stepwise
1. The first reaction is the phosphorylation of galactose to galactose 1-phosphate.
Exercise $7$
• Draw the reaction and predict the product for this reaction.
2. Galactose 1-phosphate reacts with uridine diphosphate glucose (UDP-glucose) to form UDP-galactose and glucose 1-phosphate.
Exercise $8$
• Draw the reaction and predict the products for this reaction.
3. The galactose moiety of UDP-galactose is then epimerized to glucose: the configuration of the hydroxyl group at carbon 4 is inverted by UDP-galactose 4-epimerase, converting UDP-galactose to UDP-glucose.
This enzyme utilizes NAD+ in the first step and then regenerates the NAD+ in the second step.
Exercise $9$
• Draw the arrows for this reaction. Add in the NAD+.
4. Glucose 1-phosphate, formed from galactose, is isomerized to glucose 6-phosphate by phosphoglucomutase.
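Summarizing the four Leloir steps described above:

$\text{galactose} + \text{ATP} \xrightarrow{\text{galactokinase}} \text{galactose-1-P} + \text{ADP}$

$\text{galactose-1-P} + \text{UDP-glucose} \xrightarrow{\text{transferase}} \text{glucose-1-P} + \text{UDP-galactose}$

$\text{UDP-galactose} \xrightarrow{\text{4-epimerase}} \text{UDP-glucose}$

$\text{glucose-1-P} \xrightarrow{\text{phosphoglucomutase}} \text{glucose-6-P}$

Net: $\text{galactose} + \text{ATP} \longrightarrow \text{glucose-6-P} + \text{ADP}$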
In this pathway, UDP-glucose and UDP-galactose fulfill catalytic roles but are not subject to any net turnover. It might therefore be said that they form a tiny metabolic cycle between the two of them.
Exercise $10$
• Explain this observation about the Leloir Pathway.
Maturation of Cheese
Preparation and Ripening
After the whey is removed from the curds, there is a wide variety of curd handling depending on the type of cheese being prepared. Some cheese varieties, such as Colby or Gouda, require a curd washing to increase the moisture content and reduce the acidity. Salt is added to some cheeses through different methods: Gouda is soaked in brine, while Feta has surface salt added.
The curd is then ripened until the desired flavors and textures are produced. This ripening process includes further fermentation by bacteria, added yeasts or molds, and enzymatic reactions from added lipases or rennet. These processes develop distinctive characteristics for each cheese.
Below is a sample of flavor molecules derived from the breakdown of each milk component:
• Casein protein: ethanoic acid, aldehydes, amines, ketones
• Milk fats: carboxylic acids, lactones, esters
• Lactose: diacetyl, acetaldehyde, acetic acid
More examples: Simon Cotton, Education in Chemistry, Royal Society of Chemistry, Really Cheesy Chemistry
Maturation of Cheese
Lipolysis
A critical ripening step is the lipolysis of triglycerides into free fatty acids and partial glyceride esters, which yields many flavorful molecules.
Exercise $11$
• Draw the products of this lipolysis reaction.
Fatty acid metabolism ($\beta$-oxidation) removes two carbons at a time from each of these fatty acids.
• Draw a couple of shorter chain fatty acids.
Esterases are often present that can turn these shorter chain fatty acids into methyl esters.
• Draw a couple of possible short chain methyl esters.
These compounds are smelly and flavorful.
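One illustrative esterification, chosen here as an example (the product, methyl butanoate, has a fruity aroma):

$\underset{\text{butanoic acid}}{\text{CH}_3\text{CH}_2\text{CH}_2\text{COOH}} + \text{CH}_3\text{OH} \longrightarrow \underset{\text{methyl butanoate}}{\text{CH}_3\text{CH}_2\text{CH}_2\text{COOCH}_3} + \text{H}_2\text{O}$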
Maturation of Cheese
Propionic Acid Fermentation: Flavors
Propionibacterium species are facultative anaerobes that can ferment sugars (glucose or lactose) into propionic acid. This process creates aroma and flavors found in Swiss cheeses.
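In Swiss-type cheese, the usual substrate is the lactate left behind by the earlier lactic fermentation; that case is often summarized by the Fitz equation:

$3\ \text{lactic acid} \longrightarrow 2\ \text{propionic acid} + \text{acetic acid} + \text{CO}_2 + \text{H}_2\text{O}$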
Exercise $12$
A facultative anaerobe is an organism that: (choose the correct definition)
1. makes ATP by aerobic respiration if oxygen is present but is capable of switching to fermentation or anaerobic respiration if oxygen is absent.
2. cannot make ATP in the absence of oxygen and will die in the absence of oxygen.
Exercise $13$
This process hijacks a part of the TCA Cycle.
• On the diagram above, circle or highlight the structures that are used in the TCA cycle.
• This pathway will function when the TCA cycle is [ on / off ] due to ____________.
• How does this pathway help the bacteria regenerate NAD+?
Propionic Acid Fermentation: Flavors
A key step in the Wood-Werkman Pathway is to transfer a carboxyl group from methylmalonyl CoA to pyruvate to form propionyl CoA and oxaloacetate.
This mechanism utilizes biotin (vitamin B7).
Exercise $14$
Draw the arrows for the decarboxylation of methylmalonyl CoA:
The process continues with the carboxylated biotin and the enolate of pyruvate.
Exercise $15$
Draw the enolate anion of pyruvate. Is this a nucleophile or an electrophile?
This step is completed when the carboxylated biotin delivers the carboxyl group to the enolate of pyruvate, forming oxaloacetate.
Exercise $16$
Draw the arrows for this conversion.
Now that you have made propionyl CoA, how is it converted to propionic acid?
Exercise $17$
Draw arrows for this trans-thioesterification process. Be sure to include a tetrahedral intermediate.
Extra Questions
Exercise $18$
• Curdling the milk is not the bacterium's only role in cheese production. The lactic acid produced by the bacterium lowers the pH of the product and preserves it from the growth by unwanted bacteria and molds while other metabolic products and enzymes produced by Lactococcus lactis contribute to the subtle aromas and flavors that distinguish different cheeses.
• Look up other chemical side products created by this bacterium and what “flavors” are imparted. (More covered in Yogurt Section!)
• A deficiency of the lactase enzyme in the small intestine gives rise to lactose intolerance, which is common in most populations outside of northern Europe once past infancy.
If lactose is not cleaved, it cannot be absorbed, so it travels to the large intestine. Many of the bacteria found there have the capacity to metabolize lactose, which they will happily convert to acids and gas.
• Would someone who is lactose intolerant be able to eat cheese? Why or why not?
Sources
D. H. Hettinga and G. W. Reinbold, The Propionic-Acid Bacteria: A Review. Journal of Milk and Food Technology, 1972, 35(6), 358-372.