It is important to point out that the PCS that we have just discussed was originally outlined by van der Waals. In reality, it is the simplest version of the principle of corresponding states, and it is referred to as the two-parameter PCS. This is because it relies on two parameters (reduced pressure and temperature) for defining a “corresponding state.”
With the passing of time, more accurate PCS formulations have made use of more than two parameters. For instance, the three-parameter PCS affirms that two substances are in corresponding states not only when they are at the same reduced conditions (reduced pressure and temperature), but also when they have the same “acentric factor” value. In any case, the general statement of PCS remains untouched:
Substances at corresponding states behave alike.
What makes the difference is the definition of “what a corresponding state is.”
The acentric factor “$ω$” is a concept that was introduced by Pitzer in 1955, and has proven to be very useful in the characterization of substances. It has become a standard for the proper characterization of any single pure component, along with other common properties, such as molecular weight, critical temperature, critical pressure, and critical volume.
Pitzer came up with this factor by analyzing the vapor pressure curves of various pure substances. From thermodynamic considerations, the vapor pressure curve that we studied in our first modules for pure components can be mathematically described by the Clausius-Clapeyron equation:
$\frac{1}{P} \frac{d P}{d T}=\frac{\Delta \widetilde{H}_{vap}}{R T^{2} \Delta Z} \label{8.7}$
The integrated version of Equation \ref{8.7} is very commonly used for the mathematical fitting of vapor pressure data. It shows that the relationship between the logarithm of vapor pressure and the reciprocal of absolute temperature is approximately linear. That is, in terms of reduced conditions, vapor pressure data approximately follow a straight line when plotted as “logPr” versus “1/Tr”, or, equivalently:
$\log _{10} P_{r}=a\left(\frac{1}{T_{r}}\right)+b \label{8.8}$
If the two-parameter corresponding state principle were to hold true for all substances, the parameters “a” and “b” should be the same for all substances. That is, all vapor pressure curves of all imaginable substances should lie on top of each other when plotted in terms of reduced conditions. Stated in another way, if the plot is of the form “logPr” versus “1/Tr”, all lines should show the same slope (a) and intercept (b).
The bad news is that, as you may imagine, this is not always true. Vapor pressure data for different substances do follow different trends. The good news is that some gases follow the expected trend. Which are they? The noble gases. Noble gases (such as Ar, Kr and Xe) happen to follow the two-parameter corresponding states theory very closely. Hence, they lend themselves to acting as a reference for evaluating how closely other substances “comply” with the two-parameter principle of corresponding states.
Pitzer wanted to come up with a reliable way of quantifying the deviation of substances with respect to two-parameter corresponding state predictions. He decided to use noble gases as the base for comparison. Analyzing vapor pressure data for noble gases, Pitzer showed that a value of $\log P_r = – 1$ was achieved at approximately Tr = 0.7. So, BINGO! There you are! He thought: if the vapor pressure data of a substance show that $\log P_r = – 1$ at $T_r = 0.7$, it behaves as the noble gases and thus complies with the two-parameter corresponding states. If not, we are to compute the difference, and that difference defines the acentric factor:

$\omega=-1-\log _{10}\left(P_{r}^{s a t}\right)_{T_{r}=0.7}$
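As a quick illustration, the straight-line fit of Equation \ref{8.8} and the extraction of the acentric factor can be sketched as follows. The two vapor-pressure points are made-up values for a hypothetical substance (constructed so that $\omega = 0.25$); they are not measured data:

```python
import numpy as np

# Hypothetical reduced vapor-pressure data (assumed, not measured):
# two points generated for an illustrative substance with omega = 0.25
Tr_data = np.array([0.80, 0.90])
logPr_data = np.array([-0.729167, -0.324074])

# Fit the straight line log10(Pr) = a*(1/Tr) + b  (Equation 8.8)
a, b = np.polyfit(1.0 / Tr_data, logPr_data, 1)

# Pitzer's acentric factor: deviation from noble-gas behavior
# (log10 Pr = -1) evaluated at Tr = 0.7
log_pr_at_07 = a / 0.7 + b
omega = -1.0 - log_pr_at_07
```

Note that the fitted line necessarily passes through (1/Tr = 1, logPr = 0), since every substance reaches Pr = 1 at its critical point.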
8.03: Action Item
Problem Set
1. Consider Methane (Pc = 666 psia, Tc = – 117 F) and Ethane (Pc = 706 psia, Tc = – 90 F) stored in two different vessels at the following conditions:
Methane Vessel P = 1332 psia T = 55 F
Ethane Vessel P = 1412 psia T = 95 F
Using the Standing-Katz Z-factor plot, determine the compressibility factor of both substances. Any observation? Are those values different? Describe the situation for both gases. If their Z-factors are the same, does it mean that both gases have the same density? What does it mean?
Contributors and Attributions
• Prof. Michael Adewumi (The Pennsylvania State University). Some or all of the content of this module was taken from Penn State's College of Earth and Mineral Sciences' OER Initiative.
Learning Objectives
• Module Goal: To demonstrate thermodynamic quantification using modern cubic EOS.
• Module Objective: To introduce you to the basic premises of cubic EOS and their behavior.
• 9.1: Introduction
In general, any equation of state that is cubic in volume (and Z) and explicit in pressure is regarded as a cubic equation of state. vdW EOS is a cubic EOS, and all the transformations and modifications that it has undergone during the more than one hundred years since its publication are also cubic EOS; or better, they are in-the-van-der-Waals-spirit EOS or of-the-van-der-Waals-family EOS.
• 9.2: The Cubic Behavior
• 9.3: Implications of S-shaped curve (Sub-critical Conditions)
Can we really model the discontinuity? Not really, but we can get around it. van der Waals provided a possible solution in his dissertation on the “continuity of vapor and liquid.” Even though neither cubic equations nor any other continuous mathematical function is able to follow the discontinuity, what they can do is good enough for engineering purposes. The “cubic behavior” can reasonably match the liquid and vapor branches for the real, experimental isotherms.
• 9.4: Action Item
09: Cubic EOS and Their Behavior I
If we multiply the vdW EOS (expression 7.11a) by $\bar{v}^{2} / P$ and expand the factorized product by applying the distributive law, the result is the vdW EOS expressed in terms of molar volume, as follows:
$\bar{v}^{3}-\left(b+\frac{R T}{P}\right) \bar{v}^{2}+\left(\frac{a}{P}\right) \bar{v}-\frac{a b}{P}=0 \label{9.1}$
Note that Equation \ref{9.1} is a third order polynomial in $\bar{v}$ i.e., it is cubic in molar volume. Additionally, we can substitute the definition of compressibility factor $Z$,
$Z=\frac{P \bar{v}}{R T} \label{9.2}$
into Equation \ref{9.1} and obtain a different cubic polynomial in $Z$, as shown:
$Z^{3}-\left(1+\frac{b P}{R T}\right) Z^{2}+\left(\frac{a P}{R^{2} T^{2}}\right) Z-\frac{a b P^{2}}{(R T)^{3}}=0 \label{9.3}$
As we see, vdW EOS is referred to as cubic because it is a polynomial of order 3 in molar volume (and hence in compressibility factor $Z$). In general, any equation of state that is cubic in volume (and $Z$) and explicit in pressure (Equation 7.11b) is regarded as a cubic equation of state. vdW EOS is a cubic EOS, and all the transformations and modifications that it has undergone during the more than one hundred years since its publication are also cubic EOS; or better, they are in-the-van-der-Waals-spirit EOS or of-the-van-der-Waals-family EOS.
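Equation \ref{9.3} can be solved with any polynomial root finder. A minimal sketch, in which the methane-like critical constants are approximate illustration values rather than authoritative data:

```python
import numpy as np

R = 10.732  # psia*ft3/(lbmol*R), field-unit gas constant

# Illustrative critical constants (methane-like, approximate)
Tc, Pc = 343.0, 666.4  # R, psia

# vdW parameters from the criticality conditions (a = 27R^2Tc^2/64Pc, b = RTc/8Pc)
a = 27.0 * R**2 * Tc**2 / (64.0 * Pc)
b = R * Tc / (8.0 * Pc)

def vdw_z_roots(P, T):
    """Real roots of the vdW cubic in Z (Equation 9.3), sorted ascending."""
    coeffs = [1.0,
              -(1.0 + b * P / (R * T)),
              a * P / (R * T)**2,
              -a * b * P**2 / (R * T)**3]
    roots = np.roots(coeffs)
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

# Supercritical conditions: a single real root is expected
z = vdw_z_roots(P=1000.0, T=660.0)
```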
9.02: The Cubic Behavior
Let us see what we have accomplished thus far. First, remember that we are not satisfied with the fact that the ideal gas law is not able to predict the discontinuity of Figure 6.3, which corresponds to the condensation that an all-gas pure substance undergoes during isothermal compression. The isotherm that we get from the ideal model was shown in Figure 6.2. Condensation, as we know, is to be expected at some point when you isothermally compress any pure gas while below its critical conditions (T < Tc). This lack of conformance to actual behavior led us to realize that the ideal model is quantitatively and qualitatively wrong at high pressures. To remove the weaknesses of the ideal model, we recall that vdW, in developing his equation of state (Equation 7.11), introduced the concepts of co-volume and the attraction term.
Now, we wonder, what did he really accomplish? Are we now able to predict the condensation phenomena? Are these “new” cubic-type EOS capable of showing where such a discontinuity occurs? So far, we have not seen what a cubic isotherm looks like. Let us plot the cubic isotherm for conditions below critical (T < Tc), superimpose it on Figure 6.3, and see what we get.
Figure $1$ shows a typical cubic behavior. That is, around condensation conditions, the equation of state presents an S-shaped curve. This should not come as a surprise; because the equation is cubic in volume, it will provide three roots for volume, and hence the S-shaped cubic feature. For the sake of our discussion, we have shown the gas and liquid branches of the cubic equation lying directly on top of the experimental ones. This kind of matching is the “ultimate goal” of any cubic EOS, but in reality, the matching is not this good — especially for the case of the original vdW EOS, as we will discuss later.
The cubic behavior is bounded by the two extremes of real fluid behavior, given by the zero-pressure and infinite-pressure limits. On one hand, it is clear from equation (7.11b) that we have a singularity at $\bar{v}=b$, where “b” represents the co-volume, or physical space that the molecules themselves occupy. This singularity conveniently creates an asymptotic behavior (high-pressure asymptote) of the cubic-equation liquid branch, by which $\bar{v} \rightarrow b$ as $P \rightarrow \infty$. Recall from our previous discussions that predictions for $\bar{v}<b$ are meaningless. Accordingly, we would need an infinite amount of pressure to compress a fluid to the extent that no free space is left among molecules ($\bar{v}=b$). On the other hand, it is clear from vdW EOS (Equation 7.11a) that as $\bar{v} \rightarrow \infty$ ($P \rightarrow 0$), the cubic EOS collapses to the ideal EOS (Equation 7.6). At this low-pressure limit, vdW corrections to the ideal model become inconsequential ($a / \bar{v}^{2} \rightarrow 0$ as $\bar{v} \gg b$).
Mathematically, this is the low-pressure asymptote of the cubic-equation gas branch by which $\bar{v} \rightarrow \infty$ as $P→0$.
Let us see how “good” the cubic behavior is for the ideal-gas model. Refer to Figure $2$, where we have superimposed the ideal gas isotherm of Figure 6.2 on the cubic EOS behavior.
A look at Figure $2$ helps us confirm that the cubic equation of state collapses to the prediction of the ideal gas model at low pressures — i.e., they share the same low-pressure asymptote. This is to be expected, since the assumptions underlying the ideal model are satisfied there. As we recall, these assumptions are that the attractive forces between molecules are very weak and that the physical volume of the molecules can be disregarded when compared to the total volume of the container. It is worth noting that the high-pressure asymptote is not the same for both models: as $P \rightarrow \infty$, $\bar{v} \rightarrow b$ for the cubic model while $\bar{v} \rightarrow 0$ for the ideal model. The latter is a direct consequence of neglecting the molecular volume in the ideal model.
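Both asymptotes can be checked numerically. The sketch below uses the vdW EOS written in reduced form, $P_r = 8T_r/(3v_r-1) - 3/v_r^2$, so that no substance constants are needed; the chosen $T_r$ is an arbitrary assumption:

```python
def p_reduced_vdw(vr, Tr):
    """Reduced-form vdW isotherm: Pr = 8Tr/(3vr - 1) - 3/vr^2."""
    return 8.0 * Tr / (3.0 * vr - 1.0) - 3.0 / vr**2

Tr = 1.0  # arbitrary illustrative isotherm

# Low-pressure limit: for large vr, vdW approaches the ideal value
# Pr_ideal = 8Tr/(3vr), which is the reduced form of Pv = RT
vr_large = 1.0e4
p_ideal = 8.0 * Tr / (3.0 * vr_large)
rel_dev = abs(p_reduced_vdw(vr_large, Tr) - p_ideal) / p_ideal

# High-pressure limit: as vr approaches the co-volume (vr -> 1/3 in
# reduced form), the predicted pressure grows without bound
p_near_b = p_reduced_vdw(1.0 / 3.0 + 1.0e-6, Tr)
```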
A major issue that has kept us banging our heads has been how to mathematically represent the discontinuity of the P-v isotherm during the vapor-liquid transition (Figure 6.3). Such a discontinuity shows up during the isothermal compression of any pure substance at sub-critical conditions (T < Tc). What we want here requires fitting a continuous mathematical function to a discontinuous, real-life event. Strictly speaking, it would be contradictory to find a single continuous mathematical function that can capture such a discontinuity in its full nature.
Can we really model the discontinuity? Not really, but we can get around it. van der Waals provided a possible solution in his dissertation on the “continuity of vapor and liquid.” Even though neither cubic equations nor any other continuous mathematical function is able to follow the discontinuity, what they can do is good enough for engineering purposes. The “cubic behavior” can reasonably match the liquid and vapor branches for the real, experimental isotherms.
Since van der Waals’ EOS, we have been able to consider the continuity between gas and liquid phases. Now, we need to learn how to deal with the S-shaped behavior, and to look at it as a minor, inconsequential price that we pay for the modeling of the vapor — liquid discontinuous transition with a continuous mathematical function. Let us zoom in on Figure 9.1, as shown in Figure $1$.
There are several features of the S-shaped behavior that should be noted.
1. The S-shaped transition represents the zone where gas and liquid coexist in equilibrium for a pure substance; hence, such a behavior will show up whenever a cubic equation of state is used for predictions at temperatures below critical conditions (sub-critical conditions).
2. Physically, changes in pressure and changes in volume in a fluid must have different signs in any isothermal process, such that: $\left(\frac{\partial P}{\partial \bar{v}}\right)_{T}<0 \tag{Mechanical Stability Condition}$ This requirement is met by the liquid branch, the gas branch, and sections AA’ and BB’ of the cubic isotherm. Portions AA’ and BB’ have even been realized experimentally for metastable conditions, i.e., conditions of fragile or weak stability. However, changes of pressure and volume have the same sign in portion A’B’; consequently, this portion of the cubic isotherm is regarded as meaningless and unphysical.
3. Points A’ and B’ are the minimum and maximum of the metastable behavior represented by the S-shaped curve. The saturation pressure (Psat) will naturally lie between these two extremes. Graphically, Psat can be obtained as the pressure that makes the areas AoA’ and BoB’ equal. This equal-area rule is known as the Maxwell principle. Equivalently, we can determine the saturation condition as the pressure at which fugacity — a thermodynamic property we will be studying later — is equal at the liquid and vapor branches.
4. It is not impossible for section AA’ of the cubic isotherm to reach negative pressures (i.e., PA’ < 0). This should not be of any concern to us because we are seldom interested in such metastable behavior. In fact, once Psat is determined, most practical applications call for cleaning up the cubic isotherm by suppressing areas AoA’ and BoB’. In that case, we are left with the liquid branch, the discontinuity AB at Psat, and the gas branch; just as the experimental isotherm would look.
5. The most crucial implication of the S-shaped curve is that the cubic equation will produce three distinct real roots for molar volume (or compressibility factor, as the case may be). This will always be the case as long as the prediction is made within the S-shaped curve (PA’ < P < PB’; T < Tc). At the vapor-liquid discontinuous transition — i.e., at the intercept of the cubic isotherm with the saturation pressure, $P^{sat}$, at the given temperature — we end up with three mathematically possible real roots for volume. The extreme points of intersection represent the liquid molar volume and the gas molar volume, represented as $\bar{v}_{liquid}$ and $\bar{v}_{gas}$ in Figure $1$. The third, middle root, given by point “o”, is always regarded as unphysical and is always discarded, because it belongs to the path A’B’.
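The Maxwell equal-area rule of item 3 can be sketched numerically. The following uses the reduced form of the vdW EOS, $P_r = 8T_r/(3v_r-1) - 3/v_r^2$, at the arbitrary sub-critical temperature $T_r = 0.9$; it is an illustrative sketch under those assumptions, not a production saturation-pressure routine:

```python
import numpy as np

TR = 0.9  # arbitrary sub-critical reduced temperature

def volume_roots(pr):
    """Real reduced-volume roots with vr > 1/3, sorted ascending.

    From (Pr + 3/vr^2)(3vr - 1) = 8Tr:  3Pr vr^3 - (Pr + 8Tr) vr^2 + 9vr - 3 = 0
    """
    roots = np.roots([3.0 * pr, -(pr + 8.0 * TR), 9.0, -3.0])
    return sorted(r.real for r in roots
                  if abs(r.imag) < 1e-9 and r.real > 1.0 / 3.0)

def area_difference(pr):
    """Integral of Pr(vr) from vL to vG minus Pr*(vG - vL).

    Zero when the areas AoA' and BoB' are equal (Maxwell principle).
    Antiderivative of the reduced isotherm: (8Tr/3) ln(3vr-1) + 3/vr.
    """
    real = volume_roots(pr)
    vl, vg = real[0], real[-1]
    integral = (8.0 * TR / 3.0) * np.log((3.0 * vg - 1.0) / (3.0 * vl - 1.0)) \
               + 3.0 / vg - 3.0 / vl
    return integral - pr * (vg - vl)

# Bisect for the saturation pressure between the spinodal extremes
lo, hi = 0.45, 0.71
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if area_difference(mid) > 0.0:
        lo = mid
    else:
        hi = mid
psat = 0.5 * (lo + hi)
```

For the vdW fluid at $T_r = 0.9$ this construction gives a reduced saturation pressure of about 0.647, with the two extreme roots playing the roles of $\bar{v}_{liquid}$ and $\bar{v}_{gas}$.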
9.04: Action Item
Problem Set
1. Cubic EOS yield three roots for Z-factor for a pure substance at subcritical conditions. Does that mean that there is a “Z” factor for liquids? Isn’t the “Z” factor concept only applicable to gases? What does a “Z factor” connote for liquids?
Learning Objectives
• Module Goal: To demonstrate thermodynamic quantification using modern cubic EOS.
• Module Objective: To highlight the most often used cubic EOS
• 10.1: Multiple Roots and Cubic Behavior
It comes as no surprise that cubic equations of state yield three different roots for volume and compressibility factor. This is simply because they are algebraic equations, and any nth order algebraic equation will always yield “n” roots. However, those “n” roots are not required to be distinct, and that is not all: they are not required to be real numbers, either.
• 10.2: Modern Cubic EOS
First, we can say that the vdW cubic behavior is qualitatively reasonable; and second, we can say that it is capable of describing the continuity between liquid and vapor. Nevertheless, vdW cubic EOS has been proven not to be quantitatively suitable for most engineering purposes. Certainly, it yields unacceptable errors for the quantitative prediction of densities and any other related thermodynamic property.
• 10.3: Redlich-Kwong EOS (1949)
Redlich and Kwong revised the van der Waals EOS by making the attraction parameter “a” of van der Waals a function of temperature.
• 10.4: Soave-Redlich-Kwong EOS (1972)
In 1972, Soave proposed an important modification to the RK EOS — or shall we say, a modification to vdW EOS. Between the time of vdW EOS and Redlich-Kwong’s, a new concept for fluid characterization was being discussed. Pitzer had introduced the concept of acentric factor in 1955.
• 10.5: Action Item
10: Cubic EOS and Their Behavior II
It comes as no surprise that cubic equations of state yield three different roots for volume and compressibility factor. This is simply because they are algebraic equations, and any nth order algebraic equation will always yield “n” roots. However, those “n” roots are not required to be distinct, and that is not all: they are not required to be real numbers, either. A quadratic expression (n = 2) may have zero real roots (e.g., $x^2 + 1 = 0$); this is because those roots are complex numbers. In the case of cubic expressions (n = 3), we will have either one or three real roots; this is because complex roots always show up in pairs (i.e., once you have a complex root, its conjugate must also be a solution). In our case, and because we are dealing with physical quantities (densities, volumes, compressibility factors), only real roots are of interest. More specifically, we look for real, positive roots such that $\bar{v} > b$ in the case of molar volume and $Z > Pb/RT$ in the case of compressibility factor. In a cubic equation of state, the possibility of three real roots is restricted to the case of sub-critical conditions ($T < T_c$), because the S-shaped behavior, which represents the vapor-liquid transition, takes place only at temperatures below critical. This restriction is mathematically imposed by the criticality conditions. Anywhere else, beyond the S-shaped curve, we will only get one real root of the type $\bar{v} > b$. Figure $1$ illustrates this point.
Let us examine the three cases presented in Figure $1$:
1. Supercritical isotherms ($T > T_c$): At temperatures beyond critical, the cubic equation will have only one real root (the other two are imaginary complex conjugates). In this case, there is no ambiguity in the assignment of the volume root since we have single-phase conditions. The occurrence of a unique real root remains valid at any pressure: any horizontal (isobaric) line cuts the supercritical isotherm just once in Figure $1$.
2. Critical isotherm ($T = T_c$): At the critical point ($P = P_c$), vapor and liquid properties are the same. Consequently, the cubic equation predicts three real and equal roots at this special and particular point. However, for any other pressure along the critical isotherm ($P < P_c$ or $P > P_c$), the cubic equation gives a unique real root with two complex conjugates.
3. Subcritical isotherm ($T < T_c$): Predictions for pressures within the metastable pressure range ($P_{A’} < P < P_{B’}$) or at the saturation condition ($P = P^{sat}$) will always yield three real, different roots. In fact, this is the only region in Figure $1$ where an isobar cuts the same isotherm more than once. The smallest root is taken as the specific volume of the liquid phase; the largest is the specific volume of the vapor phase; the intermediate root is not computed, as it is physically meaningless. However, do not get carried away. Subcritical conditions will not always yield three real roots of the type $\bar{v} > b$. If the pressure is higher than the maximum of the S-shaped curve, $P_{B’}$, we will only have one (liquid) real root that satisfies $\bar{v} > b$. By the same token, pressures in the range $0 < P < P_{A’}$ yield only one (vapor) root. If $P_{A’}$ is a negative number, three real roots are found even at very low pressures, where the ideal gas law applies. This can be seen in Figure 10.1 as well. The largest root is always the correct choice for the gas-phase molar volume of pure components.
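The root counts in the cases above can be verified with the reduced form of the vdW cubic; the $(T_r, P_r)$ pairs below are arbitrary illustrative choices, one supercritical and one inside the S-shaped loop:

```python
import numpy as np

def count_real_roots(tr, pr):
    """Number of real roots with vr > 1/3 of the reduced vdW cubic.

    From (Pr + 3/vr^2)(3vr - 1) = 8Tr:
        3Pr vr^3 - (Pr + 8Tr) vr^2 + 9vr - 3 = 0
    """
    roots = np.roots([3.0 * pr, -(pr + 8.0 * tr), 9.0, -3.0])
    return sum(1 for r in roots
               if abs(r.imag) < 1e-9 and r.real > 1.0 / 3.0)

n_super = count_real_roots(tr=1.2, pr=1.0)    # supercritical isotherm
n_sub = count_real_roots(tr=0.9, pr=0.647)    # inside the S-shaped curve
```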
Most of these considerations apply to the cubic equation of state in Z (compressibility factor). The most common graphical representation of compressibility factor is the well-known chart of Standing and Katz (Figure $2$), where Z is plotted against pressure. Standing and Katz presented their chart for the compressibility factor (Z) of sweet natural gases in 1942. This chart was based on experimental data. Graphical determination of properties was widespread until the advent of computers, and thus the Standing and Katz Z-chart became very popular in the natural gas industry. Typical Standing and Katz charts are given for high temperature conditions ($T > T_c$ or $T_r > 1$). Figure $2$, using a cubic equation of state for a pure gas, presents the qualitative behavior of the solution of $Z$ versus pressure. Isotherms (T > Tc) show the typical qualitative behavior we are accustomed to seeing in the Standing and Katz chart. Cases T < Tc (Tr < 1) are not as familiar to us, as they were not considered by Standing and Katz. For such isotherms, it is clear that you come up with two values of Z (liquid, gas) at saturation conditions.
At this point, we have a couple of comments on the cubic behavior that the pioneer work of vdW introduced to the field of equations of state. First, we can say that the vdW cubic behavior is qualitatively reasonable; and second, we can say that it is capable of describing the continuity between liquid and vapor. Nevertheless, vdW cubic EOS has been proven not to be quantitatively suitable for most engineering purposes. Certainly, it yields unacceptable errors for the quantitative prediction of densities and any other related thermodynamic property. However, all of the development in the field of phase behavior that has been achieved today is due to the work of van der Waals. Although his own equation is seldom used because of its lack of accuracy, his principles are still the foundations of the current developments. vdW concepts were so far reaching that he won the Nobel Prize for his equation.
The truth is that van der Waals’ accomplishment in 1873 triggered a tremendous effort among scientists to make modifications to his EOS which would remove from it large disagreements with experimental data. This effort has not yet ceased today and is not likely to stop in the near future. Much of this endeavor has focused on how to better model the attractive parameter “a” and the repulsive term “b”, with the hope that we can get better quantitative predictions. Naturally, the qualitative cubic-nature of vdW’s original EOS is always preserved, and hence all subsequent refinements belong to the same family of modified-van-der-Waals equations of state. We refer to vdW EOS and all its descendents as cubic equations of state, because, as we have said, they take a cubic form when expressed in terms of volume or compressibility factor and are explicit in pressure.
It is fair to claim that modern cubic EOS started to make a difference when a temperature dependency was introduced to the attractive parameter “a”. Interestingly enough, van der Waals was convinced that the parameters “a” (and even “b”) of his equation of state were not necessarily constants and suggested that, indeed, some dependency on temperature could be found. A very interesting discussion on this, from van der Waals himself, is found in the lecture speech that he offered during his acceptance of the Nobel Prize in Physics, in 1910, for his work on the continuity of vapor and liquid. This speech and the biography of this great physicist, Johannes Diderik van der Waals (1837-1923), can be found in the web resources of the Nobel Prize organization.
The most popular cubic EOS, which time has proven to be most reliable, are:
• Redlich-Kwong EOS,
• Soave-Redlich-Kwong EOS (very popular among chemical engineers),
• Peng-Robinson EOS (very popular among petroleum and natural gas engineers).
Keep in mind that, once you have an EOS, you can derive virtually any property of the fluid.
10.03: Redlich-Kwong EOS (1949)
The vdW cubic equation of state had to wait almost 100 years before a real, successful improvement was introduced to it. As we stated before, this progress occurred once researchers committed themselves to finding the empirical temperature dependency of the attraction parameter “$a$” proposed by van der Waals. In contrast, very little attention has been paid to modifying the parameter “$b$” for co-volume. It makes a lot of sense that “b” would not be modified by temperature, because it represents the volume of the molecules, which should not be affected by their kinetic energy (measured in terms of temperature).
The very first noteworthy successful modification to the attraction parameter came with the publication of the equation of state of Redlich-Kwong in 1949. Redlich and Kwong revised van der Waals EOS and proposed the following expression:
$\left(P+\frac{a}{T^{0.5} \bar{v}(\bar{v}+b)}\right)(\bar{v}-b)=R T \label{10.1}$
Notice that the fundamental change they introduced was to the functional form of $\delta P_{\text {attraction}}$ (equation 7.8). Additionally, they introduced the co-volume “b” into the denominator of this functional form.
The important concept here is that the attraction parameter “a” of van der Waals needed to be made a function of temperature before any cubic EOS was able to do a better job of quantitatively matching experimental data. This was a realization that vdW himself had suggested, but no actual functional dependency had been introduced until the Redlich-Kwong EOS.
We know what follows at this point. To come up with an expression for “a” and “b” of Equation \ref{10.1}, we apply the criticality conditions to this EOS. As we recall, imposing the criticality conditions allows us to relate the coefficients “a” and “b” to the critical properties (Pc, Tc) of the substance. Once we have done that, we obtain the definition of “a” and “b” for the Redlich-Kwong EOS,
$a=0.427480 \frac{R^{2} T_{c}^{2.5}}{P_{c}} \label{10.2a}$
$b=0.086640 \frac{R T_{c}}{P_{c}} \label{10.2b}$
This EOS radically improved, in a quantitative sense, the predictions of vdW EOS. We now recall that vdW-type equations are cubic because they are cubic polynomials in molar volume and compressibility factor. It comes as no surprise then, that we can transform Equation \ref{10.1} into:
$\bar{v}^{3}-\left(\frac{R T}{P}\right) \bar{v}^{2}+\frac{1}{P}\left(\frac{a}{T^{0.5}}-b R T-P b^{2}\right) \bar{v}-\frac{a b}{P T^{0.5}}=0 \label{10.3}$
and, by defining the following parameters,
$A=\frac{a P}{R^{2} T^{2.5}} \label{10.3a}$
$B=\frac{b P}{R T} \label{10.3b}$
and introducing the compressibility factor definition ($Z=\frac{P \bar{v}}{R T}$), we get:
$Z^{3}-Z^{2}+\left(A-B-B^{2}\right) Z-A B=0 \label{10.4}$
We may also verify the two-parameter corresponding state theory by introducing Equations \ref{10.2a} and \ref{10.2b} into Equations \ref{10.3a} and \ref{10.3b}, and substituting these into Equation \ref{10.4},
$Z^{3}-Z^{2}+\frac{P_{r}}{T_{r}}\left(\frac{0.42748}{T_{r}^{1.5}}-0.08664-0.007506 \frac{P_{r}}{T_{r}}\right) Z-0.03704 \frac{P_{r}^{2}}{T_{r}^{3.5}}=0 \label{10.5}$
In Equation \ref{10.5} we can observe the same thing that we saw with vdW EOS: gases at corresponding states have the same properties. Equation \ref{10.5} is particularly clear about it: any two different gases at the same Pr, Tr condition have the same compressibility factor.
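This observation is easy to check numerically: solving Equation \ref{10.5} requires only $P_r$ and $T_r$, so any two gases at the same reduced state must return the same compressibility factor. A minimal sketch, with an arbitrary reduced state chosen for illustration:

```python
import numpy as np

def z_rk_reduced(pr, tr):
    """Gas-phase Z from the reduced RK cubic (Equation 10.5).

    The polynomial depends on Pr and Tr only, so Z is the same for
    any substance at the same reduced conditions.
    """
    c1 = (pr / tr) * (0.42748 / tr**1.5 - 0.08664 - 0.007506 * pr / tr)
    c0 = -0.03704 * pr**2 / tr**3.5
    roots = np.roots([1.0, -1.0, c1, c0])
    real = [r.real for r in roots if abs(r.imag) < 1e-9]
    return max(real)  # largest real root = gas root

z_gas_a = z_rk_reduced(pr=2.0, tr=1.5)  # one gas at this reduced state
z_gas_b = z_rk_reduced(pr=2.0, tr=1.5)  # a different gas, same Pr and Tr
```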
Just as any other cubic equation of state, Equations \ref{10.1}-\ref{10.5}, as they stand, are to be applied to pure substances. For mixtures, however, we apply the same equation, but we impose certain mixing rules to obtain “a” and “b”, which are functions of the properties of the pure components. Strictly speaking, we create a new “pseudo” pure substance that has the average properties of the mixture. Redlich-Kwong preserved the same mixing rules that vdW proposed for his EOS:
$a_{m}=\sum_{i} \sum_{j} y_{i} y_{j} a_{i j} \label{10.6a1}$
$a_{i j}=\sqrt{a_{i} a_{j}} \label{10.6a2}$
$b_{m}=\sum_{i} y_{i} b_{i} \label{10.6b}$
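A minimal sketch of these mixing rules, with made-up $a_i$ and $b_i$ values for a hypothetical three-component mixture (the numbers are illustrative assumptions only):

```python
import numpy as np

y = np.array([0.7, 0.2, 0.1])                # mole fractions
a_i = np.array([9000.0, 21000.0, 35000.0])   # made-up pure-component "a" values
b_i = np.array([0.69, 0.72, 1.00])           # made-up pure-component "b" values

# a_m = sum_i sum_j y_i y_j sqrt(a_i a_j)    (Equations 10.6a)
a_m = sum(y[i] * y[j] * np.sqrt(a_i[i] * a_i[j])
          for i in range(3) for j in range(3))

# b_m = sum_i y_i b_i                        (Equation 10.6b)
b_m = float(np.dot(y, b_i))
```

With the geometric-mean combining rule of Equation \ref{10.6a2}, the double sum collapses algebraically to $a_m = \left(\sum_i y_i \sqrt{a_i}\right)^2$.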
Naturally, Redlich and Kwong did not have the last word on possible improvements to the vdW EOS. The Redlich-Kwong EOS, as shown here, is no longer used in practical applications. Research continued and brought with it new attempts to improve the RK EOS. After more than two decades, a modified RK EOS with very good potential was developed. The Soave-RK EOS was born.
In 1972, Soave proposed an important modification to the RK EOS — or shall we say, a modification to vdW EOS. Between the time of vdW EOS and Redlich-Kwong’s, a new concept for fluid characterization was being discussed. Pitzer had introduced the concept of acentric factor in 1955.
All modifications to the vdW EOS had focused on the temperature dependency of the attractive parameter. Soave expanded this by proposing a two-variable dependency for “a”:
$a=a(T, \omega) \label{10.7}$
It was the first time that “a” was expressed not only as a function of temperature, but also as a function of the shape (sphericity) of the molecules (through $\omega$, Pitzer’s acentric factor). As we recall, Pitzer’s acentric factor is a measure of the configuration and sphericity of the molecule. It can also be seen as a measure of the deformity of the molecule.
The Soave-Redlich-Kwong EOS is given by the expression:
$\left(P+\frac{\alpha a}{\bar{v}(\bar{v}+b)}\right)(\bar{v}-b)=R T \label{10.8a}$
Like all cubic equations of state, the SRK EOS is also explicit in pressure. Notice, for example, how the SRK EOS readily becomes:
$P=\frac{R T}{\bar{v}-b}-\frac{\alpha a}{\bar{v}(\bar{v}+b)} \label{10.8b}$
where,
$\alpha=\left[1+\left(0.48508+1.55171 \omega-0.15613 \omega^{2}\right)\left(1-\sqrt{T_{r}}\right)\right]^{2} \label{10.8c}$
The influence of the acentric factor and temperature on the attractive term is now introduced through “$\alpha$”. What do we do next? We apply the criticality conditions to Equation \ref{10.8b}. Notice that expression \ref{10.8c} becomes unity at $T_r=1$, i.e., along the critical isotherm. We obtain:
$a=0.427480 \frac{R^{2} T_{c}^{2}}{P_{c}} \label{10.9a}$
$b=0.086640 \frac{R T_{c}}{P_{c}} \label{10.9b}$
Now we show the cubic form (in compressibility factor) of Soave-Redlich-Kwong EOS. Defining,
$A=\frac{\alpha a P}{R^{2} T^{2}} \label{10.10a}$
$B=\frac{b P}{R T} \label{10.10b}$
we are able to obtain:
$Z^{3}-Z^{2}+\left(A-B-B^{2}\right) Z-A B=0 \label{10.11}$
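Putting the pieces together, a pure-component SRK Z-factor calculation can be sketched as below. Newton-Raphson is applied to the cubic in Z starting from $Z=1$, which targets the vapor root. The gas constant and the methane-like critical properties in the usage example are illustrative assumptions, not values from the text.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def srk_z(T, P, Tc, Pc, omega, z0=1.0):
    """Sketch of a pure-component SRK Z-factor calculation (Eqs. 10.8-10.11).
    Newton-Raphson on Z^3 - Z^2 + (A - B - B^2) Z - A B = 0; the
    guess z0 = 1 targets the vapor root."""
    a = 0.427480 * (R * Tc) ** 2 / Pc
    b = 0.086640 * R * Tc / Pc
    m = 0.48508 + 1.55171 * omega - 0.15613 * omega ** 2
    alpha = (1.0 + m * (1.0 - math.sqrt(T / Tc))) ** 2
    A = alpha * a * P / (R * T) ** 2
    B = b * P / (R * T)
    Z = z0
    for _ in range(100):
        f = Z ** 3 - Z ** 2 + (A - B - B ** 2) * Z - A * B
        fp = 3.0 * Z ** 2 - 2.0 * Z + (A - B - B ** 2)
        step = f / fp
        Z -= step
        if abs(step) < 1e-12:
            break
    return Z
```

With assumed methane-like properties ($T_c \approx 190.6$ K, $P_c \approx 4.599$ MPa, $\omega \approx 0.011$), a call at 300 K and 1 MPa returns a vapor-root Z slightly below one, as expected for a gas at moderate pressure.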
For mixtures, Soave proposed a “little” modification to the mixing rules we have dealt with so far by introducing the use of “binary interaction parameters” ($k_{ij}$):
$(\alpha a)_{m}=\sum_{i} \sum_{j} y_{i} y_{j}(\alpha a)_{i j} \label{10.12a1}$
$(\alpha a)_{i j}=\sqrt{(\alpha a)_{i}(\alpha a)_{j}}\left(1-k_{i j}\right) \label{10.12a2}$
$b_{m}=\sum_{i} y_{i} b_{i} \label{10.12b}$
The use of binary interaction parameters ($k_{ij}$) generated a lot of resistance upon their first introduction, because there is no analytical, science-based derivation that justifies their existence. Nowadays, they are regarded just as they are: empirical factors used to tune equations of state and make them match experimental data for mixtures. This has become the heuristic justification for their existence: with them, EOS can do a better job of matching experimental data. Heuristically speaking, they are a measure of interaction between a pair of unlike molecules. Based on this “definition,” their value is zero for pairs of molecules that are alike. Actually, this is no more than a mathematical requirement for Equations \ref{10.12a1} and \ref{10.12a2} to give $(\alpha a)_{i j}=(\alpha a)_{i}$ when $j=i$. The determination of $k_{ij}$ is based on experimental data from binary systems; “$k_{ij}$” is the value that allows the given equation of state (through the expression in \ref{10.12a1}) to yield the closest match to those data. These values are then assumed to be constant (and so are used) when the same two components are part of a more complex multi-component mixture.
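As a quick illustration of how the interaction parameters enter Equations \ref{10.12a1} and \ref{10.12a2}, the sketch below evaluates $(\alpha a)_m$ for a hypothetical binary (all numbers made up); with all $k_{ij}=0$ it collapses to the classical vdW result.

```python
import math

def srk_aalpha_mixture(y, aalpha, kij):
    """(alpha a)_m from Eqs. 10.12, with kij a symmetric matrix whose
    diagonal is zero so that (alpha a)_ii reduces to (alpha a)_i."""
    n = len(y)
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += (y[i] * y[j] * math.sqrt(aalpha[i] * aalpha[j])
                      * (1.0 - kij[i][j]))
    return total
```

For $y = (0.6, 0.4)$ and hypothetical $(\alpha a)$ values of 1 and 4, the $k_{ij}=0$ result is $(0.6 \cdot 1 + 0.4 \cdot 2)^2 = 1.96$; a positive off-diagonal $k_{12}$ reduces only the cross terms.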
10.05: Action Item
Problem Set
1. An engineer is seeking your advice on which EOS he should use to model various hydrocarbon mixtures he is dealing with. He has access to a program that provides him with several options for different cubic EOS. He presents you with the following list:
• A sweet natural gas from Nigeria
• A sour natural gas (high CO2, H2S contents)
• Flue gas
1. Which EOS would you recommend him to use for each of these cases?
Learning Objectives
• Module Goal: To demonstrate thermodynamic quantification using modern cubic EOS.
• Module Objective: To assess the relative merit of applying the most common EOS.
• 11.1: Peng-Robinson EOS (1976)
The Peng-Robinson EOS has become the most popular equation of state for natural gas systems in the petroleum industry. A slightly better performance around critical conditions makes the PR EOS somewhat better suited to gas/condensate systems.
• 11.2: Comparative Assessment of RK, SRK, and PR EOS
Over the years, these EOS have been tested, and some comparisons can be made. As an engineer, you have to be able to decide which EOS best fits your purposes.
• 11.3: Critical Compressibility as a Measure of Goodness of an EOS
The fact of the matter is, as a consequence of the Corresponding States Principle, all cubic EOS predict a “unique” and “universal” value of Z at the critical point, regardless of the substance. None of the equations of state we have studied is capable of predicting a value similar to experimental values. The “best” job is done by the PR EOS, which provides the “closest” match to the real values observed for most substances. This illustrates why the PR EOS performs somewhat better near critical conditions.
• 11.4: Advantages of Using Cubic Equations of State
All cubic equations of state have their foundation in vdW EOS. The use of cubic equations of state has become widespread because of their advantages: (1) simplicity of application, (2) only a few parameters need to be determined, and (3) low computational overhead is required to implement them. This was a critical issue in the early days of computing; it is not really an issue anymore. Nevertheless, this feature is still a “plus.”
• 11.5: Solution Techniques for Cubic Expressions and Root Finding
Even though cubic equations of state are explicit in pressure, pressure is not the common unknown to be calculated in the typical problem. In the most common problem, pressure and temperature are known and we want either molar volume (or its reciprocal, molar density) or compressibility factor (the most likely case). Therefore, we are faced very often with the need to solve for the roots of a cubic expression. Here we present a number of approaches that may be followed.
• 11.6: Action Item
11: Cubic EOS and Their Behavior III
The Peng-Robinson EOS has become the most popular equation of state for natural gas systems in the petroleum industry. During the 1970s, D. Peng was a PhD student of Prof. D.B. Robinson at the University of Alberta (Edmonton, Canada). The Canadian Energy Board sponsored them to develop an EOS specifically focused on natural gas systems. When you compare the performance of the PR EOS and the SRK EOS, they are pretty close to a tie; they are “neck and neck,” except for a slightly better behavior by the PR EOS at the critical point. A slightly better performance around critical conditions makes the PR EOS somewhat better suited to gas/condensate systems.
Peng and Robinson introduced the following modified vdW EOS:
$\left(P+\frac{\alpha a}{\bar{v}^{2}+2 b \bar{v}-b^{2}}\right)(\bar{v}-b)=R T \label{11.1a}$
or explicitly in pressure,
$P=\frac{R T}{\bar{v}-b}-\frac{\alpha a}{\bar{v}^{2}+2 b \bar{v}-b^{2}} \label{11.1b}$
where:
$\alpha=\left[ 1+\left(0.37464+1.54226 \omega-0.26992 \omega^{2}\right)(1-\sqrt{T_{r}})\right]^{2} \label{11.1c}$
Peng and Robinson conserved the temperature dependency of the attractive term and the acentric factor introduced by Soave. However, they presented different fitting parameters to describe this dependency (Equation \ref{11.1c}), and further manipulated the denominator of the pressure correction (attractive) term. As we have seen before, coefficients “a” and “b” are made functions of the critical properties by imposing the criticality conditions. This yields:
$a=0.45724 \frac{R^{2} T_{c}^{2}}{P_{c}} \label{11.2a}$
$b=0.07780 \frac{R T_{c}}{P_{c}} \label{11.2b}$
The PR cubic expression in Z becomes:
$Z^{3}-(1-B) Z^{2}+\left(A-3 B^{2}-2 B\right) Z-\left(A B-B^{2}-B^{3}\right)=0 \label{11.3a}$
where:
$A=\frac{\alpha a P}{R^{2} T^{2}} \label{11.3b}$
$B=\frac{b P}{R T} \label{11.3c}$
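Assembling the PR pieces for a pure component gives a Z-factor sketch analogous to the SRK one. The cubic coded below is the standard PR form in Z, $Z^{3}-(1-B) Z^{2}+\left(A-3 B^{2}-2 B\right) Z-\left(A B-B^{2}-B^{3}\right)=0$; the critical properties in the usage example are illustrative assumptions.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def pr_z(T, P, Tc, Pc, omega, z0=1.0):
    """Sketch of a pure-component Peng-Robinson Z-factor calculation.
    Newton-Raphson on the standard PR cubic in Z; z0 = 1 targets the
    vapor root."""
    a = 0.45724 * (R * Tc) ** 2 / Pc
    b = 0.07780 * R * Tc / Pc
    m = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    alpha = (1.0 + m * (1.0 - math.sqrt(T / Tc))) ** 2
    A = alpha * a * P / (R * T) ** 2
    B = b * P / (R * T)
    Z = z0
    for _ in range(100):
        f = (Z ** 3 - (1.0 - B) * Z ** 2 + (A - 3.0 * B ** 2 - 2.0 * B) * Z
             - (A * B - B ** 2 - B ** 3))
        fp = 3.0 * Z ** 2 - 2.0 * (1.0 - B) * Z + (A - 3.0 * B ** 2 - 2.0 * B)
        step = f / fp
        Z -= step
        if abs(step) < 1e-12:
            break
    return Z
```

At the same assumed methane-like conditions used for the SRK sketch, PR returns a vapor-root Z very close to (but slightly different from) the SRK value, consistent with the “neck and neck” performance noted above.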
Similar to SRK, the PR mixing rules are:
$(\alpha a)_{m}=\sum \sum y_{i} y_{j}(\alpha a)_{i j} ;(\alpha a)_{i j}=\sqrt{(\alpha a)_{i}(\alpha a)_{j}}\left(1-k_{i j}\right) \label{11.4a}$
$b_{m}=\sum_{i} y_{i} b_{i} \label{11.4b}$
where binary interaction parameters (kij) again play the important empirical role of helping to better fit experimental data. Due to the empirical character of these interaction parameters, kij’s calculated for PR EOS are unlikely to be the same as the kij’s calculated for SRK EOS for the same pair of molecules.
Over the years, these EOS have been tested, and some comparisons can be made. As an engineer, you have to be able to decide which EOS best fits your purposes.
Redlich Kwong EOS:
• Generally good for gas phase properties.
• Poor for liquid phase properties.
• Better when used in conjunction with a correlation for liquid phase behavior.
• Satisfactory for gas phase fugacity calculation at \(P_r < T_r/3\).
• Satisfactory for enthalpy departure and entropy departure calculations.
Soave Redlich Kwong & Peng Robinson EOS
• Serve similar functions as the Redlich Kwong EOS but require more parameters.
• PR obtains better liquid densities than SRK.
• Overall, PR does a better job (slightly) for gas and condensate systems than SRK. However, for polar systems, SRK always makes a better prediction, but in the petroleum engineering business we do not usually deal with those.
11.03: Critical Compressibility as a Measure of Goodness of an EOS
Some experimental values for critical compressibility (\(Z_c\)) factors are shown below:
• CO2 = 0.2744
• CH4 = 0.2862
• C2H6 = 0.2793
• nC5 = 0.2693
• nC6 = 0.2659
The values of critical compressibility factors shown here are relatively close to each other, but, in actuality, they are different. They are, in fact, substance-dependent. This is a striking finding if we recall our highly-praised Principle of Corresponding States. Didn’t we say that at the same reduced conditions, all substances “must” have, at the very least, the same compressibility factor, Z? Here is a case where we have different substances at the same corresponding states (Pr = Tr = 1, right at the critical point) but different “Z” values. After all, the Corresponding States Principle is not infallible (as Pitzer pointed out). As we recall, he proposed the introduction of a third parameter (the acentric factor) into the corresponding-states definition to alleviate these kinds of “problems.”
At the very least, we may say that the values of Zc (compressibility factor at the critical point) of different substances are “close enough” among themselves. That is, they are not “grossly different,” so as to say that the application of the two-parameter corresponding states principle would be outrageous at the critical point. The fact of the matter is, as a consequence of the Corresponding States Principle, all cubic EOS predict a “unique” and “universal” value of Z at the critical point, regardless of the substance. The list below tells us how they perform.
• Ideal EOS = 1.000
• vdW EOS = 0.375
• RK EOS = 0.333
• SRK EOS = 0.333
• PR EOS = 0.307
From the list above, we would have been expecting some kind of “average” Zc of 0.27 or so. But none of the equations of state we have studied is capable of predicting a value that low. The “best” job is done by PR EOS, which provides the “closest” match to the real values observed for most substances. This illustrates why the PR EOS performs somewhat better near critical conditions.
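This universality is easy to verify numerically. At $T_r = P_r = 1$, the parameters A and B take fixed values for each EOS ($\alpha = 1$ there), so the critical cubic can be solved once per equation. The sketch below does so by Newton-Raphson; because the tabulated EOS constants are rounded, the analytically triple critical root splits slightly, so only roughly two-decimal agreement should be expected.

```python
def zc_of_cubic(c2, c1, c0, z0=1.0):
    """Newton-Raphson root of z^3 + c2 z^2 + c1 z + c0 = 0 from guess z0."""
    z = z0
    for _ in range(200):
        f = z ** 3 + c2 * z ** 2 + c1 * z + c0
        fp = 3.0 * z ** 2 + 2.0 * c2 * z + c1
        if fp == 0.0 or abs(f) < 1e-14:
            break
        z -= f / fp
    return z

# Critical-point A and B are universal constants for each EOS:
A, B = 27.0 / 64.0, 1.0 / 8.0                     # vdW
zc_vdw = zc_of_cubic(-(1.0 + B), A, -A * B)
A, B = 0.427480, 0.086640                         # RK and SRK
zc_srk = zc_of_cubic(-1.0, A - B - B ** 2, -A * B)
A, B = 0.457240, 0.077800                         # PR
zc_pr = zc_of_cubic(-(1.0 - B), A - 3.0 * B ** 2 - 2.0 * B,
                    -(A * B - B ** 2 - B ** 3))
# Expected neighborhoods: 0.375 (vdW), 0.333 (RK/SRK), 0.307 (PR).
```

The computed values land near the tabulated ones, confirming that each EOS predicts a single, substance-independent Zc.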
11.04: Advantages of Using Cubic Equations of State
All cubic equations of state have their foundation in vdW EOS. The use of cubic equations of state has become widespread because of their advantages:
• Simplicity of application
• Only a few parameters need to be determined
• Low computational overhead is required to implement them. This was a critical issue in the early days of computing; it is not really an issue anymore. Nevertheless, this feature is still a “plus.”
The engineer using cubic equations of state must also be aware of the disadvantages that they all share. The most important one is the limited accuracy that they can provide, particularly for complex systems. In these cases, empirical adjustment through the use of the binary interaction parameters (kij) is essential.
Hopefully, we have convinced you that the use of cubic equations of state can represent a very meaningful and advantageous way of modeling the PVT behavior of petroleum fluids. What we need now are the tools that will allow us to get the information that we want out of them. Even though cubic equations of state are explicit in pressure, pressure is not the common unknown to be calculated in the typical problem. In the most common problem, pressure and temperature are known and we want either molar volume (or its reciprocal, molar density) or compressibility factor (the most likely case). Therefore, we are faced very often with the need to solve for the roots of a cubic expression. Here we present a number of approaches that may be followed.
Analytical scheme
Given the cubic polynomial with real coefficients: $x^3 + ax^2 + bx + c = 0$, the first step is to calculate the parameters:
$Q=\frac{a^{2}-3 b}{9} \label{11.5a}$
and
$R=\frac{2 a^{3}-9 a b+27 c}{54} \label{11.5b}$
Now let $M = R^2 - Q^3$ be the discriminant. We then consider the following cases:
1. If $M<0\left(R^{2}<Q^{3}\right)$, the polynomial has three real roots. For this case, compute $\theta=\arccos \left(\frac{R}{\sqrt{Q^{3}}}\right)$ and calculate the three distinct real roots as: $x_{1}=-\left(2 \sqrt{Q} \cos \frac{\theta}{3}\right)-\frac{a}{3} \label{11.6a}$ $x_{2}=-\left(2 \sqrt{Q} \cos \frac{\theta+2 \pi}{3}\right)-\frac{a}{3} \label{11.6b}$ $x_{3}=-\left(2 \sqrt{Q} \cos \frac{\theta-2 \pi}{3}\right)-\frac{a}{3} \label{11.6c}$
Note that $x_1$, $x_2$, $x_3$ are not given in any special order, and that $\theta$ has to be calculated in radians.
2. If $M>0\left(R^{2}>Q^{3}\right)$, the polynomial has only one real root. Compute: $S=\sqrt[3]{-R+\sqrt{M}} \label{11.7a}$ $T=\sqrt[3]{-R-\sqrt{M}} \label{11.7b}$ and calculate the real root as follows: $x_{1}=S+T-\frac{a}{3} \label{11.7c}$ Two complex roots (complex conjugates) may be found as well. However, they are of no interest for our purposes, and thus no formulas are provided.
Such formulas can be found in the following suggested readings:
• W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes in Fortran, 2nd Edition, Cambridge Univ. Press, (1992), p. 179.
• Spiegel, M., Liu, J., Mathematical handbook of formulas and tables, 2nd Edition, Schaum’s Outline Series, McGraw Hill, p.10.
Sometimes, the equations for $S$ and $T$ listed above cause problems while programming. This usually happens whenever the computer/calculator performs the cubic root of a negative quantity. If you want to avoid such a situation, you may compute $S’$ and $T’$ instead:
$S^{\prime}=-\operatorname{sign}(R) \sqrt[3]{\operatorname{abs}(R)+\sqrt{M}}$
$T^{\prime}=Q / S^{\prime}$
(making $T’=0$ when $S’=0$)
where:
$\operatorname{abs}(R)$ is the absolute value of $R$ and $\operatorname{sign}(R)$ is (+1) or (–1) if $R$ is positive or negative, respectively.
It may be defined as:
$\operatorname{sign}(R)=R / \operatorname{abs}(R)$
and then the real root is:
$x_{1}=S^{\prime}+T^{\prime}-\frac{a}{3}$
Keep in mind the following useful relationships among the roots of any cubic expression:
$x_{1}+x_{2}+x_{3}=-a \label{11.8a}$
$x_{1} x_{2}+x_{2} x_{3}+x_{3} x_{1}=+b \label{11.8b}$
$x_{1} x_{2} x_{3}=-c \label{11.8c}$
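The analytical scheme above, including the sign-safe $S'$, $T'$ variant for the single-root branch, can be sketched as follows.

```python
import math

def cubic_roots(a, b, c):
    """Real roots of x^3 + a x^2 + b x + c = 0 via the analytical scheme:
    trigonometric formulas when three real roots exist (M < 0), and the
    sign-safe S', T' variant otherwise."""
    Q = (a * a - 3.0 * b) / 9.0
    R = (2.0 * a ** 3 - 9.0 * a * b + 27.0 * c) / 54.0
    M = R * R - Q ** 3
    if M < 0.0:  # three distinct real roots (note Q > 0 here)
        theta = math.acos(R / math.sqrt(Q ** 3))
        return sorted(-2.0 * math.sqrt(Q) * math.cos((theta + k) / 3.0) - a / 3.0
                      for k in (0.0, 2.0 * math.pi, -2.0 * math.pi))
    # one real root; copysign avoids taking the cube root of a negative number
    S = -math.copysign((abs(R) + math.sqrt(M)) ** (1.0 / 3.0), R)
    T = Q / S if S != 0.0 else 0.0
    return [S + T - a / 3.0]
```

Running it on the worked examples below reproduces the quoted roots: $x^3-6x^2+11x-6=0$ yields 1, 2, 3, while $x^3+7x^2+49x+343=0$ yields the single real root $-7$.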
Test your understanding of cubic root calculations by analytical means by solving the following examples.
Exercise $1$
With general cubic expressions,
\begin{aligned} &x^{3}-6 x^{2}+11 x-6=0 \Rightarrow \\ &x_{1}=1 \\ &x_{2}=2 \\ &x_{3}=3 \end{aligned}
\begin{aligned} &x^{3}+7 x^{2}+49 x+343=0 \Rightarrow \\ &x_{1}=-7 \\ &x_{2}=0+7 i \\ &x_{3}=0-7 i \end{aligned}
\begin{aligned} &x^{3}+2 x^{2}+3 x+4=0 \Rightarrow \\ &x_{1}=-1.65063 \\ &x_{2}=-0.174685+1.54687 i \\ &x_{3}=-0.174685-1.54687 i \end{aligned}
with EOS in terms of volume ($v$), for a pure component,
$v^{3}-7.8693 v^{2}+13.3771 v-6.5354=0 \nonumber$
Only one real root (one phase) with
$v_{1}=5.7357 \nonumber$
Exercise $2$
$v^{3}-15.6368 v^{2}+30.315 v-14.8104=0$
Solution
Two possible phases (three real roots),
\begin{aligned} &v_{1}=0.807582 \text { (liquid phase) } \\ &v_{2}=1.36174 \text { (rejected) } \\ &v_{3}=13.4675 \text { (gas phase) } \end{aligned}
with EOS in terms of the compressibility factor (z), for a pure component,
$z^{3}-1.0595 z^{2}+0.2215 z-0.01317=0 \nonumber$
=> One phase:
$z=0.8045 \nonumber$
Exercise $3$
$z^{3}-z^{2}+0.089 z-0.0013=0$
Solution
Two possible phases,
\begin{aligned} &z_{\text {liquid }}=0.0183012 \\ &z_{x}=0.0786609 \text { (rejected) } \\ &z_{\text {vapor }}=0.903038 \end{aligned}
Numerical Scheme
The Newton-Raphson method provides a useful scheme for solving for a non-explicit variable from any form of equation (not only cubic ones). Newton-Raphson is an iterative procedure with fast convergence, although it is not always guaranteed to provide an answer: a first guess close enough to the actual answer must be supplied.
In solving for “x” in any equation of the type f(x)=0, the method provides a new estimate (new guess) closer to the actual answer, based on the previous estimate (old guess). This is written as follows:
$x_{n e w}=x_{o l d}-\frac{f\left(x_{o l d}\right)}{f^{\prime}\left(x_{o l d}\right)} \label{11.9}$
Considering a cubic equation $f(x)=x^{3}+a x^{2}+b x+c=0$, the previous equation takes the form:
$x_{n e w}=x_{o l d}-\frac{x_{o l d}^{3}+a x_{o l d}^{2}+b x_{o l d}+c}{3 x_{o l d}^{2}+2 a x_{o l d}+b} \label{11.10}$
The iterations continue until no significant improvement in “$x_{new}$” is achieved, i.e., $|x_{new} - x_{old}| <$ tolerance. An educated guess must be provided as the starting value for the iterations. If you are solving a cubic equation in Z (compressibility factor), it is usually recommended to take $Z = bP/RT$ as the starting guess for the compressibility of the liquid phase and $Z = 1$ for the vapor root.
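A sketch of the iteration in Equation \ref{11.10}:

```python
def cubic_newton(a, b, c, x0, tol=1e-12, max_iter=100):
    """Newton-Raphson for x^3 + a x^2 + b x + c = 0 from guess x0 (Eq. 11.10)."""
    x = x0
    for _ in range(max_iter):
        f = x ** 3 + a * x ** 2 + b * x + c
        fp = 3.0 * x ** 2 + 2.0 * a * x + b
        x_new = x - f / fp
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Applied to the compressibility cubic from the exercise above, $z^{3}-1.0595 z^{2}+0.2215 z-0.01317=0$, with the vapor guess $Z=1$, the iteration lands on $z \approx 0.8045$.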
Semi-analytical Scheme
If you used the previous numerical approach to calculate one of the roots of the cubic expression, the semi-analytical scheme can give you the other two real roots (when they exist). By using the relationships given before, with the value ‘x1’ as the root already known, the other two roots are calculated by solving the system of equations:
$x_{2}+x_{3}=-a-x_{1} \label{11.11a}$
$x_{2} x_{3}=-c / x_{1} \label{11.11b}$
which leads to a quadratic expression.
This procedure can be reduced to the following steps:
1. Let $x^3 + ax^2 + bx + c = 0$ be the original cubic polynomial and “$E$” the root which is already known ($x_1 = E$). Then, we may factorize such a cubic expression as: $(x-E)\left(x^{2}+F x+G\right)=0 \label{11.12a}$ where $F = a + E \label{11.12b}$ $G = – c / E \label{11.12c}$
2. Solve for $x_2$, $x_3$ by using the quadratic equation formulae, $x_1 = E \label{11.13a}$ $x_{2}=\frac{-F+\sqrt{F^{2}-4 G}}{2} \label{11.13b}$ $x_{3}=\frac{-F-\sqrt{F^{2}-4 G}}{2} \label{11.13c}$
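The deflation step in Equations \ref{11.12a}-\ref{11.13c} can be sketched as below, given one root already in hand (say, from Newton-Raphson):

```python
import math

def remaining_roots(a, c, known_root):
    """Given one real root E of x^3 + a x^2 + b x + c = 0, deflate to the
    quadratic x^2 + F x + G = 0 with F = a + E and G = -c / E (Eqs. 11.12)
    and solve it for the other two roots."""
    F = a + known_root
    G = -c / known_root
    disc = F * F - 4.0 * G
    if disc < 0.0:
        return []  # the other two roots are complex conjugates
    sq = math.sqrt(disc)
    return [(-F + sq) / 2.0, (-F - sq) / 2.0]
```

For $x^3-6x^2+11x-6=0$ with the known root $x_1=1$, this returns the remaining roots 2 and 3.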
11.06: Action Item
Problem Set
1. What is the meaning of Zc? (critical compressibility factor). Why do you think it is used as a measure of the performance of an EOS?
2. What do you think should be the value of “Z” at the critical point? Is it “one” (1)? Why? Is this to be expected?
Module Goal: To establish the basic framework for vapor-liquid equilibrium calculations.
Module Objective: To establish the connection between real production processes and vapor liquid equilibrium calculation.
12: Elementary Vapor-Liquid Equilibrium I
In the next series of modules, we are going to look at how we can apply what we have learned so far to perform vapor/liquid equilibria calculations.
1. First of all, we are going to review engineering systems and how the phenomenon of VLE is related to them,
2. Then we will proceed by looking at how to describe the problem itself,
3. We then will discuss the formulation of the problem,
4. And finally, we will discuss solution strategies.
These are the four main topics that we will look at in this module. As far as VLE is concerned, we can list a number of systems that are at the heart of petroleum fluid production that involve this phenomenon:
• Separators
• Reservoir
• Pipelines
• Wellbore
• LNG Processing
• NGL Processing
• Storage
• Oil and LNG Tankers.
Vapor/liquid equilibrium pertains to all aspects of petroleum production with which we are concerned. It is no wonder, then, that we devote a new module to the subject itself.
Consider the case of a typical transmission pipeline. As gas is injected at the inlet, the pressure drops continuously along the length of the pipeline due to friction. Even though we usually think of liquid as forming with increasing pressure (i.e., upon compression), we have to recall that the phenomenon of retrograde condensation (discussed in Module 4) takes place in hydrocarbon mixtures. Therefore, contrary to expectations, most single-phase natural gases yield liquid upon expansion. Thus, as pressure drops in the pipeline, liquid may drop out as the thermodynamic path crosses the dew point line and enters the phase envelope. In this case, what started as single-phase flow becomes two-phase flow within the system.
We also encounter this phenomenon in gas condensate reservoirs. Your initial reservoir conditions may be outside the phase envelope, but as you deplete the reservoir, your production path may take the system inside the two-phase region. In these previous two examples, the single-most important property with which you are concerned is the dew point. You want to know dew point, since you would like to know at which point of the pipeline or at which stage of production liquid may start to form.
We also may have an oil reservoir, where the initial pressure and temperature conditions place us in the single-phase liquid region. As you produce, you deplete the reservoir and enter the two-phase region by crossing the bubble point curve. At this point, we would like to know the bubble point of the system so that we may anticipate the appearance of a gas phase within an originally-all-liquid reservoir.
In all these cases, by taking a sample of the fluid to the lab, we may be able to find the composition of the fluid. Hence, in these kinds of problems, composition is usually known, and so are temperature and pressure. Your unknowns are the dew point or bubble point condition.
Suppose we are not interested in what is happening in the reservoir, but rather in what is happening at the surface. You would then like to know how much liquid or gas you will have in your separators. In this case, you are no longer interested in bubble points or dew points, but rather in the extent of the phases: how much liquid and how much gas the reservoir will be able to deliver at the surface. In this case, the composition of what is coming to your separator may be known, and the pressure and temperature of operation of each separation stage may be specified. We would want to know the quality and the quantity of what is coming out; that is, we need the compositions of the gas and oil that leave the separators and the flow rates of gas (MSCF/D) and oil (STB/D).
12.02: Types of VLE Problems
In a typical problem of liquid and vapor coexistence, we are usually required to know one or more of the following:
• The phase boundaries,
• The extent of each phase,
• The quality of each phase.
The main emphasis is on the quantitative prediction of the above. These three represent the three basic types of VLE problems. A more detailed description of each of them is given below.
1. Phase Boundary Determination Problem
These types of problems are either a bubble-point or a dew-point calculation. They are mathematically stated as follows:
• Bubble-point T calculation: Given liquid composition (xi) and pressure (P), determine the equilibrium temperature (T),
• Bubble-point P calculation: Given liquid composition (xi) and temperature (T), determine the equilibrium pressure (P),
• Dew-point T calculation: Given vapor composition (yi) and pressure (P), determine the equilibrium temperature (T),
• Dew-point P calculation: Given vapor composition (yi) and temperature (T), determine the equilibrium pressure (P).
2. Relative Phase Quantity Determination
In this type of problem, overall composition (zi), pressure (P), and temperature (T) are given, and the extent of the phases (molar fractions of gas and liquid) are required.
3. Phase Quality Determination
In this type of problem, overall composition (zi), pressure (P), and temperature (T) are given, and the composition of the liquid and vapor phases is required.
Problems of types 2 and 3 are collectively referred to as flash calculation problems. All three are problems that we encounter in production operations as petroleum engineers. Our focus now is on solving these sorts of problems. We want to use a predictive approach to do so. This is, we want to use mathematical models — the most economical and convenient approach — to accomplish the task.
One of the assumptions that we are making here is that of equilibrium. We assume that at all times, vapor and liquid that are coexisting together are in equilibrium. Are they really in equilibrium? No! Nevertheless, the state of current knowledge requires us to assume equilibrium so as to be able to proceed.
We assume that the system is at steady state and at a state of equilibrium. Adopting these two assumptions is essential to developing the equations that we use to solve these problems. These assumptions are convenient for modeling and have proven useful in representing the real phenomena. In conclusion, we claim that the system maintains a state that resembles equilibrium and does not depart from it greatly. And what does steady state mean? Simply stated, we say that a system is at steady state when whatever comes into the system goes out; that is, no accumulation takes place within the system.
Let us consider the equilibrium cell shown in Figure 12.1. “F” moles of a feed enter our equilibrium cell with a composition “zi”, where “nc” is the number of components that we have in the mixture. A flash vaporization takes place at a given pressure and temperature, and two streams come out: “V” moles of a vapor of composition “yi” and “L” moles of a liquid of composition “xi”. At steady state, a simple overall balance yields:
$F=V+L \label{12.1a}$
Now we define the fractions of gas and liquid to be, respectively:
$\alpha_{g}=\frac{V}{F} \label{12.1b}$
and
$\alpha_{l}=\frac{L}{F} \label{12.1c}$
Therefore, if we divide equation (12.1a) by “F”, we get:
$\alpha_{g}+\alpha_{l}=1 \label{12.2}$
The same steady state assumption applies for the mass of each component separately. Here we revisit a concept we used in Module 5, when we studied the lever rule. At that point, we said that the number of moles of a component “i” per mole of mixture in the liquid phase is given by the product “$x_{i} \alpha_{l}$”, while the number of moles of “i” per mole of mixture in the gas is given by “$y_{i} \alpha_{g}$”. Since there are “zi” moles of component “i” per mole of mixture coming into the system, the conservation of each component in the system imposes:
$z_{i}=x_{i} \alpha_{l}+y_{i} \alpha_{g} \quad \text { where } i=1,2, \ldots, n_{c} \label{12.3}$
Equation (12.3) is true for each of the components in the system. Equation (12.2) can be introduced into equation (12.3) to yield:
$z_{i}=x_{i}\left(1-\alpha_{g}\right)+y_{i} \alpha_{g} \label{12.4}$
One of the concepts that we normally use in vapor-liquid equilibria is that of the equilibrium ratio, Ki. In fact, most of the computations of phase behavior of natural gas mixtures are carried out through the concept of the equilibrium ratio. By definition, the equilibrium ratio of a component “i” in a vapor-liquid mixture is defined as the ratio of the molar composition of that component in the vapor phase to that in the liquid phase,
$K_{i}=\frac{y_{i}}{x_{i}} \label{12.5}$
In earlier literature, this concept was referred to as the equilibrium constant. In actuality, Ki is not constant but a function of the pressure, temperature, and composition of the system. However, equilibrium ratios can be fairly independent of composition when the pressure and temperature conditions are far from critical.
Therefore, today we refer to it as the vapor-liquid equilibrium ratio, Ki. We can introduce this concept into the balance in (12.4), as shown:
$z_{i}=\frac{y_{i}}{K_{i}}\left(1-\alpha_{g}\right)+y_{i} \alpha_{g} \label{12.6}$
Now, solving for yi,
$y_{i}=\frac{z_{i} K_{i}}{1+\alpha_{g}\left(K_{i}-1\right)} \label{12.7}$
A constraint that mole fractions must satisfy is that they must add up to unity. Since we solved for yi, we can impose that the summation of all molar vapor fractions must be equal to one, i.e.,
$\sum_{i=1}^{n_{c}} y_{i}=1 \label{12.8}$
If we now substitute (12.7) into (12.8), we get:
$\sum_{i=1}^{n_{c}} \frac{z_{i} K_{i}}{1+\alpha_{g}\left(K_{i}-1\right)}=1 \label{12.9}$
This equation is important for us; we call it an objective function because we can use it as the starting point for solving the vapor-liquid equilibrium problems we have posed.
However, as you may be thinking right now, this is not the only choice that we have for an objective function. In fact, we may obtain another objective function if we repeat the previous steps, while solving instead for xi. In this case, we may introduce the equilibrium ratio of (12.5) into (12.4) as follows:
$z_{i}=x_{i}\left(1-\alpha_{g}\right)+K_{i} x_{i} \alpha_{g} \label{12.10}$
We now solve for xi,
$x_{i}=\frac{z_{i}}{1+\alpha_{g}\left(K_{i}-1\right)} \label{12.11}$
If we apply the constraint that all mole fractions must add up to one,
$\sum_{i=1}^{n_{c}} \frac{z_{i}}{1+\alpha_{g}\left(K_{i}-1\right)}=1 \label{12.12}$
Both (12.9) and (12.12) are plausible objective functions. Either of them allows us to solve the flash problem that we are dealing with. The variables that make up both equations are:
nc = Number of components,
zi = Overall composition, or composition of the feed,
Ki = Equilibrium ratios of each of the components of the mixture,
αg = Vapor molar fraction in the system.
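To make the objective functions concrete, the sketch below evaluates the left-hand sides of (12.9) and (12.12) for a hypothetical binary with made-up equilibrium ratios; at the solution value of αg, both sums equal one.

```python
def obj_vapor(z, K, ag):
    """LHS of Eq. (12.9): sum of y_i = z_i K_i / (1 + ag (K_i - 1));
    equals 1 at a consistent flash solution."""
    return sum(zi * Ki / (1.0 + ag * (Ki - 1.0)) for zi, Ki in zip(z, K))

def obj_liquid(z, K, ag):
    """LHS of Eq. (12.12): sum of x_i = z_i / (1 + ag (K_i - 1))."""
    return sum(zi / (1.0 + ag * (Ki - 1.0)) for zi, Ki in zip(z, K))
```

For the hypothetical equimolar binary with $K_1=2$ and $K_2=0.5$, both objective functions are satisfied at $\alpha_g=0.5$. Notice also that (12.9) is automatically satisfied at $\alpha_g=1$ for any feed (each term reduces to $z_i$), a hint of why, as the closing paragraph suggests, neither equation is the preferred working form.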
What is it that we are looking for? Go back and look at the types of VLE problems that we would like to solve, as we presented them in the previous section. If we are interested in solving the flash problem, we want to know how much liquid and gas we will have inside the flash equilibrium cell. That is, given a liquid-vapor mixture of composition zi, and nc number of components, what percent of the total number of moles is liquid, and what percent is vapor? How do we split it? In this case, we would like to come up with values for αl and αg, respectively.
Equations (12.9) and (12.12) tell us that if we are able to come up with the proper values for the equilibrium ratios, Ki, which are functions of the pressure, temperature, and composition of the system, the only unknown left to solve for would be αg — exactly what we want!
Well, do not rush. We would have to come up with a way of calculating Ki’s first, and this may not be a trivial task. For the time being, let us say we “know” Ki’s. Two questions remain unanswered:
1. First, is it “better” to solve the problem using equation (12.9) or (12.12)? (Recall, either of them would lead us to the answer!).
2. Second, how do we solve for αg? For a complex mixture of many components, “αg” cannot be calculated explicitly.
We will address both of these questions in the next module. As for now, let us give you a hint: we will not use either equation (12.9) or (12.12) to solve the flash problem!
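Although we will not use them directly, equations (12.9) and (12.12) are easy to evaluate once the Ki's are assumed known. A minimal Python sketch follows; the feed composition and K-values are illustrative assumptions, not data from the text:

```python
# Illustrative feed composition and assumed K-values (not from the text)
z = [0.5, 0.3, 0.2]
K = [2.5, 1.1, 0.2]

def f_vapor(ag, z, K):
    """Objective function (12.9): sum of vapor mole fractions minus one."""
    return sum(zi * Ki / (1.0 + ag * (Ki - 1.0)) for zi, Ki in zip(z, K)) - 1.0

def f_liquid(ag, z, K):
    """Objective function (12.12): sum of liquid mole fractions minus one."""
    return sum(zi / (1.0 + ag * (Ki - 1.0)) for zi, Ki in zip(z, K)) - 1.0
```

Notice that f_liquid vanishes trivially at αg = 0 and f_vapor at αg = 1 for any feed, since the sums reduce to Σzi = 1 there — one hint that these raw forms can mislead a root-finder.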
Contributors
• Prof. Michael Adewumi (The Pennsylvania State University). Some or all of the content of this module was taken from Penn State's College of Earth and Mineral Sciences' OER Initiative.
12.04: Action Item
Answer the following problems, and submit your answers to the drop box in ANGEL that has been created for this module.
Please note:
• Your answers must be submitted in the form of a Microsoft Word document.
• Include your Penn State Access Account user ID in the name of your file (for example, "module2_abc123.doc").
• The due date for this assignment will be sent to the class by e-mail in ANGEL.
• Your grade for the assignment will appear in the drop box approximately one week after the due date.
• You can access the drop box for this module in ANGEL by clicking on the Lessons tab, and then locating the drop box on the list that appears.
Problem Set
1. You are given two different phase envelopes (VLE region) that represent two different gas reservoir fluids. The first of them has a larger PT envelope covering larger P,T ranges. The other covers a fairly narrow P,T range. What do you conclude? Speculate on the type of gas reservoir for each case. Justify your answer.
2. In order to design the appropriate production scheme for a gas reservoir recently discovered, what type of VLE calculation do you think is more valuable? Explain in detail.
Contributors and Attributions
• Prof. Michael Adewumi (The Pennsylvania State University). Some or all of the content of this module was taken from Penn State's College of Earth and Mineral Sciences' OER Initiative.
Module Goal: To establish the basic framework for vapor-liquid equilibrium calculations.
Module Objective: To formulate the basic governing equations for performing flash calculations.
13: Elementary Vapor-Liquid Equilibrium II
In a previous module we derived two different objective functions for the purpose of solving the flash equilibrium problem. Let us take a closer look at these equations:
$F(\alpha_g) = \sum_{i=1}^{n_c} \frac{z_i K_i}{1+\alpha_g (K_i - 1)} - 1 = 0 \label{13.1a}$
$F(\alpha_g) = \sum_{i=1}^{n_c} \frac{z_i}{1+\alpha_g (K_i - 1)} - 1 = 0 \label{13.1b}$
These equations arise from simple mole balances and the concept of equilibrium ratios. As we discussed before, if we are given {zi; i = 1,2,…,n} and, if for some reason, let us say, we are able to obtain all the equilibrium ratios {Ki; i = 1,2,…,n}; the only unknown in these objective functions would be the vapor fraction ‘αg’.
Once we are able to solve for this αg, we will have no problem applying any combination of equations (12.5), (12.7), and (12.11) to solve for all the vapor and liquid compositions at equilibrium {yi, xi}:
$K_i = \frac{y_i}{x_i}$ (12.5)
$y_i = \frac{z_i K_i}{1+\alpha_g (K_i - 1)}$ (12.7)
$x_i = \frac{z_i}{1+\alpha_g (K_i - 1)}$ (12.11)
With all this information, the VLE problem would be completely solved. With the compositional information and the use of suitable equations of state and correlations, we can then find all other related properties, such as densities, viscosities, molecular weights, etc. This is why we call these the objective functions; once we have solved them, we have achieved the objective of the VLE calculation.
Of course one question remains unanswered — how do you solve for that ‘αg’, which is buried inside those expressions? And secondly, which of the two objective functions that are available to us shall we use? It turns out that the answers to these questions are tied to one another. In fact, the proper objective function that we shall use to solve for ‘αg’ is the same equation that simplifies the process of solving for that unknown.
To come up with these answers, the first thing that we have to notice is that both expressions are non-linear in αg. This means that we cannot express ‘αg’ explicitly as a function of the other variables. What do we use to solve equations that are non-linear in one variable? Doesn’t this ring a bell? We apply iterative techniques. And the classical iterative technique is the Newton-Raphson procedure. Now we can provide an answer to both of the questions that we just asked.
A distinctive characteristic of any Newton-Raphson procedure is that its success depends greatly upon the choice of the initial guess for the variable considered. In fact, it is commonly said that for Newton-Raphson to succeed, the initial guess must be as close as possible to the real solution. This weakness becomes worse when dealing with non-monotonic functions. In a monotonic function, the derivative at every point has the same sign — the function either increases or decreases monotonically. For Newton-Raphson, this means that there are neither valleys nor peaks that could lead the procedure to false solutions. If you apply Newton-Raphson to a function that is monotonic and continuous at every single point of the domain, the success of the procedure does not depend on the initial guess: no matter where you start, you will converge to the unique solution. It might take time, but you will get there.
Why does this matter when dealing with equations (13.1)? The fact of the matter is that equations (13.1) are not monotonic, and this does not make things easier. If, as an exercise, you plot them as a function of αg or take the derivative, you will realize that both functions may change the sign of their first derivatives for different values of αg.
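This sign change is easy to exhibit numerically. In the sketch below (the binary mixture values are assumed for illustration), the derivative of objective function (13.1a) is evaluated near the two ends of the physical interval and is found to have opposite signs:

```python
# Assumed two-component mixture (illustrative values only)
z = [0.5, 0.5]
K = [10.0, 0.1]

def df_vapor(ag):
    """Derivative of objective function (13.1a) with respect to alpha_g."""
    return -sum(zi * Ki * (Ki - 1.0) / (1.0 + ag * (Ki - 1.0)) ** 2
                for zi, Ki in zip(z, K))

# The derivative changes sign inside (0, 1): the function is not monotonic
signs = [df_vapor(a) > 0.0 for a in (0.0, 0.99)]
```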
This poses a problem, obviously. You will not get a unique solution by applying Newton-Raphson, and you might end up with the wrong solution. Rachford and Rice (1952) recognized this problem and came up with a suggestion. They proposed a new objective function, based on equations (13.1), which simplifies the application of the Newton-Raphson procedure.
They combined equations (13.1) by subtraction to yield:
$\sum_{i=1}^{n_c} \frac{z_i K_i}{1+\alpha_g (K_i - 1)} - \sum_{i=1}^{n_c} \frac{z_i}{1+\alpha_g (K_i - 1)} = \sum_{i=1}^{n_c} \frac{z_i (K_i - 1)}{1+\alpha_g (K_i - 1)} = 0 \label{13.2}$
Hence, the new objective function becomes:
$F(\alpha_g) = \sum_{i=1}^{n_c} \frac{z_i (K_i - 1)}{1+\alpha_g (K_i - 1)} = 0 \label{13.3}$
Equation (13.3) is known in the literature as the Rachford-Rice Objective Function. Rachford and Rice combined two very well known objective functions into a single objective function. Are there advantages to this “new” approach?
The wonderful news is that equation (13.3) is monotonic. The implication of this is that equation (13.3) is better suited for Newton-Raphson application than equations (13.1). How do you demonstrate the monotonic character of the Rachford and Rice objective function? To do this, we take the first derivative of the function:
$\frac{dF}{d\alpha_g} = -\sum_{i=1}^{n_c} \frac{z_i (K_i - 1)^2}{\left[1+\alpha_g (K_i - 1)\right]^2} \label{13.4}$
Every item within the summation sign is positive — this is guaranteed by the squares in the numerator and the denominator and the fact that all compositions are positive. Hence, the derivative expression in (13.4) has no choice but to be negative, and the Rachford-Rice Objective function has been proven to be a monotonically decreasing function. With this approach, Rachford-Rice removed a major headache from the Vapor-Liquid Equilibria problem.
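The negativity of the derivative can be spot-checked numerically; a short sketch with an assumed feed (illustrative values, not from the text):

```python
# Assumed feed and K-values, for illustration only
z = [0.5, 0.3, 0.2]
K = [2.5, 1.1, 0.2]

def dF(ag):
    """Rachford-Rice derivative (13.4): every summand is positive, so dF < 0."""
    return -sum(zi * (Ki - 1.0) ** 2 / (1.0 + ag * (Ki - 1.0)) ** 2
                for zi, Ki in zip(z, K))
```

Sampling dF anywhere on [0, 1] returns a strictly negative value, as the squared numerator and denominator guarantee.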
A remaining weakness of the Rachford-Rice objective function is that, although monotonic, it is not continuous at all points of the domain. By inspection, you can see that the function has ‘n’ singularities (as many singularities as components in the mixture), because it becomes singular at values of ‘αg’ equal to:
$\alpha_g = \frac{1}{1 - K_i}, \quad i = 1, 2, \ldots, n_c \label{13.5}$
Hence, you may still face convergence problems if the procedure crosses any of the singularities. If, by any means, one is able to keep the Newton-Raphson procedure within values of αg where a physically meaningful solution is possible (within the two asymptotes where the values 0 < αg < 1 are found), the monotonically decreasing feature of the Rachford-Rice equation would guarantee convergence.
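The location of these asymptotes is easy to compute from (13.5). A sketch with assumed K-values, showing that the poles for Ki > 1 fall below zero and those for Ki < 1 fall above one, so the physical window 0 < αg < 1 is pole-free:

```python
# Assumed K-values (illustrative); one singularity per component, equation (13.5)
K = [10.0, 2.5, 0.8, 0.1]
poles = [1.0 / (1.0 - Ki) for Ki in K]

# Poles for Ki > 1 are negative; poles for Ki < 1 exceed 1,
# so the physically meaningful interval (0, 1) contains no singularity.
lower = max(p for p in poles if p < 0.0)  # nearest asymptote below 0
upper = min(p for p in poles if p > 1.0)  # nearest asymptote above 1
```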
Contributors and Attributions
• Prof. Michael Adewumi (The Pennsylvania State University). Some or all of the content of this module was taken from Penn State's College of Earth and Mineral Sciences' OER Initiative.
We have seen that from a molar material balance applied to a two-phase system in equilibrium, and the definition of Ki, we can derive the Rachford and Rice Objective Function:
$F(\alpha_g) = \sum_{i=1}^{n_c} \frac{z_i (K_i - 1)}{1+\alpha_g (K_i - 1)} = 0$ (13.3)
Equation (13.3) is a non-linear equation in one variable, and the Newton-Raphson procedure is usually implemented to solve it. In general, Newton-Raphson is an iterative procedure with a fast rate of convergence. The method calculates a new estimate, αgnew, which is closer to the real answer than the previous guess, αgold, as follows:
$\alpha_g^{new} = \alpha_g^{old} - \frac{F(\alpha_g^{old})}{F'(\alpha_g^{old})} \label{13.6}$
Substituting (13.3) and (13.4) into (13.6),
$\alpha_g^{new} = \alpha_g^{old} + \frac{\displaystyle\sum_{i=1}^{n_c} \frac{z_i (K_i - 1)}{1+\alpha_g^{old} (K_i - 1)}}{\displaystyle\sum_{i=1}^{n_c} \frac{z_i (K_i - 1)^2}{\left[1+\alpha_g^{old} (K_i - 1)\right]^2}} \label{13.7}$
In this iterative scheme, convergence is achieved when
$\left| \alpha_g^{new} - \alpha_g^{old} \right| < \varepsilon \label{13.8}$
where ε is a small number ($\varepsilon = 1.0 \times 10^{-9}$ is usually adequate). After solving for αg, the liquid molar fraction and the composition of each of the phases can be calculated as follows:
Liquid Molar Fraction: $\alpha_l = 1 - \alpha_g \label{13.9a}$
Percentage of Liquid: $\%L = \alpha_l \times 100 \label{13.9b}$
Percentage of Vapor: $\%V = \alpha_g \times 100 \label{13.9c}$
Vapor Phase Composition: $y_i = \frac{z_i K_i}{1+\alpha_g (K_i - 1)}$ (12.7)
Liquid Phase Composition: $x_i = \frac{z_i}{1+\alpha_g (K_i - 1)}$ (12.11)
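The complete procedure — Newton-Raphson on the Rachford-Rice function, followed by the back-substitutions above — can be sketched in a few lines of Python. The feed and K-values below are assumed for illustration:

```python
def flash(z, K, tol=1.0e-9, max_iter=100):
    """Newton-Raphson on the Rachford-Rice function (13.3), update (13.6)/(13.7)."""
    F  = lambda a: sum(zi * (Ki - 1.0) / (1.0 + a * (Ki - 1.0))
                       for zi, Ki in zip(z, K))
    dF = lambda a: -sum(zi * (Ki - 1.0) ** 2 / (1.0 + a * (Ki - 1.0)) ** 2
                        for zi, Ki in zip(z, K))
    ag = 0.5  # start inside the physically meaningful window (0, 1)
    for _ in range(max_iter):
        ag_new = ag - F(ag) / dF(ag)
        converged = abs(ag_new - ag) < tol   # convergence test (13.8)
        ag = ag_new
        if converged:
            break
    al = 1.0 - ag                                               # (13.9a)
    x = [zi / (1.0 + ag * (Ki - 1.0)) for zi, Ki in zip(z, K)]  # (12.11)
    y = [Ki * xi for Ki, xi in zip(K, x)]                       # y_i = K_i x_i
    return ag, al, x, y

# Assumed binary feed and K-values (illustrative)
ag, al, x, y = flash([0.4, 0.6], [10.0, 0.1])
```

Starting at αg = 0.5 keeps the iterates inside the window between the two asymptotes, and the returned compositions satisfy both the summation constraints and the component mole balances.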
Contributors and Attributions
• Prof. Michael Adewumi (The Pennsylvania State University). Some or all of the content of this module was taken from Penn State's College of Earth and Mineral Sciences' OER Initiative.
13.03: Do We Really Know Ki
In the previous development, we made one crucial assumption. We assumed that, somehow, we knew all the equilibrium ratios. The fact is, however, that we usually don’t. If we do not know all equilibrium ratios, then all of the previous discussion is meaningless. So far, the only conclusion we can draw is that if we happen to know Ki’s, the VLE problem is solvable.
The Ki value of each component in a real hydrocarbon mixture is a function of the pressure, temperature, and also of the composition of each of the phases. Since the compositions of the phases are not known beforehand, the equilibrium ratios are not known, either. If they were known, the VLE calculation would be performed in a straightforward manner. This is because once the equilibrium ratios of each component of the mixture are known for the given pressure and temperature of the system, both gas and liquid molar fractions, αg and αl, can be calculated by solving the Rachford-Rice Objective Function.
Nevertheless, the good news is that sometimes Ki’s are fairly independent of the phase’s composition. This is true at pressure and temperature conditions away from the critical point of the mixture. Therefore, numerous correlations have been developed throughout the years to estimate the values of Ki for each hydrocarbon component as a function of the pressure and temperature of the system.
To illustrate how Ki may be calculated as a function of pressure and temperature, let us take the case of an ideal mixture. For a mixture to behave ideally, it must be far removed from critical conditions. The fact of the matter is, in an ideal mixture, the partial pressure of a component in the vapor phase (pi) is proportional to the vapor pressure (Psat) of that component when in its pure form, at the given temperature. The constant of proportionality is the molar fraction of that component in the liquid (xi). Then, we have:
$p_i = x_i P_i^{sat} \label{13.10}$
Equation (13.10) is known as Raoult’s law. Additionally, if the vapor phase behaves ideally, Dalton’s law of partial pressures applies. Dalton’s law of partial pressures says that the total pressure in a vapor mixture is equal to the sum of the individual contributions (partial pressures) of each component. The partial pressure of each component is a function of the composition of that component in the vapor phase:
$p_i = y_i P \label{13.11}$
Equating equations (13.10) and (13.11),
$x_i P_i^{sat} = y_i P \label{13.12}$
We rearrange equation (13.12) to show:
$\frac{y_i}{x_i} = \frac{P_i^{sat}}{P} \label{13.13}$
If we recall the definition of equilibrium ratios, $K_i = y_i / x_i$, we readily see:
$K_i = \frac{P_i^{sat}}{P} \label{13.14}$
Since the vapor pressure of a pure substance (Psat) is a function of temperature, we have just shown with equation (13.14) that the equilibrium ratios Ki are functions of pressure and temperature — and not of composition — when we are dealing with ideal substances. Vapor pressure can be calculated by a correlation, such as that of Lee and Kesler.
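As a toy illustration of equation (13.14): the vapor-pressure numbers below are hypothetical values assumed for the sake of the example, not data from the text:

```python
# Hypothetical pure-component vapor pressures at the system temperature (psia);
# these numbers are assumptions for illustration, not data from the text.
P_sat = {"propane": 190.0, "n-butane": 52.0, "n-pentane": 15.5}
P = 100.0  # system pressure, psia

# Equation (13.14): for an ideal mixture, Ki depends only on P and T
K = {name: ps / P for name, ps in P_sat.items()}
```

Light components (Psat > P) get Ki > 1 and concentrate in the vapor, while heavy components get Ki < 1 and concentrate in the liquid.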
The estimation of equilibrium ratios (Ki) has been a very intensely researched subject in vapor-liquid equilibria. A number of methods have been proposed in the literature. In the early years, the most common way of estimating equilibrium ratios was with the aid of charts and graphs that provided Ki values as a function of pressure and temperature for various components. Charts provided better estimations of Ki’s than what came from the direct application of Raoult-Dalton’s derivation (equation 13.14). However, the compositional dependency could not be fully captured by the use of charts. Charts remained popular until the advent of computers, when the application of more rigorous thermodynamic models became possible.
Most of the Ki-charts available have been represented by empirical mathematical correlations to make them amenable for computer calculations. You may find a large number of correlations in the literature that would allow you to estimate Ki’s for a range of conditions. A very popular empirical correlation that is very often used in the petroleum and natural gas industry is Wilson’s empirical correlation. This correlation gives the value of Ki as a function of reduced conditions (Pri, Tri: reduced pressure and temperature respectively) and Pitzer’s acentric factor and is written as:
$K_i = \frac{1}{P_{ri}} \exp\left[5.37\left(1+\omega_i\right)\left(1 - \frac{1}{T_{ri}}\right)\right] \label{13.15}$
Wilson’s correlation is based on Raoult-Dalton’s derivation (equation 13.14). Therefore, it does not provide any compositional dependency for Ki’s and, as such, it is only applicable at low pressures (away from the critical conditions). We have included this correlation not because of its accuracy, but because it will become the initial guess that is needed to start the Ki-prediction procedure, which we will develop later.
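Wilson's correlation (13.15) is compact enough to sketch directly; note that at Pri = Tri = 1 it returns Ki = 1, consistent with vapor and liquid becoming indistinguishable at the critical point:

```python
import math

def wilson_K(Pr, Tr, omega):
    """Wilson's correlation (13.15): Ki from reduced P, T and acentric factor."""
    return math.exp(5.37 * (1.0 + omega) * (1.0 - 1.0 / Tr)) / Pr
```

Because the composition never enters, the same (Pr, Tr, ω) always gives the same Ki — which is precisely why the correlation is only an initial guess away from critical conditions.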
In the following modules, we will develop a rigorous thermodynamic model for predicting equilibrium ratios. As you may anticipate, we will be taking advantage of what we have learned about equations of state in order to build up a phase behavior predictor model. This approach is known as the equation of state approach, or the “fugacity” approach. Even though the “fugacity” approach is the one that we will be covering in detail, you may at some point encounter the fact that equilibrium ratios can also be estimated by using the solution theory or “activity” approach. We will not be discussing the latter, as the former represents the most popular Ki-prediction method in the petroleum and natural gas business.
In order to guarantee a complete understanding of the thermodynamic model which we are about to apply, we need to review the basic concepts of classical thermodynamics. This is the goal for the next couple of modules (Modules 14 through 16). After all of the thermodynamic tools have been given to the reader, we will resume our discussion of Vapor-Liquid Equilibria (Module 17), along with discussing how equations of state come into the picture. Our final goal will be to tie the equilibrium ratio, Ki, to the thermodynamic concepts of chemical potential, fugacity, and equilibrium.
Contributors and Attributions
• Prof. Michael Adewumi (The Pennsylvania State University). Some or all of the content of this module was taken from Penn State's College of Earth and Mineral Sciences' OER Initiative.
13.04: Action Item
Problem Set
1. In the older literature, “Ki” were referred to as the “equilibrium constants.” Now, they are called the “equilibrium ratios.” Why should that make a difference?
2. In the analysis of the Rachford-Rice Objective Function, we assumed that there was only one unknown: the gas molar fraction (αg). Is this strictly true?
3. Warren and Adewumi (1992) derived the following analytical expression for flash calculations for a binary mixture:
Please demonstrate that this expression is true.
Contributors and Attributions
• Prof. Michael Adewumi (The Pennsylvania State University). Some or all of the content of this module was taken from Penn State's College of Earth and Mineral Sciences' OER Initiative.
Learning Objectives
• Module Goal: To establish the mathematical framework for thermodynamics of phase equilibrium.
• Module Objective: To highlight the importance of thermodynamic functions as functions of state.
The goal of this module, combined with all the background that we already have built on fluid behavior, is to establish a foundation for us to cultivate a culture of the appropriate utilization of thermodynamics.
14: Thermodynamic Tools I
Thermodynamics plays an important role in science and engineering. Most physical processes of engineering interest operate on thermodynamic principles. Thermodynamics is the science dealing exclusively with the principles of energy conversion. This message is conveyed by the word itself: “thermo” means “heat” — a manifestation of energy — and “dynamics” deals with “motion.” Oftentimes, we do not use thermodynamics as much as we should. This is either because we do not know how useful a tool it can be, or simply because we do not know how to use it.
The basic ingredients for the utilization of thermodynamics are:
• The ability to identify and define the system that best characterizes a given process
• The availability of relevant information
• Sound engineering judgment
What kinds of problems can be solved with thermodynamics? There are three general classes of thermodynamic problems.
1. System property variation determination. In this case, we are given a process that takes place under certain known constraints. We are required to determine how the system properties vary.
2. Interactions that cause change. In these kinds of problems, the changes desired in the system properties are prescribed. We are required to determine the amount of external interactions that are needed to cause these changes — i.e., how to vary constraints to obtain the required final system state.
3. The best-path problem. Here we are given a system with constraints and a desired variation. We are required to determine the best way to accomplish the change — i.e., the best way to reach a goal.
In science and engineering, mathematical rigor is of essence. The next sections will take us through the mathematics of thermodynamics.
Contributors and Attributions
• Prof. Michael Adewumi (The Pennsylvania State University). Some or all of the content of this module was taken from Penn State's College of Earth and Mineral Sciences' OER Initiative.
14.02: Basic Definitions
In thermodynamics, a system is a spatial domain bounded for the purpose of describing a problem; while the surroundings are the entire spatial domain outside of the system. The communication between them is established through the boundaries of the system. The system and the surroundings make up the universe.
The system and the surroundings interact with each other. As we discussed above, one type of thermodynamic problem is that of predicting changes in a system due to interactions with its surroundings. A system is open if it can exchange mass with the surroundings, and closed if it does not exchange mass with the surroundings. A system is adiabatic if it does not exchange heat energy with the surroundings. We call a system isolated if there is neither heat nor mass crossing its boundaries.
Thermodynamic properties can be divided into two general classes: intensive and extensive properties. An intensive property is one whose value is independent of the size, extent, or mass of the system; examples include pressure, temperature, and density. By contrast, the value of an extensive property changes directly with the mass. Mass and volume are examples of extensive properties. Extensive properties per unit mass, such as specific volume, are intensive properties.
A system is homogeneous if it has uniform properties throughout, i.e. a property such as density has the same value from point to point in a macroscopic sense. A phase is defined as a quantity of matter that is homogeneous throughout. Hence, a homogeneous system is, essentially, a single-phase system. A heterogeneous system is one with non-uniform properties, and hence, is made up of phases which can be distinguished from one another by the presence of interfaces.
The state is the thermodynamic coordinate of the system, specified by a number of intensive variables. The degree of freedom is the number of intensive variables needed to define the state of the system. State functions are those whose changes depend on their end states only and are independent of the path between them.
A process is the series of successive, intermediate states that the system goes through in order to go from an initial to a final state. A process is isothermal or isobaric if the temperatures or pressures of all successive steps are the same, respectively. A reversible process is one for which the exchange of energy between the system and its surroundings takes place under vanishing gradients or driving forces; that is, the properties of the system and surroundings are balanced. In a reversible process, each step of the process can be “reversed” and the original states of the system and surroundings can be restored. Any process that does not take place under infinitesimal gradients is irreversible. Strictly speaking, a reversible process is an idealization.
Contributors and Attributions
• Prof. Michael Adewumi (The Pennsylvania State University). Some or all of the content of this module was taken from Penn State's College of Earth and Mineral Sciences' OER Initiative.
Thermodynamics cannot tell us about the rate (kinetics) of a process, but it can tell us whether or not a process is possible. For this, we make use of the first and second laws of thermodynamics.
From our basic courses in thermodynamics, we recall that the first law of thermodynamics for a closed system is written as follows:
$dU = \delta Q - \delta W \label{14.1}$
where,
U = internal energy,
Q = heat added to or extracted from the system,
W = work done by the system on the surroundings.
If we want to improve the definition of the first law of thermodynamics stated in (14.1), we need to come up with expressions for the amounts of work and heat. The amount of work needed to accomplish a reversible process is given by the expression:
$\delta W = P \, dV \label{14.2}$
where:
P = pressure,
V = volume.
Additionally, we can compute the heat required to accomplish a reversible process by virtue of the second law of thermodynamics:
$\delta Q = T \, dS \label{14.3}$
where:
T = temperature,
S = entropy.
We substitute (14.2) and (14.3) into (14.1) to get:
dU = T dS – P dV (14.4)
which is the fundamental thermodynamic relationship used to compute changes in Internal Energy (U) for a closed system. This is only a restatement of the first law of thermodynamics.
Although equations (14.2) and (14.3) are applicable strictly to reversible processes, equation (14.4) is quite general and does not have such a constraint. Equation (14.4) applies to reversible and irreversible processes. Are you surprised by this? Okay, it is about time that we start to get a feeling for the implications of the fact that most thermodynamic properties are state functions. Internal energy (U) is a state function, and as such, its changes (dU) do not depend on the path that was taken (reversible/irreversible), but on the end points of the process. Hence, if taking any path to compute “dU” is okay, why not take the reversible path? We do not have nice, explicit equations to describe work and heat for irreversible processes. Why should we bother? Generally speaking, “reversibility” is a thermodynamic trick that helps us to get away with many thermodynamic manipulations by taking advantage of the properties of state functions.
The formal, fundamental thermodynamic definition of Enthalpy (H) is the following:
H = U + PV (14.5)
Enthalpy is a state function as well. If we are interested in computing changes in enthalpy, we write:
dH = dU + P dV + V dP (14.6)
Since we already have a way to compute dU (equation 14.4), we can now write the fundamental thermodynamic relationship used to compute changes of enthalpy in a closed system:
dH = T dS + V dP (14.7)
By the same token, the formal, fundamental definition of Gibbs Free Energy (G) is the following:
G = H – T S (14.8)
from which we see that changes in this property may be calculated as:
dG = dH – T dS – S dT (14.9a)
We now substitute (14.7) to get the fundamental thermodynamic relationship for changes in Gibbs free energy in a closed system:
dG = V dP – S dT (14.9b)
We proceed with Helmholtz Free Energy (A) accordingly. Its definition:
A = U – T S (14.10)
The expression of change,
dA = dU – T dS – S dT (14.11)
We substitute (14.4) into it and get:
dA = – P dV – S dT (14.12)
We have just derived the following fundamental thermodynamic relationships for fluids of constant composition:
dU = T dS – P dV (14.4)
dH = T dS + V dP (14.7)
dG = V dP – S dT (14.9)
dA = – P dV – S dT (14.12)
All these properties (U, H, G and A) are state functions and extensive properties of the system. Notice that the only assumption that we took throughout their development is that the system was closed (for the derivation of 14.4). Hence, these equations strictly apply to systems of constant composition.
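As a quick illustration of how these relationships are used: at constant temperature, dG = V dP, so for an ideal gas (V = RT/P) the Gibbs energy change between two pressures is RT ln(P2/P1). The sketch below checks this by numerical integration; the conditions are assumed for illustration:

```python
import math

R, T = 8.314, 300.0      # J/(mol K), K -- assumed conditions for illustration
P1, P2 = 1.0e5, 5.0e5    # Pa

# Integrate dG = V dP at constant T for an ideal gas (V = RT/P), midpoint rule
n = 10000
dG = 0.0
for i in range(n):
    P = P1 + (i + 0.5) * (P2 - P1) / n
    dG += (R * T / P) * (P2 - P1) / n

analytic = R * T * math.log(P2 / P1)  # closed-form result of the same integral
```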
Contributors and Attributions
• Prof. Michael Adewumi (The Pennsylvania State University). Some or all of the content of this module was taken from Penn State's College of Earth and Mineral Sciences' OER Initiative.
14.04: Functions of State or State Functions
A function of state is one in which the differential change is determined only by the end states and not by intervening states. Most thermodynamic variables are state functions and hence property changes are determined by the end states and not by the process path. Notable exceptions are work and heat.
The most common thermodynamic state functions with which we shall deal include
• Internal Energy (U),
• Entropy (S),
• Enthalpy (H),
• Helmholtz Energy (A), and
• Gibbs Energy (G).
Contributors and Attributions
• Prof. Michael Adewumi (The Pennsylvania State University). Some or all of the content of this module was taken from Penn State's College of Earth and Mineral Sciences' OER Initiative.
14.05: Mechanics of Manipulating a Function of State
Given that f(x,y,z) is any state function that characterizes the system and (x,y,z) is a set of independent variable properties of that system, we know that any change Δf will be only a function of the value of “f” at the final and initial states,
$\Delta f = f_{final} - f_{initial} \label{14.13}$
Since f = f(x,y,z), we can mathematically relate the total differential change (df) to the partial derivatives $\left(\frac{\partial f}{\partial x}\right)_{y,z}$, $\left(\frac{\partial f}{\partial y}\right)_{x,z}$, and $\left(\frac{\partial f}{\partial z}\right)_{x,y}$ of the function, as follows:
$df = \left(\frac{\partial f}{\partial x}\right)_{y,z} dx + \left(\frac{\partial f}{\partial y}\right)_{x,z} dy + \left(\frac{\partial f}{\partial z}\right)_{x,y} dz \label{14.14}$
where, in general:
$\left(\frac{\partial f}{\partial x}\right)_{y,z}$ = the change of f with respect to x, while y and z are held constant.
If we want to come up with the total change, Δf, of a property (we want to go from 14.14. to 14.13), we integrate the expression in (14.14) to get:
$\Delta f = \int_{initial}^{final} \left(\frac{\partial f}{\partial x}\right)_{y,z} dx + \int_{initial}^{final} \left(\frac{\partial f}{\partial y}\right)_{x,z} dy + \int_{initial}^{final} \left(\frac{\partial f}{\partial z}\right)_{x,y} dz \label{14.15}$
Let us visualize this with an example. For a system of constant composition, its thermodynamic state is completely defined when two properties of the system are fixed. Let us say we have a pure component at a fixed pressure (P) and temperature (T). Hence, all other thermodynamic properties, for example, enthalpy (H), are fixed as well. Since H is only a function of P and T, we write:
$H = H(P, T) \label{14.16}$
and hence, applying (14.14), any differential change in enthalpy can be computed as:
$dH = \left(\frac{\partial H}{\partial P}\right)_{T} dP + \left(\frac{\partial H}{\partial T}\right)_{P} dT \label{14.17}$
The total change in enthalpy of the pure-component system becomes:
$\Delta H = \int_{1}^{2} \left(\frac{\partial H}{\partial P}\right)_{T} dP + \int_{1}^{2} \left(\frac{\partial H}{\partial T}\right)_{P} dT \label{14.18}$
Now we are ready to spell out the exactness condition, which is the mathematical condition for a function to be a state function. The fact of the matter is that for a function to be a state function — i.e., for its integrated path shown in (14.15) to be only a function of the end states, as shown in (14.13) — its total differential must be exact. In other words, if the total differential shown in (14.14) is exact, then f(x,y,z) is a state function. How do we know if a total differential is exact or not?
Given a function Ψ(x,y,z),
$d\Psi = M \, dx + N \, dy + Q \, dz \label{14.19a}$
where:
$M = \left(\frac{\partial \Psi}{\partial x}\right)_{y,z} \label{14.19b}$
$N = \left(\frac{\partial \Psi}{\partial y}\right)_{x,z} \label{14.19c}$
$Q = \left(\frac{\partial \Psi}{\partial z}\right)_{x,y} \label{14.19d}$
we say that dΨ is an exact differential and, consequently, Ψ(x,y,z) a state function if all the following conditions are satisfied:
$\left(\frac{\partial M}{\partial y}\right)_{x,z} = \left(\frac{\partial N}{\partial x}\right)_{y,z} \label{14.20a}$
$\left(\frac{\partial N}{\partial z}\right)_{x,y} = \left(\frac{\partial Q}{\partial y}\right)_{x,z} \label{14.20b}$
$\left(\frac{\partial M}{\partial z}\right)_{x,y} = \left(\frac{\partial Q}{\partial x}\right)_{y,z} \label{14.20c}$
Equations (14.20) are called the exactness condition.
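The exactness condition can be verified numerically for a sample function. Here Ψ = x²y + yz (an arbitrary choice for illustration); its analytic partials M, N, Q are written out, and the cross-derivatives of conditions (14.20) are compared by central differences:

```python
# Sample state function Psi(x, y, z) = x**2 * y + y * z  (arbitrary choice)
# Its analytic partials, as in (14.19b)-(14.19d):
M = lambda x, y, z: 2.0 * x * y    # dPsi/dx
N = lambda x, y, z: x * x + z      # dPsi/dy
Q = lambda x, y, z: y              # dPsi/dz

def partial(f, i, p, h=1.0e-6):
    """Central-difference partial derivative of f with respect to variable i."""
    lo, hi = list(p), list(p)
    lo[i] -= h
    hi[i] += h
    return (f(*hi) - f(*lo)) / (2.0 * h)

p = (1.3, 0.7, 2.1)  # arbitrary evaluation point
# Exactness conditions (14.20): cross-derivatives must match pairwise
residuals = [
    abs(partial(M, 1, p) - partial(N, 0, p)),  # dM/dy = dN/dx
    abs(partial(N, 2, p) - partial(Q, 1, p)),  # dN/dz = dQ/dy
    abs(partial(M, 2, p) - partial(Q, 0, p)),  # dM/dz = dQ/dx
]
```

All three residuals vanish to within floating-point noise, so dΨ is exact and Ψ qualifies as a state function.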
Contributors and Attributions
• Prof. Michael Adewumi (The Pennsylvania State University). Some or all of the content of this module was taken from Penn State's College of Earth and Mineral Sciences' OER Initiative.
Now we recall the fundamental thermodynamic relationships for closed systems, which we derived above:
dU = T dS – P dV (14.4)
dH = T dS + V dP (14.7)
dG = V dP – S dT (14.9)
dA = – P dV – S dT (14.12)
where U, H, G, and A are state functions. There is something remarkable about the above expressions: they allow for the direct calculation of the change in a state function as a function of changes in two others. An important lesson to be learned here: when dealing with thermodynamic state properties, we are most interested in changes in the value of state properties rather than their actual values.
As we have said, the above relationships allow us to visualize that the changes of each thermodynamic property are dependent on the change of two others in a closed system.
U = U(S, V) (14.21a)
H = H(S, P) (14.21b)
G = G(P, T) (14.21c)
A = A(V, T) (14.21d)
Therefore, recalling what we did in equation (14.14), we express the total differential of each of these properties as:
$dU = \left(\frac{\partial U}{\partial S}\right)_{V} dS + \left(\frac{\partial U}{\partial V}\right)_{S} dV \label{14.22a}$
$dH = \left(\frac{\partial H}{\partial S}\right)_{P} dS + \left(\frac{\partial H}{\partial P}\right)_{S} dP \label{14.22b}$
$dG = \left(\frac{\partial G}{\partial P}\right)_{T} dP + \left(\frac{\partial G}{\partial T}\right)_{P} dT \label{14.22c}$
$dA = \left(\frac{\partial A}{\partial V}\right)_{T} dV + \left(\frac{\partial A}{\partial T}\right)_{V} dT \label{14.22d}$
Comparing these equations term to term, the following realizations can be made:
$T = \left(\frac{\partial U}{\partial S}\right)_{V}; \quad -P = \left(\frac{\partial U}{\partial V}\right)_{S} \label{14.23a}$
$T = \left(\frac{\partial H}{\partial S}\right)_{P}; \quad V = \left(\frac{\partial H}{\partial P}\right)_{S} \label{14.23b}$
$V = \left(\frac{\partial G}{\partial P}\right)_{T}; \quad -S = \left(\frac{\partial G}{\partial T}\right)_{P} \label{14.23c}$
$-P = \left(\frac{\partial A}{\partial V}\right)_{T}; \quad -S = \left(\frac{\partial A}{\partial T}\right)_{V} \label{14.23d}$
Additionally, we can go one step further. U, H, G, and A are state functions, and as such, their total differentials (equations 14.4, 14.7, 14.9, and 14.12) must be exact. Recall that for a total differential df=Mdx+Ndy to be an exact differential, it must satisfy the equation:
$\left(\frac{\partial M}{\partial y}\right)_{x}=\left(\frac{\partial N}{\partial x}\right)_{y}$ (14.24)
Equation (14.24) is the exactness criterion for a function of two independent variables. It was previously stated in (14.20) for a state function of three independent variables. Its application yields:
$\left(\frac{\partial T}{\partial V}\right)_{S}=-\left(\frac{\partial P}{\partial S}\right)_{V}$ (14.25a)
$\left(\frac{\partial T}{\partial P}\right)_{S}=\left(\frac{\partial V}{\partial S}\right)_{P}$ (14.25b)
$\left(\frac{\partial V}{\partial T}\right)_{P}=-\left(\frac{\partial S}{\partial P}\right)_{T}$ (14.25c)
$\left(\frac{\partial P}{\partial T}\right)_{V}=\left(\frac{\partial S}{\partial V}\right)_{T}$ (14.25d)
Equations (14.25) are known as Maxwell’s relationships. Maxwell’s relationships are very useful for manipulating thermodynamic equations. For instance, it is always desirable for practical purposes to express thermodynamic properties such as enthalpy (H) and entropy (S) as functions of measurable properties such as pressure (P) and temperature (T):
H = H (P, T) (14.26a)
S = S (P, T) (14.26b)
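As a sanity check before proceeding, a Maxwell relationship such as (14.25d) can be verified directly for one mole of ideal gas, whose entropy is $S = C_{V}\ln T + R\ln V + S_{0}$. The sympy sketch below is an illustration, not part of the text:

```python
import sympy as sp

T, V, R, Cv, S0 = sp.symbols('T V R C_v S_0', positive=True)
P = R * T / V                               # ideal-gas equation of state
S = Cv * sp.log(T) + R * sp.log(V) + S0     # ideal-gas molar entropy

lhs = sp.diff(P, T)                         # (dP/dT)_V
rhs = sp.diff(S, V)                         # (dS/dV)_T
assert sp.simplify(lhs - rhs) == 0          # both sides equal R/V
```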
Starting with the total differential of H and S as a function of P and T, we can prove that the relationships between the parameters H and S and the parameters P and T are given by the expressions:
$dH=C_{P}\, dT+\left[V-T\left(\frac{\partial V}{\partial T}\right)_{P}\right] dP$ (14.27a)
$dS=\frac{C_{P}}{T}\, dT-\left(\frac{\partial V}{\partial T}\right)_{P} dP$ (14.27b)
Additionally, since these expressions for dH and dS must also be exact differentials, you could prove that the heat capacity at constant pressure ($C_P$) of an ideal gas (PV = RT) does not depend on pressure. The thermodynamic definitions of $C_P$ and $C_V$ are:
Heat capacity at constant pressure: $C_{P}=\left(\frac{\partial H}{\partial T}\right)_{P}$ (14.28a)
Heat capacity at constant volume: $C_{V}=\left(\frac{\partial U}{\partial T}\right)_{V}$ (14.28b)
The heat capacity at constant volume ($C_V$) of an ideal gas does not depend on pressure, either. You could prove this by first proving that $C_P = C_V + R$ (R = universal gas constant) for ideal gases, using the tools above.
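Both claims can be checked symbolically. The sympy sketch below (an illustration, not part of the text) uses two standard identities that follow from the exactness of (14.27) and the Maxwell relationships, namely $(\partial C_{P}/\partial P)_{T}=-T(\partial^{2}V/\partial T^{2})_{P}$ and $C_{P}-C_{V}=T(\partial P/\partial T)_{V}(\partial V/\partial T)_{P}$:

```python
import sympy as sp

T, P, R = sp.symbols('T P R', positive=True)
V_ig = R * T / P                 # ideal-gas molar volume, PV = RT

# (dCp/dP)_T = -T (d2V/dT2)_P vanishes for the ideal gas:
assert sp.simplify(-T * sp.diff(V_ig, T, 2)) == 0

# Cp - Cv = T (dP/dT)_V (dV/dT)_P reduces to R for the ideal gas:
V = sp.symbols('V', positive=True)
cp_minus_cv = sp.simplify(
    (T * sp.diff(R * T / V, T) * sp.diff(V_ig, T)).subs(V, R * T / P)
)
assert cp_minus_cv == R
```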
14.07: Action Item
Answer the following problems, and submit your answers to the drop box in ANGEL that has been created for this module.
Please note:
• Your answers must be submitted in the form of a Microsoft Word document.
• Include your Penn State Access Account user ID in the name of your file (for example, "module2_abc123.doc").
• The due date for this assignment will be sent to the class by e-mail in ANGEL.
• Your grade for the assignment will appear in the drop box approximately one week after the due date.
• You can access the drop box for this module in ANGEL by clicking on the Lessons tab, and then locating the drop box on the list that appears.
Problem Set
1. In your own words, state how you would explain the implications of the 1st and 2nd laws of thermodynamics. Assume that you need to explain this to a five-year-old child who has no idea about thermodynamic jargon.
2. Suppose that most thermodynamic properties were not state functions. What impact would that have on process design?
Module Goal: To establish the mathematical framework for thermodynamics of phase equilibrium.
Module Objective: To establish that there is a unique relationship between partial molar quantities in any mixture.
15: Thermodynamic Tools II
Any function f(x) that possesses the characteristic mapping:
$f(\lambda x)=\lambda f(x)$ (15.1)
is said to be homogeneous, with respect to x, to degree 1. By the same token, if f(x) obeys the mapping:
$f(\lambda x)=\lambda^{k} f(x)$ (15.2)
then f(x) is homogeneous to degree “k”. In general, a multivariable function f(x1,x2,x3,…) is said to be homogeneous of degree “k” in variables xi(i=1,2,3,…) if for any value of λ,
$f\left(\lambda x_{1}, \lambda x_{2}, \lambda x_{3}, \ldots\right)=\lambda^{k} f\left(x_{1}, x_{2}, x_{3}, \ldots\right)$ (15.3)
For example, let us consider the function:
(15.4)
How do we find out if this particular function is homogeneous, and if it is, to what degree? We evaluate this function at λx and λy to obtain:
(15.5)
hence, the function f(x,y) in (15.4) is homogeneous to degree -1.
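Homogeneity can also be probed numerically. The sketch below is illustrative only (the sample function is not the text's (15.4)); it estimates k from $f(\lambda x,\lambda y)=\lambda^{k} f(x,y)$:

```python
import math

def f(x, y):
    return 1.0 / (x + y)          # homogeneous of degree -1

def degree(func, x, y, lam=2.0):
    """Estimate k from func(lam*x, lam*y) = lam**k * func(x, y)."""
    return math.log(func(lam * x, lam * y) / func(x, y)) / math.log(lam)

assert abs(degree(f, 3.0, 4.0) - (-1.0)) < 1e-12
```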
In regard to thermodynamics, extensive variables are homogeneous of degree "1" with respect to the number of moles of each component. They are, in fact, proportional to the mass of the system to the power of one (k=1 in equation 15.2 or 15.3). That is, if we triple the amount of mass in the system, the value of any given extensive property will be tripled as well. Notice that this is not the case for intensive properties of the system (such as temperature or pressure), simply because they are independent of mass. Hence, intensive thermodynamic properties are homogeneous functions of degree "0"; in such a case, k=0 in equation (15.2) or (15.3).
From the previous section, it is clear that we are not only interested in looking at thermodynamic functions alone, but that it is also very important to compute how thermodynamic functions change and how that change is mathematically related to their partial derivatives. Hence, to complete the discussion on homogeneous functions, it is useful to study the mathematical theorem that establishes a relationship between a homogeneous function and its partial derivatives. This is Euler’s theorem.
Euler’s theorem states that if a function f(ai, i = 1,2,…) is homogeneous to degree “k”, then such a function can be written in terms of its partial derivatives, as follows:
$\sum_{i} a_{i}\left(\frac{\partial f}{\partial\left(\lambda a_{i}\right)}\right)=k \lambda^{k-1} f\left(a_{1}, a_{2}, \ldots\right)$ (15.6a)
Since (15.6a) is true for all values of λ, it must be true for λ = 1. In this case, (15.6a) takes a special form:
$\sum_{i} a_{i}\left(\frac{\partial f}{\partial a_{i}}\right)=k\, f\left(a_{1}, a_{2}, \ldots\right)$ (15.6b)
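Equation (15.6b) is easy to verify symbolically. A sympy sketch with an arbitrary degree-3 function (illustrative, not from the text):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = x**2 * y + x * y**2                      # homogeneous of degree k = 3
euler_lhs = x * sp.diff(f, x) + y * sp.diff(f, y)
assert sp.simplify(euler_lhs - 3 * f) == 0   # Euler's theorem, (15.6b)
```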
So far, so good. But…what is the application of all this? Well, first of all, we have to know something more about extensive thermodynamic properties. A very neat thing about them is that they can be written as a function of a sufficient number of independent variables to completely define the thermodynamic state of the system. Such a set is said to be a complete set. As it turns out, any thermodynamic system is completely defined when both the masses of all the substances within it are defined and two additional independent variables are fixed. This is Duhem’s theorem. From a real-life perspective, it is natural to choose pressure and temperature as those “independent variables” — physical quantities that we have a “feel” for and we think we can control — rather than specific volume or entropy. As we will see later, they are also convenient variables of choice because they are homogeneous of degree zero in mass.
Let "$M$" be a given extensive property of a multi-component system (we will use $M$ as a generic symbol for any such property). From the previous section, we know that the value of "$M$" must be fixed and uniquely determined once we fix the pressure, temperature, and number of moles of each component in the system. That is,
$M=M\left(P, T, n_{1}, n_{2}, \ldots, n_{nc}\right)$ (15.7a)
Additionally, we recall that extensive properties are homogeneous of degree one with respect to number of moles and homogeneous of degree zero with respect to pressure and temperature. Thus, expression (15.6b) is readily applicable:
$M=\sum_{i} n_{i} \bar{M}_{i}$ (15.7b)
where we have just defined:
$\bar{M}_{i}=\left(\frac{\partial M}{\partial n_{i}}\right)_{P, T, n_{j \neq i}}$ (15.7c)
Equation (15.7c) is a very important definition. It defines the concept of a partial molar quantity. A partial molar quantity represents the change in the total quantity due to the addition of an infinitesimal amount of species “i” to the system at constant pressure and temperature. Euler’s theorem gave birth to the concept of partial molar quantity and provides the functional link between it (calculated for each component) and the total quantity. The selection of pressure and temperature in (15.7c) was not trivial. First, they are convenient variables to work with because we can measure them in the lab. But most important, they are intensive variables, homogeneous functions of degree zero in number of moles (and mass). This allowed us to use Euler’s theorem and jump to (15.7b), where only a summation with respect to number of moles survived. The definition of the partial molar quantity followed.
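A numerical illustration of this allocation (the total-volume model and its constants below are made up, chosen only to be homogeneous of degree one in $n_1$ and $n_2$): differentiating the model gives the partial molar volumes, and Euler's theorem recovers the total.

```python
import sympy as sp

n1, n2 = sp.symbols('n1 n2', positive=True)
v1, v2, a = 30, 50, -4                 # made-up molar volumes and mixing term
V = v1*n1 + v2*n2 + a*n1*n2/(n1 + n2)  # extensive (degree-1) volume model

Vbar1 = sp.diff(V, n1)                 # partial molar volume of component 1
Vbar2 = sp.diff(V, n2)                 # partial molar volume of component 2

# Euler allocation: the partial molar quantities recover the total
assert sp.simplify(n1 * Vbar1 + n2 * Vbar2 - V) == 0
```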
The conventional notation we are going to follow throughout the following section is:
$M$ = Total quantity (e.g., total volume, total internal energy, etc.),
$\widetilde{M}$ = Molar quantity, i.e., total quantity per unit mole:
$\widetilde{M}=M / n$ (for a mixture, $n=\sum_{i} n_{i}$), (15.8a)
$\bar{M}_{i}$ = Partial molar quantity,
$\hat{M}$ = Mass or specific quantity, i.e., total quantity per unit mass:
$\hat{M}=M / m$ (for a mixture, $m=\sum_{i} m_{i}$) (15.8b)
We can rewrite equation (15.7b) in terms of molar quantity using the definition in (15.8a),
$\widetilde{M}=\sum_{i} x_{i} \bar{M}_{i}$ (15.9)
where:
$x_i$ = molar fraction of species "i" = $n_{i} / n$.
Any molar quantity in thermodynamics can be written in terms of the partial molar quantities of its constituents. If we set $M = G$ (total Gibbs energy), we end up with,
$\widetilde{G}=\sum_{i} x_{i} \bar{G}_{i}$ (15.10a)
Equivalently, if we set $M = V$ (total volume),
$\widetilde{V}=\sum_{i} x_{i} \bar{V}_{i}$ (15.10b)
Notice that for single component systems (xi=1), partial molar properties are just equal to the molar property:
$\bar{G}_{i}=\widetilde{G}$ (if $x_i=1$, $i=1$) (15.11a) ; $\bar{V}_{i}=\widetilde{V}$ (if $x_i=1$, $i=1$) (15.11b)
This is also a consequence of the definition in (15.7c),
$\bar{M}_{i}=\left(\frac{\partial M}{\partial n_{i}}\right)_{P, T, n_{j \neq i}}$ (15.7c)
For a pure component, n=ni, i=1, and:
$\bar{M}=\left(\frac{\partial M}{\partial n}\right)_{P, T}=\widetilde{M}$ (for a pure component). (15.12)
The reason for the introduction of the concept of a partial molar quantity is that often times we deal with mixtures rather than pure-component systems. The way to characterize the state of the mixtures is via partial molar properties. This concept provides the bridge between the thermodynamics of systems of constant composition, which we have studied so far, and the thermodynamics of systems of variable composition, which we will deal with in the next section. Basically, the definition in (15.7c):
$\bar{M}_{i}=\left(\frac{\partial M}{\partial n_{i}}\right)_{P, T, n_{j \neq i}}$ (15.7c)
allows us to quantify how the total, extensive property $M$ changes with additions of $n_i$ at constant pressure and temperature. If you look at (15.7b) and (15.9), you will also realize that (15.7c) is just an allocation formula that assigns to each species "i" a share of the total mixture property, such that:
$M=\sum_{i} n_{i} \bar{M}_{i}$ (15.7b)
$\widetilde{M}=\sum_{i} x_{i} \bar{M}_{i}$ (15.9)
We can play with "$M$" a little more. Let us say that we are now interested in looking at the differential changes of $M$. Since $M$ is a state function, and given the functional relationship in (15.7a), the total differential of $M$ is written:
$dM=\left(\frac{\partial M}{\partial P}\right)_{T, n} dP+\left(\frac{\partial M}{\partial T}\right)_{P, n} dT+\sum_{i}\left(\frac{\partial M}{\partial n_{i}}\right)_{P, T, n_{j \neq i}} dn_{i}$ (15.13a)
or
$dM=\left(\frac{\partial M}{\partial P}\right)_{T, n} dP+\left(\frac{\partial M}{\partial T}\right)_{P, n} dT+\sum_{i} \bar{M}_{i}\, dn_{i}$ (15.13b)
Basically, equations (15.13) tell us that any change in P, T, or $n_i$ will cause a corresponding change in the total property, $M$. This is a reinforcement of what is explicitly declared in (15.7a). If we recall (15.7b), an alternate expression for the total differential in (15.13) is written:
$dM=\sum_{i} n_{i}\, d \bar{M}_{i}+\sum_{i} \bar{M}_{i}\, dn_{i}$ (15.14)
If we subtract (15.14) from (15.13b), we get:
$\left(\frac{\partial M}{\partial P}\right)_{T, n} dP+\left(\frac{\partial M}{\partial T}\right)_{P, n} dT-\sum_{i} n_{i}\, d \bar{M}_{i}=0$ (15.15)
Therefore,
$\sum_{i} n_{i}\, d \bar{M}_{i}=\left(\frac{\partial M}{\partial P}\right)_{T, n} dP+\left(\frac{\partial M}{\partial T}\right)_{P, n} dT$ (15.16)
Equation (15.16) is the well-known Gibbs-Duhem equation. It can be applied to any extensive thermodynamic property (U, S, H, G, A), and it must hold true. It represents a thermodynamic constraint among the intensive variables P, T, and $\bar{M}_{i}$. Pressure, temperature, and partial molar properties cannot vary in just any fashion; any change taking place among them must satisfy (15.16). The change in any one of them can be calculated as a function of the change in the other two by means of the Gibbs-Duhem equation. This equation is the basis for thermodynamic consistency checks of experimental data.
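At constant pressure and temperature, the Gibbs-Duhem equation reduces to $\sum_{i} x_{i}\, d\bar{M}_{i}=0$. A sympy check on a hypothetical, thermodynamically consistent pair of partial molar volumes (the quadratic composition dependence below is an illustrative assumption, not from the text):

```python
import sympy as sp

x1, v1, v2, a = sp.symbols('x1 v1 v2 a')
x2 = 1 - x1
Vbar1 = v1 + a * x2**2      # hypothetical partial molar volume, component 1
Vbar2 = v2 + a * x1**2      # hypothetical partial molar volume, component 2

# Gibbs-Duhem at constant P, T: x1 dVbar1 + x2 dVbar2 = 0
gibbs_duhem = x1 * sp.diff(Vbar1, x1) + x2 * sp.diff(Vbar2, x1)
assert sp.simplify(gibbs_duhem) == 0
```

A pair of partial molar models that fails this test cannot represent real data, which is exactly why (15.16) underlies consistency checks.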
To extend all of the previous concepts to systems of variable mass, we must now consider at least one new variable: number of moles, n. To account for this effect, you will see below that we will have to introduce a “new” thermodynamic property. Let us take the case of the internal energy. For a constant composition system, we wrote:
$dU=T\, dS-P\, dV$ (14.4)
and,
$dU=\left(\frac{\partial U}{\partial S}\right)_{V} dS+\left(\frac{\partial U}{\partial V}\right)_{S} dV$ (14.22a)
If this system has a single component, and now we allow its mass (and hence, number of moles, n) to change (an open system), the change in U (dU) is no longer just a function of dS and dV. We now have to account for changes in ‘n’, thus:
$dU=\left(\frac{\partial U}{\partial S}\right)_{V, n} dS+\left(\frac{\partial U}{\partial V}\right)_{S, n} dV+\left(\frac{\partial U}{\partial n}\right)_{S, V} dn$ (15.17)
If we had a binary system, we would have two new variables, ‘n1’ and ‘n2’ (n=n1+n2) and we would have to expand (15.17) accordingly,
$dU=\left(\frac{\partial U}{\partial S}\right)_{V, n} dS+\left(\frac{\partial U}{\partial V}\right)_{S, n} dV+\left(\frac{\partial U}{\partial n_{1}}\right)_{S, V, n_{2}} dn_{1}+\left(\frac{\partial U}{\partial n_{2}}\right)_{S, V, n_{1}} dn_{2}$ (15.18)
Hence, for a multi-component system, we just keep on adding one such term per component:
$dU=\left(\frac{\partial U}{\partial S}\right)_{V, n} dS+\left(\frac{\partial U}{\partial V}\right)_{S, n} dV+\sum_{i}\left(\frac{\partial U}{\partial n_{i}}\right)_{S, V, n_{j \neq i}} dn_{i}$ (15.19)
Thermodynamics defines the coefficient that multiplies the change in the number of moles of each component ($dn_i$) as the chemical potential of that component, $\mu_i$. Note that the chemical potential is a thermodynamic property that must be defined for the proper description of a system of variable composition, that is, an open system.
Then, we write:
$\mu_{i}=\left(\frac{\partial U}{\partial n_{i}}\right)_{S, V, n_{j \neq i}}$ (15.20)
and finally,
$dU=T\, dS-P\, dV+\sum_{i} \mu_{i}\, dn_{i}$ (15.21)
The same can be done with the thermodynamic definitions for dH, dG, and dA (the rest of equations 14.22). In fact, the chemical potential may be defined in at least four different and equivalent ways. You can now show that for equations (14.22) to account for systems of variable composition, we would have to expand them into:
$dH=T\, dS+V\, dP+\sum_{i}\left(\frac{\partial H}{\partial n_{i}}\right)_{S, P, n_{j \neq i}} dn_{i}$ (15.22a)
$dG=V\, dP-S\, dT+\sum_{i}\left(\frac{\partial G}{\partial n_{i}}\right)_{P, T, n_{j \neq i}} dn_{i}$ (15.22b)
$dA=-P\, dV-S\, dT+\sum_{i}\left(\frac{\partial A}{\partial n_{i}}\right)_{V, T, n_{j \neq i}} dn_{i}$ (15.22c)
Because thermodynamics defines the ‘coefficient’ which multiplies the change in the number of moles of each component (dni) as the ‘chemical potential’ of that component(μi), we have three new ways to express it:
$\mu_{i}=\left(\frac{\partial H}{\partial n_{i}}\right)_{S, P, n_{j \neq i}}$ (15.23a)
$\mu_{i}=\left(\frac{\partial G}{\partial n_{i}}\right)_{P, T, n_{j \neq i}}$ (15.23b)
$\mu_{i}=\left(\frac{\partial A}{\partial n_{i}}\right)_{V, T, n_{j \neq i}}$ (15.23c)
Does any of these three equations ring a bell? Recall the definition in (15.7c). Of all four available definitions of the chemical potential, there is one that suits our definition of a partial molar quantity perfectly. Compare equation (15.23b) to (15.7c). What we can say is that the chemical potential of a component "i" is equal to the partial molar Gibbs energy of that component:
$\mu_{i}=\bar{G}_{i}$ (15.24)
Notice that, for a pure component, the chemical potential is equal to the molar Gibbs energy of the substance (see equation 15.9),
$\mu=\widetilde{G}$ (for pure substances) (15.25)
When we introduce the definition of chemical potential into each of equations (15.22), the fundamental thermodynamic expressions that apply to systems of variable composition become:
$dU=T\, dS-P\, dV+\sum_{i} \mu_{i}\, dn_{i}$ (15.26a)
$dH=T\, dS+V\, dP+\sum_{i} \mu_{i}\, dn_{i}$ (15.26b)
$dG=V\, dP-S\, dT+\sum_{i} \mu_{i}\, dn_{i}$ (15.26c)
$dA=-P\, dV-S\, dT+\sum_{i} \mu_{i}\, dn_{i}$ (15.26d)
It is clear that equations (14.23) still hold — evaluated at constant “n” (total number of moles) — as shown:
$\left(\frac{\partial U}{\partial S}\right)_{V, n}=T$ ; $\left(\frac{\partial U}{\partial V}\right)_{S, n}=-P$ (14.23a)
$\left(\frac{\partial H}{\partial S}\right)_{P, n}=T$ ; $\left(\frac{\partial H}{\partial P}\right)_{S, n}=V$ (14.23b)
$\left(\frac{\partial G}{\partial P}\right)_{T, n}=V$ ; $\left(\frac{\partial G}{\partial T}\right)_{P, n}=-S$ (14.23c)
$\left(\frac{\partial A}{\partial V}\right)_{T, n}=-P$ ; $\left(\frac{\partial A}{\partial T}\right)_{V, n}=-S$ (14.23d)
The same reasoning applies to Maxwell’s relationships. From the above expressions, equation (15.22b) is the only one that matches equation (15.13). Notice that the following additional identities can also be identified (see equations 14.20):
$\left(\frac{\partial T}{\partial n_{i}}\right)_{S, V, n_{j \neq i}}=\left(\frac{\partial \mu_{i}}{\partial S}\right)_{V, n}$ ; $-\left(\frac{\partial P}{\partial n_{i}}\right)_{S, V, n_{j \neq i}}=\left(\frac{\partial \mu_{i}}{\partial V}\right)_{S, n}$ (15.27a)
$\left(\frac{\partial T}{\partial n_{i}}\right)_{S, P, n_{j \neq i}}=\left(\frac{\partial \mu_{i}}{\partial S}\right)_{P, n}$ ; $\left(\frac{\partial V}{\partial n_{i}}\right)_{S, P, n_{j \neq i}}=\left(\frac{\partial \mu_{i}}{\partial P}\right)_{S, n}$ (15.27b)
$\left(\frac{\partial V}{\partial n_{i}}\right)_{P, T, n_{j \neq i}}=\left(\frac{\partial \mu_{i}}{\partial P}\right)_{T, n}$ ; $-\left(\frac{\partial S}{\partial n_{i}}\right)_{P, T, n_{j \neq i}}=\left(\frac{\partial \mu_{i}}{\partial T}\right)_{P, n}$ (15.27c)
$-\left(\frac{\partial P}{\partial n_{i}}\right)_{V, T, n_{j \neq i}}=\left(\frac{\partial \mu_{i}}{\partial V}\right)_{T, n}$ ; $-\left(\frac{\partial S}{\partial n_{i}}\right)_{V, T, n_{j \neq i}}=\left(\frac{\partial \mu_{i}}{\partial T}\right)_{V, n}$ (15.27d)
We can see that chemical potential can be calculated by solving any of these differential expressions. To do this, experimental information is needed on how the other properties (T, V, S, P) change with additions of the given species (ni) under certain restraining conditions.
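For instance, for an ideal-gas mixture (a standard textbook model, used here only as an illustration), $\mu_{i}=\mu_{i}^{o}(T)+RT\ln\left(y_{i}P\right)$, and differentiating with respect to pressure returns the ideal-gas partial molar volume $RT/P$, consistent with identity (15.27c). A sympy sketch:

```python
import sympy as sp

T, P, R, yi = sp.symbols('T P R y_i', positive=True)
mu0 = sp.Function('mu0')(T)            # reference chemical potential (T only)
mu_i = mu0 + R * T * sp.log(yi * P)    # ideal-gas-mixture chemical potential

# (d mu_i / dP)_{T,n} should equal the partial molar volume, RT/P here
assert sp.simplify(sp.diff(mu_i, P) - R * T / P) == 0
```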
In some instances, we may have an open system of constant composition. This is the case of a system of a pure component exchanging mass with its surroundings. For such a system, nc = 1 and equations (15.26) are written as:
$dU=T\, dS-P\, dV+\mu\, dn$ (15.28a)
$dH=T\, dS+V\, dP+\mu\, dn$ (15.28b)
$dG=V\, dP-S\, dT+\mu\, dn$ (15.28c)
$dA=-P\, dV-S\, dT+\mu\, dn$ (15.28d)
Equations (14.23) and (15.27) will still hold with ni = n, μi = μ.
15.03: Action Item
Problem Set
1. For a design project involving a binary C1/C2 system, an engineer needs to know the partial molar volumes of C1 and C2 at the same conditions. Due to budget constraints, he was able to measure only one of them. Speculate on how to go about obtaining the second one.
Learning Objectives
• Module Goal: To establish the mathematical framework for thermodynamics of phase equilibrium.
• Module Objective: To establish that there is a unique relationship between partial molar quantities in any mixture.
16: Thermodynamic Tools III
We have just seen that the chemical potential is a thermodynamic property which is related to all thermodynamic properties with units of energy. Its most useful definition is given in terms of constant pressure and temperature:
$\mu_{i}=\left(\frac{\partial G}{\partial n_{i}}\right)_{P, T, n_{j \neq i}}=\bar{G}_{i}$ (15.24)
This constitutes the working definition of chemical potential, the one that relates it to the partial molar quantity concept studied before.
Although the mathematical definition of chemical potential can be stated clearly, its "physical" meaning is not as easy to grasp. It is much easier to understand the physical implications of "pressure," "temperature," and "internal energy" than it is to understand the physical interpretation of "chemical potential." Given equation (15.24), and recognizing Gibbs free energy (G) as the capacity of a system to do work, we may write the following formal definition of chemical potential:
The chemical potential of a component in a given phase is the rate of increase of the capacity of the phase to do work per unit addition of the substance to the phase, at constant temperature and pressure.
We may also quote the definition that J.W. Gibbs provided for it:
If to any homogeneous mass, we suppose an infinitesimal quantity of any substance to be added, the mass remaining homogeneous and its entropy and volume remaining unchanged, the increase of energy of the mass divided by the quantity of the substance added is the potential for that substance in the mass considered.
This definition is closely related to the mathematical definition given in (15.20).
$\mu_{i}=\left(\frac{\partial U}{\partial n_{i}}\right)_{S, V, n_{j \neq i}}$ (15.20)
To understand the physical implications of the chemical potential of a species, we have to recall that for any thermodynamic process to be carried out, a driving force must be causing it. For instance, a pressure gradient is the driving force that causes the bulk movement of fluids from one point to another, and a temperature gradient provides the potential difference needed for heat to flow. We also know that if we have a higher concentration of solute in a homogeneous system, it will diffuse to the zones of lower concentration. Here, the chemical potential is responsible for the diffusion of species between two points in space, or even its exchange between two different phases, without the presence of either pressure or temperature gradients. The chemical potential is the potential describing the ability of species to move from one phase to another.
16.02: The Thermodynamic Concept of Equilibrium
Intuitively, the concept of equilibrium conveys the message that something “balances out.” Equilibrium describes a state of vanishing driving forces or gradients, where everything remains as it is. If a system is in equilibrium, it retains its current state because there are no driving forces causing anything to change.
If two materials have the same temperature, we say that they are in thermal equilibrium. No exchange of heat takes place because there are no thermal gradients. For instance, a liquid and a vapor phase are in thermal equilibrium when:
$T^{l}=T^{v}$ (16.1)
We achieve mechanical equilibrium if two substances are found at the same pressure. No bulk movement of fluids takes place because there are no pressure gradients. A liquid and a vapor phase are in mechanical equilibrium when:
$P^{l}=P^{v}$ (16.2)
For a thermodynamic system to be in equilibrium, all intensive (temperature, pressure) and extensive thermodynamic properties (U, G, A, H, S, etc.) must be constants. Hence, the total change in any of those properties must be zero at equilibrium.
Now we would like to have a concept of thermodynamic equilibrium for a vapor-liquid equilibrium. Let us consider a closed, heterogeneous vapor-liquid system. Any changes in a total property of the system will be the result of the changes of that property in the liquid phase plus the changes of that property in the vapor phase.
$d M^{total}=d M^{l}+d M^{v}$, where $M$ stands for any total property (16.3)
In this case, liquid and vapor by themselves are not closed systems; they can exchange matter between themselves but not with the surroundings. To elaborate more upon the concept of equilibrium, let’s look at equation (15.26c). Because it is written in terms of changes in pressure and temperature, two measurable laboratory quantities, it is the “friendliest” of all fundamental equations. We write it for both of the phases:
$dG^{l}=V^{l}\, dP-S^{l}\, dT+\sum_{i} \mu_{i}^{l}\, dn_{i}^{l}$ (16.4a)
$dG^{v}=V^{v}\, dP-S^{v}\, dT+\sum_{i} \mu_{i}^{v}\, dn_{i}^{v}$ (16.4b)
Writing (16.3) for the total Gibbs energy and substituting (16.4), we get:
$dG^{total}=V^{l}\, dP-S^{l}\, dT+\sum_{i} \mu_{i}^{l}\, dn_{i}^{l}+V^{v}\, dP-S^{v}\, dT+\sum_{i} \mu_{i}^{v}\, dn_{i}^{v}$ (16.5)
Hence,
$dG^{total}=\left(V^{l}+V^{v}\right) dP-\left(S^{l}+S^{v}\right) dT+\sum_{i} \mu_{i}^{l}\, dn_{i}^{l}+\sum_{i} \mu_{i}^{v}\, dn_{i}^{v}$ (16.6)
Since at equilibrium all extensive properties, such as G, must remain constant, dG(total) must be zero. For this to hold true, and by inspection of equation (16.6), the conditions for thermodynamic equilibrium are:
dP = 0 [ Mechanical equilibrium ] (16.7)
dT = 0 [ Thermal equilibrium ] (16.8)
$\sum_{i} \mu_{i}^{l}\, dn_{i}^{l}+\sum_{i} \mu_{i}^{v}\, dn_{i}^{v}=0$ [ $\mu_i$ criterion for equilibrium ] (16.9)
It can also be proven that, at equilibrium, the total free energy of the system, $G^{total}$, must take a minimum value; this reinforces the fact that $dG^{total}=0$ at equilibrium. The minimum Gibbs energy criterion for equilibrium is a restatement of the second law of thermodynamics, from which we know that the entropy of a system in equilibrium must be at its maximum, considering all of the possible states for equilibrium.
It is reasonable that for a true equilibrium condition there should be neither pressure nor temperature gradients (equations 16.7 and 16.8). This is because equilibrium is, at the very least, a state of vanishing gradients. But what is equation (16.9) trying to tell us? To demystify equation (16.9), we recall that we are dealing with a closed system; hence, the total amount of moles per species:
$n_{i}=n_{i}^{l}+n_{i}^{v}$ (16.10)
must be constant (we do not allow for chemical reactions within the system). Thus we write:
$dn_{i}=dn_{i}^{l}+dn_{i}^{v}=0$ (16.11)
Therefore,
$dn_{i}^{v}=-dn_{i}^{l}$ (16.12)
Substituting (16.12) into (16.9) yields:
$\sum_{i}\left(\mu_{i}^{l}-\mu_{i}^{v}\right) dn_{i}^{l}=0$ (16.13)
For equation (16.13) to hold true,
$\mu_{i}^{l}=\mu_{i}^{v}$ for all i = 1, 2, … nc (16.14)
We have then arrived at the criteria for vapor-liquid equilibria for a system at constant pressure and temperature: the chemical potential of every species must be the same in both phases. We may generalize this finding to any number of phases, for which the chemical potential of every species must be the same in all phases. The chemical potential being the driving force which moves a species from one phase to the other, equation (16.14) is physically reasonable. If the chemical potential of a species in one phase is the same as that in the other, there is zero driving force and thus a zero net transfer of species at equilibrium.
16.03: Fugacity
We have seen that, for a closed system, the Gibbs energy is related to pressure and temperature as follows:
$dG=V\, dP-S\, dT$ (16.15)
For a constant temperature process,
$dG=V\, dP$ @ constant T (16.16)
For an ideal gas,
$V=\frac{R T}{P}$ (16.17) ; $dG=R T \frac{dP}{P}=R T\, d(\ln P)$ @ constant T (16.18)
This expression by itself is strictly applicable to ideal gases. However, Lewis, in 1905, suggested extending the applicability of this expression to all substances by defining a new thermodynamic property called fugacity, f, such that:
$dG=R T\, d(\ln f)$ @ constant T (16.19)
This definition implies that for ideal gases, ‘f’ must be equal to ‘P’. For mixtures, this expression is written as:
$d \bar{G}_{i}=R T\, d\left(\ln f_{i}\right)$ @ constant T (16.20)
where and fi are the partial molar Gibbs energy and fugacity of the i-th component, respectively. Fugacity can be readily related to chemical potential because of the one-to-one relationship of Gibbs energy to chemical potential, which we have discussed previously. Therefore, the definition of fugacity in terms of chemical potential becomes:
For a pure substance,
$d \mu=R T\, d(\ln f)$ @ const T (16.21a)
$f \rightarrow P$ as $P \rightarrow 0$ (ideal gas limit) (16.21b)
For a component in a mixture,
$d \mu_{i}=R T\, d\left(\ln f_{i}\right)$ @ const T (16.22c) ; $f_{i} \rightarrow y_{i} P$ = partial pressure as $P \rightarrow 0$ (ideal gas limit) (16.22d)
The fugacity coefficient (Φi) is defined as the ratio of fugacity to its value at the ideal state. Hence, for pure substances:
$\Phi=\frac{f}{P}$ (16.23a)
and for a component in a mixture,
$\Phi_{i}=\frac{f_{i}}{y_{i} P}$ (16.23b)
The fugacity coefficient takes a value of unity when the substance behaves like an ideal gas. Therefore, the fugacity coefficient is also regarded as a measure of non-ideality; the closer the value of the fugacity coefficient is to unity, the closer we are to the ideal state.
Fugacity turns out to be an auxiliary function to chemical potential. Even though the concept of thermodynamic equilibrium which we discussed in the previous section is given in terms of chemical potentials, above definitions allow us to restate the same principle in terms of fugacity. To do this, previous expressions can be integrated for the change of state from liquid to vapor at saturation conditions to obtain:
$\mu_{i}^{v}=\mu_{i}^{*}+R T \ln \frac{f_{i}^{v}}{f_{i}^{*}}$ (16.24a) ; $\mu_{i}^{l}=\mu_{i}^{*}+R T \ln \frac{f_{i}^{l}}{f_{i}^{*}}$ (16.24b)
where the asterisk denotes a common reference state. For equilibrium, $\mu_{i}^{v}=\mu_{i}^{l}$; hence,
$R T \ln \frac{f_{i}^{v}}{f_{i}^{l}}=0$ (16.24c)
Therefore:
$f_{i}^{l}=f_{i}^{v}$ ; i = 1, 2, … nc (16.25)
For equilibrium, fugacities must be the same as well! That is, for a system to be in equilibrium, both the fugacity and the chemical potential of each component must be equal in all phases. Conditions (16.14) and (16.25) are equivalent: once one of them is satisfied, the other is satisfied immediately. Using $\mu_i$ or $f_i$ to describe equilibrium is a matter of choice, but generally the fugacity approach is preferred.
It is clear that, if we want to take advantage of the fugacity criteria to perform equilibrium calculations, we need to have a means of calculating it. Let us develop a general expression for fugacity calculations. Let us begin with the definition of fugacity in terms of chemical potential for a pure component shown in (16.21a):
$d \mu=R T\, d(\ln f)$ @ const T (16.26)
The Maxwell relationship presented in equation (15.27c), written for a pure-component system, becomes:
$\left(\frac{\partial \mu}{\partial P}\right)_{T}=\widetilde{V}$ (16.27)
Consequently,
$d \mu=\widetilde{V}\, dP$ @ const T (16.28)
Substituting (16.28) into (16.26),
$R T\, d(\ln f)=\widetilde{V}\, dP$ @ const T (16.29)
Introducing the concept of fugacity coefficient given in equation (16.23a),
$\Phi=\frac{f}{P}$ (16.23a) ; $\ln \Phi=\ln f-\ln P$ (16.30)
We end up with:
$R T\, d(\ln \Phi)=\widetilde{V}\, dP-R T \frac{dP}{P}$ (16.31a)
or equivalently,
$d(\ln \Phi)=\left(\frac{\widetilde{V}}{R T}-\frac{1}{P}\right) dP$ (16.31b)
Integrating expression (16.31b),
$\ln \Phi-\ln \Phi^{*}=\int_{P^{*}}^{P}\left(\frac{\widetilde{V}}{R T}-\frac{1}{P}\right) dP$ (16.32)
It is convenient to define the lower limit of integration as the ideal state, for which the values of fugacity coefficient, volume, and compressibility factor are known.
At the ideal state, in the limit P —> 0,
$\Phi^{*} \rightarrow 1$ and $\ln \Phi^{*} \rightarrow 0$ (16.33)
Substituting into (16.32),
$\ln \Phi=\int_{0}^{P}\left(\frac{\widetilde{V}}{R T}-\frac{1}{P}\right) dP$ (16.34)
Equation (16.34) is the expression of fugacity coefficient as a function of pressure, temperature, and volume. Notice that this expression can be readily rewritten in terms of compressibility factor:
$\ln \Phi=\int_{0}^{P}(Z-1) \frac{dP}{P}$ (16.35)
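Once Z(P) is available, (16.35) can be integrated numerically. A minimal sketch with scipy, assuming a truncated-virial gas $Z = 1 + B'P$ with a made-up, temperature-fixed $B'$ (for this model the integral is exactly $B'P$, which the quadrature should reproduce):

```python
from scipy.integrate import quad

Bp = -1.5e-3     # 1/bar, made-up second-virial coefficient at fixed T
P = 20.0         # bar

def integrand(p):
    Z = 1.0 + Bp * p
    return (Z - 1.0) / p if p > 0 else Bp   # finite limit as p -> 0

ln_phi, _ = quad(integrand, 0.0, P)
assert abs(ln_phi - Bp * P) < 1e-9          # matches the analytic result
```

Note that the integrand stays finite as P tends to zero precisely because Z tends to 1 there, which is why the ideal-state lower limit in (16.33) is a convenient choice.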
Let us also derive the expression for the fugacity coefficient of a component in a multicomponent mixture. Following a pattern similar to the one above, we begin with the definition of fugacity for a component in terms of chemical potential:
$d \mu_{i}=R T\, d\left(\ln f_{i}\right)$ @ const T (16.36)
This time, it is more convenient to use the Maxwell relationship presented in equation (15.27d):
$\left(\frac{\partial \mu_{i}}{\partial V}\right)_{T, n}=-\left(\frac{\partial P}{\partial n_{i}}\right)_{V, T, n_{j \neq i}}$ (16.37)
After you introduce the definitions of fugacity coefficient and compressibility factor:
$\Phi_{i}=\frac{f_{i}}{y_{i} P}$ (16.38a)
$Z=\frac{P V}{n R T}$, (16.38b)
and recalling that our lower limit of integration is the ideal state, for which, in the limit P → 0:
$\Phi_{i}^{*} \rightarrow 1$ and hence $\ln \Phi_{i}^{*} \rightarrow 0$, (16.39a)
$Z^{*} \rightarrow 1$ and hence $\ln Z^{*} \rightarrow 0$, (16.39b)
$V^{*} \rightarrow \infty$, (16.39c)
it can be proven that:
$\ln \Phi_{i}=\frac{1}{R T} \int_{V}^{\infty}\left[\left(\frac{\partial P}{\partial n_{i}}\right)_{V, T, n_{j \neq i}}-\frac{R T}{V}\right] dV-\ln Z$ (16.40)
The multi-component mixture counterpart of equation (16.35) becomes:
$\ln \Phi_{i}=\int_{0}^{P}\left(\bar{Z}_{i}-1\right) \frac{dP}{P}$ (16.41a)
where:
$\bar{Z}_{i}=\left(\frac{\partial(n Z)}{\partial n_{i}}\right)_{P, T, n_{j \neq i}}$ (16.41b)
Equations (16.34), (16.35), (16.40), and (16.41) are very important for us. Basically, they show that fugacity, or the fugacity coefficient, is a function of pressure, temperature, and volume: $\Phi=\Phi(P, T, V)$.
This tells us that if we are able to come up with a PVT relationship for the volumetric behavior of a substance, we can calculate its fugacity by solving such expressions. It is becoming clear why we have studied equations of state — they are just what we need right now: PVT relationships for various substances. Once we have chosen the equation of state that we want to work with, we can calculate the fugacity of each component in the mixture by applying the above expression. Now that we know how to calculate fugacity, we are ready to apply the criteria for equilibrium that we just studied! That is the goal of the next module.
16.05: Cubic EOS Fugacity Expressions
Expressions (16.34) and (16.40) are particularly suitable for the calculation of fugacity with P-explicit equations of state, which cubic equations of state are. One can take every cubic EOS we have presented and proceed with the integration, coming up with the expression for fugacity for that particular equation of state. We will spare the reader these derivations. The fugacity expressions for the cubic EOS of most interest to us (SRK and PR EOS) are presented below:
SRK EOS
Pure Substance
$\ln \Phi=Z-1-\ln (Z-B)-\frac{A}{B} \ln \left(1+\frac{B}{Z}\right)$ (16.42)
Mixtures
$\ln \Phi_{i}=(B B)_{i}(Z-1)-\ln (Z-B)-\frac{A}{B}\left[(A A)_{i}-(B B)_{i}\right] \ln \left(1+\frac{B}{Z}\right)$ (16.43)
PR EOS
Pure Substance
$\ln \Phi=Z-1-\ln (Z-B)-\frac{A}{2 \sqrt{2} B} \ln \left[\frac{Z+(1+\sqrt{2}) B}{Z+(1-\sqrt{2}) B}\right]$ (16.44)
Mixtures
$\ln \Phi_{i}=(B B)_{i}(Z-1)-\ln (Z-B)-\frac{A}{2 \sqrt{2} B}\left[(A A)_{i}-(B B)_{i}\right] \ln \left[\frac{Z+(1+\sqrt{2}) B}{Z+(1-\sqrt{2}) B}\right]$ (16.45)
The expressions for A, B, $b_{i}$, $b_{m}$, $(a \alpha)_{m}$, and $(a \alpha)_{ij}$ are the same as given in the previous modules. $(AA)_{i}$ and $(BB)_{i}$ are calculated as:
$(A A)_{i}=\frac{2}{(a \alpha)_{m}} \sum_{j} y_{j}(a \alpha)_{ij}$ (16.46a) ; $(B B)_{i}=\frac{b_{i}}{b_{m}}$ (16.46b)
A and B parameters and the Z-factor of each phase are needed in order to calculate the corresponding fugacity coefficients. Now that we know how to calculate fugacity via EOS, and how this concept can be applied for equilibrium calculations (Section 16.2), we are ready to resume our discussion on Vapor-Liquid Equilibrium as we left it in Module 13. In our next module, we will concentrate on Vapor-Liquid Equilibrium via EOS.
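A sketch of how these expressions are used in practice (the dimensionless A and B below are made up, and the formula coded is the standard SRK pure-component result): solve the SRK cubic $Z^{3}-Z^{2}+\left(A-B-B^{2}\right)Z-AB=0$, then evaluate $\ln\Phi$ at each root. Where three real roots exist, the phase whose root gives the lower fugacity is the stable one:

```python
import numpy as np

A, B = 0.30, 0.04    # made-up dimensionless SRK parameters (three-root case)
roots = np.roots([1.0, -1.0, A - B - B**2, -A * B])
real = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
Zl, Zv = real[0], real[-1]     # smallest root: liquid; largest: vapor

def ln_phi(Z):
    """Standard SRK pure-component fugacity coefficient (cf. 16.42)."""
    return Z - 1.0 - np.log(Z - B) - (A / B) * np.log(1.0 + B / Z)

stable = 'vapor' if ln_phi(Zv) < ln_phi(Zl) else 'liquid'
```

The middle root, when it exists, has no physical meaning; only the largest and smallest roots are candidates for the vapor and liquid phases.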
16.06: Action Item
Problem Set
1. Provide a short lay person definition of Chemical Potential and Fugacity. Explain how these variables can be used to describe phase equilibrium.
Module Goal: To use cubic equations of state for the description of phase equilibrium.
Module Objective: To integrate the knowledge of qualitative phase behavior and the framework for quantification to actually build a phase equilibrium model.
17: Vapor-Liquid Equilibrium via EOS
Gas and liquid co-existence is common to all petroleum and natural gas applications, as we have mentioned already. Additionally, in our discussions presented in Modules 12 and 13, we tried to mathematically model the problem of vapor-liquid co-existence in equilibrium. As we recall, at that time we concluded that we were lacking tools necessary to complete the definition of the problem. Then we went on and concentrated on building those thermodynamic tools which we needed. Now we are ready to couple the concepts reviewed in Modules 14 through 16 (“Thermodynamic Tools”) with those reviewed in Modules 12 and 13. In order for us to do that, we will need to apply our knowledge of equations of state (studied in Modules 6 through 11).
Once the mathematics of vapor-liquid equilibrium is understood, we will be able to solve for a variety of applications. They will be discussed in the last modules of this course. One of those applications will be to generate any sort of phase diagrams, like the ones that we studied in our earlier discussions.
As you see, everything that we have studied in this course is completely interlinked. That’s one of the main ideas that I want you to glean from these discussions.
17.02: Equilibrium and Equilibrium Ratios (Ki)
Consider a liquid-vapor system in equilibrium. As we have discussed previously, a condition for equilibrium is that the chemical potential of each component be the same in both phases, thus:
$\mu_{i}^{L}=\mu_{i}^{V} \quad i=1,2, \ldots n \label{17.1}$
We showed that this is equivalent to:
$f_{i}^{L}=f_{i}^{V} \quad i=1,2, \ldots n \label{17.2}$
That is, for a system to be in equilibrium, the fugacity of each component must be equal in both phases as well. The fugacity of a component in a mixture can be expressed in terms of the fugacity coefficient. Therefore, the fugacity of a component in either phase can be written as:
$f_{i}^{V}=y_{i} \phi_{i}^{V} P \label{17.3a}$

$f_{i}^{L}=x_{i} \phi_{i}^{L} P \label{17.3b}$
Introducing (17.3) into (17.2),
$y_{i} \phi_{i}^{V} P=x_{i} \phi_{i}^{L} P \label{17.4}$
This equilibrium condition can be written in terms of the equilibrium ratio Ki = yi/xi, to get:
$K_{i}=\frac{y_{i}}{x_{i}}=\frac{\phi_{i}^{L}}{\phi_{i}^{V}} \label{17.5}$
Do you recall the problem at the end of Module 13? At that point we needed a more reliable way to calculate the equilibrium ratios that showed up in the Rachford-Rice objective function. We demonstrated that once we know all values of Ki’s, the problem of vapor-liquid equilibrium is reduced to solving the Rachford-Rice objective function, using the Newton-Raphson Procedure.
We can now calculate equilibrium ratios, using (17.5), in terms of fugacity coefficients. We also know that we have an analytic expression for the calculation of fugacity coefficients via EOS — this was shown in the last section of the previous module. This is why we call this module “Vapor Liquid Equilibrium via EOS.”
Is this the end to our problems? Not quite. Take a look at the expression for fugacity coefficients in mixtures both for SRK EOS and PR EOS. It is clear that they are functions of the pressure, temperature, and composition of the phases:
$\phi_{i}^{V}=\phi_{i}^{V}\left(P, T, y_{i}\right) \label{17.6a}$

$\phi_{i}^{L}=\phi_{i}^{L}\left(P, T, x_{i}\right) \label{17.6b}$
Do we know the composition of the phases “xi”, “yi” in advance? In a typical flash problem, we are given pressure, temperature and overall composition (zi). What do we want to know? How much gas, how much liquid, and the compositions of the phases: αg, αl, yi, xi. So, we do not know those compositions in advance; therefore, as it stands, we cannot calculate (17.6) or (17.5). Thus far, it seems that the flash problem is unsolvable.
If we are bold enough, we could try to overcome this problem by “guessing” those compositions, and proceed by solving (17.6) and (17.5). With this “rough” estimate for Ki’s, we could solve for “αg” with the procedure outlined in Module 13 (“Objective Function and Newton-Raphson Procedure”). Once “αg” is known, we could back calculate the compositions of the phases using equations (12.7) and (12.11). If we were correct, those compositions would match each other (the “guessed” ones with respect to the “back-calculated”). More than likely, this would not happen, and we would have to make a new “guess.” This is, fundamentally, an iterative procedure. Although this is not what we do, it does illustrate that this problem is solvable by implementing the appropriate iterative scheme.
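The back-calculation step referred to above (equations 12.7 and 12.11) is straightforward to express in code. A small Python sketch follows; the function name is our own.

```python
def phase_compositions(z, K, alpha_g):
    """Back-calculate liquid (x) and vapor (y) mole fractions from the
    overall composition z, the K-values, and the vapor molar fraction
    alpha_g, using x_i = z_i / (1 + alpha_g (K_i - 1)) and y_i = K_i x_i."""
    x = [zi / (1.0 + alpha_g * (Ki - 1.0)) for zi, Ki in zip(z, K)]
    y = [Ki * xi for Ki, xi in zip(K, x)]
    return x, y
```

In the iterative scheme just described, these compositions would feed the next evaluation of the fugacity coefficients.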
In equations (17.4) and (17.5), the fugacity of the liquid and vapor phases were computed in terms of the fugacity coefficient. Hence, this method of expressing the equilibrium criteria is known as the dual-fugacity coefficient method. For the sake of completeness, it is necessary to indicate that the fugacity of a component in a mixture can also be expressed in terms of a thermodynamic concept called the activity coefficient. While the fugacity coefficient is seen as a measure of the deviation of behavior with respect to the ideal gas model, the activity coefficient measures the deviation of behavior with respect to the ideal liquid model. This approach is called the dual-activity coefficient method, in which both liquid and vapor phase fugacities are expressed in terms of the activity coefficient and substituted into the equilibrium criteria in (17.2). A mixed activity coefficient-fugacity coefficient method can be also devised by expressing the liquid phase fugacities in terms of activity coefficients and the vapor phase fugacities in terms of fugacity coefficients. Each of the aforementioned methods for the calculation of phase equilibria has its advantages and disadvantages. The dual-fugacity-coefficient method is simpler both conceptually and computationally, but if the equation of state does not predict liquid and vapor densities well, the results may be inaccurate. The activity coefficient method can be more accurate, but it is more complicated to implement. For the rest of the discussion, the dual-fugacity coefficient approach will be used.
We have essentially reduced the typical VLE problem to that of solving a system of non-linear algebraic equations. The objective function is the nucleus of the VLE calculation. Rather than solve the material balance equations in their elemental form, the use of an objective function will allow for a more stable iteration scheme. We have seen that three objective functions are available for this calculation:
$F_{y}\left(\alpha_{g}\right)=\sum_{i=1}^{n} \frac{z_{i} K_{i}}{1+\alpha_{g}\left(K_{i}-1\right)}-1=0 \label{17.7a}$
$F_{x}\left(\alpha_{g}\right)=\sum_{i=1}^{n} \frac{z_{i}}{1+\alpha_{g}\left(K_{i}-1\right)}-1=0 \label{17.7b}$
$F\left(\alpha_{g}\right)=\sum_{i=1}^{n} \frac{z_{i}\left(K_{i}-1\right)}{1+\alpha_{g}\left(K_{i}-1\right)}=0 \label{17.7c}$
Expression \ref{17.7c} is the Rachford-Rice Objective Function; we have proven that its numerical solution is cleaner than that of equations (17.7a) and (17.7b). We also discussed that the key problem in solving for “αg” is the need for values for the equilibrium ratios (Ki’s). With the aid of equations (17.5) and (17.6) these problems can be solved simultaneously by iteration techniques.
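For fixed Ki's, the Rachford-Rice equation (17.7c) can be solved for αg with a few Newton-Raphson iterations, since its derivative is available analytically and is always negative. A minimal Python sketch, assuming a two-phase solution exists in (0, 1):

```python
def rachford_rice(z, K, tol=1e-10, max_iter=50):
    """Solve F(alpha_g) = sum z_i (K_i - 1) / (1 + alpha_g (K_i - 1)) = 0
    by Newton-Raphson (sketch; assumes a two-phase solution exists)."""
    alpha = 0.5  # initial guess inside (0, 1)
    for _ in range(max_iter):
        F = sum(zi * (Ki - 1.0) / (1.0 + alpha * (Ki - 1.0))
                for zi, Ki in zip(z, K))
        # Analytical derivative dF/d(alpha_g); it is strictly negative,
        # which is what makes this objective function well-behaved.
        dF = -sum(zi * (Ki - 1.0) ** 2 / (1.0 + alpha * (Ki - 1.0)) ** 2
                  for zi, Ki in zip(z, K))
        step = F / dF
        alpha -= step
        if abs(step) < tol:
            break
    return alpha
```

For a binary with z = (0.5, 0.5) and K = (2, 0.5), the analytical solution is αg = 0.5, which the iteration recovers immediately.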
In the problem we have posed, we have (3n+1) unknowns:

• Ki, i = 1, 2, … n
• yi, i = 1, 2, … n
• xi, i = 1, 2, … n
• and αg.
And we also have (3n+1) equations:
$K_{i}=\frac{\phi_{i}^{L}\left(P, T, x_{i}\right)}{\phi_{i}^{V}\left(P, T, y_{i}\right)} i=1,2, \ldots n \label{17.8a}$
$y_{i}=K_{i} x_{i} \quad i=1,2, \dots n \label{17.8b}$
$x_{i}=\frac{z_{i}}{1+\alpha_{g}\left(K_{i}-1\right)} \quad i=1,2, \ldots n \label{17.8c}$
$\sum_{i=1}^{n} \frac{z_{i}\left(K_{i}-1\right)}{1+\alpha_{g}\left(K_{i}-1\right)}=0 \label{17.8d}$
where P, T, and zi are the values that we know. See how this is only a restatement of the original set of equations that contain the equilibrium conditions, material balances, and molar fraction constraints:
$f_{i}^{L}=f_{i}^{V} \quad i=1,2, \ldots n \label{17.9a}$

$z_{i}=\alpha_{g} y_{i}+\left(1-\alpha_{g}\right) x_{i} \quad i=1,2, \ldots n \label{17.9b}$

$K_{i}=\frac{y_{i}}{x_{i}} \quad i=1,2, \ldots n \label{17.9c}$

$\sum_{i=1}^{n} x_{i}=1 \quad \text { or } \quad \sum_{i=1}^{n} y_{i}=1 \label{17.9d}$
Since we have the same number of equations as unknowns, this system is determined and has a unique solution. To solve this system of simultaneous non-linear equations, there are two categories of solution techniques:
• Newton-type methods,
• Substitution-type methods.
Newton-type methods for more than one unknown require evaluating the (n x n) elements of a Jacobian matrix at every iteration step. This can be computationally expensive. Additionally, the Newton-Raphson method requires a very good initial guess (i.e., close to the actual solution) to guarantee convergence to the true values. This is not always possible, especially at the start of the procedure.
The most popular method, and the easiest to implement, is the substitution type. However, substitution-type methods can be quite slow under some conditions of interest. In such cases, we either switch to a Newton-Raphson solver (which has a much better rate of convergence) or apply an acceleration procedure to the substitution method itself.
17.04: Successive Substitution Method (SSM)
In a substitution-type method, we start with initial guesses for all of the unknowns and loop through the equations to obtain “better” approximations for each of them. We test the goodness of the solution at every iteration by comparing the new approximation to the previous guess. If the correction is small under a given convergence criterion, the procedure is stopped and the results of the last iteration are taken as the final answer.
As we discussed in the previous section, reliable values for the equilibrium constants (Ki’s) must be obtained before we can solve the Rachford-Rice Objective Function. Generally, first estimates for equilibrium constants are calculated by using Wilson’s empirical equation (equation 13.15). However, Wilson’s correlation yields only approximate values for equilibrium ratios. Here, we apply thermodynamic equilibrium considerations in order to obtain more reliable predictions for Ki’s.
Let us recall that for a system to be in equilibrium, any net transfer (of heat, momentum, or mass) must be zero. For this to be true, all potentials (temperature, pressure, and chemical potential) must be the same in all of the phases. Provided that the temperature and pressure of both phases are the same, a zero net transfer for all components in the mixture results when all chemical potentials are the same. A restatement of this: all fugacities for all components in each phase are equal. Since fugacity is a measure of the potential for transfer of a component between two phases, equal fugacities of a component in both phases results in zero net transfer. SSM (Successive Substitution Method) takes advantage of this fact. Equation (17.5) showed that the fugacity criterion for equilibrium allows writing the equilibrium ratios Ki as a function of fugacity coefficients as follows:
$K_{i}=\frac{\phi_{i}^{L}}{\phi_{i}^{V}} \label{17.10}$
Equation (17.10) presupposes equality of fugacity ($f_{i}^{V}=f_{i}^{L}$), but during an iterative procedure, fugacities may not be equal ($f_{i}^{V} \neq f_{i}^{L}$) until convergence conditions have been attained. Therefore, if fugacities are not equal (convergence has not been achieved), Equation \ref{17.10} becomes:
$K_{i}=\frac{\phi_{i}^{L}}{\phi_{i}^{V}}=\frac{f_{i}^{L} /\left(x_{i} P\right)}{f_{i}^{V} /\left(y_{i} P\right)}=\frac{y_{i}}{x_{i}}\left(\frac{f_{i}^{L}}{f_{i}^{V}}\right) \label{17.11}$
Using the above equation, a correction step can be formulated to improve the current values of Ki. The SSM-updating step is written as:
\begin{align} K_{i}^{n+1} &=\left(\frac{y_{i}}{x_{i}}\right)^{n}\left(\frac{f_{l i}}{f_{g i}}\right)^{n} \\[4pt] &=K_{i}^{n}\left(\frac{f_{l i}}{f_{g i}}\right)^{n} \label{17.12a} \end{align}
SSM updates all previous equilibrium ratios (Ki) using the fugacities predicted by the equation of state. This iteration method requires an initial estimation of Ki-values — for which Wilson’s correlation is used. It can easily be concluded that the convergence criteria will be satisfied whenever the fugacity ratios of all the components in the system are close to unity. Such condition is achieved when the following inequality is satisfied:
$\sum_{i}^{n}\left(\frac{f_{l i}}{f_{g i}}-1\right)^{2}<10^{-14} \label{17.12b}$
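The SSM updating step (17.12a) and convergence test (17.12b) can be sketched as two small helpers; the phase fugacities themselves would come from the equation of state at each iteration, and the function names here are our own.

```python
def ssm_update(K, f_liq, f_vap):
    """SSM correction step (Eq. 17.12a): K_i^(n+1) = K_i^n * (f_li / f_gi)."""
    return [Ki * fl / fv for Ki, fl, fv in zip(K, f_liq, f_vap)]

def ssm_converged(f_liq, f_vap, eps=1e-14):
    """SSM convergence test (Eq. 17.12b): sum (f_li/f_gi - 1)^2 < eps."""
    return sum((fl / fv - 1.0) ** 2 for fl, fv in zip(f_liq, f_vap)) < eps
```

Note that at convergence the fugacity ratios are unity, so the update leaves the Ki-values unchanged, exactly as the equilibrium criterion requires.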
When the system is close to the critical point and fugacities are strongly composition-dependent, a slowing-down of the convergence rate of the SSM (Successive Substitution Method) is to be expected. In an attempt to avoid slow convergence problems, some methods have been proposed. Among the most popular are the Minimum Variable Newton Raphson (MVNR) Method and the Accelerated and Stabilized Successive Substitution Method (ASSM).
The ASSM is basically an accelerated version of the SSM procedure, and thus follows a similar theory. This procedure is implemented to accelerate the calculation of Ki-values, especially in the region close to the critical point, where the use of the SSM alone is not efficient. The ASSM technique was presented by Risnes et al. (1981) and consists of the following steps:
1. Use the SSM technique to initiate the updating of the Ki-values the first time.
2. Check all of the following criteria at every step during the SSM iterations:
(17.13a)

$\left|\alpha_{g}^{n e w}-\alpha_{g}^{o l d}\right|<0.1 \label{17.13b}$

(17.13c)

$0<\alpha_{g}^{n e w}<1 \label{17.13d}$
These criteria indicate that the iteration is sufficiently close to the solution for the acceleration to be efficient. Rri is the ratio of the liquid fugacity to the gas fugacity of the i-th component, and “αg” is the molar gas fraction of the two-phase system:
$R_{r i}=\frac{f_{l i}}{f_{g i}} \label{17.14}$
3. If the system satisfies ALL above criteria, the iteration technique is then switched from the SSM to the ASSM. Otherwise, SSM is used for the update of the Ki-values. The following expressions are used to update Ki-values in ASSM:
$K_{i}^{n e w}=K_{i}^{o l d}\left(R_{r i}\right)^{\lambda_{i}}$

where

$\lambda_{i}=\frac{R_{r i}^{o l d}-1}{R_{r i}^{o l d}-R_{r i}^{n e w}}$
In some cases, using a constant acceleration value of λi =2 is good enough.
4. Once all the criteria in step (2) are satisfied, skip step (2) for the subsequent iterations and use the ASSM technique to update Ki-values until convergence is attained, unless it does not give acceptable new estimates (as stated next).
5. When ASSM is used, it must always be tested to show that it leads to an improved solution (i.e., that it brings fugacity ratios closer to unity). If not, it must be rejected and switched back to SSM.
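The accelerated update of step 3 can be sketched as follows. We read the update as the fugacity ratio raised to the exponent λi (consistent with the constant-acceleration case λi = 2), and fall back to that constant value when the ratio has not changed between iterations; the function name is our own.

```python
def assm_update(K, Rr_old, Rr_new):
    """Accelerated K-value update (sketch of Risnes et al.'s step):
    K_i^new = K_i^old * (Rr_i^new) ** lambda_i, with
    lambda_i = (Rr_i^old - 1) / (Rr_i^old - Rr_i^new)."""
    K_new = []
    for Ki, ro, rn in zip(K, Rr_old, Rr_new):
        # Fall back to the constant acceleration lambda = 2 when the
        # fugacity ratio has not changed (degenerate denominator).
        lam = (ro - 1.0) / (ro - rn) if ro != rn else 2.0
        K_new.append(Ki * rn ** lam)
    return K_new
```

Per step 5, each accelerated estimate must be tested against the fugacity-ratio criterion; if it does not improve the solution, the plain SSM update is used instead.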
Even though we are outlining Risnes et al.'s version of the accelerated Successive Substitution Method, several other published algorithms share the same purpose of accelerating the successive substitution method. Fussel and Yanosik (1978), Michelsen (1982), and Mehra et al. (1983) are examples of such attempts. Risnes et al.'s version is the easiest and most straightforward to implement, but it is subject to limitations.
17.06: The Stability Criteria
Interestingly enough, one of the most difficult aspects of making VLE calculations may not be the two-phase splitting calculation itself, but knowing whether or not a mixture will actually split into two (or even more) phases for a pressure and temperature condition.
A single-phase detection routine has to be simultaneously introduced at this stage to detect whether the system is in a true single-phase condition at the given pressure and temperature or whether it will actually split into two-phases. Several approaches may be used here: the Bring-Back technique outlined by Risnes et al. (1981), and Phase Stability Criteria introduced by Michelsen (1982), among others. Here we describe Michelsen’s stability test.
Michelsen (1982) suggested creating a second phase inside any given mixture to verify whether such a system is stable or not. It is the same idea behind the Bring-Back procedure (Risnes et al., 1981), but this test additionally provides a straightforward interpretation for the cases where trivial solutions are found (Ki → 1). The test must be performed in two parts, considering two possibilities: the second phase can be either vapor-like or liquid-like. The outline of the method is described below, following the approach presented by Whitson and Brule (2000).
1. Calculate the mixture fugacity (fzi) using overall composition zi.
2. Create a vapor-like second phase,
1. Use Wilson’s correlation to obtain initial Ki-values.
2. Calculate second-phase mole numbers, Yi: $Y_{i}=z_{i} K_{i} \label{17.15}$
3. Obtain the sum of the mole numbers: $S_{V}=\sum_{i=1}^{n} Y_{i} \label{17.16}$
4. Normalize the second-phase mole numbers to get mole fractions: $y_{i}=\frac{Y_{i}}{\sum_{j=1}^{n} Y_{j}} \label{17.17}$
5. Calculate the second-phase fugacity (fyi) using the corresponding EOS and the previous composition.
6. Calculate corrections for the K-values:
$R_{i}=\frac{f_{z i}}{f_{y i} S_{V}} \label{17.18}$

$K_{i}^{n+1}=K_{i}^{n} R_{i} \label{17.19}$
7. Check if:
1. Convergence is achieved: $\sum_{i=1}^{n}\left(R_{i}-1\right)^{2}<10^{-12} \label{17.20}$
2. A trivial solution is approached: $\sum_{i=1}^{n}\left(\ln K_{i}\right)^{2}<10^{-4} \label{17.21}$
If a trivial solution is approached, stop the procedure.
If convergence has not been attained, use the new K-values and go back to step (b).
3. Create a liquid-like second phase,
Follow the previous steps by replacing equations (17.15), (17.16), (17.17), and (17.18) by (17.22), (17.23), (17.24), and (17.25) respectively.
$Y_{i}=\frac{z_{i}}{K_{i}} \label{17.22}$

$S_{L}=\sum_{i=1}^{n} Y_{i} \label{17.23}$

$x_{i}=\frac{Y_{i}}{\sum_{j=1}^{n} Y_{j}} \label{17.24}$

$R_{i}=\frac{f_{x i}}{f_{z i}} S_{L} \label{17.25}$
The interpretation of the results of this method follows:
• The mixture is stable (single-phase condition prevails) if:
• Both tests yield S < 1 (SL < 1 and SV < 1),
• Or both tests converge to trivial solution,
• Or one test converges to trivial solution and the other gives S < 1.
• Only one test indicating S > 1 is sufficient to determine that the mixture is unstable and that the two-phase condition prevails. The same conclusion is made if both tests give S > 1, or if one of the tests converges to the trivial solution and the other gives S > 1.
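Putting the vapor-like branch together, a compact sketch follows. The callables `mixture_fug` and `trial_fug` stand in for the EOS fugacity evaluations of the mixture and trial phases; they are hypothetical, supplied by the caller, and the tolerances shown are illustrative.

```python
import math

def vapor_like_stability(z, K0, mixture_fug, trial_fug,
                         tol=1e-12, max_iter=100):
    """Sketch of Michelsen's vapor-like stability branch.  Returns the
    mole-number sum S and the final K-values; S > 1 at a non-trivial
    converged solution indicates the mixture is unstable."""
    fz = mixture_fug(z)                            # mixture fugacities at z
    K = list(K0)                                   # e.g. Wilson initial K's
    S = sum(zi * Ki for zi, Ki in zip(z, K))
    for _ in range(max_iter):
        Y = [zi * Ki for zi, Ki in zip(z, K)]      # second-phase mole numbers
        S = sum(Y)
        y = [Yi / S for Yi in Y]                   # normalized mole fractions
        fy = trial_fug(y)                          # trial-phase fugacities
        R = [fzi / (fyi * S) for fzi, fyi in zip(fz, fy)]
        K = [Ki * Ri for Ki, Ri in zip(K, R)]      # correct the K-values
        if sum((Ri - 1.0) ** 2 for Ri in R) < tol:
            break                                  # converged
        if sum(math.log(Ki) ** 2 for Ki in K) < 1e-4:
            break                                  # trivial solution K -> 1
    return S, K
```

With ideal-solution fugacities (fugacity proportional to mole fraction), the test collapses to the trivial solution K → 1, as expected for a mixture that cannot split.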
17.07: Action Item
Problem Set
1. Provide a comprehensive flowchart of how to build a flash calculation model for a multicomponent mixture. Assume that the data you have available is pressure, temperature, fluid composition, and all the properties of the pure components making up the mixture. This flowchart should be clear enough so that anyone could code it and generate results!
Learning Objectives
• Module Goal: To highlight the important properties used to characterize natural gas and condensate systems.
• Module Objective: To present the most popular models for estimating properties of natural gas and condensate systems.
Fluid properties are used to characterize the condition of a fluid at a given state. A reliable estimation and description of the properties of hydrocarbon mixtures is fundamental in petroleum and natural gas engineering analysis and design. Fluid properties are not independent, just as pressure, temperature, and volume are not independent of each other. Equations of State provide the means for the estimation of the P-V-T relationship, and from them many other thermodynamic properties can be derived. Compositions are usually required for the calculation of the properties of each phase. For a VLE system, using the tools we have discussed in the previous lectures, we are ready to predict some important properties of both the liquid (condensate) and vapor (natural gas) phases by means of the known values of composition of both phases. Some of the most relevant are discussed next.
• 18.1: Molecular Weight
The molecular weight (MW) of each of the phases in a VLE system is calculated as a function of the molecular weight of the individual components (MWi), provided that both the composition of the gas (yi) and the liquid (xi) are known.
• 18.2: Density
Density is customarily defined as the amount of mass contained in a unit volume of fluid. Density is the single-most important property of a fluid, once we realize that most other properties can be obtained or related to density. Both specific volume and density — which are inversely proportionally related to each other — tell us the story of how far apart the molecules in a fluid are from each other.
• 18.3: Specific Gravity
Specific gravity is defined as the ratio of fluid density to the density of a reference substance, both defined at the same pressure and temperature. These densities are usually defined at standard conditions (14.7 psia and 60°F). For a condensate, oil or a liquid, the reference substance is water and for a natural gas, or any other gas for this matter, the reference substance is air.
• 18.4: API Gravity
Petroleum and Natural Gas Engineers also use another gravity term which is called API gravity.
• 18.5: Volumetric Factors (Bo and Bg)
Due to the dramatically different conditions prevailing at the reservoir compared to the conditions at the surface, we do not expect that 1 barrel of fluid at reservoir conditions could contain the same amount of matter as 1 barrel of fluid at surface conditions. Volumetric factors were introduced in petroleum and natural gas calculations to readily relate the volume of fluids that are obtained at the surface (stock tank) to the volume that the fluid actually occupied in the reservoir.
• 18.6: Isothermal Compressibilities
For liquids, the value of isothermal compressibility is very small because a unitary change in pressure causes a very small change in volume for a liquid. For natural gases, isothermal compressibility varies significantly with pressure.
• 18.7: Surface Tension
Surface tension is a measure of the surface free energy of liquids, i.e., the extent of energy stored at the surface of liquids. Although it is also known as interfacial force or interfacial tension, the name surface tension is usually used in systems where the liquid is in contact with gas. Qualitatively, it is described as the force acting on the surface of a liquid that tends to minimize the area of its surface, resulting in liquids forming droplets with spherical shape, for instance.
• 18.8: Action Item
18: Properties of Natural Gas and Condensates I
The molecular weight (MW) of each of the phases in a VLE system is calculated as a function of the molecular weight of the individual components (MWi), provided that both the composition of the gas (yi) and the liquid (xi) are known:
$M W_{g}=\sum_{i=1}^{n} y_{i} M W_{i} \label{18.1a}$
$M W_{l}=\sum_{i=1}^{n} x_{i} M W_{i} \label{18.1b}$
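In code, equations 18.1a and 18.1b are simple mole-fraction-weighted sums; the component values below are illustrative.

```python
def phase_mw(fractions, mw):
    """Apparent molecular weight of a phase (Eqs. 18.1a/18.1b):
    the mole-fraction-weighted average of the component MWs."""
    return sum(fi * MWi for fi, MWi in zip(fractions, mw))
```

For a 90/10 methane-ethane vapor, MWg = 0.9(16.043) + 0.1(30.070) ≈ 17.45 lbm/lbmol.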
Density is customarily defined as the amount of mass contained in a unit volume of fluid. Density is the single-most important property of a fluid, once we realize that most other properties can be obtained or related to density. Both specific volume and density — which are inversely proportionally related to each other — tell us the story of how far apart the molecules in a fluid are from each other. For liquids, density is high — which translates to a very high molecular concentration and short intermolecular distances. For gases, density is low — which translates to low molecular concentrations and large intermolecular distances.
The question then is: Given this, how can we obtain this all-important property called density? This takes us back to Equations of State (EOS). Since very early times, there have been correlations for the estimation of density of the liquids (oil, condensates) and gases/vapors (dry gases, wet gases). In modern times, equations of state (EOS) are a natural way of obtaining densities. The density of the fluid ‘f’ is calculated using its compressibility factor (Zf) as predicted by an appropriate equation of state. From the real gas law, the density can be expressed as:
$\rho_{f}=\frac{P}{R T}\left(\frac{M W_{f}}{Z_{f}}\right) \label{18.2}$
where: MWf is the molecular weight of fluid ‘f’. Expression \ref{18.2} is used for both the gas and liquid density. In either case, the proper value for MWf (either MWg or MWl) and Zf (either Zg or Zo) has to be used. This takes us back to the discussion of equations of state. From Equation \ref{18.2} it is clear that all that we need is the Z-factor.
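In field units (psia, °R, lbm/lbmol, with R = 10.7316 psia·ft³/(lbmol·°R)), Equation 18.2 reads directly as a one-liner:

```python
R = 10.7316  # universal gas constant, psia·ft3/(lbmol·°R)

def fluid_density(P, T, MW, Z):
    """Density of a fluid phase from Eq. 18.2: rho = P·MW / (Z·R·T).
    P in psia, T in °R, MW in lbm/lbmol; returns rho in lbm/ft3."""
    return P * MW / (Z * R * T)
```

At standard conditions (14.696 psia, 519.67 °R) with Z ≈ 1, methane (MW = 16.043) gives roughly 0.042 lbm/ft³.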
The all-important parameter for calculating density, both for the liquid and the vapor phase, is the Z-factor. The relation between liquid behavior and the Z-factor is not obvious, because the Z-factor has traditionally been defined for gases. However, we can define “Z” for liquids as well. “Z” is, indeed, a measure of departure from ideal-gas behavior, and for a liquid we still measure that same departure. A “liquid state” is a tremendous departure from ideal-gas conditions, and as such, “Z” for a liquid is always very far from unity. Typical values of “Z” for liquids are small.
Equations of State have proven very reliable for the estimation of vapor densities, but they do not do as good a job with liquid densities. There is an ongoing debate among authors about the reliability of Z-factor estimations for liquids using EOS; many still consider EOS unreliable for liquid density predictions and recommend correlations instead. However, the Peng-Robinson EOS provides fair estimates for both vapor and liquid densities as long as we are dealing with natural gas and condensate systems.
Empirical correlations for Z-factor for natural gases were developed before the advent of digital computers. Although their use is in decline, they can still be used for fast estimates of the Z-factor. The most popular of such correlations include those of Hall-Yarborough and Dranchuk-Abou-Kassem.
Chart look-up is another means of determining Z-factor of natural gas mixtures. These methods are invariably based on some type of corresponding states development. According to the theory of corresponding states, substances at corresponding states will exhibit the same behavior (and hence the same Z-factor). The chart of Standing and Katz is the most commonly used Z-factor chart for natural gas mixtures.
Methods of direct calculation using corresponding states have also been developed, ranging from correlations of chart values to sophisticated equation sets based on theoretical developments.
However, the use of equations of state to determine Z-factors has grown in popularity as computing capabilities have improved. Equations of state represent the most complex method of calculating the Z-factor, but also the most accurate. A variety of equations of state have been developed to describe gas mixtures, ranging from the ideal-gas EOS (which yields only one root, for the vapor, and poor predictions at high pressures and low temperatures), through cubic EOS (which yield up to three roots, including one for the liquid phase), to more advanced EOS such as BWR and AGA8.
Specific gravity is defined as the ratio of fluid density to the density of a reference substance, both defined at the same pressure and temperature. These densities are usually defined at standard conditions (14.7 psia and 60°F). For a condensate, oil or a liquid, the reference substance is water:
$\gamma_{o}=\frac{\left(\rho_{0}\right)_{s c}}{\left(\rho_{w}\right)_{s c}} \label{18.3}$
The value of water density at standard conditions is 62.4 lbm/ft3 approximately. For a natural gas, or any other gas for this matter, the reference substance is air:
$\gamma_{g}=\frac{\left(\rho_{g}\right)_{s c}}{\left(\rho_{a i r}\right)_{s c}} \label{18.3a}$
Or, equivalently, substituting Equation (18.2) evaluated at standard conditions ($Z_{s c} \approx 1$ for most gases),
$\gamma_{g}=\frac{M W_{g}}{M W_{a i r}} \label{18.3b}$
where the value of the molecular weight for air is $MW_{air} = 28.96\, lbm/lbmol$. Specific gravity is nondimensional because both numerator and denominator have the same units.
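Equation 18.3b makes the gas gravity a one-line computation:

```python
MW_AIR = 28.96  # lbm/lbmol

def gas_specific_gravity(mw_gas):
    """Gas specific gravity from Eq. 18.3b: gamma_g = MW_g / MW_air."""
    return mw_gas / MW_AIR
```

Methane (MW ≈ 16.043 lbm/lbmol) gives γg ≈ 0.554, the familiar value for a lean natural gas.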
18.04: API Gravity
Petroleum and Natural Gas Engineers also use another gravity term which is called API gravity. It is used for liquids (e.g., condensates) and is defined as:
$^o API=\frac{141.5}{\left.\gamma_{o}\right|_{sc}}-131.5 \label{18.4}$
By definition (see Equation 18.3), the specific gravity of water is unity. Therefore, water has an API gravity of 10. The API gravity of 10 is associated with very heavy, almost asphaltic, oils. Light crude oils have an API greater than or equal to 45°. Condensate gravities range between 50° and 70° API. Liquid condensates are normally light in color.
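Equation 18.4 in code, with water as a quick check:

```python
def api_gravity(gamma_o):
    """API gravity of a liquid from Eq. 18.4: API = 141.5/gamma_o - 131.5."""
    return 141.5 / gamma_o - 131.5
```

Water (γo = 1) returns 10 °API; a 0.8-gravity liquid returns about 45 °API, at the light-crude boundary quoted above.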
18.05: Volumetric Factors (Bo and Bg)
Due to the dramatically different conditions prevailing at the reservoir when compared to the conditions at the surface, we do not expect that 1 barrel of fluid at reservoir conditions could contain the same amount of matter as 1 barrel of fluid at surface conditions. Volumetric factors were introduced in petroleum and natural gas calculations in order to readily relate the volume of fluids that are obtained at the surface (stock tank) to the volume that the fluid actually occupied when it was compressed in the reservoir.
For example, the volume that a live oil occupies at the reservoir is more than the volume of oil that leaves the stock tank at the surface. This may be counter-intuitive. However, this is a result of the evolution of gas from oil as pressure decreases from reservoir pressure to surface pressure. If an oil had no gas in solution (i.e., a dead oil), the volume that it would occupy at reservoir conditions is less than the volume that it occupies at the surface. In this case, only liquid compressibility plays a role in the change of volume.
The formation volume factor of a natural gas ($B_g$) relates the volume of 1 lbmol of gas at reservoir conditions to the volume of the same lbmol of gas at standard conditions, as follows:
$B_{g}=\frac{\text { Volume of } 1 \text { lbmol of gas at reservoir conditions, RCF }}{\text { Volume of } 1 \text { lbmol of gas at standard conditions, SCF }} \label{18.5}$
Those volumes are, evidently, the specific molar volumes of the gas at the given conditions. The reciprocal of the specific molar volume is the molar density, and thus, Equation \ref{18.5} could be written:
$B_{g}=\frac{\left.\bar{v}_{g}\right|_{res}}{\left.\bar{v}_{g}\right|_{sc}}=\frac{\left.\bar{\rho}_{g}\right|_{sc}}{\left.\bar{\rho}_{g}\right|_{res}}=\frac{\left.\left(\rho_{g} / M W_{g}\right)\right|_{sc}}{\left.\left(\rho_{g} / M W_{g}\right)\right|_{res}} \label{18.6}$
Introducing the definition for densities in terms of compressibility factor,
$B_{g}=\frac{\frac{P_{s c}}{R T_{s c} Z_{s c}}}{\frac{P}{R T Z}} \label{18.7}$
Therefore, recalling that $Z_{s c} \approx 1$,
$B_{g}=\frac{P_{s c}}{T_{s c}} \frac{Z T}{P}=0.02827 \frac{Z T}{P}\,[RCF/SCF] \label{18.8}$
Gas formation volume factors can also be expressed in [RB/SCF]. In that case, since 1 RB = 5.615 RCF, we write:

$B_{g}=0.005035 \frac{Z T}{P}\,[RB/SCF] \label{18.9}$
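A short sketch of Equations 18.8 and 18.9, assuming standard conditions of 14.696 psia and 519.67 °R (60 °F):

```python
P_SC, T_SC = 14.696, 519.67   # assumed standard conditions (psia, degR)

def bg_rcf_per_scf(z, t_res, p_res):
    """Equation 18.8: gas formation volume factor in RCF/SCF.
    t_res in degR, p_res in psia, z evaluated at reservoir conditions."""
    return (P_SC / T_SC) * z * t_res / p_res

def bg_rb_per_scf(z, t_res, p_res):
    """Equation 18.9: same quantity in RB/SCF (1 RB = 5.615 RCF)."""
    return bg_rcf_per_scf(z, t_res, p_res) / 5.615
```

For Z = 0.9 at 640 °R and 3,000 psia this gives about 0.0054 RCF/SCF, i.e., the gas expands by a factor of roughly 185 from reservoir to surface.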
The formation volume factor of an oil or condensate (Bo) relates the volume of 1 lbmol of liquid at reservoir conditions to the volume of that liquid once it has gone through the surface separation facility.
$B_{o}=\frac{\text { Volume of } 1 \text { lbmol of liquid at reservoir conditions, RB }}{\text { Volume of that lbmol after going through separation, STB }} \label{18.10}$
The total volume occupied by 1 lbmol of liquid at reservoir conditions (Vo)res can be calculated through the compressibility factor of that liquid, as follows:
$\left(V_{o}\right)_{r e s}=\left(\frac{n Z_{o} R T}{P}\right)_{r e s} \label{18.11}$
where $n = 1 \,lbmol$.
Upon separation, some gas is going to be taken out of the liquid stream feeding the surface facility. Let us call “nst” the moles of liquid leaving the stock tank per mole of feed entering the separation facility. The volume that 1 lbmol of reservoir liquid is going to occupy after going through the separation facility is given by:
$\left(V_{o}\right)_{s t}=\left(\frac{n_{s t} Z_{o} R T}{P}\right)_{s c} \label{18.12}$

where $n_{st}$ is expressed per lbmol of reservoir liquid feed.
Here we assume that the last stage of separation, the stock tank, operates at standard conditions. Introducing Equations \ref{18.12} and \ref{18.11} into \ref{18.10}, we end up with:
$B_{o}=\frac{\left(\frac{n Z_{o} R T}{P}\right)_{r e s}}{\left(\frac{n_{s t} Z_{o} R T}{P}\right)_{s c}} \label{18.13}$
or,
$B_{o}=\frac{1}{n_{s t}} \frac{\left(Z_{o}\right)_{r e s}}{\left(Z_{o}\right)_{s c}} \frac{T_{res}}{P_{res}} \frac{P_{s c}}{T_{s c}} \label{18.14}$

Please notice that $(Z_o)_{sc}$ - unlike $Z_{sc}$ for a gas - is never equal to one. The oil formation volume factor can also be seen as the volume of reservoir fluid required to produce one barrel of oil in the stock tank.
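Equation 18.14 can be sketched as a one-liner. The numbers below (the liquid compressibility factors and the stock-tank mole fraction) are purely illustrative:

```python
P_SC, T_SC = 14.696, 519.67   # assumed standard conditions (psia, degR)

def bo_rb_per_stb(n_st, zo_res, zo_sc, t_res, p_res):
    """Equation 18.14: oil formation volume factor, RB/STB.
    n_st: stock-tank moles per mole of reservoir liquid feed;
    zo_res, zo_sc: liquid compressibility factors at reservoir and
    standard conditions; t_res in degR; p_res in psia."""
    return (1.0 / n_st) * (zo_res / zo_sc) * (t_res / p_res) * (P_SC / T_SC)

# Illustrative inputs only: n_st = 0.7, Zo = 2.6 (res) and 0.013 (sc),
# 640 degR and 3,000 psia at the reservoir -> about 1.7 RB/STB.
bo = bo_rb_per_stb(0.7, 2.6, 0.013, 640.0, 3000.0)
```

Note that liquid Z-factors are far from unity at both conditions, which is why neither cancels out of Equation 18.14.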
18.06: Isothermal Compressibilities

The isothermal compressibility of a fluid is defined as follows:
$c_{f}=-\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_{T} \label{18.15}$

This expression can also be given in terms of fluid density (the minus sign drops because density increases as volume decreases):

$c_{f}=\frac{1}{\rho}\left(\frac{\partial \rho}{\partial P}\right)_{T} \label{18.16}$
For liquids, the value of isothermal compressibility is very small, because a unit change in pressure causes only a very small change in the volume of a liquid. In fact, for a slightly compressible liquid, the value of compressibility ($c_o$) is usually assumed independent of pressure. Therefore, for small pressure ranges across which $c_o$ is nearly constant, Equation \ref{18.16} can be integrated to get:
$c_{o}\left(p-p_{b}\right)=\ln \left(\frac{\rho_{o}}{\rho_{o b}}\right) \label{18.17}$
Exponentiating Equation \ref{18.17} and truncating the series expansion of the exponential after the linear term, the following expression relates two liquid densities ($\rho_{o}$, $\rho_{ob}$) at two different pressures ($p$, $p_b$):

$\rho_{o}=\rho_{o b}\left[1+c_{o}\left(p-p_{b}\right)\right] \label{18.18}$
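Equations 18.17 and 18.18 can be compared directly; for typical $c_o$ values the linearized form tracks the exact exponential very closely. The inputs below are illustrative:

```python
import math

def oil_density(rho_ob, c_o, p, p_b, linearized=True):
    """Equation 18.18 (default) or the exact integral, Equation 18.17."""
    if linearized:
        return rho_ob * (1.0 + c_o * (p - p_b))
    return rho_ob * math.exp(c_o * (p - p_b))

# Illustrative: rho_ob = 50 lbm/ft3, c_o = 1e-5 1/psi, 1,000 psi above p_b
approx = oil_density(50.0, 1e-5, 2000.0, 1000.0)                   # 50.5
exact = oil_density(50.0, 1e-5, 2000.0, 1000.0, linearized=False)  # ~50.5025
```

The two answers differ by about 0.005%, which is why the linearized form is the one used in practice.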
The Vasquez-Beggs correlation is the most commonly used relationship for $c_o$.
For natural gases, isothermal compressibility varies significantly with pressure. By introducing the real gas law into Equation \ref{18.16}, it is easy to prove that, for gases:
$c_{g}=\frac{1}{P}-\frac{1}{Z}\left(\frac{\partial Z}{\partial P}\right)_{T} \label{18.19}$
Note that for an ideal gas, cg is just the reciprocal of the pressure. “cg” can be readily calculated by graphical means (chart of Z versus P) or by introducing an equation of state into Equation \ref{18.19}.
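Equation 18.19 needs $(\partial Z/\partial P)_T$; when Z comes from a chart or an equation-of-state routine, a finite difference does the job. The `z_func` argument below is a stand-in for whatever Z-factor routine is available:

```python
def gas_compressibility(z_func, p, t, dp=1.0):
    """Equation 18.19: cg = 1/P - (1/Z)(dZ/dP)_T, with the derivative
    taken by central finite difference around p (psia)."""
    z = z_func(p, t)
    dzdp = (z_func(p + dp, t) - z_func(p - dp, t)) / (2.0 * dp)
    return 1.0 / p - dzdp / z

# Sanity check: for an ideal gas (Z = 1 everywhere) cg is exactly 1/P.
cg_ideal = gas_compressibility(lambda p, t: 1.0, 2000.0, 640.0)  # -> 5e-4 1/psi
```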
18.07: Surface Tension
Surface tension is a measure of the surface free energy of liquids, i.e., the extent of energy stored at the surface of liquids. Although it is also known as interfacial force or interfacial tension, the name surface tension is usually used in systems where the liquid is in contact with gas.
Qualitatively, it is described as the force acting on the surface of a liquid that tends to minimize the area of that surface; this is why liquids form spherical droplets, for instance. Quantitatively, since its dimension is force over length (lbf/ft in English units), it is expressed as the force (in lbf) required to break a film 1 ft in length. Equivalently, it may be restated as the amount of surface energy (in lbf-ft) per square foot.
Katz et al. (1959) presented the Macleod-Sugden equation for surface tension ($\sigma$) calculations in dynes/cm for hydrocarbon mixtures:
$\sigma^{1 / 4}=\sum_{i=1}^{n} \operatorname{Pch}_{i}\left[\frac{\rho_{l}}{62.4\left(M W_{l}\right)} x_{i}-\frac{\rho_{g}}{62.4\left(M W_{g}\right)} y_{i}\right] \label{18.20}$
where:

• $Pch_i$ is the Parachor of component “i”,
• $x_i$ is the mole fraction of component “i” in the liquid phase,
• $y_i$ is the mole fraction of component “i” in the gas phase,
• $\rho_l$, $\rho_g$ are the liquid- and gas-phase densities (lbm/ft3),
• $MW_l$, $MW_g$ are the liquid- and gas-phase molecular weights.
In order to express surface tension in field units (lbf/ft), multiply the surface tension in (dynes/cm) by $6.852177 \times 10^{-3}$. The parachor is a temperature independent parameter that is calculated experimentally. Parachors for pure substances have been presented by Weinaug and Katz (1943) and are listed in Table 18.1.
Table 18.1: Parachors for pure substances (Weinaug and Katz, 1943)

Component    Parachor
CO2          78.0
N2           41.0
C1           77.0
C2           108.0
C3           150.3
iC4          181.5
nC4          189.9
iC5          225.0
nC5          231.5
nC6          271.0
nC7          312.5
nC8          351.5
Weinaug and Katz (1943) also presented the following empirical relationship for the parachor of hydrocarbons in terms of their molecular weight:
$P c h_{i}=-4.6148734+2.558855 M W_{i}+3.404065 \cdot 10^{-4} M W_{i}^{2}+\frac{3.767396 \cdot 10^{3}}{M W_{i}} \label{18.21}$
This correlation may be used for pseudo-components or for hydrocarbons not listed in Table 18.1.
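The two equations above combine in a few lines of code. The hexane check at the end is illustrative only: a single-component "mixture" using the Table 18.1 parachor and an assumed liquid density:

```python
def parachor(mw):
    """Equation 18.21: parachor of a hydrocarbon from its molecular weight."""
    return (-4.6148734 + 2.558855 * mw + 3.404065e-4 * mw ** 2
            + 3.767396e3 / mw)

def surface_tension(pch, x, y, rho_l, rho_g, mw_l, mw_g):
    """Equation 18.20 (Macleod-Sugden): sigma in dynes/cm.
    pch, x, y: per-component parachors and phase mole fractions;
    rho_l, rho_g in lbm/ft3; mw_l, mw_g are phase molecular weights."""
    s = sum(p * (rho_l * xi / (62.4 * mw_l) - rho_g * yi / (62.4 * mw_g))
            for p, xi, yi in zip(pch, x, y))
    return s ** 4

DYNES_CM_TO_LBF_FT = 6.852177e-3   # field-unit conversion given in the text

# Illustrative: pure liquid nC6 (parachor 271, assumed 41.4 lbm/ft3, MW 86.2)
# with a negligible-density gas phase gives roughly 19 dynes/cm.
sigma = surface_tension([271.0], [1.0], [0.0], 41.4, 0.0, 86.2, 16.0)
```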
18.08: Action Item
Problem Set
1. Which of all the properties we have studied so far would you use to compare and distinguish among wet natural gas, dry natural gas, and gas-condensate systems? Explain why.
Module Goal: To highlight the important properties used to characterize natural gas and condensate systems.
Module Objective: To present the most popular models for estimating properties of natural gas and condensate systems.
• 19.1: Heat Capacities
• 19.2: Joule-Thomson Coefficient
Whether or not a gas cools upon expansion or compression — that is, when subjected to pressure changes — depends on the value of its Joule–Thomson coefficient. This is not only important for natural gas pipeline flow, but also for the recovery of condensate from wet natural gases.
• 19.3: Viscosity
Whether you are interested in flow in pipes or in porous media, one of the most important transport properties is viscosity. Fluid viscosity is a measure of its internal resistance to flow. The most commonly used unit of viscosity is the centi-poise.
• 19.4: Action Item
19: Properties of Natural Gas and Condensates II
19.01: Heat Capacities

The constant volume heat capacity is defined by:

$C_{v}=\left(\frac{\partial U}{\partial T}\right)_{V} \label{19.1}$

To see the physical significance of the constant volume heat capacity, let us consider 1 lbmol of gas within a rigid-wall (constant volume) container. Heat is added to the system through the walls of the container and the gas temperature rises. It is evident that the temperature rise ($\Delta T$) is proportional to the amount of heat added,
$Q \propto \Delta T \label{19.2}$
Introducing a constant of proportionality “cv”,
$Q=c_{v} \Delta T \label{19.3}$
In this experiment, no work was done because the boundaries (walls) of the system remained fixed. Applying the first law of thermodynamics to this closed system ($W = 0$), we have:

$\Delta U=Q=c_{v} \Delta T \label{19.4}$
Therefore, for infinitesimal changes,
$C_{v}=\left(\frac{\partial U}{\partial T}\right)_{V} \label{19.5}$
As we have seen, constant volume heat capacity is the amount of heat required to raise the temperature of a gas by one degree while retaining its volume.
Let us now consider the same 1 lbmol of gas confined in a piston-cylinder arrangement (i.e., a system with non-rigid walls or boundaries). When heat is added to the system, the gas temperature rises and the gas expands, so that the pressure in the system remains the same at all times. The piston sweeps out a volume $\Delta V$ and the gas temperature increases by $\Delta T$ degrees. Again, the temperature rise ($\Delta T$) is proportional to the amount of heat added, and the new constant of proportionality we use here is “cp”,
$Q=c_{P} \Delta T \label{19.6}$
This time, some work was done because the boundaries (walls) of the system changed from their original position. Applying the first law of thermodynamics to this closed system, we have that:
$\Delta U=Q-W \label{19.7}$
If the pressure remained the same both inside and outside the container, the system did work against the surroundings in the amount $W=P \Delta V$. Introducing this, together with Equation \ref{19.6}, into Equation \ref{19.7},
$\Delta U+P \Delta V=c_{P} \Delta T \label{19.8}$
The left hand side of this equation represents the definition of enthalpy change ($\Delta H$) for a constant-pressure process. Therefore:
$\Delta H=c_{P} \Delta T \label{19.9}$
Finally, for infinitesimal changes,
$c_{P}=\left(\frac{\partial H}{\partial T}\right)_{P} \label{19.10}$
The function “$c_p$” is called the constant pressure heat capacity. The constant pressure heat capacity is the amount of heat required to raise the temperature of a gas by one degree while retaining its pressure.
The units of both heat capacities are (Btu/lbmol-°F) and (cal/gr-°C). Their values are never equal to each other, not even for ideal gases. In fact, the ratio “cp/cv” of a gas is known as “k” — the heat capacity ratio — and it is never equal to unity. This ratio is frequently used in gas-dynamics studies.
$k=\frac{c_{p}}{c_{v}} \label{19.11}$
Heat capacities can be calculated using equations of state. For instance, Peng and Robinson (1976) presented an expression for the departure enthalpy of a fluid mixture, shown below:
$H-H^{*}=R T(Z-1)+\frac{T \frac{d(a \alpha)_{m}}{d T}-(a \alpha)_{m}}{2 \sqrt{2}\, b_{m}} \ln \left(\frac{Z+(\sqrt{2}+1) B}{Z-(\sqrt{2}-1) B}\right) \label{19.12}$
The value of the enthalpy of the fluid (H) is obtained by adding this enthalpy of departure to the ideal gas enthalpy (H*). Ideal gas enthalpies are functions of temperature alone. For hydrocarbons, Passut and Danner (1972) developed correlations for ideal gas properties such as enthalpy, heat capacity and entropy as functions of temperature. An analytical relationship for “cp” then follows by taking the temperature derivative of Equation \ref{19.12}:

$C_{P}=\left(\frac{\partial H}{\partial T}\right)_{P}=C_{P}^{*}+\left(\frac{\partial\left(H-H^{*}\right)}{\partial T}\right)_{P} \label{19.13}$

where $C_{P}^{*}$ is the ideal gas heat capacity, also found in the work of Passut and Danner (1972).
The second derivative of $(a \alpha)_{m}$ with respect to temperature can be calculated through the expression:
$\frac{d^{2}}{d T^{2}}(a \alpha)_{m}=-\frac{0.45724 R^{2}}{2 \sqrt{T}} \sum_{i} \sum_{j} c_{i} c_{j}\left(1-k_{i j}\right) \left[f\left(w_{j}\right)\left(\frac{\alpha_{i}^{0.5} T_{ci}}{P_{ci}^{0.5}}\right)\left(\frac{T_{cj}}{P_{cj}}\right)^{0.5} \psi_{i}+f\left(w_{i}\right)\left(\frac{\alpha_{j}^{0.5} T_{cj}}{P_{cj}^{0.5}}\right)\left(\frac{T_{ci}}{P_{ci}}\right)^{0.5} \psi_{j}\right] \label{19.14a}$
where,
$\psi_{i}=-\frac{f\left(w_{i}\right)}{2 \sqrt{T_{c i} T \alpha_{i}}}-\frac{1}{2 T} \label{19.14b}$
For the evaluation of expression \ref{19.13}, the derivative of the compressibility factor with respect to temperature is also required. Using the cubic version of Peng-Robinson EOS, this derivative can be written as:
$\left(\frac{\partial Z}{\partial T}\right)_{P}=-\left(\frac{\left(\frac{\partial \Omega_{2}}{\partial T}\right)_{P} Z^{2}+\left(\frac{\partial \Omega_{3}}{\partial T}\right)_{P} Z+\left(\frac{\partial \Omega_{4}}{\partial T}\right)_{P}}{3 Z^{2}+2 \Omega_{2} Z+\Omega_{3}}\right) \label{19.15}$
where
\begin{aligned} &\left(\frac{\partial \Omega_{2}}{\partial T}\right)_{P}=\left(\frac{\partial B}{\partial T}\right)_{P}\\ &\left(\frac{\partial \Omega_{3}}{\partial T}\right)_{P}=\left(\frac{\partial A}{\partial T}\right)_{P}-6 B\left(\frac{\partial B}{\partial T}\right)_{P}-2\left(\frac{\partial B}{\partial T}\right)_{P}\\ &\left(\frac{\partial \Omega_{4}}{\partial T}\right)_{P}=-\left[A\left(\frac{\partial B}{\partial T}\right)_{P}+B\left(\frac{\partial A}{\partial T}\right)_{P}-2 B\left(\frac{\partial B}{\partial T}\right)_{P}-3 B^{2}\left(\frac{\partial B}{\partial T}\right)_{P}\right]\\ &\left(\frac{\partial A}{\partial T}\right)_{P}=\frac{A}{(a \alpha)_{m}} \frac{d(a \alpha)_{m}}{d T}-2 \frac{A}{T}\\ &\left(\frac{\partial B}{\partial T}\right)_{P}=-\frac{B}{T} \end{aligned}
“cp” and “cv” values are thermodynamically related. It can be proven that this relationship is controlled by the P-V-T behavior of the substances through the relationship:
$c_{p}-c_{v}=T\left(\frac{\partial V}{\partial T}\right)_{P}\left(\frac{\partial P}{\partial T}\right)_{V} \label{19.16}$
For ideal gases, $P V=n R T$ and Equation \ref{19.16} collapses to:
$c_{p}-c_{v}=R \label{19.17}$
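Equation 19.17 drops out of Equation 19.16 for an ideal gas, where $(\partial V/\partial T)_P = R/P$ and $(\partial P/\partial T)_V = R/V$ per lbmol. A quick numerical confirmation:

```python
R = 10.7316   # psia-ft3/(lbmol-degR)

def cp_minus_cv_ideal(p, t):
    """Equation 19.16 evaluated for 1 lbmol of ideal gas (PV = RT)."""
    v = R * t / p              # molar volume
    dv_dt = R / p              # (dV/dT)_P
    dp_dt = R / v              # (dP/dT)_V
    return t * dv_dt * dp_dt   # collapses to R for any P and T
```

Evaluating this at any pressure and temperature returns R, as Equation 19.17 states.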
19.02: Joule-Thomson Coefficient

One remarkable difference between the flow of condensate (or liquid) and natural gas through a pipeline is the effect of pressure drop on temperature changes along the pipeline. This is especially true when heat losses to the environment do not control these temperature variations. Natural gas pipelines usually cool with distance (an effect commonly called ‘Joule–Thomson cooling’), while oil lines heat up. The reason for this dissimilarity is the different effect that pressure drop has on the entropy of a natural gas than on the entropy of an oil mixture. Katz (1972) and Katz and Lee (1990) presented a very enlightening discussion in this regard.
Whether or not a gas cools upon expansion or compression — that is, when subjected to pressure changes — depends on the value of its Joule–Thomson coefficient. This is not only important for natural gas pipeline flow, but also for the recovery of condensate from wet natural gases. In the cryogenic industry, turboexpanders are used to subject a wet gas to a sudden expansion (sharp pressure drop) in order to cool the gas stream beyond its dew point and recover the liquid dropout.
Thermodynamically, the Joule–Thomson coefficient is defined as the isenthalpic change in temperature in a fluid caused by a unitary pressure drop, as shown:
$\eta=\left(\frac{\partial T}{\partial P}\right)_{H} \label{19.18}$
Using thermodynamic relationships, alternative expressions can be written. For example, applying the triple-product (cycling) rule to $H(T, P)$:

\begin{align} \left(\frac{\partial H}{\partial P}\right)_{T} &=-\left(\frac{\partial H}{\partial T}\right)_{P}\left(\frac{\partial T}{\partial P}\right)_{H} \label{19.20} \\[4pt] &=-c_{P} \eta \label{19.21} \end{align}
We have also seen that we can express enthalpy changes in terms of pressure, temperature and volume changes:
$\left(\frac{\partial H}{\partial P}\right)_{T}=\left[\tilde{v}-T\left(\frac{\partial \tilde{v}}{\partial T}\right)_{P}\right] \label{19.22}$
Additionally, the following identity can be derived:
$\eta=\frac{R T^{2}}{P c_{P}}\left(\frac{\partial Z}{\partial T}\right)_{P} \label{19.23}$
All together, we have several ways of calculating the Joule–Thomson coefficient for a fluid, as shown next:
$\eta=\left(\frac{\partial T}{\partial P}\right)_{H}=\frac{1}{c_{P}}\left[T\left(\frac{\partial \tilde{v}}{\partial T}\right)_{P}-\tilde{v}\right]=-\frac{1}{c_{P}}\left(\frac{\partial H}{\partial P}\right)_{T}=\frac{R T^{2}}{P c_{P}}\left(\frac{\partial Z}{\partial T}\right)_{P} \label{19.24}$
Once the constant pressure specific heat “cp” is calculated as discussed in the previous section, all the entries in the previous expression are known and the Joule–Thomson coefficient can be calculated analytically. An interesting observation from the expressions above is that the Joule–Thomson coefficient of an ideal gas is identically equal to zero. Real fluids, however, take positive or negative Joule–Thomson values.
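The last form of Equation 19.24 is the easiest to script: a Z-factor routine (stand-in below), a cp in consistent units (psia-ft3/lbmol-°R here), and a finite-difference temperature derivative:

```python
R = 10.7316   # psia-ft3/(lbmol-degR)

def joule_thomson(z_func, p, t, cp, dt=0.01):
    """Equation 19.24: eta = R*T^2/(P*cp) * (dZ/dT)_P, in degR/psia
    when cp is given in psia-ft3/(lbmol-degR)."""
    dzdt = (z_func(p, t + dt) - z_func(p, t - dt)) / (2.0 * dt)
    return R * t ** 2 / (p * cp) * dzdt

# Ideal gas: Z = 1 everywhere, so (dZ/dT)_P = 0 and eta = 0, as noted above.
eta_ideal = joule_thomson(lambda p, t: 1.0, 1000.0, 600.0, 250.0)  # -> 0.0
```

With a real Z-factor routine, the sign of $(\partial Z/\partial T)_P$ decides whether the fluid cools or heats on expansion.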
19.03: Viscosity
What other properties are we interested in? We are interested in flow properties. Whether you are interested in flow in pipes or in porous media, one of the most important transport properties is viscosity. Fluid viscosity is a measure of its internal resistance to flow. The most commonly used unit of viscosity is the centipoise, which is related to other units as follows:
1 cp = 0.01 poise = 0.000672 lbm/ft-s = 0.001 Pa-s
Natural gas viscosity is usually expected to increase both with pressure and temperature. A number of methods have been developed to calculate gas viscosity. The method of Lee, Gonzalez and Eakin is a simple relation which gives quite accurate results for typical natural gas mixtures with low non-hydrocarbon content. Lee, Gonzalez and Eakin (1966) presented the following correlation for the calculation of the viscosity of a natural gas:
$\mu_{g}=1 \cdot 10^{-4} k_{v} \exp\left(x_{v}\left(\frac{\rho_{g}}{62.4}\right)^{y_{v}}\right) \label{19.25a}$
where:

$k_{v}=\frac{\left(9.4+0.02 M W_{g}\right) T^{1.5}}{209+19 M W_{g}+T} \label{19.25b}$

$x_{v}=3.5+\frac{986}{T}+0.01\, M W_{g} \label{19.25c}$

$y_{v}=2.4-0.2 x_{v} \label{19.25d}$
In this expression, temperature is given in (°R), the density of the fluid ($\rho_{g}$) in lbm/ft3 (calculated at the pressure and temperature of the system), and the resulting viscosity is expressed in centipoises (cp).
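The correlation is straightforward to implement. The $x_v$ expression used below is the one published by Lee, Gonzalez and Eakin (1966), $x_v = 3.5 + 986/T + 0.01\,MW_g$:

```python
import math

def gas_viscosity_lge(mw_g, t, rho_g):
    """Lee-Gonzalez-Eakin gas viscosity (Equation 19.25a).
    t in degR, rho_g in lbm/ft3; returns viscosity in cp."""
    k_v = (9.4 + 0.02 * mw_g) * t ** 1.5 / (209.0 + 19.0 * mw_g + t)
    x_v = 3.5 + 986.0 / t + 0.01 * mw_g    # per the 1966 paper
    y_v = 2.4 - 0.2 * x_v
    return 1e-4 * k_v * math.exp(x_v * (rho_g / 62.4) ** y_v)

# Typical inputs: MW = 18 lbm/lbmol, 640 degR, 8 lbm/ft3 -> about 0.018 cp
mu_g = gas_viscosity_lge(18.0, 640.0, 8.0)
```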
The most commonly used oil viscosity correlations are those of Beggs-Robinson and Vasquez-Beggs. Corrections must be applied for under-saturated systems and for systems where dissolved gas is present in the oil. However, in compositional simulation, where both gas and condensate compositions are known at every point of the reservoir, it is customary to calculate condensate viscosity using the Lohrenz, Bray & Clark correlation. In this type of simulation, it is usual to calculate gas viscosities with the Lohrenz, Bray & Clark correlation as well. This guarantees that the gas phase and the condensate phase converge to the same value of viscosity as they approach near-critical conditions.
Lohrenz, Bray and Clark (1964) proposed an empirical correlation for the prediction of the viscosity of a liquid hydrocarbon mixture from its composition. The expression, originally proposed by Jossi, Stiel and Thodos (1962) for the prediction of the viscosity of dense-gas mixtures, is given below:

$\left[\left(\mu-\mu^{*}\right) \xi_{m}+10^{-4}\right]^{1 / 4}=0.1023+0.023364 \rho_{r}+0.058533 \rho_{r}^{2}-0.040758 \rho_{r}^{3}+0.0093324 \rho_{r}^{4} \label{19.26}$

where
• $\mu$= fluid viscosity (cp),
• $\mu^{*}$ = viscosity at atmospheric pressure (cp),
• $\xi_{m}$ = mixture viscosity parameter (cp-1),
• $\rho_{r}$ = reduced liquid density (unitless).
The Lohrenz et al. original paper contains a typographical error in Equation \ref{19.26}; here it is written as originally proposed by Jossi, Stiel and Thodos (1962). All four parameters listed above must be calculated as functions of critical properties in order to apply Equation \ref{19.26}. The original paper uses scientific units; here we present the equivalent equations in field (English) units.
For the viscosity of the mixture at atmospheric pressure ($\mu^{*}$), Lohrenz et al. suggested using the following Herning & Zipperer equation:

$\mu^{*}=\frac{\displaystyle \sum_{i} z_{i} \mu_{i}^* \sqrt{M W_{i}}}{ \displaystyle \sum_{i} z_{i} \sqrt{M W_{i}}} \label{19.27}$
where:
• $z_i$ = mole fraction of the i-th component in the mixture,
• $MW_i$ = molecular weight of the i-th component (lbm/lbmol)
• $\mu_{i}^{*}$ = viscosity of the i-th component at low pressure (cp):
Moreover,
$\mu_{i}^*=\frac{34 \cdot 10^{-5} T_{r i}^{0.94}}{\xi_{i}}$
if $T_{ri} ≤ 1.5$ and
$\mu_{i}^*=\frac{17.78 \cdot 10^{-5}\left(4.5 T_{r i}-1.67\right)^{0.625}}{\xi_{i}}$
if $T_{ri} > 1.5$.
where:
$T_{ri}$ is the reduced temperature of the i-th component ($T/T_{ci}$) and $\xi_{i}$ is the viscosity parameter of the i-th component, given by:
$\xi_{i}=\frac{5.4402 T_{c i}^{1 / 6}}{\sqrt{M W_{i}} P_{c i}^{2 / 3}}$
For the mixture viscosity parameter ($\xi_{m}$), Lohrenz et al. applied an expression equivalent to that shown above, but using pseudo-properties for the mixture:

$\xi_{m}=\frac{5.4402 T_{p c}^{1 / 6}}{\sqrt{M W_{l}} P_{p c}^{2 / 3}} \label{19.28}$
where
• $T_{pc}$ = pseudocritical temperature (°R),
• $P_{pc}$ = pseudocritical pressure (psia),
• $MW_l$ = liquid mixture molecular weight (lbm/lbmol).
The reduced density of the liquid mixture ($\rho_{r}$) is calculated as:
$\rho_{r}=\frac{\rho_{l}}{\rho_{p c}}=\left(\frac{\rho_{l}}{M W_{l}}\right) V_{p c} \label{19.29}$
where
• $\rho_{p c}$ is the mixture pseudocritical density (lbm/ft3),
• $V_{pc}$ is the mixture pseudocritical molar volume (ft3/lbmol),
All mixture pseudocritical properties are calculated using Kay’s mixing rule, as shown:
$T_{p c}=\sum z_{i} T_{c i} \label{19.30a}$
$P_{p c}=\sum z_{i} P_{c i} \label{19.30b}$
$V_{p c}=\sum z_{i} V_{c i} \label{19.30c}$
“$z_i$” pertains to the fluid molar composition; $T_{ci}$ is given in °R, $P_{ci}$ in psia, and $V_{ci}$ in ft3/lbmol. When the critical volumes are known on a mass basis (ft3/lbm), each is to be multiplied by the corresponding molecular weight. For lumped C7+ heavy fractions, Lohrenz et al. (1964) presented a correlation for estimating the C7+ critical volume.
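Kay's mixing rule (Equations 19.30a-c) is a plain molar average. The binary example below uses approximate methane/ethane critical constants, quoted here only for illustration:

```python
def kays_rule(z, tc, pc, vc):
    """Equations 19.30a-c: pseudocritical T (degR), P (psia) and
    molar volume (ft3/lbmol) as mole-fraction-weighted averages."""
    t_pc = sum(zi * ti for zi, ti in zip(z, tc))
    p_pc = sum(zi * pi for zi, pi in zip(z, pc))
    v_pc = sum(zi * vi for zi, vi in zip(z, vc))
    return t_pc, p_pc, v_pc

# 70/30 methane/ethane mixture (illustrative critical constants):
tpc, ppc, vpc = kays_rule([0.7, 0.3], [343.0, 549.6],
                          [666.4, 706.5], [1.59, 2.37])
```

These pseudocriticals then feed Equations 19.28 and 19.29 for the mixture viscosity parameter and reduced density.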
19.04: Action Item
Problem Set
1. Gas metering is a most important activity in the natural gas business. Among the properties we have studied, which one would you emphasize in terms of accuracy for most common gas meters?
Learning Objectives
• Module Goal: To highlight some of the important applications of phase behavior in production operations.
• Module Objective: To describe the use of flash calculations in separator optimization and gas-condensate reservoir description.
20: Engineering Applications I
Optimal design and the safe and efficient operation of hydrocarbon production, handling, and processing systems depend strongly on accurate knowledge of fluid phase behavior. In fact, in contrast with other disciplines, the practice of petroleum and natural gas engineering centers on understanding the interaction between fluids and various environments: the reservoir, pipelines, separators, pumps and compressors, etc. Another distinguishing characteristic is the complexity of the fluid involved here, petroleum. Whereas, for instance, mechanical engineers deal mainly with water (a single-component system) and air (considered ideal for most applications), here we deal with complex hydrocarbon mixtures whose thermophysical properties depend strongly on composition.
Another complication is the wide range of pressures and temperatures associated with the processes of interest, from ultralow temperatures (LNG) to as much as 210 °F and pressures ranging from atmospheric to several thousand psia. Within these ranges, the fluid can transcend the three principal phases, namely gas, liquid, and solid, and worse yet, any combination of these.
The combination of the complex mixtures involved, the wide compositional variability from reservoir to reservoir and from system to system, and the wide range of pressures and temperatures to which systems are often subjected (e.g., a pipeline) makes describing the phase behavior of these systems a very challenging undertaking. Unless one has a good descriptive and predictive understanding of the fluid's phase behavior, the interactions and responses of these systems cannot be successfully described.
In these last two modules of the course, we will examine some of the applications of our current knowledge of phase behavior and thermodynamics in Petroleum and Natural Gas Engineering. The message we would like to provide is very simple: the phase behavior of the hydrocarbon system must be fully grasped in order to fully understand the responses of condensate and natural gas systems and optimize their performance. For example, maximization of condensate yield is virtually impossible without the tools for accurate prediction of just how much liquid will exist under given conditions of pressure, temperature and composition. Therefore, having advanced predictive tools for the characterization of hydrocarbon phase behavior with the highest accuracy possible is the key to mastering the economics of hydrocarbon systems. In the next sections, we will explore some specific areas where the mastering of phase behavior concepts is a must.
20.02: Design and Optimization of Separators
Once oil and gas are brought to the surface, our main goal becomes that of transporting them from the wellhead to the refinery (for final processing) in the best possible form. All equipment and processes required to accomplish this are found at the surface production facility. Hence, all surface production starts right at the wellhead. Starting at the wellhead, the complex mixture of produced fluids makes its way from the production tubing into the flow line. Normally, many wells are drilled to effectively produce the hydrocarbons contained in the field. From each of these wells emerge one or more flow lines, depending on how many layers are being produced simultaneously. Depending on the physical terrain of the area and several other environmental factors, each of the flow lines may be allowed to continue from the wellhead to a central processing facility commonly referred to as a production platform or a flow station. If not, all the flow lines, or several of them, empty their contents into a bigger pipeline called a bulk header, which then carries the fluids to the production platform. The combination of the wellhead, the flow lines, bulk headers, valves, and fittings needed to collect and transport the raw produced fluid to the production platform is referred to as the gathering system.
The gathered fluids must be processed to enhance their value. First of all, the fluids must be separated into their three main phases: oil, water, and natural gas. The separation system performs this function. For this, the system is usually made up of a free-water knock-out (FWKO), a flow line heater, and oil-gas (two-phase) separators. We will be looking at the design of this last component.
The physical separation of these three phases is carried out in several steps. Water is separated first from the hydrocarbon mixture (by means of the FWKO), and then the hydrocarbon mixture is separated into two hydrocarbon phases (gas and oil/condensate). A successful hydrocarbon separation maximizes production of condensate or oil, and enhances its properties. In field applications, this is accomplished by means of stage separation. Stage separation of oil and gas is carried out with a series of separators operating at consecutively reduced pressures. Liquid is discharged from a higher-pressure separator into the next-lower-pressure separator. The purpose of stage separation is to obtain maximum recovery of liquid hydrocarbons from the fluids coming from the wellheads and to provide maximum stabilization of both the liquid and gas effluents.
Surface Production Facility: The physical installation where fluids coming from the wellhead are separated into three main constituents: water, oil, and natural gas.
Figure 20.1: Purpose of a Surface Production Facility
Usually it is most economical to use three to four stages of separation for the hydrocarbon mixture. Five or six may pay out under favorable conditions, for example, when the incoming wellhead fluid arrives at very high pressure. However, the increase in liquid yield with the addition of new stages is not linear. For instance, the increase in liquids gained by adding one stage to a single-stage system is likely to be substantial, whereas adding one stage to a three- or four-stage system is unlikely to produce any significant gain. In general, a three-stage separation system has been found to be the most cost effective. Figure 20.2 shows this typical configuration.
Figure 20.2: Three-stage Surface Separation Facility
Under the assumption of equilibrium conditions, and knowing the composition of the fluid stream coming into the separator and the working pressure and temperature conditions, we could apply our current knowledge of VLE equilibrium (flash calculations) and calculate the vapor and liquid fractions at each stage. However, if we are looking at designing and optimizing the separation facility, we would like to know the optimal conditions of pressure and temperature under which we would get the most economical profit from the operation. In this context, we have to keep in mind that stage separation aims at reducing the pressure of the produced fluid in sequential steps so that better and more stock-tank oil/condensate recovery will result.
Separator calculations are basically performed to determine:
• Optimum separation conditions: separator pressure and temperature
• Compositions of the separated gas and oil phases
• Oil formation volume factor
• Producing Gas-Oil ratio
• API gravity of the stock tank oil
Let us look at the case of three-stage separation. In general, temperature conditions in the surface separation facility are largely determined by atmospheric conditions and incoming stream temperatures. As for pressures, the first separator pressure is controlled by the gathering lines coming from the wellheads, so there is not much room for adjusting the pressure in the first separator. The same argument holds for the last stage of separation (stock tank), which usually operates at atmospheric conditions. Therefore, we are only left with the middle separator for optimization.
As it turns out, the key to designing a three-stage separation system is finding the optimum pressure at which to operate the second separator. The question we would like to answer is: what pressure will result in the best quality liquid going out of the stock tank for sales? We do not want to answer this empirically, that is, by adjusting the second-stage separator pressure in the field until we eventually find the optimum condition. Instead, using our phase behavior knowledge, we can find this optimum middle-stage pressure by applying our understanding of VLE equilibrium.
Figure 20.3 shows the typical effect of playing with the middle separator pressure on the quality and quantity of produced oil/condensate at the stock tank. Quality and quantity are measured in terms of properties, such as API and Bo, and the overall GOR at the separation facility.
Figure 20.3: Selection of Optimum Middle Separator Pressure
The optimum value of pressure for the middle stage is the one that produces the maximum liquid yield (by minimizing GOR and Bo) of maximum quality (by maximizing stock-tank API gravity). The smaller the values of GOR and Bo, the larger the liquid yield; the higher the API gravity of the stock-tank fluid, the more profitable its commercialization. From Figure 20.3, we see that this condition is found at neither extreme (low or high) value of middle-stage pressure. There is, in fact, an optimal value for the middle-stage pressure. This is the value we are looking for.
The Phase Behavior model that we have described throughout these series of lectures provides the basic framework for the type of calculations required here. Additionally, we discussed how API and Bo can be calculated using the output of the phase behavior model. While doing the calculations for a 3-stage separating system, keep in mind that we have minimal control over feed pressure, as we do not want to inhibit the well (high-pressure separator). We do not control the sales line pressure (stock-tank pressure) either. The control that we do have is the operating pressure of the middle separator.
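The search for this optimum can be sketched numerically. In the snippet below, `gor()` is a hypothetical stand-in for a full flash-based calculation of total GOR at a candidate middle-stage pressure (here just an illustrative convex curve with an assumed minimum at 450 psia); only the scanning logic is meant literally.

```python
# Sketch: locating the optimum middle-separator pressure by grid search.
# gor() is a hypothetical stand-in for a flash-based total-GOR calculation.

def gor(p_mid):
    """Hypothetical total GOR (SCF/STB) vs. middle-stage pressure (psia).
    A real implementation would flash the feed through all three stages."""
    p_opt = 450.0          # assumed location of the minimum, for illustration
    return 900.0 + 0.002 * (p_mid - p_opt) ** 2

def optimum_pressure(p_lo, p_hi, step=5.0):
    """Scan candidate middle-stage pressures and return the GOR minimizer."""
    best_p, best_gor = p_lo, gor(p_lo)
    p = p_lo
    while p <= p_hi:
        g = gor(p)
        if g < best_gor:
            best_p, best_gor = p, g
        p += step
    return best_p, best_gor

p_star, g_star = optimum_pressure(100.0, 900.0)
print(p_star, round(g_star, 1))   # 450.0 900.0
```

In practice the same scan would be wrapped around the sequential flash of the feed through the three stages, reporting API and Bo alongside GOR at each candidate pressure.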
Recall that finding the optimum pressure calls for, in part, finding the minimum gas to oil ratio (GOR, SCF/STB). We are dealing, in this case, with total GOR. The total GOR is the cumulative amount of gas from all three separators divided by the amount of liquid/condensate leaving the stock tank. During our discussion on Bo-calculations, we called “nst” the moles of liquid leaving the stock tank per mole of feed entering the separation facility. This number can be obtained by sequentially flashing 1 lbmol of feed through each of the separation stages. Recalling the definition of GOR,
$G O R=\frac{\text { Total Volume of Gas produced (in standard cubic feet) }}{\text { Total Volume of Liquid produced (in stock-tank barrels) }}=\frac{\left(V_{g}\right)_{sc}}{\left(V_{o}\right)_{st} / 5.615} \label{8.7}$
where:
$\left(V_{o}\right)_{st}=\frac{n_{s t}\left(Z_{o}\right)_{st} R T_{s t}}{P_{s t}}$
$\left(V_{g}\right)_{s c}=379.4 \frac{S C F}{l b m o l} \cdot\left(n_{g}\right)_{T O T A L}=379.4 \cdot\left(1-n_{s t}\right)$
[basis: 1 lbmol of feed]
Therefore,
$G O R=\frac{379.4 \times 5.615}{R}\left(\frac{1-n_{s t}}{n_{s t}}\right)\left(\frac{P_{s t}}{T_{s t}}\right) \frac{1}{\left(Z_{o}\right)_{s t}} \quad[S C F / S T B] \label{20.2}$
Usually, the stock tank is considered to operate at standard conditions (psc, Tsc). Now you are ready to make your own surface separation design!
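A minimal sketch of the GOR calculation of Equation (20.1), in field units and on a basis of 1 lbmol of feed, follows. The input values in the example (the stock-tank liquid fraction n_st and the liquid compressibility factor Z_o) are illustrative assumptions, not data from the text.

```python
# Sketch of the total-GOR calculation on a basis of 1 lbmol of feed.
# Field units: R = 10.73 psia·ft3/(lbmol·R), 379.4 SCF/lbmol of gas at
# standard conditions, 5.615 ft3 per barrel.

R = 10.73  # psia·ft3/(lbmol·R)

def total_gor(n_st, z_o_st, t_st=520.0, p_st=14.7):
    """GOR in SCF/STB. n_st: moles of stock-tank liquid per mole of feed;
    z_o_st: liquid compressibility factor at stock-tank conditions."""
    v_gas_scf = 379.4 * (1.0 - n_st)                       # total gas, SCF
    v_oil_stb = n_st * z_o_st * R * t_st / (p_st * 5.615)  # liquid, STB
    return v_gas_scf / v_oil_stb

# e.g., if 40% of the feed moles end up as stock-tank liquid:
print(round(total_gor(n_st=0.40, z_o_st=0.04), 1))  # ≈ 210.5 SCF/STB
```

Note that the liquid volume comes from the same real-gas expression used for vapor, with the liquid-root Z-factor of the EOS supplying the liquid molar volume.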
Contributors and Attributions
• Prof. Michael Adewumi (The Pennsylvania State University). Some or all of the content of this module was taken from Penn State's College of Earth and Mineral Sciences' OER Initiative.
Phase behavior is relevant to every aspect of petroleum and natural gas engineering. Depending on the complexity of the reservoir fluid phase behavior, reservoir modeling is classified under two distinct groups: black-oil simulation and compositional simulation. Oftentimes, it is safe to assume that reservoir fluid behavior is only a function of pressure and independent of composition. This simplified behavior is typical of "black-oil" systems. In this case, reservoir hydrocarbon fluids are assumed to be comprised of two components, namely oil and gas. The model allows for a certain amount of gas to be in solution with the oil at reservoir conditions. The amount of dissolved gas increases with, and is a sole function of, pressure for conditions below the bubble point. Above the bubble point pressure, the oil component carries all the available gas in the reservoir, and a "variable bubble point" algorithm is usually implemented to predict the conditions for release of the dissolved gas. For this simplified "black-oil" model to be valid, the actual oil and gas phases should maintain a fixed composition throughout the process simulated in the reservoir. In certain cases, the assumption of fixed oil and gas compositions is no longer valid: for instance, the depletion of gas-condensate and volatile-oil reservoirs, and processes that aim at vaporization or miscible flooding of the in-situ fluids by fluid injection. More complex fluid behavior requires treating all hydrocarbon phases as nc-component mixtures and thus performing a "compositional simulation."
Although the need for taking the compositional dependence of thermodynamic and hydrodynamic parameters into account in reservoir description has been recognized for a long time, the actual implementation was not realized until relatively recently. One of the main reasons was the lack of simple and reliable methods for predicting phase behavior under the conditions of interest. Two things have happened within the last three decades that have changed the situation: (1) the availability of fast and relatively inexpensive computational power to carry out the great number of calculations involved, and (2) the development of simple and fairly good equations of state.
Simply put, a compositional reservoir simulator is a dynamic integration of the fluid-dynamic porous-media model and the phase behavior model, neither of which is subordinated to the other. In fact, in the early days of compositional reservoir simulation, a non-dynamic integration of the two was the norm; the more recent models attempt a full dynamic integration. The need for integration had been recognized early on, particularly for gas condensate reservoirs, gas cycling processes, and volatile oil systems.
Early developments in reservoir engineering analysis relied on zero-dimensional or tank material balances for the evaluation and forecasting of reservoir performance. In 1936, Schilthuis devised what we now regard as the classical material balance equation. Schilthuis-type material balances are only valid for black-oil systems and are not applicable to reservoir fluids with complex behavior such as gas condensates and volatile oils. Compositional considerations were incorporated into zero-dimensional modeling in the 1950s with the works of Allen and Row (1950), Brinkman and Weinaug (1957), Reudelhuber and Hinds (1957), Jacoby and Berry (1957), and Jacoby et al. (1959). These can be regarded as the first generation of compositional simulators. Even though zero-dimensional simulations have been largely superseded by more sophisticated numerical simulation techniques, they remain the simplest and most fundamental tool available for the analysis of reservoir performance.
The zero-dimensional models make two principal assumptions. The first is to ignore the two-way coupling between the fluid's thermophysical properties and the hydrodynamic characteristics. The reservoir is treated as a perfectly mixed tank with uniform properties: we assume no spatial dimensions, and that a single value of pressure and temperature describes the average behavior of the entire reservoir. The second assumption is to neglect the hydrodynamic interactions between the flowing gas and liquid phases. In other words, the zero-dimensional compositional model relies on phase behavior as the most crucial effect controlling the description of recovery performance.
There is no doubt that significant insights, albeit qualitative, are provided by these studies. Zero-dimensional modeling provides a less expensive tool for the engineer to gain some insight into the expected performance of a gas condensate reservoir under depletion. Sometimes we spend a great deal of time dealing with the numerics of the dimensional compositional simulators, and we may forget that phase behavior is the single most important constituent of the depletion performance of, for instance, gas-condensate systems. Nevertheless, the reservoir engineer must keep in mind that this type of analysis does not provide the most accurate reservoir description. For instance, the effect of heterogeneities on reservoir performance cannot be studied with a zero-dimensional model. Again, the goal is to take a look at the qualitative insights that fluid PVT behavior can provide us.
The typical depletion sequence that a zero-dimensional compositional simulator follows is described below. This classical analysis treats gas condensate performance as constant volume depletion (CVD) in a PVT cell. The ultimate output of the model is basically comprised of GOR prediction and the compositions of the gas and condensate surface effluents. We start with a single-phase gas reservoir fluid, of a known composition, at an initial reservoir pressure and temperature. We flash this fluid several times through a series of pressure depletion stages until abandonment conditions are found. Reservoir volume is kept constant throughout the depletion process.
The steps of the depletion sequence are as follows:
1. Calculate the density and molecular weight of the initial reservoir fluid. With this information, and knowing the initial reservoir volume (Vti), calculate the initial amount of gas in place (lbmols). An alternative approach is to start with a fixed amount of lbmols of reservoir fluid, and calculate the corresponding initial reservoir volume (Vti). We assume a volumetric reservoir and thus the initial reservoir volume (Vti) is kept constant throughout the calculations.
2. Using the reservoir fluid composition, distribute the initial moles of reservoir fluids into components, and store such quantities for material balance accounting.
3. Depletion step: Lower the reservoir pressure by a given amount (typically, 200 psi). Flash the reservoir fluid at the new pressure, calculate amount of moles in the gas and liquid phases (“GasT” and “LiqT”) and the densities and molecular weights of each of the phases at the new condition.
4. Expansion: With molecular weight, density, and total molar amount of each of the phases, calculate the new total volume that the fluids occupy at the new condition (Vexp).
5. Calculate the excess volume of fluids, taking the difference between the new volume upon expansion (Vexp) and the reservoir volume (Vti). This represents the volume of fluid that must have been withdrawn through the well in order to reach the newly imposed pressure condition, i.e., $V_{ws}=V_{exp}-V_{ti} \label{20.3}$
6. Calculate the percentage of liquid in the well stream using mobility ratio considerations. A trial-and-error procedure is necessary for the liquid accounting. The total amount of liquid available upon depletion (LiqT) must be distributed between the wellstream (LiqWS) and the liquid remaining in the reservoir (LiqR). Additionally, the moles of reservoir liquid "LiqR" can be expressed in terms of oil/condensate saturation. Oil saturations define the mobility of the gas and liquid phases. These interrelations are shown below: $LiqT = LiqWS + LiqR \label{20.4a}$ $S_{o}=\frac{M W_{o}}{\rho_{o} V_{t i}} LiqR \label{20.4b}$ $\lambda_{o}=\frac{k_{r o}\left(S_{o}\right)}{\mu_{o}}; \quad \lambda_{g}=\frac{k_{r g}\left(S_{g}\right)}{\mu_{g}}; \quad \lambda_{t}=\lambda_{o}+\lambda_{g}; \quad S_{g}=1-S_{o} \label{20.4c}$ $\left(V_{w s}\right)_{l i q}=\frac{\lambda_{o}}{\lambda_{t}} V_{w s}=\frac{M W_{o}}{\rho_{o}} LiqWS \label{20.4d}$
7. $LiqWS$ is a function of mobility ratio, which is a function of LiqR (through So). As a first guess, take LiqR = LiqT and calculate the corresponding LiqWS. Make new guesses (decreasing the value in every new trial) until the liquid balance in Equation (20.4a) is satisfied.
8. Once $LiqWS$ and (Vws)liq are known, obtain the total volume of gas withdrawn from the reservoir by subtracting the liquid volume (Vws)liq from the total wellstream volume (Vws). Express this gas volume in moles using reservoir gas density and molecular weight. Calculate the total number of moles of the wellstream.
9. Material Balance Accounting: Calculate the number of moles of each component remaining in the reservoir. To do this: subtract the number of moles leaving the well stream from the number in the reservoir before flashing for each of the components. Calculate the new overall composition of the reservoir fluid.
10. Calculate the overall composition of the produced well stream, by mixing the composition of gas and liquid coming along.
11. Surface Production Facility: Flash the incoming wellstream composition through the train of separators. Calculate the total amount of gas and liquid leaving the separation facility and the GOR. Calculate the percentage of recovery from the reservoir.
12. A depletion loop has been completed. Go back to step 3 until abandonment pressure is reached (typically, 600 psia).
13. Plot liquid production, gas production, GOR, and recovery from the reservoir as a function of pressure depletion, from initial reservoir conditions to abandonment conditions.
The basic VLE calculations required here can be performed using the Peng-Robinson EOS and equilibrium considerations. Liquid viscosities can be calculated through the Lohrenz, Bray, and Clark correlation, and gas viscosities can be calculated using the Lee-Gonzalez correlation. Fluid densities are obtained directly through the Peng-Robinson EOS.
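The depletion loop above can be sketched as follows. The `flash()` routine below is a hypothetical placeholder for a Peng-Robinson flash and returns made-up but self-consistent values; the liquid-mobility trial and error of steps 6-7 is simplified away by assuming the condensate is immobile (only gas is withdrawn), so only the constant-volume bookkeeping of steps 3-5 and 12 is meant literally.

```python
# Skeleton of the zero-dimensional CVD depletion loop described above.
# flash() is a hypothetical placeholder for a Peng-Robinson VLE routine.

def flash(p):
    """Hypothetical flash of the reservoir fluid at pressure p (psia).
    Returns (vapor fraction, gas molar volume, liquid molar volume),
    volumes in ft3/lbmol; a real model would also return compositions."""
    vap_frac = min(1.0, 0.80 + 0.05 * (3000.0 - p) / 3000.0)
    v_gas = 10.73 * 660.0 * 0.9 / p   # Z = 0.9, T = 660 R, real-gas volume
    v_liq = 2.0                       # assumed constant liquid molar volume
    return vap_frac, v_gas, v_liq

def deplete(p_init=3000.0, p_aband=600.0, dp=200.0, n_init=1.0):
    """March pressure down in steps of dp, keeping reservoir volume fixed
    and withdrawing the excess (expanded) fluid volume at each step."""
    vap0, vg0, vl0 = flash(p_init)
    v_res = n_init * (vap0 * vg0 + (1 - vap0) * vl0)  # fixed reservoir volume
    n = n_init
    history = []
    p = p_init - dp
    while p >= p_aband:
        vap, vg, vl = flash(p)
        v_exp = n * (vap * vg + (1 - vap) * vl)  # volume after expansion
        v_ws = v_exp - v_res                     # excess -> wellstream (Eq. 20.3)
        # simplification: withdraw gas only (liquid assumed immobile)
        n_out = v_ws / vg
        n -= n_out
        history.append((p, n_out))
        p -= dp
    return history

for p, n_out in deplete():
    print(p, round(n_out, 4))
```

Restoring the mobility-based split of steps 6-7 amounts to iterating on LiqR inside the loop until the balance of Equation (20.4a) closes, then routing the wellstream through the separator-train flash of step 11.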
20.04: Action Item
Problem Set
1. API, GOR and Bo are the parameters that we use to optimize a train of separators. Speculate on why all three parameters seem to predict the same optimal separator pressure.
2. Although zero-dimensional modeling can be used to model behavior of gas-condensate systems, there are some pitfalls in the description. What are these pitfalls? In what cases can we overlook them?
Learning Objectives
• Module Goal: To highlight some of the important applications of phase behavior in production operations.
• Module Objective: To highlight the use of phase behavior for the description of gas pipelines, gas metering and hydrate systems.
• 21.1: Natural Gas Pipeline Modeling
• 21.2: The Hydrate Problem
• 21.3: Gas Metering
• 21.4: Action Item
21: Engineering Applications II
Once natural gas is produced and processed, anywhere from a few to several hundred miles may lie between it and its final consumers. A cost-effective means of transport is essential to bridge the gap between producer and consumer. In the technological arena, one of the challenges pertains to the capacity of the industry to ensure continuous delivery of natural gas while demand steadily increases. Thus, it is no wonder that pipelines have become the most popular means of transporting natural gas from the wellhead to processing, and from there to the final consumer, since they better guarantee continuous delivery and lower maintenance costs.
Phase Behavior (P-V-T data) is crucial for all our engineering designs. Accurate prediction of the P-V-T properties of natural gases is especially critical when dealing with pipeline design, gas storage, and gas measurement. When describing natural gas pipeline design, it is necessary to distinguish between two cases: the design of pipelines for the transportation of regular dry gases (no liquid, single-phase transportation) and the design of pipelines for the transportation of wetter gases, where multiphase conditions due to condensate dropout are possible.
The major variables that affect the design of gas pipelines are: the projected volumes that will be transported, the required delivery pressure (subject to the requirements of the facilities at the consumer end), the estimated losses due to friction, and the elevation changes imposed by the terrain topography. Overcoming such losses will likely require higher pressure than the one available when the gas is being produced. Thus, forcing a given gas rate to pass through a pipeline will inevitably require the use of compressor stations.
Loss in mechanical energy results from moving fluids through pipelines. Energy losses in a pipeline can be tracked by virtue of the pressure and temperature changes experienced by the flowing stream. Design equations relate pipeline pressure drop with the gas flow rate being transported. The following is the general equation for a single-phase gas pipeline flow in steady state:
$q_{g}=c\left(\frac{T_{b}}{P_{b}}\right)(E f f) \sqrt{\frac{1}{f}} \sqrt{\frac{d^{5}\left(p_{1}^{2}-p_{2}^{2}\right)}{\gamma_{g} \cdot L \cdot T \cdot Z}} \label{21.1}$
Note that flow rate is proportional to the inverse square root of compressibility (Z). For near-ideal conditions, the effect of compressibility on flow rate is likely to be small. But for high-pressure flows, Z may deviate greatly from 1. Under these conditions, inaccuracy in the prediction of Z may lead to a substantial error in the calculated flow rate and thus a completely wrong pipeline sizing for design purposes.
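Since flow rate varies as the inverse square root of Z in Equation (21.1), the impact of a mispredicted Z can be quantified directly. The sketch below assumes everything else in the design equation is held fixed; the numbers are illustrative.

```python
# Sketch: sensitivity of pipeline flow rate to the Z-factor in Eq. (21.1).
# Since q is proportional to 1/sqrt(Z), a relative error in Z maps to
# roughly half that relative error (opposite sign) in the computed rate.

import math

def rate_ratio(z_assumed, z_true):
    """Ratio of calculated to true flow rate when Z is mispredicted."""
    return math.sqrt(z_true / z_assumed)

# Treating the gas as ideal (Z = 1) when it actually flows at Z = 0.85:
print(round(rate_ratio(1.0, 0.85), 3))   # ≈ 0.922 -> rate understated ~8%
```

An 8% error in the design rate can translate into selecting the wrong pipe diameter class, which is why an accurate Z-prediction matters at high pressure.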
Once a pipeline is deployed, it has a more or less fixed operational region: an upper and lower set of allowable operating conditions (in terms of pressure and temperature). On the one hand, the upper allowable condition is set by the pipe strength, material, diameter, and thickness, which dictate the maximum pressure that the pipe can endure without failure (the maximum operating pressure). The maximum pressure and temperature at the compressor station discharge (which feeds the inlet of the pipe) also contribute to setting this upper level; clearly, the conditions at the compressor discharge cannot exceed the maximum operating pressure of the pipe, or the pipe will fail. On the other hand, the minimum pressure and temperature conditions of the operational region are assigned by contractual agreement with the end consumer. This operational region is shown schematically as the shaded area in Figure 21.1.
Figure 21.1: Pipeline operational curve and transported gas phase envelope
In natural gas flow, pressure and temperature changes (P-T trace) may cause formation of a liquid phase owing to partial condensation of the gaseous medium. Retrograde phenomenon — typically found in multi-component hydrocarbon systems — takes place by allowing condensation of the gas phase and liquid appearance even under expansion of the flowing stream. The same phenomenon may also cause vaporization of the liquid phase such that it reenters the gas phase. Liquid and gas phase composition are continuously changing throughout the pipe due to the unceasing mass transfer between the phases. In general, the amount of heavies in the stream determines the extent of the retrograde behavior and liquid appearance. Figure 21.1 shows a P-T trace or operational curve for a given pipeline, which is always found within the pipeline operational region.
Figure 21.1 also shows four typical phase envelopes for natural gases, which differ in the extent of their heavy components. For a given composition, the prevailing pressure and temperature conditions will determine if the fluid state is all liquid (single-phase), all gas (single-phase), or gas-liquid (two-phase). Each envelope represents a thermodynamic boundary separating the two-phase conditions (inside the envelope) from the single-phase region (outside). Each envelope is made of two curves: the dew point curve (right arm, where the transition from two-phase to single-phase gas occurs) and the bubble point curve (left arm, where the transition from single-phase liquid to two-phase occurs). Both arms meet at the critical point, which is shown in Figure 21.1.

The wetness of the gas is an important concept that helps to explain the different features presented in Figure 21.1. This concept pertains to the amount of heavy (high molecular weight) hydrocarbons present in the gas composition. In Figure 21.1, the driest gas, i.e., the least wet, can be recognized as that whose left and right arms are closest to each other, having the smallest two-phase region (gas A). In this figure, it can be seen that the right arm is extremely susceptible to the presence of heavies in the natural gas composition. Depending on the gas composition, the pipeline operational region can be either completely free of liquid (gas A, the driest) or partially submerged in the two-phase region (gases B and C). If the gas is wet enough, the pipeline will be entirely subjected to two-phase conditions (gas D, the wettest). One may describe the sensitivity of the right arm to heavies as a hook-seizing effect: the larger the extent of heavies in the natural gas, the more the "hook" is able to seize part of the pipeline operational region.
In conclusion, since the operational region is more or less given by contractual and design considerations, the liquid presence in a pipeline is ultimately dictated by the properties of the gas that is being transported.
In the preceding figure, a pipeline handling a dry gas (gas A) will operate in single-phase mode from inlet to outlet. For this case, any of the popular single-phase gas equations (Weymouth, Panhandle-type, AGA equation) can be used for design purposes and to help predict the actual operational curve (P-T trace). If a richer gas comes into the system (gas C), it will show a single-phase condition at the inlet, but after a certain distance the pressure and temperature conditions will fall within the two-phase region. The system might also be transporting a wetter gas (gas D), in which case it would encounter two-phase conditions at both the inlet and the outlet of the pipe.
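A rough way to locate where a line enters the two-phase region is to compare the computed P-T trace point by point against the dew-point curve of the transported gas. In the sketch below, the dew line is a hypothetical straight segment and the trace points are invented; a real design would obtain the envelope from an EOS-based calculation, and points falling below the upper dew line are flagged as two-phase.

```python
# Sketch: flagging where a pipeline's P-T trace enters the two-phase
# region. dew_point_pressure() is a hypothetical dew line for the gas.

def dew_point_pressure(t):
    """Hypothetical dew-point pressure (psia) at temperature t (deg F)."""
    return 500.0 + 8.0 * t     # illustrative straight-line 'right arm'

def two_phase_segments(pt_trace):
    """Return the (distance, p, t) points lying inside the envelope,
    i.e. where the local pressure falls below the upper dew line."""
    return [(x, p, t) for (x, p, t) in pt_trace if p < dew_point_pressure(t)]

# invented trace: (miles from inlet, pressure psia, temperature deg F)
trace = [(0.0, 1400.0, 90.0), (25.0, 1150.0, 70.0), (50.0, 900.0, 55.0)]
print(two_phase_segments(trace))   # only the last point is two-phase
```

Once a segment is flagged, the single-phase design equations no longer apply there and a multiphase model (such as the two-fluid model discussed next) is needed.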
Penn State has devoted a great deal of effort in the development of two-fluid models for the description of multi-phase flow condition in natural gas pipelines. In this approach, mass, momentum, and energy equations are solved simultaneously. Some simplifying assumptions are made based on engineering judgment. For instance, the knowledge of averaged flow field characteristics and fluid properties at every point of the pipeline is usually more meaningful than a detailed profile of the said properties within the cross section. Hence, generally speaking, the two-fluid model always deals with conservation equations written only in one dimension for pipeline flow (the direction of the flow along the pipe), employing cross-sectional-averaged values for each term. The use of averaged quantities absorbs the variations across the pipe section. Pressures and temperatures are assumed to be the same in both phases at any given point of the pipe. Additionally, since the main interest is to focus on normal operation conditions, the further simplification of steady state conditions is invoked.
As we have discussed, phase behavior is a crucial component in pipeline design. Not only because we need to account for gas volumetric behavior in the design equations (through, for instance, Z-factor calculations), but also because it provides a means for predicting whether multi-phase flow conditions are to be found. Liquid appearance in natural gas pipelines is as undesirable as it is inevitable. On one side, the fluid phase behavior and prevailing conditions make it inevitable. On the other, the condensate subjects the gas pipe to an increasing and undesirable energy loss. Thus, a proper pipeline design must account for the effect of condensate formation on the performance of the gas line.
Natural gas hydrates are solid crystalline compounds with a snow-like appearance and densities smaller than that of ice. They are formed when natural gas components (for instance methane, ethane, propane, isobutane, hydrogen sulfide, carbon dioxide, and nitrogen) occupy empty lattice positions in the water structure. The result resembles water solidifying at temperatures considerably higher than its normal freezing point.
Gas hydrates constitute a solid solution (gas being the solute and water the solvent) in which the two main constituents are not chemically bonded. Figure 21.2 presents a typical phase diagram for a mixture of water with a light, pure hydrocarbon (HC), similar to that presented by McCain (1990).
Figure 21.2: Phase Diagram for a Water/Hydrocarbon (HC) System
There are several noteworthy points on the diagram in Figure 21.2. First of all, hydrate formation is clearly favored by low temperature and high pressure. Point C is the three-phase critical point, representing the condition at which the liquid and gaseous hydrocarbon merge into a single hydrocarbon phase in equilibrium with liquid water. Point Q2 is the upper quadruple point, where four phases (liquid water, liquid hydrocarbon, gaseous hydrocarbon, and solid hydrate) are found in equilibrium. Point Q1, the lower quadruple point, typically occurs at 32 °F (the freezing point of water), where four phases (ice, hydrate, liquid water, and hydrocarbon gas) are found in equilibrium. In this context, no phase is pure: each contains some amount of the other substances according to their mutual solubility.
For practical applications, the most important equilibrium line is the Q1Q2 segment. It represents the conditions for hydrate formation or dissociation, a critical piece of information for most industrial applications where hydrates are involved. When we focus on this zone, the phase behavior of water/hydrocarbon system is simplified to the schematics shown in Figure 21.3.
Figure 21.3: Phase Behavior of Water/Hydrocarbon System (Q1Q2 segment)
Phase Behavior thermodynamics is usually invoked for the prediction of the Q1Q2 hydrate formation/dissociation line. The first two methods of prediction were proposed by Katz and coworkers, and are known as the Gas Gravity Method (Katz, 1945) and the Ki-value Method (Carson and Katz, 1942). Both methods allow calculating the P-T equilibrium curves for three phases: liquid water, hydrate and natural gas. These methods yield initial estimates for the calculation and provide qualitative understanding of the equilibrium; the latter method being the more accurate of the two. The third method relies on Statistical Mechanics for the prediction of equilibrium. It is recognized as the most accurate of all three-phase calculations as it is more comprehensive and detailed.
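The Ki-value method reduces to a simple criterion: at a given pressure and temperature, hydrates can form when the sum of y_i/K_i over the gas components reaches one, where K_i is the vapor-solid equilibrium ratio. The sketch below evaluates that sum; the composition and K-values used are made-up illustrative numbers, not chart readings.

```python
# Sketch of the Carson-Katz criterion: at the hydrate formation point,
# sum(y_i / K_i) = 1, where K_i is the vapor-solid equilibrium ratio.

def hydrate_check(y, k):
    """Return sum(y_i / K_i); hydrates are predicted when the sum >= 1."""
    return sum(yi / ki for yi, ki in zip(y, k))

y = [0.90, 0.07, 0.03]        # methane, ethane, propane mole fractions
k = [1.8, 0.7, 0.2]           # hypothetical K_vs values at the given P, T
s = hydrate_check(y, k)
print(round(s, 3), s >= 1.0)  # 0.75 False -> no hydrate at this condition
```

In practice the K-values are read (or correlated) as functions of pressure and temperature, and the hydrate formation temperature is found by searching for the temperature at which the sum crosses unity.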
The key circumstances that are essential for hydrate formation can be summarized as:
1. Presence of "free" water. No hydrate formation is possible if free water is not present. This underscores the importance of removing water vapor from natural gas: wherever free water occurs, there is a likelihood of hydrate formation.
2. Low temperatures, at or below the hydrate formation temperature for a given pressure and gas composition.
3. High operating pressures.
4. High velocities, agitation, or pressure pulsations; in other words, turbulence can serve as a catalyst.
5. Presence of H2S and CO2 promotes hydrate formation because both these acid gases are more soluble in water than the hydrocarbons.
The best and permanent remedy for hydrate formation problems is dehydration of the gas. However, hydrates may well form at the well site or in the pipeline carrying natural gas to the dehydration unit, so the need for wellhead techniques arises. At the well site, two techniques are appropriate:
1. Heating the gas stream and maintaining flow lines and equipment at temperature above the hydrate point,
2. In cases where liquid water is present and the flowlines and equipment cannot be maintained above hydrate temperature, inhibiting hydrate formation by injecting additives that depress both hydrate and freezing temperatures.
The most common additives are methanol, ethylene glycol, and diethylene glycol. Methanol injection is very beneficial in cases where a low gas volume does not permit the dehydration processing. It is also extremely useful in cases where hydrate problems are relatively mild, infrequent, or periodic, in cases where inhibitor injection is only a temporary phase in the field development program, or where inhibition is done in conjunction with a primary dehydration system.
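The required inhibitor dosage is often first estimated with the classical Hammerschmidt equation, which relates the depression of the hydrate temperature to the inhibitor concentration in the aqueous phase. The sketch below uses the commonly quoted constant K = 2335 (Fahrenheit basis); treat the constant and its validity range as assumptions to be verified against your design reference.

```python
# Sketch of the classical Hammerschmidt equation for estimating the
# hydrate-temperature depression achieved by inhibitor injection.
# K ~ 2335 is the commonly quoted constant (deg F basis).

def hammerschmidt_depression(wt_pct, mol_wt, k=2335.0):
    """Depression of the hydrate temperature (deg F) for an inhibitor at
    wt_pct weight percent in the aqueous phase, molecular weight mol_wt."""
    return k * wt_pct / (mol_wt * (100.0 - wt_pct))

# 20 wt% methanol (M = 32) in the free-water phase:
print(round(hammerschmidt_depression(20.0, 32.0), 1))  # ≈ 18.2 deg F
```

The low molecular weight of methanol is what makes it such an effective depressant per unit mass, consistent with its popularity for mild or intermittent hydrate problems.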
Contributors and Attributions
• Prof. Michael Adewumi (The Pennsylvania State University). Some or all of the content of this module was taken from Penn State's College of Earth and Mineral Sciences' OER Initiative.
21.03: Gas Metering
Gas measurement is another area of hydrocarbon engineering where accurate prediction of the P-V-T properties of the working fluid is especially critical. One of the most widely used devices for measuring gas flow is the orifice meter. Orifice meters are classified as inferential meters because the gas volume is calculated from readings of the pressure variation as the gas passes through an orifice, rather than obtained by direct reading.
The orifice meter is arranged so that the flowing gas is constricted at a particular location by a thin orifice plate very accurately gauged and calibrated so as to be in a concentric position in the pipe. The reduction of the cross section of the flowing gas stream in passing through the orifice increases the velocity head at the expense of the pressure head, and the reduction in pressure between the taps is measured by manometers (or a recording meter). A typical orifice meter is shown in Figure 21.4.
Among the advantages of using orifice meters for gas measurement purposes are the following facts:
• They are simple in design and have no moving parts.
• They are relatively accurate.
• They are easy to install and maintain.
• They cover a wide range of capacity.
• They represent a low cost.
• There is a great deal of experience in their use.
Among the disadvantages of orifice meters are the following facts:
• They represent an intrusive measurement technique and a flow restriction that translates into a large energy loss.
• The orifice hole can be eroded by sand or corrosive fluids.
• The hole may be obstructed by wax or hydrate.
Bernoulli’s equation is then used as the basis for correlating the increase in velocity head with the decrease in pressure head. In the calculation of gas flow rate using an orifice meter, two quantities must be measured: the static pressure (i.e. the line pressure) and the differential pressure (i.e. the pressure drop across the orifice plate). The following is the basic equation for gas flow through an orifice meter:
$q_{g}=\frac{\pi}{4} C Y_{1} d^{2} \sqrt{\frac{2 \Delta p}{\left(1-\beta^{4}\right) \rho}}=\frac{\pi}{4} C Y_{1} d^{2} \sqrt{\frac{2 \Delta p Z R T}{\left(1-\beta^{4}\right) p(M W)}} \label{21.2}$
In Equation \ref{21.2}, the flow rate is a function of the gas compressibility factor ($Z$). Again, for high-pressure flows, an error in the compressibility factor results in an erroneously calculated flow rate: any error in the Z-factor translates directly into a gas metering error. Accurate phase behavior prediction techniques are a must in gas metering.
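Equation \ref{21.2} can be evaluated directly once the plate geometry and flowing conditions are known. The sketch below works in SI units and substitutes the real-gas density $\rho = p(MW)/(ZRT)$ under the radical; the discharge coefficient, expansion factor, and operating numbers are illustrative assumptions, not values from the text:

```python
import math

R = 8.314  # J/(mol K); SI units assumed throughout this sketch

def orifice_gas_rate(C, Y1, d, beta, dp, Z, T, p, MW):
    """Volumetric gas rate through an orifice (Equation 21.2), SI units.

    C    - discharge coefficient (dimensionless)
    Y1   - gas expansion factor (dimensionless)
    d    - orifice bore diameter, m
    beta - diameter ratio d/D (orifice bore / pipe ID)
    dp   - differential pressure across the plate, Pa
    Z    - gas compressibility factor at flowing conditions
    T    - flowing temperature, K
    p    - static (line) pressure, Pa
    MW   - gas molecular weight, kg/mol
    """
    area = math.pi / 4.0 * d**2
    rho = p * MW / (Z * R * T)   # real-gas density, kg/m3
    return C * Y1 * area * math.sqrt(2.0 * dp / ((1.0 - beta**4) * rho))

# Illustrative (hypothetical) numbers: 50 mm bore in a 100 mm line,
# natural gas (MW ~ 0.0175 kg/mol) at 5 MPa and 300 K, Z = 0.90
q = orifice_gas_rate(C=0.61, Y1=0.98, d=0.050, beta=0.5,
                     dp=25e3, Z=0.90, T=300.0, p=5e6, MW=0.0175)
print(f"q = {q:.4f} m3/s at flowing conditions")
```

Note that doubling the differential pressure increases the computed rate only by a factor of $\sqrt{2}$, a direct consequence of the square-root form of the equation.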
In the Natural Gas Industry, the point of gas exchange between the buyer and the seller is called custody transfer. During custody transfer operations, accurate measurements of the quantity and quality of the exchanged gas are of crucial importance because of their economic implications. Economic transactions are based on volumetric rate measurements, which are regulated to be made at the same base conditions. Industry base conditions, or standard conditions (SC), are usually taken as P = 14.7 psia and T = 60.0 °F. Even a small percentage inaccuracy in the Z-factor calculation of a gas in transfer can easily translate into thousands of dollars of losses on a daily basis! In fact, flow rate estimations can prove extremely sensitive to values of the compressibility factor. This is why the gas industry does not accept Z-factor predictions with a range of uncertainty larger than ±0.01% for custody transfer operations.
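Because Equation \ref{21.2} places Z under the square root, a relative error in the Z-factor propagates as roughly half that relative error in the computed rate. A small sketch of this sensitivity, with a hypothetical sales volume and gas price chosen only to illustrate the custody-transfer stakes:

```python
import math

def rate_error_from_z_error(z_rel_error):
    """Relative flow-rate error implied by a relative Z-factor error.

    Equation 21.2 gives q proportional to sqrt(Z), so
    dq/q = sqrt(1 + dZ/Z) - 1 ~ (dZ/Z)/2 for small errors.
    """
    return math.sqrt(1.0 + z_rel_error) - 1.0

# A 1% overestimate of Z overstates the metered rate by ~0.5%
err = rate_error_from_z_error(0.01)
print(f"rate error: {err * 100:.3f}%")

# Hypothetical revenue impact: 100 MMscf/d sold at $3.00 per Mscf
daily_loss = 100_000 * 3.00 * err   # Mscf/d * $/Mscf * fractional error
print(f"daily mismeasurement: ${daily_loss:,.0f}")
```

Even at these modest (assumed) prices and volumes, a 1% Z-factor error corresponds to roughly fifteen hundred dollars of mismeasured gas per day, which is why the acceptable uncertainty is so tight.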
Contributors and Attributions
• Prof. Michael Adewumi (The Pennsylvania State University). Some or all of the content of this module was taken from Penn State's College of Earth and Mineral Sciences' OER Initiative.
21.04: Action Item
Problem Set
1. Write an essay that describes the applicability of the knowledge gained in this course to several other areas in the petroleum and natural gas business. Provide specific examples.
Contributors and Attributions
• Prof. Michael Adewumi (The Pennsylvania State University). Some or all of the content of this module was taken from Penn State's College of Earth and Mineral Sciences' OER Initiative.
Elemental Silicon
Chlorosilanes are a group of reactive, chlorine-containing chemical compounds, related to silane and used in many chemical processes. Each such chemical has at least one silicon-chlorine bond.
• Elemental Silicon
Silicon is the next element in the carbon family and shares some of the same atomic structure and tetrahedral bonding tendencies. Unlike carbon, silicon compounds do not have the ability to form double or triple bonds. In crystalline form, silicon does not exhibit the different allotropes that carbon does (i.e., diamond, graphite, or fullerenes). Crystalline silicon forms only a cubic diamond-like structure. Solid silicon can be amorphous, single-crystal, or polycrystalline.
• Silicon-Carbon-Chlorine molecules
Silicon and carbon atoms do not generally bond together at moderate temperatures, but rather only at high temperature to make silicon carbide in an electric arc furnace. SiC is a very non-reactive ceramic material. However, silicon and carbon can be bonded in conjunction with chlorine at more moderate temperatures, where the carbon atom is part of an organic group (e.g., methyl, ethyl, phenyl).
• Silicon-Carbon-Oxygen molecules
Silicon-carbon-oxygen bonded molecules do not occur naturally, are completely synthetic and are generically known as silicones or coupling agents. They are the reaction product of organo-chlorosilanes and di-hydroxy organics. Because of the ability to customize organo-chlorosilane intermediates, the variety of silicone molecules available is almost infinite. An additional degree of variation is in the degree of cross-linking of such smaller chain silicones into macro-molecules.
• Silicon-Chlorine Bonded Molecules
Silicon-chlorine bonded molecules do not occur naturally and are completely synthetic. This family includes silane, monochlorosilane (MCS), dichlorosilane (DCS), trichlorosilane (TCS), and silicon tetrachloride (STC), and is generically known as chlorosilanes.
• Silicon-Oxygen Bonded Molecules
Silicon-oxygen bonded molecules occur naturally as silicates, in mineral formations like quartzite and sand, and can be extracted into soluble form by reaction with strong alkalis. However, by going through a chlorosilane hydrolysis route as synthetically produced compounds, silicon-oxygen bonded molecules have found significant application in food-stuffs, paints, moisture absorption, and most notably as silicones.
Chlorosilane Chemistry
Silicon is the next element in the carbon family (Group IV/IVA) and shares some of the same atomic structure and tetrahedral bonding tendencies. Unlike carbon, silicon compounds do not have the ability to form double or triple bonds. In crystalline form, silicon does not exhibit the different allotropes that carbon does (i.e., diamond, graphite, or fullerenes). Crystalline silicon forms only a cubic diamond-like structure. Solid silicon can be amorphous, single-crystal, or polycrystalline.
Unusual or Noteworthy Properties
• Ease of crystal-pulling: In the molten state (nominally 1414 °C), liquid silicon exhibits a 2 °C temperature range in which it can be super-cooled. It has an unusually high surface tension of 720 dynes/cm at its nominal melting point, which can increase up to 830 dynes/cm in the super-cooled molten state. Like water freezing into ice, molten silicon expands about 8% when solidified. These characteristics allow purified molten silicon to be "pulled" upwards from a molten pool as a large-diameter single-crystal cylinder. After subsequent slicing into thin "wafers" and polishing, these wafers are the main structural elements of most solid-state electronic devices (like computer chips) and photovoltaic solar cells.
• Variable resistivity: By allowing parts-per-billion levels of phosphorus or boron (i.e., dopants) to be present in the high purity crystal, the electrical resistivity of silicon can be highly customized. These dopant levels can also be adjusted to deposit molecularly thin layers epitaxially. This allows for highly responsive semi-conductors to be made, or to produce the familiar solar cells.
• Photovoltaic Effect: Silicon is the only element that has a photovoltaic response to natural sunlight (other photovoltaics require combinations of elements, some of which are toxic or rare). A significant amount of the sun's visible light frequencies will cause direct current to flow if a purified silicon wafer is given polarity. Optimally, a silicon solar cell can convert 25% of the incident sunlight to electrical energy, although commercial solar cells only operate in the 15-20% efficiency range.
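The resistivity control described in the second bullet follows the first-order relation ρ = 1/(qnμ). A sketch, assuming a textbook electron mobility for lightly doped silicon; the dopant concentration shown is an assumption chosen to land near the 1 ohm-cm material mentioned below for solar cells:

```python
# Resistivity of doped silicon (first-order): rho = 1 / (q * n * mu)
#   q  - electron charge, 1.602e-19 C
#   n  - carrier (dopant) concentration, cm^-3
#   mu - carrier mobility, cm^2/(V s); ~1350 for electrons in lightly
#        doped silicon (textbook value; mobility actually falls as doping
#        increases, which this sketch ignores)

Q_ELECTRON = 1.602e-19  # C

def resistivity_ohm_cm(n_per_cm3, mobility_cm2_Vs):
    """Return resistivity in ohm-cm for a given carrier concentration."""
    return 1.0 / (Q_ELECTRON * n_per_cm3 * mobility_cm2_Vs)

# A phosphorus concentration of ~4.6e15 cm^-3 (a few parts per billion
# of silicon's ~5e22 atoms/cm^3) gives roughly 1 ohm-cm n-type material
rho = resistivity_ohm_cm(4.6e15, 1350.0)
print(f"rho = {rho:.2f} ohm-cm")
```

This shows why parts-per-billion dopant levels are enough to set the resistivity: a concentration of only one dopant atom per ~10 million silicon atoms already brings the material to around 1 ohm-cm.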
Main Applications and Uses
1. Solar Cells
The solar cell application requires the doping of a polished wafer, typically 100-150 mm in diameter by 100-500 microns thick, with an optimal resistivity of 1 ohm-cm. The upper surface of the wafer is ideally negative, made so by epitaxially depositing a thin layer of "N"-type silicon using a Group V hydride gas dopant such as phosphine or arsine. The bottom surface is made positive by epitaxially depositing a thin layer of "P"-type silicon using diborane (a Group IIIA hydride). More about the construction of solar cells can be found at < http://pveducation.org/pvcdrom/desig...ell-parameters >. The attached graphic illustrates the details.
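The electrical output implied by the efficiency figures quoted earlier can be estimated from P = η · G · A. In the sketch below, the 1000 W/m² standard test irradiance is an assumption not stated in the text:

```python
import math

def cell_power_watts(diameter_m, efficiency, irradiance_W_m2=1000.0):
    """DC output of a circular cell: P = efficiency * irradiance * area.

    1000 W/m^2 is the usual standard-test-condition irradiance (an
    assumption of this sketch, not a value from the text).
    """
    area = math.pi / 4.0 * diameter_m**2
    return efficiency * irradiance_W_m2 * area

# A 150 mm diameter cell at the commercial 15-20% efficiency range
p_low = cell_power_watts(0.150, 0.15)
p_high = cell_power_watts(0.150, 0.20)
print(f"{p_low:.2f} - {p_high:.2f} W per cell")
```

A single wafer-sized cell therefore delivers only a few watts, which is why practical panels gang dozens of cells in series and parallel.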
2. Semi-Conductors
The semi-conductor application of pure silicon is more complex, and typically features thinner wafers of 40-100 microns. Individual "NPN" or "PNP" transistors are no longer used, but rather systems of programmable transistors, called integrated circuits (ICs). Like the solar cell application, the silicon wafer can be given polarity by epitaxial growth using either "N" or "P" dopants, but only in extremely precise regions on the wafer, sometimes only 20-50 atoms in thickness or width. Alternatively, a region's polarity can be altered by targeted ion implantation.
Using a system of photo masking and etchant gases, extremely precise amounts of silicon are removed from or added to the silicon wafer. The distinction between the semi-conductor "device" and the interconnecting "wire" has been lost. Modern ICs are now so sophisticated, large, and fast that they are referred to as processors, such as the latest Intel Pentium 64-bit processor. It has cycle speeds of over 3 GHz (3 billion cycles per second) and consists of over 1 billion individual transistors. Linear signal speed approaches the speed of light (about 3×10^8 meters per second). The attached file "Integrated Circuit Semi-Conductors" provides greater detail.
3. Metallurgical and Alloying
Silicon can be readily extracted from naturally occurring quartzite ore by reacting it with charcoal in an electric arc furnace at 1600 °C. The carbon from the charcoal combines with the oxygen in the quartzite, liberating the silicon metal in molten form; carbon dioxide gas is formed. Cast into ingots and crushed into granules, this metallurgical silicon is added to many molten metal alloys to form silicides that give the metal toughness (e.g., ferric silicide helps steel to be tougher).
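An idealized mass balance for the carbothermic reduction just described, using the stoichiometry stated in the text (SiO2 + C → Si + CO2) and assuming pure feeds and complete reaction:

```python
# Mass balance for the carbothermic reduction described above, using the
# stoichiometry stated in the text:  SiO2 + C -> Si + CO2
# (idealized: pure feeds, complete reaction, no by-products)
M_SI, M_O, M_C = 28.086, 15.999, 12.011   # atomic masses, g/mol
M_SIO2 = M_SI + 2.0 * M_O
M_CO2 = M_C + 2.0 * M_O

def feed_per_tonne_silicon(tonnes_si=1.0):
    """Quartzite (as SiO2) and charcoal (as C) needed per tonne of Si."""
    mol_si = tonnes_si * 1e6 / M_SI       # gram-moles of product Si
    return {
        "quartzite_t": mol_si * M_SIO2 / 1e6,
        "charcoal_t":  mol_si * M_C / 1e6,
        "co2_t":       mol_si * M_CO2 / 1e6,
    }

balance = feed_per_tonne_silicon()
print(balance)
```

Roughly 2.1 tonnes of quartzite and 0.4 tonnes of charcoal are consumed per tonne of metallurgical silicon under these idealized assumptions; real furnaces require excess carbon and lose some silicon as fume.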
Silicon-Carbon-Chlorine Molecules
Silicon and carbon atoms do not generally bond together at moderate temperatures, but rather only at high temperature to make silicon carbide in an electric arc furnace. SiC is a very non-reactive ceramic material. However, silicon and carbon can be bonded in conjunction with chlorine at more moderate temperatures, where the carbon atom is part of an organic group (e.g., methyl, ethyl, phenyl).
Silicon-carbon-chlorine bonded molecules do not occur naturally, are completely synthetic and are generically known as organo-chlorosilanes. More specifically when the organic grouping(s) are methyl, they are referred to as methyl chlorosilanes. A variety of aliphatic and aromatic substitutions for methyl groups are common. Organo-chlorosilanes can be thought of as chlorosilanes (see Silicon-Chlorine bonded molecules), but with an organo-functional group replacing either a hydrogen or chlorine atom. For example the methyl chlorosilane family includes methyl, dimethyl, trimethyl, and tetramethyl silane; methyl, dimethyl and trimethyl monochlorosilane; methyl and dimethyl dichlorosilane; and methyl trichlorosilane.
Unusual/Noteworthy Properties
1. Many of the properties of organo-chlorosilanes are similar to their chlorosilane counterparts (see Silicon-Chlorine bonded molecules). Organo-chlorosilanes are likewise incompatible with water or any hydroxyl compound, whether organic or inorganic in nature. These compounds are also similarly incompatible with oxygen. The products of such a hydroxyl compound (e.g., water, alcohol, glycol) and/or oxygen reaction include various silicon-carbon-oxygen bonded molecules, and a combination of hydrogen and hydrogen chloride.
Depending on the degree of substitution of organo-functional groups (e.g., methyl, ethyl, propyl, phenyl) for hydrogen and chlorine, the personal hazard level is reduced somewhat from that of chlorosilanes. Likewise, the reactivity of organo-chlorosilanes with hydroxyl compounds is lessened somewhat from that of chlorosilanes. Despite this lessened hazard level and reactivity, organo-chlorosilanes should still be treated with the same respect as chlorosilanes.
The degree of lessened reactivity has much to do with the steric hindrance of the organic grouping. Therefore propyl chlorosilanes will react slower than methyl chlorosilanes, given the same number of hydrogen and chlorine atoms that are bonded to the core silicon atom.
2. Organo-chlorosilanes have many of the same odd physical properties (e.g., viscosity, thermal conductivity, surface tension) as chlorosilanes, but to a somewhat lesser degree than might be expected. Again, this reduced level of oddity is a function of the degree of substitution of organic groupings for chlorine atoms in the molecular structure.
It would be inappropriate for these compounds to be viewed as organic compounds that have a chlorosilyl substitution. Instead, their nature is that of chlorosilanes with organic substitutions. In many references, the physical properties of organo-chlorosilanes are found in the “organics” section – as opposed to being in the “inorganics” section. This is merely an issue of the author’s organization of topics and properties.
Organo-chlorosilanes form covalent bonds and occupy a middle ground as being both somewhat organic and somewhat inorganic. It is from this intermediate position that they have so many applications (see the section below in this module on applications and uses).
3. Organo-chlorosilanes are similar to chlorosilanes in their tetrahedral molecular structure. A methyl (or other organo-functional group) substitution for a chlorine atom alters the structure, both in its steric nature and in its polarity. For example, methyl trichlorosilane holds its three chlorine atoms at a significantly smaller 3-D angle than those of trichlorosilane. Trichlorosilane has a dipole moment of 0.86 Debye, whereas methyl trichlorosilane has a dipole moment of 1.91 Debye.
As in chlorosilane chemistry, disproportionation occurs in organo-chlorosilanes. There can be either Si-Cl swaps with Si-H bonds (leaving the Si-organic grouping untouched); or Si-Cl swaps with Si-organic groups (leaving the Si-H bonds untouched). The means by which the catalyst is conditioned appears to influence which type of disproportionation is favored.
There is more interest in moving around the Si-organic groupings in order to maximize the production of organo-chlorosilanes with two organic groupings and two chlorine atoms attached to the core silicon atom, such as (CH3)2-Si-Cl2 = dimethyl dichlorosilane. Having two chlorine atoms will maximize the chain length of its later reaction product. (There is also utility in using three chlorines per molecule to increase cross-linking, or just one chlorine per molecule to control chain length, in secondary reactions.)
An example of such a disproportionation reaction would be to dehydrate a tertiary methylamine ion-exchange catalyst and treat it with anhydrous HCl to activate the amine's N-Cl adducts. Two by-products of the methyl chlorosilane synthesis (trimethyl monochlorosilane and methyl trichlorosilane) are then reacted via disproportionation to make the more desirable product, dimethyl dichlorosilane:
$(CH_3)_3SiCl + (CH_3)SiCl_3 \rightarrow 2 (CH_3)_2SiCl_2$
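A mole balance on this disproportionation reaction is straightforward; the sketch below takes feed amounts and a fractional conversion as hypothetical inputs:

```python
def disproportionate(n_trimethyl, n_methyl_trichloro, conversion):
    """Mole balance for (CH3)3SiCl + CH3SiCl3 -> 2 (CH3)2SiCl2.

    conversion = fraction of the limiting reactant consumed (an assumed
    operating parameter, not a value from the text).
    """
    extent = conversion * min(n_trimethyl, n_methyl_trichloro)
    return {
        "trimethyl_monochlorosilane": n_trimethyl - extent,
        "methyl_trichlorosilane": n_methyl_trichloro - extent,
        "dimethyl_dichlorosilane": 2.0 * extent,   # 2 mol per mol reacted
    }

# Hypothetical feed: 10 mol trimethyl, 8 mol methyl trichloro, 90% conversion
out = disproportionate(10.0, 8.0, 0.90)
print(out)
```

Note that total silicon is conserved (each mole of each reactant carries one Si), as is the total methyl count (3 + 1 on the left, 2 + 2 on the right), which is what makes this swap a pure redistribution rather than a net synthesis.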
4. The synthesis of organo-chlorosilanes actually begins with the preparation of a consumable catalyst, Cu3Si (a.k.a. eta-phase copper silicide). Elemental copper and metallurgical silicon are combined with a zinc promoter, and then batchwise reacted with HCl at 300 °C. The solid Cu3Si is milled to the correct size for fluidization, and then small amounts are fed into a fluidized bed reactor along with metallurgical-grade silicon, HCl, and the appropriate organic chloride. For example, to add just methyl groups to the silicon, use methyl chloride as the organic reactant; to add just ethyl groups, use chloroethane. This synthesis is called the Rochow reaction < http://en.wikipedia.org/wiki/Direct_process >.
A wide variety of organo-chlorosilane compounds can be custom-synthesized by altering the relative amounts of organic chlorides, the temperature of the reaction, and with fine-tuning by disproportionation. When finally reacted to make coupling agents or silicones, the mixture will dictate the final product molecular weight and chemical resistance. See the module “Silicon-Carbon-Oxygen bonded molecules” for more details.
Main Applications/Uses
1. The main application of organo-chlorosilanes is in the manufacture of silicones. For more detail see "Silicon-Carbon-Oxygen bonded molecules". Silicones are made by the reaction of organo-chlorosilanes with glycols, alcohols, and organic acids like acetic acid. Each chlorine atom reacts with the hydrogen atom of a hydroxyl group to form HCl and an -Si-O-C- linkage. By varying the amount of cross-linkage and end groupings, the properties of these silicones are adjusted.
By leaving significant amounts of reactants in the end product (most commonly acetic acid), the final polymerization can be delayed for long periods of time. The most common example is ambient-temperature household caulk (as used around bathtubs, sinks, and countertops).
Silicone products can be oils, greases, gels, or solids, depending on the degree of cross-linkage. Once "cured," there is no solvent for silicones, which makes them rugged enough for a variety of conditions. They are also water, UV, and mold/mildew resistant.
In our modern culture silicones are pervasive, being found in wide variety of products from paints and coatings, to lubricants and finishes, as well as synthetic rubber products. Their long life and minimal biological activity has made them acceptable consumer products for several decades.
2. A secondary use of organo-chlorosilanes is in the manufacture of specialty pharmaceuticals and cosmetics. The customization of molecular structure allows for controlled skin absorption with minimal allergenic reaction. The addition of acetate end groups has been found to limit their biologic rejection and allow use as human implants and prosthetics.
3. Organo-chlorosilanes can serve as coupling agents, to bond together organics and inorganics, making unusual products. An example is the surface treatment that creates thin-film chromatographic media, or the surface treatment of thin-film liquid crystal displays.
The flexible ultra-thin circuit boards of modern cell phones utilize such coupling agents. The treatment of fiberglass with such coupling agents allows it to act as a filler in making various moldable reinforced plastic components.
A final category of applications is in structural co-polymers, such as those used in automotive and construction. By co-polymerizing organo-chlorosilanes with other plastic monomers (e.g., acrylate, polyester, urethane, epoxy, phenolic, and vinyl chloride), a wide variety of light and flexible parts can be made. This has been widely employed by auto manufacturers to comply with regulations on reduced fuel usage, and to make various residential products.
Silicon-Carbon-Oxygen Molecules
Silicon-carbon-oxygen bonded molecules do not occur naturally, are completely synthetic, and are generically known as silicones or coupling agents. They are the reaction product of organo-chlorosilanes (see also the module on silicon-carbon-chlorine bonded molecules) and di-hydroxy organics (e.g., glycols and organic di-acids). Because of the ability to customize organo-chlorosilane intermediates, the variety of silicone molecules available is almost infinite. An additional degree of variation is in the degree of cross-linking of such smaller chain silicones into macro-molecules with molecular weights into the tens of thousands.
Unusual/Noteworthy Properties
The properties of Si-C-O molecules are characterized by several aspects: the base organo-chlorosilane; the di-hydroxy chain continuing compounds; the chain-ending compound; and the cross-linking compound.
Once the above recipe options are selected, the only additional aspects are that of its curing, or final reaction. Typically the smaller chain silicone molecules are solvated by a volatile organic compound that will have some function in the final polymer, such as acetic acid. As an example of the above, consider the following recipe:
• Di-methyl dichlorosilane is the organo-chlorosilane
• Ethylene glycol is the di-hydroxy chain-continuing compound
React and remove most of the by-product HCl by distillation while maintaining some mild agitation, and using acetic acid as a solvent to prevent over-heating. The following polymerization reaction occurs:
[Figure: polymerization reaction of dimethyl dichlorosilane with ethylene glycol, not reproduced]
Main Applications/Uses
The main commercial and residential applications of longer chain silicones are in manufacture of synthetic oils, waxes, coatings, and caulking. Secondarily, small-to medium chain silicones can act as a bridge between organic and inorganic functional groups. In this application, they are known as coupling agents. Lastly they can serve to prevent water from wetting surfaces, and therefore act as concrete curing agents, and water-proofing agents.
Silicon-Chlorine Bonded Molecules
Silicon-chlorine bonded molecules do not occur naturally and are completely synthetic. This family includes silane, monochlorosilane (MCS), dichlorosilane (DCS), trichlorosilane (TCS), and silicon tetrachloride (STC), and is generically known as chlorosilanes.
Unusual/Noteworthy Properties
Chlorosilanes are incompatible with water, and are stable only in a completely zero-water environment. With the exception of silicon tetrachloride, these compounds are also incompatible with oxygen. Water and chlorosilanes can be viewed much the same as matter and anti-matter, since any mixture will cause mutual annihilation and result in a significant exotherm. In such case, the compound in the greater molar amount will remain to absorb the exotherm. The products of water and/or oxygen reaction include various silicon-oxygen bonded molecules, and a combination of hydrogen and hydrogen chloride.
Since STC is non-reactive with oxygen, no combustion or further oxidation is possible. Combustibility/explosibility increases as the chlorosilane molecule becomes more hydrogenated (less chlorinated). Silane gas is pyrophoric in all but the most purified form, and has a TNT equivalency similar to methane. Impurities in silane on the order of parts per billion, including disilane and disiloxane, will result in silane being pyrophoric (as reported by others).
The reaction of chlorosilanes with moisture is immediate and intense. They will react with the moisture-of-hydration of any salts, or with moisture adsorbed onto metal surfaces or in the pores of materials. Spills on skin will involve de-fleshing of exposed areas. Breathing these compounds will result in destruction of mucous membranes and lung tissue. Extreme caution must be exercised in handling these compounds. DCS is probably the greatest exposure hazard of the homologue family, in that it is still quite reactive with any moisture source, plus it has the explosive and ground-hugging nature of ether. Silane is generally treated as a non-corrosive pyrophoric gas, but when sufficiently purified it can have an ignition delay that leads to a detonation.
Properties
Chlorosilanes have unusual physical properties, combining low viscosity, low thermal conductivity, low surface tension, and low heat capacity with a moderate liquid density. They also have low latent heats of vaporization and high volatility. Collectively this means that they are poor conductors of heat, will leak easily, and can form explosive vapor clouds when released to the atmosphere. They will diffuse through all rubber sealant compounds and conventional plastics, swelling them and making them brittle. Their low lubricity requires use of special types of canned pumps: magnet-driven rotary pumping equipment is insufficient.
Teflon and other fluorocarbons are suitable sealants, but only in virgin form and at low temperature. If such fluorocarbons are made more flexible by use of plasticizers, chlorosilanes will dissolve them.
Chlorosilanes are non-corrosive to carbon steel and other metals, but will leach out the phosphorus typically found in metal alloys. Therefore when used to make electronic silicon, acid-pickled or electropolished stainless steel is used as a material of construction. For sealants in electronic service, nickel-plated O-rings are used in lieu of elastomers. Metal-to-metal seals are always preferred.
Bonding
Although classified as inorganic compounds, chlorosilanes feature covalent bonding and a nominal tetrahedral molecular structure, much the same as chloromethanes. Silane and STC have no dipole moment. The TCS molecule has a small dipole moment of 0.86 Debye, with the three chlorine atoms spreading out their bond angles almost to the 120° of a planar shape. MCS has the largest dipole moment of the family at 1.31 Debye. DCS has a “Vee” shaped molecular arrangement, with a dipole moment of 1.17 Debye. The differing molecular shapes contribute to significant binary interactions in their mixture properties.
Analogous to chloromethanes, chlorosilanes can form aliphatic structures consisting of 2-4 silicon-silicon bonds (e.g., tetrasilane and octachlorotrisilane). Higher chlorosilane molecules with five or more silicon members tend to form ring compounds, similar to their carbon analogs. However, there is no evidence of a stable silicon-silicon double or triple bond.
One of the more unusual properties of chlorosilanes is their ease of disproportionation. In such a reaction, for example, two TCS molecules will form one DCS and one STC molecule. Similar reactions occur with DCS and MCS. A wide variety of catalysts serve to drive these disproportionation reactions, including medium chain-length amines, imines, nitriles, and chlorinated Lewis acids (e.g., FeCl3 and AlCl3). Even the smallest amount of such catalysts in liquid- or vapor-phase chlorosilanes will suffice to cause disproportionation. Disproportionation will occur at low temperatures, but does not occur in the solid phase.
This disproportionation property causes much angst with analytical chemists, since they cannot readily obtain pure samples for calibration of their analyzers unless they take special precautions in pre-cleaning, use special preparation of their analyzers and use specially constructed sample containers.
However, the ease of disproportionation allows chlorosilanes to be the vehicle of choice for manipulating silicon compounds for purification purposes. The decomposition of purified chlorosilanes results in all modern semi-conductors, integrated circuits and most photo-voltaic solar panels. See Applications/Uses, below.
Chlorosilane synthesis always begins with the formation of TCS by the reaction of impure silicon with HCl. The synthesis can be done either directly (Si + 3HCl → TCS + H2) or indirectly using a recycled STC and hydrogen process (Si + 3STC + 2H2 → 4TCS).
In the more modern indirect method, the STC and hydrogen cause in-situ HCl to be formed, which reacts to form TCS. The unreacted STC and H2 are recycled back. Because fewer unwanted by-products are formed by the indirect method, it is more commonly used.
After forming TCS, disproportionation and purification steps can be used to make DCS, MCS, or silane. Since the disproportionation described above makes STC as a by-product, the indirect TCS synthesis method allows this STC to be recycled. See also the module "Silicon-Carbon-Chlorine molecules".
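Writing TCS as SiHCl3 and STC as SiCl4, the two synthesis routes quoted above can be checked for atom balance; a small sketch:

```python
# Atom-balance check of the two TCS synthesis routes described above,
# writing TCS as SiHCl3 and STC as SiCl4.
from collections import Counter

SPECIES = {   # atoms per molecule
    "Si":  Counter(Si=1),
    "HCl": Counter(H=1, Cl=1),
    "TCS": Counter(Si=1, H=1, Cl=3),
    "STC": Counter(Si=1, Cl=4),
    "H2":  Counter(H=2),
}

def atom_totals(side):
    """Sum atoms over a list of (stoichiometric coefficient, species)."""
    total = Counter()
    for coeff, name in side:
        for atom, n in SPECIES[name].items():
            total[atom] += coeff * n
    return total

# direct route:   Si + 3 HCl -> TCS + H2
assert atom_totals([(1, "Si"), (3, "HCl")]) == \
       atom_totals([(1, "TCS"), (1, "H2")])
# indirect route: Si + 3 STC + 2 H2 -> 4 TCS
assert atom_totals([(1, "Si"), (3, "STC"), (2, "H2")]) == \
       atom_totals([(4, "TCS")])
print("both routes balance")
```

The indirect route converts four silicon atoms to TCS per pass (one fresh, three recycled through STC), which is the bookkeeping behind the recycle loop described in the text.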
Main Applications/Uses
The main application of chlorosilanes is in the purification of silicon, from an impure form to a highly purified form for semi-conductor use, and for solar photo-voltaic use.
Quartzite ore is an abundantly occurring mineral, mined at many locations globally. By combining with either coke or charcoal in an electric-arc furnace, the silica content of the quartzite is reduced to make impure silicon plus carbon dioxide.
Then the impure silicon is converted to TCS via one of the two routes described above, and the TCS is either purified or converted to DCS or silane by disproportionation. Such purification is a combination of high-reflux distillation, adsorption, and use of molecular sieves. After decomposition and recycle of the by-products, the resulting silicon has several important uses (see the module "Elemental Silicon"). Without the chemical route through chlorosilanes, the modern electronic age would not have occurred: computers would still be driven by vacuum tubes, and photovoltaic panels would still be laboratory curiosities.
High temperature combustion of STC with hydrogen at 800-900 °C, followed by rapid air quenching, results in the production of a stable amorphous silicon monoxide solid, otherwise known as fumed silica. See the module on Silicon-Oxygen bonded molecules.
Chlorosilanes are used to modify the reactions in the manufacture of silicones as a cross-linking agent, especially TCS and DCS.
Silicon-Oxygen Bonded Molecules
Silicon-oxygen bonded molecules occur naturally as silicates, in mineral formations like quartzite and sand, and can be extracted into soluble form by reaction with strong alkalis. However, by going through a chlorosilane hydrolysis route as synthetically produced compounds, silicon-oxygen bonded molecules have found significant application in food-stuffs, paints, moisture absorption, and most notably as silicones.
Unusual/Noteworthy Properties
1. In its natural form, silicon-oxygen bonded compounds are referred to as silicas or silicates and exhibit strongly hydrophobic tendencies. The silicate ion (\(\ce{SiO_3^{-2}}\)) bonds ionically with metals like sodium, potassium, and calcium. Water can be trapped within the interstices of silica solids, but the solids' surfaces are not truly wetted, nor will silica dissolve. Some solid silicate minerals can have complex molecular forms with water of hydration. An example is the sodium-alumina silicates known as zeolites.
The purified form of quartzite rock is known as quartz, which has a trigonal crystalline structure and is chemically written as \(\ce{SiO2}\). Fused quartz is commonly used as a structural material to make crucibles for holding molten silicon. As it is slowly wetted and dissolved over a period of days by molten silicon, its solute form is silicon monoxide (\(\ce{SiO}\)). The dissolved \(\ce{SiO}\) exhibits a significant vapor pressure, and when cooled will nucleate to form sub-micronic fibroids which remain hydrophobic. Other high-temperature reactions with silicas will produce these fibroids, which when breathed will result in silicosis.
2. Synthetic forms of silicon-oxygen bonded molecules can be created by the liquid-phase reaction of chlorosilanes or organo-chlorosilanes with water or alcohols, or by gas-phase combustion of chlorosilanes with hydrogen. As simple monomers and dimers, these compounds are normally hydrophilic and referred to as siloxanes or silanols (the silicon analogs to ketones and alcohols). In polymeric form, these cross-linked silicon-oxygen bonded compounds are generally hydrophobic and referred to as silicones. Most silicones feature a sizeable organic content, so more is discussed in the module on silicon-carbon-oxygen bonded molecules.
When chlorosilanes are combusted with hydrogen and the flame is rapidly quenched in the presence of water vapor, the stable silicon oxide is silicon monoxide (SiO). This SiO will remain meta-stable at ambient temperature as a very light fluffy solid, with a bulk density of about 0.02 g/cc. This form of silica is known as fumed silica (not to be confused with the silica fume that causes silicosis) and has most unusual properties. When mixed with water in a 1:5 ratio and whipped, it will form a gelatinous solid that resembles Jello. When this gelatinous form is partially dried, it forms beads known as silica gel. Fumed silica has been used for over 80 years as a thickener for both commercial purposes (like paints and greases) and in edible applications like fast-food milk-shakes. See the section below for more about its many uses.
When synthetic silicon-oxygen bonded compounds are formed by hydrolysis (i.e., reaction generically with any organic or inorganic compound that has one or more ionic \(\ce{OH^-}\) groups), the pH of the hydrolyzing compound dictates the degree to which the product is hydrophilic or hydrophobic. Acidic hydrolysis of chlorosilanes produces hydrophilic siloxanes. To the extent that the hydrolysis reaction is performed in a more alkaline environment, the siloxane will tend to be more hydrophobic.
Once the Si-O bonding type is formed (either hydrophobic or hydrophilic), subsequent acidification or alkaline adjustment does not alter the nature of the Si-O bonding.
Main Applications/Uses
1. The main uses of naturally occurring silicon-oxygen bonded molecules like silica, silicates, and quartz are:
a. Silica & Sand: Abrasives; as a concrete component; soil drainage enhancement; talcum and cosmetics
b. Silicates: High-temperature refractories; detergent ingredient; surfactants; water glass (after filling with fumed silica)
c. Quartzite & Quartz: Gemstones; acid- and cut-proof countertops; raw material for producing crude and high-purity silicon metal (see the module on elemental silicon)
2. The main uses of synthetically produced silicon-oxygen bonded molecules are as:
a. Siloxanes
Building block intermediates in preparation of pharmaceuticals; wetting agents for organic chemicals
b. Fumed silica (see also http://en.wikipedia.org/wiki/Fumed_silica)
Thickening agent for paints, lubricants, roof tar, asphalt, fiberglass resin, colloidal suspension of industrial chemicals. With fumed silica, the viscosity enhancement is adjustable by the amount of thixotroping (mechanical agitation or whipping). With extreme thixotroping, a liquid solution can become a gel.
Thickening agent for food-stuffs, like milk-shakes, catsup, salad dressings; and for toiletries like tooth-paste and lotions
c. Silica gel
Drying agents and packaged desiccants; acidic chemical adsorbents; when purified, in chromatographic separation media
d. Silicones
Caulking; moisture-proofing coatings and sealants; building adhesives; synthetic lubricants (multiple automotive applications); surface tension reduction/surfactants. See also the module on silicon-carbon-oxygen bonded molecules.
|
textbooks/eng/Chemical_Engineering/Supplemental_Modules_(Chemical_Engineering)/Silicon-Oxygen_Bonded_Molecules.txt
|
“No single thing abides, but all things flow.” - Heraclitus
01: Introduction
This book began as lecture notes for an Oregon State University course in fluid mechanics, designed for beginning graduate students in physical oceanography. Because of its fundamental nature, this course is often taken by students outside physical oceanography, e.g., atmospheric science, civil engineering, physics and mathematics.
In later courses, the student will discover esoteric fluid phenomena such as internal waves that propagate through the sky, water phase changes that govern clouds, and planetary rotation effects that control large-scale winds and ocean currents. In contrast, this course concerns phenomena that we have all been familiar with since childhood: flows you see in sinks and bathtubs, in rivers, and at the beach. In this context, we develop the mathematical techniques and scientific reasoning skills needed for higher-level courses and professional research.
Prerequisites are few: basic linear algebra, differential and integral calculus and Newton’s laws of motion. As we go along we discover the need for the more advanced tools of tensor analysis.
The science of fluid mechanics is vast. Most books on the topic are concerned with technological applications, e.g., flow through pipes and machinery, that have little relevance in nature. But even among naturally occurring flows we cannot, and should not, try to cover everything. What I have done here is to identify three canonical flow structures that are common in nature (Figure \(1\)):
• vortices
• waves
• hydraulic jumps
The inner workings of these three phenomena involve all of the basic flow processes, and their study demands a thorough understanding of the theory. The goal, then, is to learn what we need to know to thoroughly understand vortices, waves and hydraulic jumps. Master this, and you will be well prepared to study the much broader range of fluid phenomena found in nature.
The discussion is in three roughly equal parts:
• Chapters 2-4 are mainly mathematical; we review some advanced aspects of linear algebra and then develop the tools of tensor analysis.
• Chapters 5 and 6 are the crux. The mathematical tools from part 1 are used to develop a theoretical description of flow. The development is thorough, rigorous and (I hope) intuitive.
• In chapters 7-9, we apply the theory to our three common phenomena. Besides exercising the analytical skills we have developed, these examples allow us to test the assumptions that underlie the theory by comparing the results with our everyday experience. In some cases we find that the theory is inadequate, laying the groundwork for further study.
Homework exercises are included (chapter 12) and are integral to the course. The main text is designed to be covered in 40 hours of lectures. Appendices give auxiliary information and additional topics that can be covered or assigned. It is expected that students will devote an additional 80 hours to homework and independent study. Instructors are invited to contact the author ([email protected]) for additional materials such as suggested assignments, solutions and possible exam questions.
This book is also intended for self-study, with detailed explanations and frequent exercises to confirm your understanding. If you take this route, feel free to email me with any questions that may arise.
We will not shy away from proofs of the mathematical results we encounter; indeed, I expect the student to demand them. Professionals with graduate degrees are expected not only to know facts, but to understand why they are true and how they came to be known. Be skeptical. To believe something just because a professor said it is to invite error. As a young student, I was taught that the continents do not move and that the planet Mercury always keeps the same face toward the Sun, two statements that we now know are untrue. I absolutely guarantee that, at some point in each student’s education, and perhaps in this book, a “fact” will be learned that will turn out to be total hogwash. Be on the lookout.
Smaller errors in logic or mathematics turn up all the time. I have frequently had a math error corrected, or learned a clearer way to explain a difficult concept, thanks to an alert student. That is true in every course but perhaps more so in this one, because every student arrives with an intuitive feel for the fluid phenomena that motivate the analysis. If you have taken this course, thank you. I have learned from you.
There is yet room for improvement - if you spot something wrong or unclear, please let me know and I will fix it.
Bill Smyth, December, 2019
[email protected]
|
textbooks/eng/Civil_Engineering/Book%3A_All_Things_Flow_-_Fluid_Mechanics_for_the_Natural_Sciences_(Smyth)/01%3A_Introduction/1.01%3A_Preface.txt
|
“Empty your mind. Be formless, shapeless, like water. When you put water in a bottle, it becomes the bottle; when you put water in a teapot, it becomes the teapot. Water can flow or it can crash. Be water, my friend.” - Bruce Lee
Nearly all materials are either solid or fluid. The distinction depends on what happens if you apply a force that acts to deform (e.g., bend or twist) the material, and then remove the force. A solid will return to its original shape, whereas a fluid will continue to deform. Deformation that continues after the force is removed is called flow. Therefore, a fluid is simply a material that flows.
Real fluids are made of molecules far too small for us to observe directly. Those molecules are in constant motion. Besides traveling through space, they spin, wiggle and change their shape. In air at sea level, the mean translational speed is several hundred m/s, close to the speed of sound (not coincidentally). As a result of this motion, molecules occasionally collide. A useful measure of molecule spacing is the average distance (or mean free path) between collisions. Typical values are \(10^{−9}\) m in water and \(50×10^{-9}\) m in air.
What we perceive as the “motion” of a fluid is in fact the average motion of many individual molecules. For example, if you hold out your hand to test the wind, the motion you sense is really the average over all of the air molecules striking your hand, i.e., billions of molecules. If you subtract out that average velocity, what’s left is the independent motion of each molecule, as well as whatever little dance the molecule may execute as it travels. On the macroscopic level, we experience those residual motions in two ways: as pressure and as heat.
What we observe as a fluid, then, is indistinguishable from a continuous medium (a continuum) which has the properties of velocity, pressure and temperature at each point in space. Modeling a fluid in this way excuses us from having to account for the behavior of each individual molecule, a dramatic simplification. It also allows us to bring to bear the powerful analytical machinery of the calculus. On the other hand, the continuum view introduces some concepts that don’t arise in the familiar physics of solid bodies, e.g., stress, strain and advection. It is for these phenomena that tensor analysis is needed.
Despite the power of the continuum idea, we’ll occasionally find it helpful to remember that the fluid is really made up of molecules. A fluid parcel is defined as a collection of molecules occupying a simply connected region of space (i.e., a single, continuous blob) that is much bigger than the mean free path. The fluid parcel can move and change its shape, but it is always composed of the same molecules. Another name for a fluid parcel is a material volume.
While a fluid parcel is three-dimensional, we can define two-, one- and zero-dimensional analogues. A material surface is a two-dimensional surface that is always made up of the same molecules. This is a good model for the surface of a lake or ocean. A material line is a one-dimensional curve that bends and twists with the flow but is nonetheless always composed of the same molecules. A material point, more commonly called a fluid particle, is a fluid parcel of infinitesimal size. More specifically, its size is much smaller than the length scales of interest, e.g., the size of a container that bounds the fluid, but still much larger than the mean free path, so that the continuum view makes sense.
|
textbooks/eng/Civil_Engineering/Book%3A_All_Things_Flow_-_Fluid_Mechanics_for_the_Natural_Sciences_(Smyth)/01%3A_Introduction/1.02%3A_What_is_a_fluid%3F_The_continuum_hypothesis.txt
|
We’ll first get ourselves into the right frame of mind by reviewing the basic concepts of linear algebra.1 In the process, we’ll introduce notational conventions that will be used in the rest of the book. In particular, we’ll define an index notation based on the Einstein summation convention, an extremely convenient device for simplifying lengthy calculations. Later, the student will need to be fluent in this notation.
Most of these definitions and facts should be familiar. Highlighted text indicates concepts that are especially important and/or likely to be new to students with the minimal prerequisite background.
• A scalar $a$ is, in the simplest definition, a single number
• A vector $\vec{v}$ is, in the simplest definition, a list of numbers.2
$\text { Column vector } \vec{v}=\left(\begin{array}{c} {v_{1}} \\ {v_{2}} \\ {\vdots} \\ {v_{N}} \end{array}\right), \quad \text { Row vector } \vec{v}=\left(v_{1}, v_{2}, \cdots v_{N}\right)\label{eqn:1}$
In index notation, $v_i$ is the i th component of $\vec{v}$, for $i=1,2,3,··· ,N$.
• Vectors can be added by adding corresponding elements: $\vec{w}=\vec{u}+\vec{v}$, or $w_i = u_i +v_i$, for $i = 1,2,3,···,N$.
• A vector can be multiplied by a scalar: $\vec{u} = a\vec{v}$, or $u_i = av_i$.
• Dot product (or scalar product or inner product): $\vec{u} ·\vec{v} = \sum_{i=1}^N u_iv_i$.
• Einstein summation notation: $\vec{u}·\vec{v} = u_iv_i$; summation over the repeated index is implied. The repeated index is called a dummy index. The Einstein notation is also called index notation.
• The magnitude (or length, or absolute value) of a vector: $|\vec{v}| = \sqrt{\vec{v}·\vec{v}} = \sqrt{v_iv_i} = \sqrt{v^2_i}$.
• A unit vector has magnitude equal to 1. We will identify unit vectors with a hat rather than a vector symbol, e.g., $\hat{e}$, and $|\hat{e}| = 1$.
• Thinking geometrically, the dot product of $\vec{u}$ and $\vec{v}$ can be written in terms of the magnitudes of the two vectors and the angle between them, $\theta: \vec{u}·\vec{v} = |\vec{u}||\vec{v}| \cos\theta$.
• Orthogonal vectors are vectors at right angles to each other, i.e., $\theta = \pm\pi/2$, so $\cos\theta = 0$, so $\vec{u}·\vec{v} = 0$.
• A unit vector in the direction of $\vec{v}$ can be defined as $\hat{e}_v =\vec{v}/|\vec{v}|$. $\hat{e}_v$ is parallel to $\vec{v}$, and $|\hat{e}_v| = 1$.
• Projection: The component of $\vec{u}$ in the direction of $\vec{v}$ is given by $\vec{u}·\hat{e}_v = |\vec{u}|\cos\theta$ (see figure $1$).3
• The cross product of two vectors gives a third vector: $\vec{u}\times\vec{v}=\vec{w}$.
• The magnitude $|\vec{w}|$ is given by $|\vec{u}||\vec{v}||\sin\theta|$, where $\theta$ is the angle separating the two vectors.
• The direction of $\vec{w}$ is perpendicular to both $\vec{u}$ and $\vec{v}$ in the sense specified by the right-hand rule: if right-hand fingers curl from $\vec{u}$ to $\vec{v}$, thumb points to $\vec{w}$.
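The identities above (dot product, magnitude, projection, cross product) can be checked numerically; a minimal sketch with NumPy, using arbitrary example vectors not taken from the text:

```python
import numpy as np

u = np.array([3.0, 0.0, 0.0])
v = np.array([1.0, 1.0, 0.0])

# Dot product: u_i v_i (sum over the repeated index)
dot = np.dot(u, v)

# Magnitude: |v| = sqrt(v_i v_i)
mag_v = np.sqrt(np.dot(v, v))

# Unit vector in the direction of v, and the projection of u onto it:
# u . e_v = |u| cos(theta)
e_v = v / mag_v
proj = np.dot(u, e_v)

# Cross product: w is perpendicular to both u and v (right-hand rule)
w = np.cross(u, v)
assert np.isclose(np.dot(w, u), 0.0) and np.isclose(np.dot(w, v), 0.0)

# |w| = |u||v||sin(theta)|; here theta = 45 degrees
assert np.isclose(np.linalg.norm(w), mag_v * np.linalg.norm(u) * np.sin(np.pi / 4))
```

The same checks work for any pair of 3-component vectors; only the numerical values change.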
Test your understanding of this section by completing exercise 1.
1“Be water, my friend.” The quote from Bruce Lee at the beginning of section 1.2 is not entirely facetious. Mathematics is a game of symbol manipulation, like tic-tac-toe or checkers, but for some reason it is extraordinarily effective at representing physical reality. Because of that, we sometimes set aside our intuitive, visceral understanding of our physical environment in order to focus on those symbols. The danger is that we get lost in the game, forgetting that it’s just a means to an end. To counteract this tendency, spend time watching water. Meditate on it - its inner dynamics, its ebb and flow. Strive to understand it intuitively, without symbols, numbers or words.
2Our definitions of scalars and vectors will become more specific when used in the context of Cartesian coordinates.
3This quantity is more specifically called the scalar projection. Some texts also define the vector projection, which is the scalar projection times the unit vector $\hat{e}_v$. Here we use only the scalar projection, and we call it “the projection” for simplicity.
|
textbooks/eng/Civil_Engineering/Book%3A_All_Things_Flow_-_Fluid_Mechanics_for_the_Natural_Sciences_(Smyth)/02%3A_Review_of_Elementary_Linear_Algebra/2.01%3A_Scalars_and_vectors.txt
|
A matrix $\underset{\sim}{A}$ is a two-dimensional table of numbers, e.g.,
$A_{i j}=\left[\begin{array}{lll} {A_{11}} & {A_{12}} & {A_{13}} \\ {A_{21}} & {A_{22}} & {A_{23}} \end{array}\right]\label{eq:1}$
A general element of the matrix $\underset{\sim}{A}$ can be written as $A_{ij}$, where $i$ and $j$ are labels whose values indicate the address. The first index ($i$, in this case) indicates the row; the second ($j$) indicates the column. A matrix can have any size; the size of the example shown in Equation $\ref{eq:1}$ is denoted $2\times3$. A matrix can also have more than two dimensions, in which case we call it an array.
Below are some basic facts and definitions associated with matrices.
• A vector may be thought of as a matrix with only one column (or only one row).
• Matrices of the same shape can be added by adding the corresponding elements, e.g., $C_{ij} = A_{ij} +B_{ij}.$
• Matrices can also be multiplied: $\underset{\sim}{C}=\underset{\sim}{A} \underset{\sim}{B}$. For example, a pair of $2\times2$ matrices can be multiplied as follows:
$\left[\begin{array}{cc} {A_{11}} & {A_{12}} \\ {A_{21}} & {A_{22}} \end{array}\right]\left[\begin{array}{cc} {B_{11}} & {B_{12}} \\ {B_{21}} & {B_{22}} \end{array}\right]=\left[\begin{array}{cc} {A_{11} B_{11}+A_{12} B_{21}} & {A_{11} B_{12}+A_{12} B_{22}} \\ {A_{21} B_{11}+A_{22} B_{21}} & {A_{21} B_{12}+A_{22} B_{22}} \end{array}\right].\label{eq:2}$
• Using index notation, the matrix multiplication $\underset{\sim}{C}=\underset{\sim}{A}\underset{\sim}{B}$ is expressed as $C_{ij}=A_{ik}B_{kj}\label{eq:3}$
Note that the index $k$ appears twice on the right-hand side. As in our previous definition of the dot product (section 2.1), the repeated index is summed over.
• In Equation $\ref{eq:3}$, $k$ is called a dummy index, while $i$ and $j$ are called free indices. The dummy index is summed over. Note that the dummy index appears only on one side of the equation, whereas each free index appears on both sides. For a matrix equation to be consistent, free indices on the left- and right-hand sides must correspond. Test your understanding of these important distinctions by trying the exercises in (2) and (4).
• Matrix multiplication is not commutative, i.e., $\underset{\sim}{A}\underset{\sim}{B}\neq\underset{\sim}{B}\underset{\sim}{A}$ except in special cases. In index notation, however, $A_{ik}B_{kj}$ and $B_{kj}A_{ik}$ are the same thing. This is because $A_{ik}$, the $i$, $k$ element of the matrix $\underset{\sim}{A}$, is just a number, like 3 or 17, and two numbers can be multiplied in either order. The results are summed over the repeated index $k$ regardless of where $k$ appears.
• The standard form for matrix multiplication is such that the dummy indices are adjacent, as in Equation $\ref{eq:3}$.
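The index form $C_{ij}=A_{ik}B_{kj}$ can be verified by writing the sum over the dummy index as an explicit loop and comparing with a library matrix product; a sketch with arbitrary example matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

# C_ij = A_ik B_kj: i and j are free indices, k is the dummy index.
C = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        for k in range(2):          # explicit sum over the dummy index
            C[i, j] += A[i, k] * B[k, j]

assert np.allclose(C, A @ B)          # agrees with matrix multiplication
assert not np.allclose(A @ B, B @ A)  # multiplication is not commutative
```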
2.2.1 Matrix-vector multiplication
A matrix and a vector can be multiplied in two ways, both of which are special cases of matrix multiplication.
• Right multiplication: $\vec{u}=\underset{\sim}{A}\vec{v}, \; \text{or} \; u_i = A_{ij}v_j.\label{eq:4}$
Note the sum is on the second, or right index of $\underset{\sim}{A}$.
• Left multiplication: $\vec{u}=\vec{v}\underset{\sim}{A}, \; \text{or} \; u_j = v_iA_{ij}. \nonumber$
The sum is now on the first, or left, index of $\underset{\sim}{A}$.
As in other cases of matrix multiplication, $\vec{v}\underset{\sim}{A}\neq\underset{\sim}{A}\vec{v}$, but $v_iA_{ij}=A_{ij}v_i$.
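Right and left multiplication sum over different indices of $\underset{\sim}{A}$, so they generally give different vectors; a short numerical sketch (example values are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
v = np.array([1.0, 1.0])

# Right multiplication: u_i = A_ij v_j (sum on the second index of A)
u_right = A @ v

# Left multiplication: u_j = v_i A_ij (sum on the first index of A)
u_left = v @ A

# The two are different unless A is symmetric
assert not np.allclose(u_right, u_left)

# In index notation the factors commute element-by-element:
# v_i A_ij and A_ij v_i are the same sum.
assert np.allclose(u_left, np.einsum('i,ij->j', v, A))
```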
2.2.2 Properties of square matrices
A square matrix has the same number of elements in each direction, e.g., the matrices appearing in Equation $\ref{eq:2}$. From here on, we will restrict our discussion to square matrices.
• The main diagonal of a square matrix is the collection of elements for which both indices are the same. In figure $1$, for example, the main diagonal is outlined in red.
• The transpose of a matrix is indicated by a superscript “T”, i.e., the transpose of $\underset{\sim}{A}$ is $\underset{\sim}{A}^T$. To find the transpose, flip the matrix about the main diagonal: $A_{ij}^T = A_{ji}$ as shown in figure $1$.
• In a diagonal matrix, only the elements on the main diagonal are nonzero, i.e., $A_{ij}=0$ unless $i=j$.
• The trace is the sum of the elements on the main diagonal, i.e., $\operatorname{Tr}(\underset{\sim}{A}) = A_{ii}$. For the example shown in figure $1$, the trace is equal to 5. Note that $Tr(\underset{\sim}{A}^T)=Tr(\underset{\sim}{A})$.
• For a symmetric matrix, $\underset{\sim}{A}^T=\underset{\sim}{A}$ or $A_{ji}=A_{ij}$. We say that a symmetric matrix is invariant under transposition. Note that a diagonal matrix is automatically symmetric.
• For an antisymmetric matrix, transposition changes the sign: $\underset{\sim}{A}^T= −\underset{\sim}{A}$ or $A_{ji}=−A_{ij}$.
• Identity matrix:
$\delta_{i j}=\left\{\begin{array}{ll} {1} & {\text { if } i=j} \\ {0} & {\text { if } i \neq j} \end{array}\right. \quad \text { e.g. }\left[\begin{array}{ll} {1} & {0} \\ {0} & {1} \end{array}\right]\label{eq:5}$
Multiplying a matrix by $\underset{\sim}{\delta}$ leaves the matrix unchanged:$\underset{\sim}{\delta}\underset{\sim}{A}=\underset{\sim}{A}$. The same is true of a vector: $\underset{\sim}{\delta}\vec{v}=\vec{v}$. The identity matrix is often called $I$.
Exercise $1$
Is an identity matrix diagonal?
What is the trace of a $2\times2$ identity matrix? A $3\times3$ identity matrix?
Is an identity matrix symmetric, antisymmetric, both, or neither?
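One way to check your answers to the exercise numerically; a sketch with NumPy (the matrix $A$ is an arbitrary example):

```python
import numpy as np

I2 = np.eye(2)   # 2x2 identity
I3 = np.eye(3)   # 3x3 identity

# The identity is diagonal: zeroing the off-diagonal entries changes nothing
assert np.allclose(I3, np.diag(np.diag(I3)))

# Trace = sum of the ones on the main diagonal
assert np.trace(I2) == 2 and np.trace(I3) == 3

# Symmetric (I^T = I) but not antisymmetric (I^T != -I)
assert np.allclose(I3.T, I3)
assert not np.allclose(I3.T, -I3)

# Multiplying by the identity leaves matrices and vectors unchanged
A = np.array([[1.0, 2.0], [3.0, 4.0]])
v = np.array([5.0, 6.0])
assert np.allclose(I2 @ A, A) and np.allclose(I2 @ v, v)
```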
• The determinant of $\underset{\sim}{A}$, denoted $\operatorname{det}(\underset{\sim}{A})$, is a scalar property that will be described in detail later (section 15.3.3). For the general $2\times2$ matrix
$\underset{\sim}{A}=\left[\begin{array}{ll} {A_{11}} & {A_{12}} \\ {A_{21}} & {A_{22}} \end{array}\right],\label{eq:6}$
the determinant is
$\operatorname{det}(\underset{\sim}{A})=A_{11} A_{22}-A_{21} A_{12}. \nonumber$
The pattern illustrated in figure $2a$ provides a simple mnemonic. For the general $3\times3$ matrix
$\underset{\sim}{A}=\left[\begin{array}{lll} {A_{11}} & {A_{12}} & {A_{13}} \\ {A_{21}} & {A_{22}} & {A_{23}} \\ {A_{31}} & {A_{32}} & {A_{33}} \end{array}\right], \nonumber$
the determinant is obtained by multiplying diagonals, then adding the products of diagonals oriented downward to the right (red in figure $2b$) and subtracting the products of diagonals oriented upward to the right (blue in figure $2b$):
$\operatorname{det}\left[\begin{array}{lll} {A_{11}} & {A_{12}} & {A_{13}} \\ {A_{21}} & {A_{22}} & {A_{23}} \\ {A_{31}} & {A_{32}} & {A_{33}} \end{array}\right]=A_{11} A_{22} A_{33}+A_{12} A_{23} A_{31}+A_{13} A_{21} A_{32}-A_{31} A_{22} A_{13}-A_{32} A_{23} A_{11}-A_{33} A_{21} A_{12} \nonumber$
The determinant can also be evaluated via expansion of cofactors along a row or column. In this example we expand along the top row:
$\operatorname{det}\left[\begin{array}{lll} {A_{11}} & {A_{12}} & {A_{13}} \\ {A_{21}} & {A_{22}} & {A_{23}} \\ {A_{31}} & {A_{32}} & {A_{33}} \end{array}\right]=A_{11} \times \operatorname{det}\left[\begin{array}{cc} {A_{22}} & {A_{23}} \\ {A_{32}} & {A_{33}} \end{array}\right]-A_{12} \times \operatorname{det}\left[\begin{array}{cc} {A_{21}} & {A_{23}} \\ {A_{31}} & {A_{33}} \end{array}\right]+A_{13} \times \operatorname{det}\left[\begin{array}{cc} {A_{21}} & {A_{22}} \\ {A_{31}} & {A_{32}} \end{array}\right] \nonumber$
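The cofactor expansion along the top row can be checked against a library routine; a sketch with an arbitrary $3\times3$ example:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

def det2(M):
    """2x2 determinant: A11 A22 - A21 A12."""
    return M[0, 0] * M[1, 1] - M[1, 0] * M[0, 1]

# Expansion of cofactors along the top row: each 2x2 block is the
# submatrix left after deleting row 0 and the corresponding column.
d = (A[0, 0] * det2(A[1:, [1, 2]])
     - A[0, 1] * det2(A[1:, [0, 2]])
     + A[0, 2] * det2(A[1:, [0, 1]]))

assert np.isclose(d, np.linalg.det(A))
```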
• Matrix inverse: If $\underset{\sim}{A}\underset{\sim}{B}=\underset{\sim}{\delta}$ and $\underset{\sim}{B}\underset{\sim}{A}=\underset{\sim}{\delta}$ then $\underset{\sim}{B}=\underset{\sim}{A}^{-1}$ and $\underset{\sim}{A}=\underset{\sim}{B}^{-1}$. For the $2\times2$ matrix Equation $\ref{eq:6}$, the inverse is
$\underset{\sim}{A}^{-1}=\frac{1}{\operatorname{det}(\underset{\sim}{A})}\left[\begin{array}{cc} {A_{22}} & {-A_{12}} \\ {-A_{21}} & {A_{11}} \end{array}\right] \nonumber$
Exercise $2$
Multiply this by $\underset{\sim}{A}$ as given in Equation $\ref{eq:6}$ and confirm that you get the identity matrix. Unfortunately, there is no simple formula for the $3\times3$ case.
• Orthogonal matrix: $\underset{\sim}{A}^{-1}=\underset{\sim}{A}^T$.
• Singular matrix: $\operatorname{det}(\underset{\sim}{A})=0$. A singular matrix has no inverse.
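The $2\times2$ inverse formula (and the point that a singular matrix has no inverse) is easy to verify numerically; a sketch with arbitrary example matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
detA = A[0, 0] * A[1, 1] - A[1, 0] * A[0, 1]   # nonzero: A is not singular

# 2x2 inverse: swap the diagonal, negate the off-diagonal, divide by det
Ainv = np.array([[ A[1, 1], -A[0, 1]],
                 [-A[1, 0],  A[0, 0]]]) / detA

# A A^{-1} = A^{-1} A = identity
assert np.allclose(A @ Ainv, np.eye(2))
assert np.allclose(Ainv @ A, np.eye(2))

# A singular matrix has det = 0 (its rows are proportional here)
S = np.array([[1.0, 2.0], [2.0, 4.0]])
assert np.isclose(np.linalg.det(S), 0.0)
```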
Test your understanding of this section by completing exercises 2, 3, 4, 5 and 6. (Exercise 7 is outside the main discussion and can be done any time.)
2.03: Systems of linear equations
• $\vec{u}=\underset{\sim}{A}\vec{v}$, or $\underset{\sim}{A}\vec{v}=\vec{u}$, represents a set of linear equations that can (usually) be solved for $\vec{v}$. If the matrix is square and $\operatorname{det}(\underset{\sim}{A})\neq 0$, then $\vec{v}=\underset{\sim}{A}^{-1}\vec{u}$.
• A homogeneous set of equations has the form $\underset{\sim}{A}\vec{v}=0$, i.e., it has $\vec{v}=0$ as a solution (like homogeneous differential equations). In this case, nonzero solutions for $\vec{v}$ exist only if $\operatorname{det}(\underset{\sim}{A})=0$.
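Both statements can be illustrated numerically; a sketch with an arbitrary nonsingular system:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
u = np.array([5.0, 10.0])

# det(A) != 0, so A v = u has the unique solution v = A^{-1} u
assert not np.isclose(np.linalg.det(A), 0.0)
v = np.linalg.solve(A, u)
assert np.allclose(A @ v, u)

# Homogeneous system A v = 0: with det(A) != 0, only the trivial
# solution v = 0 exists.
v0 = np.linalg.solve(A, np.zeros(2))
assert np.allclose(v0, np.zeros(2))
```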
2.04: Eigenvalues and eigenvectors
Every $M\times M$ square matrix $\underset{\sim}{A}$ has $M$ eigenvalue-eigenvector pairs:
$\underset{\sim}{A} \vec{v}^{(m)}=\lambda^{(m)} \vec{v}^{(m)} \quad(\text { no sum on } m) \label{eq:1}$
The superscript $(m)$ is the “mode number,” an index running from 1 to $M$ that labels the eigenvalue-eigenvector pairs. To find $\lambda^{(m)}$ and $\vec{v}^{(m)}$, $m=1,2,...,M$, rewrite Equation $\ref{eq:1}$ as:
$\underset{\sim}{A}\vec{v}^{(m)}=\lambda^{(m)}\underset{\sim}{\delta}\vec{v}^{(m)}\Rightarrow\quad\left[\underset{\sim}{A}-\lambda^{(m)}\underset{\sim}{\delta}\right] \vec{v}^{(m)}=0\label{eq:2}$
where $\underset{\sim}{\delta}$ is the $M\times M$ identity matrix. This is a homogeneous set of equations, and can therefore only have a nontrivial solution if the determinant is zero:
$\operatorname{det}\left(\underset{\sim}{A}-\lambda^{(m)}\underset{\sim}{\delta}\right)=0\label{eq:3}$
Because $\underset{\sim}{A}$ is an $M\times M$ matrix, Equation $\ref{eq:3}$ can be written as an $M^{th}$ order polynomial for the eigenvalues $\lambda$, referred to as the characteristic polynomial. It has $M$ solutions for $\lambda$. These may be complex, and they are not necessarily distinct. Each particular eigenvalue solution $\lambda^{(m)}$ has an associated non-zero eigenvector $\vec{v}^{(m)}$ that is obtained by substituting the value of $\lambda^{(m)}$ into Equation $\ref{eq:2}$ and solving the resulting set of $M$ equations for the $M$ components of $\vec{v}^{(m)}$.
An important property of Equation $\ref{eq:2}$ is that both sides of the equation can be multiplied by an arbitrary constant $c$ and the equation still holds. So, if $\vec{v}^{(m)}$ is an eigenvector of $\underset{\sim}{A}$ with eigenvalue $\lambda^{(m)}$, then $c\vec{v}^{(m)}$ is also an eigenvector of $\underset{\sim}{A}$ with the same eigenvalue $\lambda^{(m)}$. The eigenvectors are therefore defined only to within an arbitrary scalar multiple. In many cases, we choose the eigenvectors to be unit vectors by making the appropriate choice for $c$.
Interesting tidbits:
• The sum of the eigenvalues equals the trace.
• The product of the eigenvalues equals the determinant.
You will prove these in due course.
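Before proving them, both tidbits are easy to check numerically; a sketch using an arbitrary symmetric $2\times2$ example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])       # symmetric, so the eigenvalues are real

lam, V = np.linalg.eig(A)        # eigenvalues lam[m], eigenvectors V[:, m]

# Each pair satisfies A v = lambda v (no sum on m)
for m in range(2):
    assert np.allclose(A @ V[:, m], lam[m] * V[:, m])

# Sum of eigenvalues = trace; product of eigenvalues = determinant
assert np.isclose(lam.sum(), np.trace(A))
assert np.isclose(lam.prod(), np.linalg.det(A))
```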
|
textbooks/eng/Civil_Engineering/Book%3A_All_Things_Flow_-_Fluid_Mechanics_for_the_Natural_Sciences_(Smyth)/02%3A_Review_of_Elementary_Linear_Algebra/2.02%3A_Matrices.txt
|
• 3.1: Measuring space with Cartesian coordinates
A convenient way to measure space is to assign to each point a label consisting of three numbers, one for each dimension.
• 3.2: Vectors and Scalars
• 3.3: Cartesian Tensors
We have seen how to represent a vector in a rotated coordinate system. Can we do the same for a matrix? The basic idea is to identify a mathematical operation that the matrix represents, then require that it represent the same operation in the new coordinate system. We’ll do this in two ways: first, by seeing the matrix as a geometrical transformation of a vector, and second by seeing it as a recipe for a bilinear product of two vectors.
03: Cartesian Vectors and Tensors
A convenient way to measure space is to assign to each point a label consisting of three numbers, one for each dimension. We begin by choosing a single point to serve as the origin. The location of any point can now be quantified by its position vector, the vector extending from the origin to the point in question. We’ll name this position vector $\vec{x}$.
Next, choose three basis vectors, each beginning at the origin and extending away for some distance. The only real restriction on this choice is that the vectors must not all lie in the same plane. It will be easiest if we choose the basis vectors to be unit vectors, in which case we’ll name them $\hat{e}^{(1)}$, $\hat{e}^{(2)}$ and $\hat{e}^{(3)}$. It’s also easiest if we choose the vectors to be mutually orthogonal:
$\hat{e}^{(i)} \cdot \hat{e}^{(j)}=\delta_{i j}\label{eq:1}$
Finally, it’s easiest if we choose the system to be right-handed, meaning that if you take your right hand and curve the fingers in the direction from $\hat{e}^{(1)}$ to $\hat{e}^{(2)}$, your thumb will point in the direction of $\hat{e}^{(3)}$ (Figure $1$). Interchanging any two basis vectors renders the coordinate system left-handed. This “handedness’’ property is also referred to as parity.
Every position vector $\vec{x}$ can be expressed as a linear combination of the basis vectors:
$\vec{x}=x_{1} \hat{e}^{(1)}+x_{2} \hat{e}^{(2)}+x_{3} \hat{e}^{(3)}=x_{i} \hat{e}^{(i)} . \label{eq:2}$
The component $x_k$ can be isolated by projecting $\vec{x}$ onto $\hat{e}^{(k)}$:
$\vec{x} \cdot \hat{e}^{(k)}=x_{i} \hat{e}^{(i)} \cdot \hat{e}^{(k)}=x_{i} \delta_{i k}=x_{k}, \nonumber$
therefore
$x_{k}=\vec{x} \cdot \hat{e}^{(k)}.\label{eq:3}$
The components of the position vector are called the Cartesian coordinates of the point.
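The expansion $\vec{x}=x_i\hat{e}^{(i)}$ and the recovery of each component by projection, $x_k=\vec{x}\cdot\hat{e}^{(k)}$, can be sketched numerically; here the basis is the standard orthonormal, right-handed one:

```python
import numpy as np

# Orthonormal basis: e^(i) . e^(j) = delta_ij
e = [np.array([1.0, 0.0, 0.0]),
     np.array([0.0, 1.0, 0.0]),
     np.array([0.0, 0.0, 1.0])]
for i in range(3):
    for j in range(3):
        assert np.isclose(np.dot(e[i], e[j]), 1.0 if i == j else 0.0)

# Right-handed: e^(1) x e^(2) = e^(3)
assert np.allclose(np.cross(e[0], e[1]), e[2])

x = np.array([2.0, -1.0, 4.0])   # an arbitrary position vector

# Components recovered by projection: x_k = x . e^(k)
components = [np.dot(x, ek) for ek in e]
assert np.allclose(components, x)

# The position vector is the linear combination x = x_i e^(i)
assert np.allclose(sum(c * ek for c, ek in zip(components, e)), x)
```

With a non-orthogonal basis the projection formula would no longer isolate the components; that is why the orthonormality condition makes everything "easiest."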
A note on terminology
A position vector in a Cartesian coordinate system can be expressed as $\{x_1,\,x_2,\,x_3\}$ or, equivalently, as $\{x,\,y,\,z\}$. The numerical index notation is useful in a mathematical context, e.g., when using the summation convention. In physical applications, it is traditional to use the separate letter labels $\{x,\,y,\,z\}$. In the geosciences, for example, we commonly denote the eastward direction as $x$, northward as $y$ and upward as $z$. (Note that this coordinate system is right-handed.) Similarly, the basis vectors $\{\hat{e}^{(1)},\,\hat{e}^{(2)},\,\hat{e}^{(3)}\}$ are written as $\{\hat{e}^{(x)},\,\hat{e}^{(y)},\,\hat{e}^{(z)}\}$. As we transition from mathematical concepts to real-world applications, we will move freely between these two labeling conventions.
Cartesian geometry on the plane
Advanced concepts are often grasped most easily if we restrict our attention to a two-dimensional plane such that one of the three coordinates is constant, and can therefore be ignored. For example, on the plane $x_3=0$, the position vector at any point can be written as $\vec{x}=x_1 \hat{e}^{(1)} + x_2\hat{e}^{(2)}$, or $\{x, y\}$. Two-dimensional geometry is also a useful first approximation for many natural flows, e.g., large-scale motions in the Earth’s atmosphere.
3.1.1 Matrices as geometrical transformations
When we multiply a matrix $\underset{\sim}{A}$ onto a position vector $\vec{x}$, we transform it into another position vector $\vec{x}^\prime$ with different length and direction (except in special cases). In other words, the point the vector points to moves to a new location. Often, but not always, we can reverse the transformation and recover the original vector. The matrix needed to accomplish this reverse transformation is $\underset{\sim}{A}^{-1}$. To fix these ideas, we’ll now consider a few very simple, two-dimensional examples.
Example $1$
Consider the $2\times 2$ identity matrix $\underset{\sim}{\delta}$. Multiplying a vector by $\underset{\sim}{\delta}$ is a null transformation, i.e., $\vec{x}^\prime =\vec{x}$; the vector is transformed into itself. Not very interesting.
Example $2$
Now consider a slightly less trivial example: the identity matrix multiplied by a scalar, say, 2:
$\underset{\sim}{A}=\left[\begin{array}{ll} {2} & {0} \\ {0} & {2} \end{array}\right].\label{eq:4}$
Solution
Multiplying $\underset{\sim}{A}$ onto some arbitrary vector $\vec{x}$ has the effect of doubling its length but leaving its direction unchanged: $\vec{x}^\prime = 2\vec{x}$. Is this transformation reversible? Suppose I gave you the transformed vector $\vec{x}^\prime$ and the transformation $\underset{\sim}{A}$ that produced it, then asked you to deduce the original vector $\vec{x}$. This would be very easy; you would simply divide $\vec{x}^\prime$ by 2 to get $\vec{x}$.
Next we’ll look at a more interesting example:
Example $3$
$\underset{\sim}{A}=\left[\begin{array}{cc} {2} & {0} \\ {0} & {1 / 2} \end{array}\right] \label{eq:5}$
Solution
Multiplication of $\underset{\sim}{A}$ onto any vector doubles the vector’s first component while halving its second (figure $2a$):
$\text { If } \vec{x}=\left[\begin{array}{l} {x} \\ {y} \end{array}\right], \quad \text { then } \vec{x}^{\prime}=\underset{\sim}{A} \vec{x}=\left[\begin{array}{c} {2 x} \\ {y / 2} \end{array}\right].\label{eq:6}$
In general, the transformation changes both the length and the direction of the vector. There are, however, exceptions that you can now confirm for yourself:
• Show that the length of a vector is not changed if its $y$ component is $\pm$ twice its $x$ component.
• Identify a particular vector $\vec{x}$ whose direction is not changed in this transformation. Now identify another one.
• Compute the eigenvectors of $\underset{\sim}{A}$. You should find that they are simply the basis vectors $\hat{e}^{(x)}$ and $\hat{e}^{(y)}$. In general, any multiple of either of these unit vectors is an eigenvector. Note that this class of vectors is also the class of vectors whose direction is unchanged!1
Because a matrix transformation can be applied to any position vector, it can be thought of as affecting any geometrical shape, or indeed all of space. A simple way to depict the general effect of the matrix is to sketch its effect on the unit circle centered at the origin. In this case the circle is transformed into an ellipse (figure $2b$), showing that the general effect of the matrix is to compress things vertically and expand them horizontally.
Is this transformation reversible? Certainly: just halve the $x$ component and double the $y$ component. As an exercise, write down a matrix that accomplishes this reverse transformation, and show that it is the inverse of $\underset{\sim}{A}$.
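The claims of Example 3 are easy to confirm numerically. The following NumPy sketch applies the matrix of Equation $\ref{eq:5}$ to a vector, checks the length-preserving special case $y = \pm 2x$, verifies that the reverse transformation is $\underset{\sim}{A}^{-1}$, and computes the eigenvectors:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 0.5]])

x = np.array([3.0, 4.0])
xp = A @ x                 # doubles the first component, halves the second
print(xp)                  # [6. 2.]

# The reverse transformation halves x and doubles y; it equals A^{-1}
A_inv = np.array([[0.5, 0.0],
                  [0.0, 2.0]])
assert np.allclose(A_inv, np.linalg.inv(A))
assert np.allclose(A_inv @ xp, x)

# A vector with y = +-2x keeps its length: |x'|^2 = 4x^2 + y^2/4 = x^2 + y^2
v = np.array([1.0, 2.0])
assert np.isclose(np.linalg.norm(A @ v), np.linalg.norm(v))

# The eigenvectors are the basis vectors, whose directions are unchanged
vals, vecs = np.linalg.eig(A)
print(vals)                # [2.  0.5]
```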
Example $4$
Now consider
$\underset{\sim}{A}=\left[\begin{array}{ll} {1} & {0} \\ {0} & {0} \end{array}\right].\label{eq:7}$
Solution
This transformation, applied to a vector, leaves the $x$ component unchanged but changes the $y$ component to zero. The vector is therefore projected onto the $x$ axis (figure $3a$). Applied to a general shape, the transformation squashes it flat (figure $3b$). Does this transformation change the length and direction of the vector? Yes, with certain exceptions which the reader can deduce.
Is the transformation reversible? No! All information about vertical structure is lost. Verify that the determinant $|\underset{\sim}{A}|$ is zero, i.e., that the matrix has no inverse.
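The irreversibility can be checked directly; a short NumPy sketch shows the projection, the vanishing determinant, and the failure of matrix inversion:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.0]])

x = np.array([3.0, 4.0])
print(A @ x)               # [3. 0.]  -- projected onto the x axis

# The transformation is not reversible: the determinant vanishes
print(np.linalg.det(A))    # 0.0
try:
    np.linalg.inv(A)
except np.linalg.LinAlgError:
    print("singular matrix: no inverse exists")
```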
Test your understanding by completing exercises (8) and (9).
Example $5$
As a final example, consider
$\underset{\sim}{A}=\left[\begin{array}{cc} {0} & {1} \\ {-1} & {0} \end{array}\right].\label{8}$
Solution
The matrix transforms
$\left[\begin{array}{c} {x} \\ {y} \end{array}\right] \text { to }\left[\begin{array}{c} {y} \\ {-x} \end{array}\right]. \nonumber$
The length of the vector is unchanged (check), and the transformed vector is orthogonal to the original vector (also check). In other words, the vector has been rotated through a $90^\circ$ angle. In fact, it is generally true that an antisymmetric matrix rotates a vector through $90^\circ$. To see this, apply a general antisymmetric matrix $\underset{\sim}{A}$ to an arbitrary vector $\vec{u}$, then dot the result with $\vec{u}$:
\begin{aligned} (\underset{\sim}{A} \vec{u}) \cdot \vec{u} = A_{i j} u_{j} u_{i} &= -A_{j i} u_{j} u_{i} & (\text {using antisymmetry}) \\ &=-A_{i j} u_{i} u_{j} & (\text {relabeling } i \leftrightarrow j) \\ &=-A_{i j} u_{j} u_{i} . & (\text {reordering}) \end{aligned} \nonumber
The result is equal to its own negative and must therefore be zero. We conclude that the transformed vector is orthogonal to the original vector.2
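Both checks suggested in Example 5, and the general orthogonality result just proved, can be verified numerically. The sketch below tests the $2\times2$ matrix of the example and then an arbitrary antisymmetric $3\times3$ matrix:

```python
import numpy as np

# The antisymmetric matrix of Example 5
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

u = np.array([3.0, 4.0])
up = A @ u
print(up)                              # [ 4. -3.]

# Length unchanged, and the result is orthogonal to the original
assert np.isclose(np.linalg.norm(up), np.linalg.norm(u))
assert np.isclose(np.dot(up, u), 0.0)

# (A u) . u = 0 for any antisymmetric A and any u
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
B = B - B.T                            # make B antisymmetric: B_ij = -B_ji
w = rng.standard_normal(3)
assert np.isclose((B @ w) @ w, 0.0)
```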
3.1.2 Coordinate rotations and the principle of relativity
It is an essential principle of science that the reality you and I experience exists independently of us. If this were not true, science would be a waste of time, because an explanation of your reality would not work on mine.
Is there any way to prove this principle? No, but we can test it by comparing our experiences of reality. For example, if you measure a force and get the vector $\vec{F}$, and if I measure the same force, I expect to get the same result. Now, we know from the outset that this will not be quite true, because we observe from different perspectives. For example, if you see $\vec{F}$ pointing to the right, I may see it pointing to the left, depending on where I’m standing. For a meaningful comparison, we must take account of these expected differences in perspective. Let’s call the force vector that I measure $\vec{F}^\prime$. The components of this vector will have different values than the ones you measured ($\vec{F}$), but if we can somehow translate from your perspective to mine, the vectors should be the same. We expect this because the force is a physically real entity that exists independently of you and me.
Now imagine that this force acts on a mass m and we each measure the resulting acceleration: you get $\vec{a}$ and I get $\vec{a}^\prime$. As with the force, the components of $\vec{a}^\prime$ will differ from those of $\vec{a}$, but if we correct for the difference in perspective we should find that $\vec{a}^\prime$ and $\vec{a}$ represent the same acceleration.
Now suppose further that we are making these measurements in order to test Newton’s second law of motion. If that law is valid in your reference frame, then the force and acceleration you measure should be related by $\vec{F}=m\vec{a}$. If the law is also valid in my reference frame then I should get $\vec{F}^\prime=m\vec{a}^\prime$. This leads us to the relativity principle, attributed to Galileo: The laws of physics are the same for all observers. Just as with other kinds of laws, if a law of physics works for you but not for me, then it is not a very good law.
Our goal here is to deduce physical laws that describe the motion of fluids. The selection of possible hypotheses (candidate laws) that we could imagine is infinite. How do we determine which one is valid? To begin with, we can save ourselves a great deal of trouble if we consider only hypotheses that are consistent with the relativity principle. To do this, we must first have a mathematical language for translating between different reference frames. In particular, we need to be able to predict the effect of a coordinate rotation on the components of any vector, or of any other quantity we may want to work with.
3.1.3 The rotation matrix
Suppose that, having defined an orthogonal, right-handed set of basis vectors $\hat{e}^{(i)},i = 1,2,3$, we switch to a new set, $\hat{e}^{\prime(i)}$ (figure $5$). Each of the new basis vectors can be written as a linear combination of the original basis vectors in accordance with Equation $\ref{eq:2}$. For example, $\hat{e}^{\prime(1)}$ can be written as
$\hat{e}^{\prime (1)} =C_{i 1} \hat{e}^{(i)} \nonumber$
This is analogous to Equation $\ref{eq:2}$, but in this case the coefficients of the linear combination have been written as one column of a $3\times3$ matrix $\underset{\sim}{C}$. Doing the same with the other two basis vectors ($\hat{e}^{\prime(2)}$ and $\hat{e}^{\prime(3)}$ ) yields the other two columns:
$\hat{e}^{\prime(j)}=C_{i j} \hat{e}^{(i)}, \quad j=1,2,3\label{eq:9}$
To rephrase Equation $\ref{eq:9}$, $\underset{\sim}{C}$ is composed of the rotated basis vectors, written as column vectors and set side-by-side:
$\underset{\sim}{C}=\left[\begin{array}{ccc} {\hat{e}^{\prime(1)}} & {\hat{e}^{\prime(2)}} & {\hat{e}^{\prime(3)}} \end{array}\right]\label{eq:10}$
Now suppose that the new basis vectors have been obtained simply by rotating the original basis vectors about some axis.3 In this case, both the lengths of the basis vectors and the angles between them should remain the same. In other words, the new basis vectors, like the old ones, are orthogonal unit vectors: $\hat{e}^{\prime(i)}\cdot\hat{e}^{\prime(j)}=\delta_{ij}$. This requirement restricts the forms that $\underset{\sim}{C}$ can take. Substituting Equation $\ref{eq:9}$, we have:
$\hat{e}^{\prime(i)}\cdot\hat{e}^{\prime(j)}=C_{ki}\hat{e}^{(k)}\cdot C_{lj}\hat{e}^{(l)}=C_{ki}C_{lj}\hat{e}^{(k)}\cdot\hat{e}^{(l)}=C_{ki}C_{lj}\delta_{kl}=C_{li}C_{lj}=C_{il}^TC_{lj}=\delta_{ij}. \nonumber$
The final equality is equivalent to
$\underset{\sim}{C}^T=\underset{\sim}{C}^{-1}\label{eq:11}$
i.e., $\underset{\sim}{C}$ is an orthogonal matrix.4
Recall also that the original basis vectors form a right-handed set. Do the rotated basis vectors share this property? In section 15.3.3, it is shown that the determinant of an orthogonal matrix equals $\pm1$. Moreover, if $|\underset{\sim}{C}|=-1$, then $\underset{\sim}{C}$ represents an improper rotation: the coordinate system undergoes both a rotation and a parity switch, from right-handed to left-handed. This is not usually what we want. So, if $\underset{\sim}{C}$ is orthogonal and its determinant equals +1, we say that $\underset{\sim}{C}$ represents a proper rotation.
To reverse a coordinate rotation, we simply use the inverse of the rotation matrix, $\underset{\sim}{C}^{-1}$, or $\underset{\sim}{C}^T$:
$\hat{e}^{(j)}=C_{j i} \hat{e}^{\prime(i)}, \quad j=1,2,3 \label{eq:12}$
Comparing Equation $\ref{eq:12}$ with Equation $\ref{eq:9}$, we see that, on the right-hand side, the dummy index is in the first position for the forward rotation and in the second position for the reverse rotation.
Example $6$
Suppose we want to rotate the coordinate frame around the $\hat{e}^{(1)}$ axis by an angle $\phi$. Referring to figure $6$, we can express the rotated basis vectors using simple trigonometry:
$\hat{e}^{\prime(1)}=\hat{e}^{(1)}=\left[\begin{array}{l} {1} \\ {0} \\ {0} \end{array}\right] ; \quad \hat{e}^{\prime(2)}=\left[\begin{array}{c} {0} \\ {\cos \phi} \\ {\sin \phi} \end{array}\right] ; \quad \hat{e}^{\prime(3)}=\left[\begin{array}{c} {0} \\ {-\sin \phi} \\ {\cos \phi} \end{array}\right]. \nonumber$
Solution
Now, in accordance with Equation $\ref{eq:10}$, we simply place these column vectors side-by-side to form the rotation matrix:
$\underset{\sim}{C}=\left[\begin{array}{ccc} {1} & {0} & {0} \\ {0} & {\cos \phi} & {-\sin \phi} \\ {0} & {\sin \phi} & {\cos \phi} \end{array}\right].\label{eq:13}$
Looking closely at Equation $\ref{eq:13}$, convince yourself that the following are properties of $\underset{\sim}{C}$:
• $\underset{\sim}{C}^T\underset{\sim}{C}=\underset{\sim}{\delta}$, i.e., $\underset{\sim}{C}$ is orthogonal.
• $|\underset{\sim}{C}|=1$, i.e., $\underset{\sim}{C}$ represents a proper rotation.
• Changing the sign of $\phi$ produces the transpose (or, equivalently, the inverse) of $\underset{\sim}{C}$, as you would expect.
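All three properties can be confirmed with a few lines of NumPy; the helper function below simply builds the matrix of Equation $\ref{eq:13}$:

```python
import numpy as np

def rotation_about_x(phi):
    """Rotation matrix C of Equation (13): its columns are the rotated basis vectors."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

C = rotation_about_x(0.3)

# C is orthogonal: C^T C = identity
assert np.allclose(C.T @ C, np.eye(3))

# proper rotation: det C = +1
assert np.isclose(np.linalg.det(C), 1.0)

# reversing the angle gives the transpose (equivalently, the inverse)
assert np.allclose(rotation_about_x(-0.3), C.T)
```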
Test your understanding by completing exercise 10.
1Generalizing from this example, we could propose that an eigenvector is a vector which, when multiplied by the matrix, maintains its direction. There is a common exception to this, though, and that is when the eigenvalues or eigenvectors are complex. In that case, geometric interpretation of the eigenvectors becomes, well, complex.
2Be sure you understand this proof; there’ll be many others like it.
3The axis need not be a coordinate axis; any line will do.
4We now see why a matrix with this property is called “orthogonal”; orthogonal vectors remain orthogonal after transformation by such a matrix.
3.2.1 The position vector in a rotated coordinate system
The position vector $\vec{x}$ that labels a point in space (e.g., the blue vector in figure 3.1.1) is a physical entity that exists independently of any observer and of any coordinate frame used to measure it. If you use the basis vectors $\hat{e}$ and I use $\hat{e}^\prime$, then $\vec{x}$ can be expressed as a linear combination of either set:
$\vec{x}=x_{i} \hat{e}^{(i)}=x_{i}^{\prime} \hat{e}^{\prime(i)}.\label{eq:1}$
Now suppose that our coordinate frames are related by a rotation matrix $\underset{\sim}{C}$. If we know the components $x_i$ that you measure and the rotation matrix $\underset{\sim}{C}$, can we predict the components $x^\prime_i$ that I will measure? We will do this by solving Equation $\ref{eq:1}$ for the components of the vector $\vec{x}^\prime$. Start with the second equality of Equation $\ref{eq:1}$:
$x_{i}^{\prime} \hat{e}^{\prime(i)}=x_{i} \hat{e}^{(i)}. \nonumber$
How do we solve this equality for $x^\prime_i$? If $\hat{e}^{\prime(i)}$ were a scalar, we would simply divide it out of both sides, but division by a vector makes no sense. Instead, we dot both sides with $\hat{e}^{\prime(j)}$:
$x_{i}^{\prime} \hat{e}^{\prime(i)} \cdot \hat{e}^{\prime(j)}=x_{i} \hat{e}^{(i)} \cdot \hat{e}^{\prime(j)}. \nonumber$
Now use Equation 3.1.12 on the right-hand side,1
\begin{aligned} x_{i}^{\prime} \underbrace{\hat{e}^{\prime(i)} \cdot \hat{e}^{\prime(j)}}_{=\delta_{i j}} &=x_{i} \underbrace{\hat{e}^{(i)} \cdot \hat{e}^{(k)}}_{=\delta_{i k}} C_{k j} \\ x_{i}^{\prime} \delta_{i j} &=x_{i} \delta_{i k} C_{k j}. \end{aligned} \nonumber
Finally, multiplying out the deltas, we have
$x_{j}^{\prime}=x_{i} C_{i j}.\label{eq:2}$
This result is similar to the transformation rule Equation 3.1.12 for the basis vectors, though its meaning is entirely different. In each case, the dummy index is in the first position of the rotation matrix. The relation Equation $\ref{eq:2}$ can also be written in vector notation:
$\vec{x}^{\prime}=\vec{x} \underset{\sim}{C}, \quad \text { or } \quad \vec{x}^{\prime}=\underset{\sim}{C}^{T} \vec{x}. \nonumber$
Now consider the reverse transformation, i.e., my measurement into yours. Multiplying both sides of Equation $\ref{eq:2}$ by $C_{kj}$, we have
$x_{j}^{\prime} C_{k j}=x_{i} C_{i j} C_{k j}=x_{i} C_{i j} C_{j k}^{T}=x_{i} \delta_{i k}=x_{k} \nonumber$
and therefore
$x_{j}=x_{i}^{\prime} C_{j i}.\label{eq:4}$
Compare this with Equation $\ref{eq:2}$. The letters used to label the indices do not matter. The important distinction is that, for the reverse transformation, the dummy index is in the second position whereas for the forward transformation it is in the first. As with Equation $\ref{eq:2}$, there is an equivalent expression in vector form:
$\vec{x}=\vec{x}^{\prime} \underset{\sim}{C}^{T}, \quad \text { or } \quad \vec{x}=\underset{\sim}{C} \vec{x}^{\prime}.\label{eq:5}$
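The forward and reverse transformations can be checked numerically. The sketch below uses the rotation about $\hat{e}^{(1)}$ from the previous section as the matrix $\underset{\sim}{C}$ and verifies that rotating forward and then back recovers the original components, and that the vector's length (a property of the physical vector itself) is the same in both frames:

```python
import numpy as np

# Rotation of the basis about e1 by phi (Equation 3.1.13)
phi = 0.7
c, s = np.cos(phi), np.sin(phi)
C = np.array([[1.0, 0.0, 0.0],
              [0.0,   c,  -s],
              [0.0,   s,   c]])

x = np.array([1.0, 2.0, 3.0])

# Forward transformation: x'_j = x_i C_ij, i.e. x' = C^T x
xp = C.T @ x

# Reverse transformation: x_j = x'_i C_ji, i.e. x = C x'
assert np.allclose(C @ xp, x)

# The physical vector is unchanged: its length is the same in both frames
assert np.isclose(np.linalg.norm(xp), np.linalg.norm(x))
```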
3.2.2 Differentiating the position vector
In a Cartesian coordinate system $\{x,\,y\}$, what is $\partial y/\partial x$? This is just the change in $y$ as we move in the $x$-direction, i.e., zero. In contrast, $\partial x /\partial x = 1$. In general, for a 3-dimensional system, we can write:
$\frac{\partial x_{j}}{\partial x_{i}}=\delta_{i j}.\label{eq:6}$
Now imagine rotating the system using rotation matrix $\underset{\sim}{C}$. How rapidly does the $j^{th}$ coordinate in the rotated system change if we move along the $i^{th}$ coordinate axis in the original system? First, expand Equation $\ref{eq:2}$ as
$x_{j}^{\prime}=x_{1} C_{1 j}+x_{2} C_{2 j}+x_{3} C_{3 j}. \nonumber$
We can easily differentiate $x^\prime_j$ with respect to any of the un-primed coordinates. For example:
$\frac{\partial x_{j}^{\prime}}{\partial x_{1}}=C_{1 j}. \nonumber$
More generally:
$\frac{\partial x_{j}^{\prime}}{\partial x_{i}}=C_{i j}.\label{eq:7}$
3.2.3 Defining Cartesian vectors and scalars
We now establish a more precise definition of a vector using the position vector as a prototype.
Definition: Vector
A vector is a quantity possessed of direction and magnitude independent of the observer. A vector must therefore transform in the same way as the position vector:
$v_{j}^{\prime}=v_{i} C_{i j} ; \quad v_{j}=v_{i}^{\prime} C_{j i}.\label{eq:8}$
Before referring to any quantity as a vector, we should check to see that it satisfies this criterion.
Examples:
• Consider the velocity of a moving point: $u_{i}=\frac{d}{d t} x_{i}.\label{eq:9}$ In rotated coordinates, $u_{j}^{\prime}=\frac{d}{d t} x_{j}^{\prime}=\frac{d}{d t}\left(x_{i} C_{i j}\right)=\frac{d x_{i}}{d t} C_{i j}=u_{i} C_{i j}. \nonumber$ This confirms that the velocity transforms in the same way as the position vector, and therefore that it qualifies as a vector.2
• The wave vector of a plane wave transforms as a vector. Consider a surface gravity wave whose surface elevation is given by $\eta=\eta^0\cos(\vec{k}\cdot\vec{x}-\omega t)$. The phase function $\vec{k}\cdot\vec{x}-\omega t$ is a scalar; for example, the crest of a wave is a real entity that looks like the crest to every observer. Therefore $\vec{k}\cdot\vec{x}$ must be a scalar. Defining that scalar as $D$, we can write: $D^{\prime}=k_{i}^{\prime} x_{i}^{\prime}=k_{i}^{\prime} x_{j} C_{j i}=\underbrace{k_{i}^{\prime} C_{j i}}_{k_{j}}x_{j}=D, \nonumber$ which requires that $k_j = k^\prime_iC_{ji}$.
Counterexample: The phase velocity of a plane wave sounds like a vector. It has three components, and we usually use “velocity’’ rather than “speed’’ to differentiate the vector from the scalar. However, the definition of phase velocity is inextricably linked to a particular coordinate system. From the inverse relationship between phase velocity and wave vector, $c_i = \omega/k_i$, it should be clear that $c_i$ does not transform according to Equation $\ref{eq:2}$. If this is not clear, try it.
Definition: Scalar
A scalar is a single number that is the same in every reference frame.
Examples:
• Temperature is a scalar. You and I can look from different angles and we will still perceive the same temperature.
• The dot product of two vectors is a scalar. To see this, write the dot product of $\vec{x}$ and $\vec{y}$ as $D=x_iy_i$. Then ask what the value of $D$ would be in a rotated reference frame:
\begin{aligned} D^{\prime}=x_{i}^{\prime} y_{i}^{\prime} &=\overbrace{x_{j} C_{j i}}^{x_{i}^{\prime}} \overbrace{y_{k} C_{k i}}^{y_{i}^{\prime}} \\ &=x_{j} y_{k} C_{j i} C_{k i} \\ &=x_{j} y_{k} C_{j i} C_{i k}^{T} \\ &=x_{j} y_{k} \delta_{j k}=x_{j} y_{j}=D. \end{aligned} \nonumber
Counterexample: A single element of a vector, while consisting of a single number, is not a scalar. For example, the x-coordinate of a point in space differs, obviously, in different coordinate systems.
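Both the example and the counterexample can be demonstrated with a short NumPy sketch; the rotation about $\hat{e}^{(3)}$ used here is an arbitrary choice:

```python
import numpy as np

# A proper rotation about e3 (columns are the rotated basis vectors)
phi = 1.1
c, s = np.cos(phi), np.sin(phi)
C = np.array([[c,  -s, 0.0],
              [s,   c, 0.0],
              [0.0, 0.0, 1.0]])

x = np.array([1.0, -2.0, 0.5])
y = np.array([3.0, 0.0, 4.0])

# Components in the rotated frame: x'_j = x_i C_ij
xp = np.einsum('i,ij->j', x, C)
yp = np.einsum('i,ij->j', y, C)

# The dot product is a scalar: the same number in both frames
assert np.isclose(np.dot(xp, yp), np.dot(x, y))

# A single component, by contrast, is not a scalar
print(x[0], xp[0])   # generally different values
```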
1Compare with (3.1.9) and note that the dummy index i has been relabeled as k to avoid conflict with the dummy index i that is already in use. Also, the order of the basis vector and the rotation matrix has been reversed, as is always permissible.
2The velocity components $\{u_1,\,u_2,\,u_3\}$ are commonly replaced by $\{u,\,v,\,w\}$.
We have seen how to represent a vector in a rotated coordinate system. Can we do the same for a matrix? The basic idea is to identify a mathematical operation that the matrix represents, then require that it represent the same operation in the new coordinate system. We’ll do this in two ways: first, by seeing the matrix as a geometrical transformation of a vector, and second by seeing it as a recipe for a bilinear product of two vectors.
3.3.1 Derivation #1: preserving geometrical transformations
In section 3.1.1, we saw how a matrix can be regarded as a geometric transformation that acts on any vector or set of vectors (such as those that terminate on the unit circle). Look carefully at figure $1a$. Originally spherical, the ball is compressed in the $\hat{e}^{(3)}$ direction to half its original radius, and is therefore expanded in the $\hat{e}^{(1)}$ and $\hat{e}^{(2)}$ directions by a factor $\sqrt{2}$ (so that the volume is unchanged). The strain, or change of shape, can be expressed by the matrix transformation $\underset{\sim}{A}$:
$\underset{\sim}{A}=\left[\begin{array}{ccc} \sqrt{2} & 0 & 0 \\ 0 & \sqrt{2} & 0 \\ 0 & 0 & 1 / 2 \end{array}\right] \nonumber$
Note that $\det(\underset{\sim}{A})=1$.
Now suppose we rotate our coordinate system by 90° about $\hat{e}^{(1)}$, so that $\hat{e}^{\prime(3)}= −\hat{e}^{(2)}$ and $\hat{e}^{\prime(2)}=\hat{e}^{(3)}$ (figure $1b$). In the new coordinate system, the compression is aligned in the $\hat{e}^{\prime(2)}$ direction. We therefore expect that the matrix will look like this:
$\underset{\sim}{A^{\prime}}=\left[\begin{array}{ccc} \sqrt{2} & 0 & 0 \\ 0 & 1 / 2 & 0 \\ 0 & 0 & \sqrt{2} \end{array}\right]\label{eq:1}$
The compression represented by $\underset{\sim}{A^\prime}$ is exactly the same process as that represented by $\underset{\sim}{A}$, but the numerical values of the matrix elements are different because the process is measured using a different coordinate system.
Can we generalize this result? In other words, given an arbitrary geometrical transformation $\underset{\sim}{A}$, can we find the matrix $\underset{\sim}{A^\prime}$ that represents the same transformation in an arbitrarily rotated coordinate system? Suppose that $\underset{\sim}{A}$ transforms a general vector $\vec{u}$ into some corresponding vector $\vec{v}$:
$A_{i j} u_{j}=v_{i}\label{eq:2}$
Now we require that the same relationship be valid in an arbitrary rotated coordinate system:
$A_{i j}^{\prime} u_{j}^{\prime}=v_{i}^{\prime}\label{eq:3}$
What must $\underset{\sim}{A^\prime}$ look like? To find out, we start with Equation \ref{eq:2} and substitute the reverse rotation formula 3.2.4 for $\vec{u}$ and $\vec{v}$:
$A_{i j} \underbrace{u_{l}^{\prime} C_{j l}}_{u_{j}}=\underbrace{v_{k}^{\prime} C_{i k}}_{v_{i}}\label{eq:4}$
Now we try to make Equation $\ref{eq:4}$ look like Equation $\ref{eq:3}$ by solving for $\vec{v}^\prime$. Naively, you might think that you could simply divide both sides by the factor multiplying $v^\prime_k$, namely $C_{ik}$. This doesn’t work because, although the right hand side looks like a single term, it is really a sum of three terms with $k = 1,2,3$, and $C_{ik}$ has a different value in each. But instead of dividing by $\underset{\sim}{C}$, we can multiply by its inverse. To do this, multiply both sides of Equation $\ref{eq:4}$ by $C_{im}$:
\begin{aligned} A_{i j} u_{l}^{\prime} C_{j l} C_{i m} &=v_{k}^{\prime} C_{i k} C_{i m} \\ A_{i j} u_{l}^{\prime} C_{j l} C_{i m} &=v_{k}^{\prime} \underbrace{C_{k i}^{T} C_{i m}}_{\delta_{k m}} \\ A_{i j} C_{i m} C_{j l} u_{l}^{\prime} &=v_{m}^{\prime}. \end{aligned} \nonumber
So we have successfully solved for the elements of $\vec{v}^\prime$.
Now if Equation $\ref{eq:3}$ holds, we can replace the right-hand side with $A^\prime_{ml}u^\prime_l$:
$A_{i j} C_{i m} C_{j l} u_{l}^{\prime}=A_{m l}^{\prime} u_{l}^{\prime} \nonumber$
or
$\left(A_{i j} C_{i m} C_{j l}-A_{m l}^{\prime}\right) u_{l}^{\prime}=0. \nonumber$
This relation must be valid not just for a particular vector $\vec{u}$ but for every vector $\vec{u}$, and that can only be true if the quantity in parentheses is identically zero. (Make sure you understand that last statement; it will come up again.)
The transformation rule for the matrix $\underset{\sim}{A}$ is therefore (after a minor relabelling of indices):
$A_{i j}^{\prime}=A_{k l} C_{k i} C_{l j}\label{eq:5}$
This transformation law can also be written in matrix form as
$\underset{\sim}{A}^{\prime}=\underset{\sim}{C}^{T} \underset{\sim}{A} \underset{\sim}{C}\label{eq:6}$
There is also a reverse transformation
$A_{i j}=A_{k l}^{\prime} C_{i k} C_{j l}\label{eq:7}$
As in the transformation of vectors, the dummy index on $\underset{\sim}{C}$ is in the first position for the forward transformation, in the second position for the reverse transformation.
Exercise: Use this law to check Equation $\ref{eq:1}$.
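One way to work this exercise is numerically. The sketch below builds the rotation matrix for the 90° rotation about $\hat{e}^{(1)}$ described above (its columns are the rotated basis vectors, per Equation $\ref{eq:10}$ of the previous section) and confirms that the transformation law reproduces Equation $\ref{eq:1}$:

```python
import numpy as np

# Strain matrix of figure 1a: compression along e3, expansion along e1 and e2
A = np.diag([np.sqrt(2.0), np.sqrt(2.0), 0.5])

# Rotate the frame 90 degrees about e1: e'2 = e3, e'3 = -e2.
# The columns of C are the rotated basis vectors.
C = np.array([[1.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])

# Transformation law: A'_ij = A_kl C_ki C_lj, i.e. A' = C^T A C
Ap = C.T @ A @ C
print(Ap)
assert np.allclose(Ap, np.diag([np.sqrt(2.0), 0.5, np.sqrt(2.0)]))

# The index form of the law gives the same result
Ap_index = np.einsum('kl,ki,lj->ij', A, C, C)
assert np.allclose(Ap_index, Ap)
```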
Definition
A 2nd-order tensor is a matrix that transforms according to Equation $\ref{eq:5}$.
3.3.2 Tensors in the laws of physics
In the above derivation of the matrix transformation formula $\ref{eq:5}$, we thought of the vector $\vec{u}$ as a position vector identifying, for example, a point on the surface of a soccer ball. Likewise, $\vec{v}$ is the same point after undergoing the geometrical transformation $\underset{\sim}{A}$, as described by Equation $\ref{eq:2}$. We then derived Equation $\ref{eq:5}$ by assuming that the geometrical relationship between the vectors $\vec{u}$ and $\vec{v}$, as represented by $\underset{\sim}{A}$, be the same in a rotated coordinate system, i.e., Equation $\ref{eq:4}$. Now suppose that the vectors $\vec{u}$ and $\vec{v}$ are not position vectors but are instead some other vector quantities such as velocity or force, and Equation $\ref{eq:2}$ represents a physical relationship between those two vector quantities. If this relationship is to be valid for all observers (as discussed in section 3.1.2), then the matrix $\underset{\sim}{A}$ must transform according to Equation $\ref{eq:5}$.
A simple example is the rotational form of Newton’s second law of motion:
$\vec{T}=\underset{\sim}{I} \vec{\alpha}\label{eq:8}$
Here, $\vec{\alpha}$ represents the angular acceleration of a spinning object (the rate at which its spinning motion accelerates). If $\vec{\alpha}$ is multiplied by the matrix $\underset{\sim}{I}$, called the moment of inertia, the result is $\vec{T}$, the torque (rotational force) that must be applied to create the angular acceleration. If the process is viewed by two observers using different coordinate systems, their measurements of $\underset{\sim}{I}$ are related by Equation $\ref{eq:5}$, as is shown in appendix B. If this were not true, Equation $\ref{eq:8}$ would be useless as a law of physics and would have been discarded long ago.
The number of mathematical relationships that might conceivably exist between physical quantities is infinite, and a theoretical physicist must have some way to identify those relationships that might actually be true before spending time and money testing them in the lab. The requirement that physical laws be the same for all observers, as exemplified in Equation $\ref{eq:5}$, plays this role. It was used extensively in Einstein’s derivation of the theory of relativity, and the modern form of the theory was inspired by that success. It is equally useful in the development of the laws of fluid dynamics as we will see shortly.
3.3.3 Derivation #2: preserving bilinear products
The dot product of two vectors $\vec{u}$ and $\vec{v}$ is an example of a bilinear product:
$D(\vec{u}, \vec{v})=u_{1} v_{1}+u_{2} v_{2}+u_{3} v_{3}.\label{eq:9}$
We call it “bilinear” because it is linear in each argument separately, e.g., $D(\vec{a}+\vec{b},\vec{c})=D(\vec{a},\vec{c})+D(\vec{b},\vec{c})$.
With a little imagination we can invent many other bilinear products, such as
$u_{1} v_{2}-u_{2} v_{3}+2 u_{3} v_{2}.\label{eq:10}$
Such products can be written in the compact form
$A_{i j} u_{i} v_{j}. \nonumber$
For the dot product Equation $\ref{eq:9}$, the matrix $\underset{\sim}{A}$ is just $\underset{\sim}{\delta}$. In the second example Equation $\ref{eq:10}$, the matrix is
$\underset{\sim}{A}=\left[\begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & -1 \\ 0 & 2 & 0 \end{array}\right]. \nonumber$
Exercise $1$
Write out $A_{ij}u_iv_j$ and verify that the result is Equation $\ref{eq:10}$.
So, every matrix can be thought of as the recipe for a bilinear product. Now suppose we rotate to a new coordinate system. How would the matrix $\underset{\sim}{A}$ have to change so that the bilinear product it represents remains the same (i.e., is a scalar)? The following must be true:
$A_{i j}^{\prime} u_{i}^{\prime} v_{j}^{\prime}=A_{k l} u_{k} v_{l} \nonumber$
As in section 3.3.1, we substitute for $\vec{u}$ and $\vec{v}$ using the reverse rotation formula 3.2.4:
$A_{i j}^{\prime} u_{i}^{\prime} v_{j}^{\prime}=A_{k l} u_{i}^{\prime} C_{k i} v_{j}^{\prime} C_{l j}. \nonumber$
Collecting like terms, we can rewrite this as
$\left(A_{i j}^{\prime}-A_{k l} C_{k i} C_{l j}\right) u_{i}^{\prime} v_{j}^{\prime}=0 \nonumber$
If this is to be true for every pair of vectors $\vec{u}$ and $\vec{v}$, the quantity in parentheses must vanish. This leads us again to the tensor transformation law Equation $\ref{eq:5}$.
Example $1$
A dyad is a matrix made up of the components of two vectors: $A_{ij}=u_iv_j$. Is the dyad a tensor?
Solution
$A_{i j}^{\prime}=u_{i}^{\prime} v_{j}^{\prime}=u_{k} C_{k i} v_{l} C_{l j}=u_{k} v_{l} C_{k i} C_{l j}=A_{k l} C_{k i} C_{l j} \nonumber$
The answer is yes, the dyad transforms according to Equation $\ref{eq:5}$ and therefore qualifies as a tensor.
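The same conclusion can be reached numerically: form the dyad from rotated vector components, and compare with the result of applying the tensor transformation law to the original dyad. An arbitrary rotation about $\hat{e}^{(3)}$ is used here:

```python
import numpy as np

# A proper rotation about e3
phi = 0.4
c, s = np.cos(phi), np.sin(phi)
C = np.array([[c,  -s, 0.0],
              [s,   c, 0.0],
              [0.0, 0.0, 1.0]])

u = np.array([1.0, 2.0, -1.0])
v = np.array([0.5, 0.0, 3.0])

# The dyad A_ij = u_i v_j in the original frame
A = np.outer(u, v)

# Transform the vectors, then form the dyad in the rotated frame...
up = np.einsum('i,ij->j', u, C)
vp = np.einsum('i,ij->j', v, C)
Ap_from_vectors = np.outer(up, vp)

# ...and compare with the tensor transformation law A'_ij = A_kl C_ki C_lj
Ap_from_law = np.einsum('kl,ki,lj->ij', A, C, C)
assert np.allclose(Ap_from_vectors, Ap_from_law)
```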
Example $2$
The identity matrix is the same in every coordinate system. Does it qualify as a tensor?
Solution
Let $\underset{\sim}{C}$ represent an arbitrary rotation matrix and then write the rotated identity matrix using the fact that $\underset{\sim}{C}$ is orthogonal:
$\delta_{i j}^{\prime}=C_{i k}^{T} C_{k j} \nonumber$
With a bit of index-juggling, we can show that this equality is equivalent to Equation $\ref{eq:5}$:
$\delta_{i j}^{\prime}=C_{i k}^{T} C_{k j}=C_{k i} C_{k j}=\delta_{k l} C_{k i} C_{l j} \nonumber$
We conclude that $\underset{\sim}{\delta}$ transforms as a tensor.
Test your understanding by completing exercises 11 and 12.
3.3.4 Higher-order tensors
A matrix that transforms according to Equation $\ref{eq:5}$ is called a “2nd-order’’ tensor because it has two indices. By analogy, we can think of a vector as a 1st-order tensor, and a scalar as a 0th-order tensor. Each obeys a transformation law involving a product of rotation matrices whose number equals the order:
\begin{aligned} &\text { order } 0: \quad T^{\prime}=T;\\ &\text { order } 1: \quad u_{p}^{\prime}=u_{i} C_{i p};\\ &\text { order } 2: \quad A_{p q}^{\prime}=A_{i j} C_{i p} C_{j q}. \end{aligned} \nonumber
We can now imagine 3rd and 4th order tensors that transform as
\begin{aligned} &\text { order } 3: \quad G_{p q r}^{\prime}=G_{i j k} C_{i p} C_{j q} C_{k r};\\ &\text { order } 4: \quad K_{p q r s}^{\prime}=K_{i j k l} C_{i p} C_{j q} C_{k r} C_{l s}. \end{aligned} \nonumber
The 3rd-order tensor is a three-dimensional array that expresses a relationship among three vectors, or one vector and one 2nd-order tensor. The 4th-order tensor may express a relationship among four vectors, two 2nd-order tensors or a vector and a 3rd-order tensor. We will see examples of both of these higher-order tensor types.
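The higher-order rotation rules are conveniently evaluated with `numpy.einsum`. The sketch below (an arbitrary 3rd-order array and rotation angle, chosen purely for illustration) applies the order-3 rule and confirms that reversing the rotation, with the dummy index on the right, recovers the original array:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((3, 3, 3))   # an arbitrary 3rd-order array

th = 0.7
C = np.array([[np.cos(th),  np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])

# Order-3 rule: G'_pqr = G_ijk C_ip C_jq C_kr
Gp = np.einsum('ijk,ip,jq,kr->pqr', G, C, C, C)

# Reversing the rotation (dummy index on the right) recovers G,
# because the rows of an orthogonal C are orthonormal.
G_back = np.einsum('pqr,ip,jq,kr->ijk', Gp, C, C, C)
print(np.allclose(G_back, G))  # True
```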
Test your understanding of tensors by completing exercises 13, 14, 15 and 16.
3.3.5 Symmetry and antisymmetry in higher-order tensors
We’re familiar with the properties that define symmetric and antisymmetric 2nd-order tensors: $A_{ij}=A_{ji}$ and $A_{ij}=-A_{ji}$, respectively. For higher-order tensors, these properties become a bit more involved. For example, suppose a 3rd-order tensor has the property
$G_{i j k}=G_{j i k}. \nonumber$
This tensor is symmetric with respect to its 1st and 2nd indices. The tensor could also be symmetric with respect to its 1st and 3rd, or 2nd and 3rd indices.
The tensor could also be antisymmetric with respect to one or more pairs of indices, e.g.
$G_{i j k}=-G_{j i k} \nonumber$
A tensor that is antisymmetric with respect to all pairs of indices is called “completely antisymmetric”. The same nomenclature applies to 4th and higher-order tensors.
Symmetry and antisymmetry are intrinsic properties of a tensor, in the sense that they are true in all coordinate systems.1 You will show this in exercise 19.
3.3.6 Isotropy
Definition
An isotropic tensor is one whose elements have the same values in all coordinate systems, i.e., it is invariant under rotations.
Every scalar is isotropic but no vector is. How about 2nd order tensors? As is shown in appendix C, the only isotropic 2nd-order tensors are those proportional to the identity matrix.
Isotropic tensors are of particular importance in defining the basic operations of linear algebra. For example, how many ways can you think of to multiply two vectors? If you were awake in high school then you already know two - the dot product and the cross product - but in fact it would be easy to invent more. The reason we use these two products in particular is that both are based on isotropic tensors.
We saw in section 3.3.3 that the dot product is an example of a bilinear product, and that bilinear products in general are represented by 2nd-order tensors. In the case of the dot product, the tensor is $\underset{\sim}{\delta}$, i.e. $\vec{u}\cdot\vec{v}=\delta_{ij}u_{i}v_{j}$. Now, every 2nd-order tensor defines a bilinear product that could potentially be an alternative to the dot product. But as it turns out (appendix C), the identity tensor $\underset{\sim}{\delta}$ is the only 2nd-order tensor that is isotropic.2 A bilinear product based on any other tensor would have to be computed differently in each reference frame, and would therefore be useless as a general algebraic tool.3
How about the cross product? We can imagine many ways to multiply two vectors such that the product is another vector. Simply define three separate bilinear combinations and use one for each component of the resulting vector. For example, the cross product is defined by the following three bilinear combinations:
\begin{aligned} w_{1} &=u_{2} v_{3}-u_{3} v_{2} \ w_{2} &=-u_{1} v_{3}+u_{3} v_{1} \ w_{3} &=u_{1} v_{2}-u_{2} v_{1}. \end{aligned}\label{eq:11}
Any such formula can be written using a 3rd-order tensor:
$w_{i}=A_{i j k} u_{j} v_{k}. \nonumber$
And conversely, every 3rd-order tensor defines a vector bilinear product. But again, if we want the formula to be independent of the reference frame, then the tensor must be isotropic. As with 2nd-order tensors, there is only one choice for a 3rd-order isotropic tensor, and that is the choice that defines the cross product. We will discuss that tensor in the following section.
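As a quick numerical check, the sketch below builds that 3rd-order tensor (the Levi-Civita tensor of the next section, in zero-based indexing) and confirms that the bilinear formula $w_i=\varepsilon_{ijk}u_ju_k$-style product reproduces `numpy.cross` for an arbitrary pair of vectors:

```python
import numpy as np

# Build the Levi-Civita (alternating) tensor explicitly (0-based indices)
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0    # even permutations of (0,1,2)
    eps[i, k, j] = -1.0   # odd permutations

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -1.0, 0.5])

# w_i = eps_ijk u_j v_k  -- a vector bilinear product of u and v
w = np.einsum('ijk,j,k->i', eps, u, v)
print(np.allclose(w, np.cross(u, v)))  # True: this tensor defines the cross product
```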
Isotropic tensors are also useful for describing the physical properties of isotropic materials, i.e., materials whose inner structure does not have a preferred direction. Both water and air are isotropic to a good approximation. A counterexample is wood, which has a preferred direction set by its grain. In an upcoming chapter we will see that the relationship between stress and strain in an isotropic material is described by an isotropic 4th-order tensor.
Table 3.1 summarizes the various orders of tensors, their rotation rules, and the subset that are isotropic. The isotropic forms are derived in appendices C and D.
$\begin{array}{|c|c|l|l|} \hline \text { order } & \text { common name } & \text { rotation rule } & \text { isotropic } \ \hline 0 & \text { scalar } & T^{\prime}=T & \text { all } \ 1 & \text { vector } & v_{i}^{\prime}=v_{j} C_{j i} & \text { none } \ 2 & \text { matrix } & A_{i j}^{\prime}=A_{k l} C_{k i} C_{l j} & a \delta_{i j} \ 3 & & A_{i j k}^{\prime}=A_{l m n} C_{l i} C_{m j} C_{n k} & \text { completely antisymmetric } \ 4 & & A_{i j k l}^{\prime}=A_{m n p q} C_{m i} C_{n j} C_{p k} C_{q l} & a \delta_{i j} \delta_{k l}+b \delta_{i k} \delta_{j l}+c \delta_{i l} \delta_{j k} \ \hline \end{array} \nonumber$
Table 3.1: Summary of tensor properties. The variables a, b and c represent arbitrary scalars. The rotation rules are for forward rotations, hence the dummy index is on the left. To reverse the rotation, place the dummy index on the right.
3.3.7 The Levi-Civita tensor: properties and applications
In this section we’ll describe the so-called Levi-Civita alternating tensor4 (also called the “antisymmetric tensor” and the “permutation tensor”). This tensor holds the key to understanding many areas of linear algebra, and has application throughout mathematics, physics, engineering and many other fields. In this section we will only summarize the most useful properties of the Levi-Civita tensor. For a more complete explanation including proofs, the student is encouraged to examine Appendix D.
The alternating tensor is defined as follows:
$\varepsilon_{i j k}=\left\{\begin{array}{cc} 1, & \text { if } i j k=123,312,231, \ -1, & \text { if } i j k=213,321,132, \ 0, & \text { otherwise. } \end{array}\right.\label{eq:12}$
Illustrated in Figure 3.8, the array is mostly zeros, with three 1s and three -1s arranged antisymmetrically. The essential property of the alternating tensor is that it is completely antisymmetric, meaning that the interchange of any two indices results in a sign change. Two other properties follow from this one:
1. Any element with two equal indices must be zero.
2. Cyclic permutations of the indices have no effect. What is a cyclic permutation? The idea is illustrated in figure $2$. For example, suppose we start with $\epsilon_{ijk}$, then move the $k$ back to the first, or leftmost position, shifting $i$ and $j$ to the right to make room, resulting in $\epsilon_{kij}$. That’s a cyclic permutation.
Examine Equation $\ref{eq:12}$ and convince yourself that these two statements are true. See Appendix D for a proof that they have to be true.
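You can also let the computer do the convincing. The sketch below constructs $\underset{\sim}{\varepsilon}$ from Equation $\ref{eq:12}$ (in zero-based indexing) and verifies complete antisymmetry, the vanishing of repeated-index elements, and invariance under cyclic permutation:

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

# Complete antisymmetry: swapping any pair of indices flips the sign
assert np.allclose(eps, -eps.transpose(1, 0, 2))  # swap 1st and 2nd
assert np.allclose(eps, -eps.transpose(0, 2, 1))  # swap 2nd and 3rd
assert np.allclose(eps, -eps.transpose(2, 1, 0))  # swap 1st and 3rd

# Property 1: any element with two equal indices is zero
assert all(eps[i, i, k] == 0 and eps[i, k, i] == 0 and eps[k, i, i] == 0
           for i in range(3) for k in range(3))

# Property 2: cyclic permutations have no effect (eps_kij == eps_ijk)
assert np.allclose(eps, eps.transpose(2, 0, 1))

print("all properties hold")
```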
The $\varepsilon - \delta$ relation
The alternating tensor is related to the 2nd-order identity tensor in a very useful way:
$\varepsilon_{i j k} \varepsilon_{k l m}=\delta_{i l} \delta_{j m}-\delta_{i m} \delta_{j l}.\label{eq:13}$
For us, the main value of Equation $\ref{eq:13}$ will be in deriving vector identities, as we will see below. Test your understanding of the $\epsilon - \delta$ relation by completing exercise 17.
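The relation is also easy to confirm by brute force, checking all $3^4$ free-index combinations at once:

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0
d = np.eye(3)  # the identity (Kronecker delta)

lhs = np.einsum('ijk,klm->ijlm', eps, eps)  # eps_ijk eps_klm, summed over k
rhs = np.einsum('il,jm->ijlm', d, d) - np.einsum('im,jl->ijlm', d, d)
print(np.allclose(lhs, rhs))  # True for all i, j, l, m
```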
The cross product
The cross product is defined in terms of $\underset{\sim}{\varepsilon}$ by
$z_{k}=\varepsilon_{i j k} u_{i} v_{j}\label{eq:14}$
You will sometimes see Equation $\ref{eq:14}$ written with the free index in the first position: $z_k = \epsilon_{kij}u_iv_j$. The two expressions are equivalent because $kij$ is a cyclic permutation of $ijk$ (see property 2 of $\underset{\sim}{\varepsilon}$ listed above).
We will now list some essential properties of the cross product.
1. The cross product is anticommutative:
\begin{align*}(\vec{v}\times\vec{u})_k &= \varepsilon_{ijk}v_iu_j \ &= \varepsilon_{ijk}u_jv_i \;\; \text{(reordering)}\ &= \varepsilon_{jik}u_iv_j \;\;\text{(relabeling i and j)}\ &= -\varepsilon_{ijk}u_iv_j \;\;\text{(using antisymmetry)} \ &= -(\vec{u}\times\vec{v})_k. \end{align*} \nonumber
2. The cross product is perpendicular to both $\vec{u}$ and $\vec{v}$. This is left for the reader to prove (exercise 18).
3. The magnitude of the cross product is
$|\vec{z}|=|\vec{u}||\vec{v}||\sin \theta|,\label{eq:15}$
where $\theta$ is the angle between $\vec{u}$ and $\vec{v}$. This is easily proven by writing the squared magnitude of $\vec{z}$ as
$z_{k} z_{k}=\varepsilon_{i j k} u_{i} v_{j} \varepsilon_{k l m} u_{l} v_{m} \nonumber$
then applying the $\underset{\sim}{\varepsilon}-\underset{\sim}{\delta}$ relation. A geometric interpretation of Equation $\ref{eq:15}$ is that the magnitude of the cross product is equal to the area of the parallelogram bounded by $\vec{u}$ and $\vec{v}$ (figure $4$).
Test your understanding of the cross product by completing exercise 18.
The determinant
The determinant of a $3\times3$ matrix $\underset{\sim}{A}$ can be written using $\underset{\sim}{\varepsilon}$:
$\operatorname{det}(\underset{\sim}{A})=\varepsilon_{i j k} A_{i 1} A_{j 2} A_{k 3},\label{eq:16}$
where the columns are treated as vectors.5 The many useful properties of the determinant all follow from this definition (Appendix D).
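A numerical spot check (with an arbitrary random matrix) confirms that Equation $\ref{eq:16}$ agrees with the usual determinant:

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))

# det(A) = eps_ijk A_i1 A_j2 A_k3, treating the columns as vectors
det_eps = np.einsum('ijk,i,j,k->', eps, A[:, 0], A[:, 1], A[:, 2])
print(np.isclose(det_eps, np.linalg.det(A)))  # True
```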
1A counterexample to this is diagonality: a tensor can be diagonal in one coordinate system and not in others, e.g., exercise 10, or section 5.3.4. So diagonality is not an intrinsic property.
2up to a multiplicative constant
3This is true in Cartesian coordinates. In curved coordinate systems, e.g. Appendix I, the dot product is computed using a more general object called the metric tensor which we will not go into here.
4Tullio Levi-Civita (1873-1941) was an Italian mathematician. Among other things, he consulted with Einstein while the latter was developing the general theory of relativity.
A field is (for our purposes) a quantity that varies in space and can therefore be differentiated with respect to position. Scalars, vectors and tensors can all be fields (e.g., figure 4.1). The various derivative operations that can be applied to fields are fundamental tools in the study of flow. I assume that the reader is comfortable with the calculus as applied to functions of a single variable and has some familiarity with partial derivatives. With this knowledge, it is straightforward to apply the calculus to scalar, vector and tensor fields.
04: Tensor Calculus
In multidimensional calculus, the role of the derivative is taken by the vector differential operator $\vec{\nabla}$, pronounced “del’’, or sometimes “nabla’’:
$\nabla_{i}=\frac{\partial}{\partial x_{i}}. \nonumber$
This operator can be written in several different but equivalent ways:
$\vec{\nabla}=\left\{\frac{\partial}{\partial x_{1}}, \frac{\partial}{\partial x_{2}}, \frac{\partial}{\partial x_{3}}\right\}=\hat{e}^{(i)} \frac{\partial}{\partial x_{i}}=\hat{e}^{(i)} \nabla_{i}. \nonumber$
4.1.1 Gradient of a scalar
Let $\phi(\vec{x})$ be a scalar field and $\vec{x}$ the position vector in a Cartesian coordinate system. Application of $\vec{\nabla}$ yields the gradient:
$\vec{G}=\vec{\nabla} \phi=\left\{\frac{\partial \phi}{\partial x_{1}}, \frac{\partial \phi}{\partial x_{2}}, \frac{\partial \phi}{\partial x_{3}}\right\}=\hat{e}^{(i)} \frac{\partial \phi}{\partial x_{i}} \nonumber$
The gradient has three components and appears to be a vector, but we should check. Identifying a vector is more complicated when spatial derivatives are involved.
\begin{aligned} G_{i}^{\prime} &=\frac{\partial \phi^{\prime}}{\partial x_{i}^{\prime}} \ &=\frac{\partial \phi}{\partial x_{i}^{\prime}} \quad(\phi \text { is a scalar}) \ &=\frac{\partial \phi}{\partial x_{j}} \frac{\partial x_{j}}{\partial x_{i}^{\prime}} \quad \left(\text {applying the chain rule and summing over } j\right) \ &=G_{j} \frac{\partial x_{j}}{\partial x_{i}^{\prime}}. \end{aligned} \label{eqn:1}
The partial derivative in the final term is the Jacobian matrix for the transformation. Using the reverse transformation rule Equation 3.2.7,
$\frac{\partial x_{j}}{\partial x_{i}^{\prime}}=\frac{\partial x_{k}^{\prime} C_{j k}}{\partial x_{i}^{\prime}}=C_{j k} \frac{\partial x_{k}^{\prime}}{\partial x_{i}^{\prime}}=C_{j k} \delta_{k i}=C_{j i}.\label{eqn:2}$
This useful result pertains to every orthogonal coordinate system, so we’ll highlight it for later reference:
$\frac{\partial x_{j}}{\partial x_{i}^{\prime}}=C_{j i}.\label{eqn:3}$
Now, combining Equations $\ref{eqn:1}$ and $\ref{eqn:2}$, we have $G_{i}^{\prime}=G_{j} C_{j i}$, i.e., the gradient of a scalar transforms as a vector.
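This transformation property can be checked numerically. The sketch below uses an arbitrary sample field $\phi$ and a rotation about the $x_3$-axis (both assumptions, not from the text), computes the gradient in each frame by central differences, and verifies $G^\prime_i = G_j C_{ji}$:

```python
import numpy as np

def phi(x):
    # an assumed sample scalar field
    return np.sin(x[0]) + x[1]**2 + x[0]*x[2]

def grad(f, x, h=1e-6):
    # central-difference approximation to the gradient
    g = np.zeros(3)
    for i in range(3):
        dx = np.zeros(3); dx[i] = h
        g[i] = (f(x + dx) - f(x - dx)) / (2*h)
    return g

th = 0.5
C = np.array([[np.cos(th),  np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])

x = np.array([0.3, -1.2, 0.8])
xp = x @ C                      # forward rotation of the point: x'_i = x_j C_ji

# The same field expressed in the rotated frame: phi'(x') = phi(x), with x = C x'
phip = lambda xq: phi(C @ xq)

G = grad(phi, x)
Gp = grad(phip, xp)
print(np.allclose(Gp, G @ C, atol=1e-5))  # True: G'_i = G_j C_ji
```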
4.1.2 Divergence
The divergence normally results from applying $\vec{\nabla}$ to a vector:
$\vec{\nabla} \cdot \vec{u}=\frac{\partial u_{i}}{\partial x_{i}}. \nonumber$
The result is a scalar. A vector whose divergence is zero is called solenoidal. The divergence may also be applied to either dimension of a matrix or a 2nd order tensor. For example,
$\frac{\partial A_{i j}}{\partial x_{j}} \hat{e}^{(i)} \nonumber$
If $\underset{\sim}{A}$ is a tensor, the result is a vector.
4.1.3 Curl
The curl is applied to a vector field $\vec{u}$ by taking the cross product with $\vec{\nabla}$:
$\vec{\nabla} \times \vec{u}=\varepsilon_{i j k} \nabla_{i} u_{j} \hat{e}^{(k)}. \nonumber$
The result is another vector field. It can be expanded as
$\vec{\nabla} \times \vec{u}=\hat{e}^{(1)}\left(\frac{\partial u_{3}}{\partial x_{2}}-\frac{\partial u_{2}}{\partial x_{3}}\right)-\hat{e}^{(2)}\left(\frac{\partial u_{3}}{\partial x_{1}}-\frac{\partial u_{1}}{\partial x_{3}}\right)+\hat{e}^{(3)}\left(\frac{\partial u_{2}}{\partial x_{1}}-\frac{\partial u_{1}}{\partial x_{2}}\right).\label{eqn:4}$
See Appendix D, section D.3.3 for details.
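A useful test case is rigid-body rotation, $\vec{u}=\vec{\Omega}\times\vec{x}$, whose curl is $2\vec{\Omega}$. The sketch below (arbitrary $\vec{\Omega}$ and sample point, chosen for illustration) evaluates the curl from the index formula $\varepsilon_{ijk}\nabla_iu_j$ using finite differences:

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

Omega = np.array([0.0, 0.0, 2.0])       # assumed rotation vector
u = lambda x: np.cross(Omega, x)        # rigid-body rotation field

def jac(f, x, h=1e-6):
    # J[i, j] = d u_j / d x_i by central differences
    J = np.zeros((3, 3))
    for i in range(3):
        dx = np.zeros(3); dx[i] = h
        J[i] = (f(x + dx) - f(x - dx)) / (2*h)
    return J

x = np.array([0.4, -0.7, 1.1])
J = jac(u, x)
curl = np.einsum('ijk,ij->k', eps, J)   # (curl u)_k = eps_ijk d u_j / d x_i
print(np.allclose(curl, 2*Omega, atol=1e-6))  # True: curl of rigid rotation = 2 Omega
```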
4.1.4 Laplacian
The Laplacian results from successive applications of $\vec{\nabla}$:
$\vec{\nabla} \cdot \vec{\nabla}=\nabla^{2}=\nabla_{i} \nabla_{i}=\frac{\partial^{2}}{\partial x_{i}^{2}} \nonumber$
The Laplacian may be applied to either a scalar or a vector:
$\nabla^{2} \phi=\vec{\nabla} \cdot \vec{\nabla} \phi=\frac{\partial^{2} \phi}{\partial x_{i}^{2}}, \nonumber$
or
$\nabla^{2} \vec{u}=\left\{\nabla^{2} u_{1}, \nabla^{2} u_{2}, \nabla^{2} u_{3}\right\}=\hat{e}^{(i)} \nabla^{2} u_{i}. \nonumber$
4.1.5 Advective derivative
The advective derivative is an operation unique to fluid mechanics. Besides $\vec{\nabla}$, it requires a vector field $\vec{u}\left(\vec{x}\right)$, which is often (though not always) chosen to be the velocity of the flow. When this choice is made, we use the more common term material derivative (discussed in section 5.1.1). The operation is
$[\vec{u} \cdot \vec{\nabla}]=u_{i} \frac{\partial}{\partial x_{i}}. \nonumber$
The advective derivative can be applied to a scalar, resulting in another scalar:
$[\vec{u} \cdot \vec{\nabla}] \phi=\vec{u} \cdot \vec{\nabla} \phi=u_{i} \frac{\partial \phi}{\partial x_{i}}, \nonumber$
or to a vector, resulting in another vector:
$[\vec{u} \cdot \vec{\nabla}] \vec{v}=u_{i} \frac{\partial}{\partial x_{i}}\left(v_{j} \hat{e}^{(j)}\right)=\hat{e}^{(j)} u_{i} \frac{\partial v_{j}}{\partial x_{i}}. \nonumber$
4.1.6 Vector identities
Many equations hold for only certain values of a variable, and one may want to solve the equation to find those values. In contrast, identities are equations that hold for all values of a certain class of variables, e.g., all vectors that vary continuously in space. The identity
$\vec{\nabla} \cdot(\vec{\nabla} \times \vec{u})=0 \nonumber$
tells us that the divergence of the curl of a vector is zero, and this is true for every continuously-varying vector $\vec{u}$. Such identities are tremendously useful in vector calculus. For example, if a term includes the divergence of the curl of a vector, you can throw it out regardless of what the vector is.
Appendix E lists 21 of the most useful vector identities. All of these can (and should) be proved using the methods we have covered so far. For example:
• Proof of identity #15: $\vec{\nabla}\times\vec{\nabla}\phi=0$. We start with the kth component of $\vec{\nabla}\times\vec{\nabla}\phi$:
\begin{aligned} [\vec{\nabla} \times \vec{\nabla} \phi]_{k}=\varepsilon_{i j k} \frac{\partial}{\partial x_{i}} \frac{\partial \phi}{\partial x_{j}} &=\varepsilon_{i j k} \frac{\partial^{2} \phi}{\partial x_{i} \partial x_{j}} \ &=\varepsilon_{i j k} \frac{\partial^{2} \phi}{\partial x_{j} \partial x_{i}} \quad \left(\text {reverse order of differentiation}\right) \ &=\varepsilon_{j i k} \frac{\partial^{2} \phi}{\partial x_{i} \partial x_{j}} \quad \left(\text {relabel } i \text { and } j \text { as each other}\right) \ &=-\varepsilon_{i j k} \frac{\partial^{2} \phi}{\partial x_{i} \partial x_{j}} \quad \left(\text {use antisymmetry of } \varepsilon\right) \ &=-[\vec{\nabla} \times \vec{\nabla} \phi]_{k} \end{aligned} \nonumber
We have shown that $[\vec{\nabla}\times\vec{\nabla}\phi]_k$ is equal to its own additive inverse, and therefore can have no other value but zero.
• Proof of identity #21. It’s easier if we rearrange the identity like this: $\left(\vec{\nabla}\times\vec{u}\right)\times\vec{u}\equiv \left[\vec{u}\cdot\vec{\nabla}\right]\vec{u}-\frac{1}{2}\vec{\nabla}\left(\vec{u}\cdot\vec{u}\right)$. Now define
$\vec{\omega}=\vec{\nabla} \times \vec{u}, \quad \text { or } \quad \omega_{k}=\varepsilon_{i j k} \frac{\partial}{\partial x_{i}} u_{j}\label{eqn:5}$
and
$\vec{F}=(\vec{\nabla} \times \vec{u}) \times \vec{u}, \quad \text { or } \quad F_{m}=\varepsilon_{k l m} \omega_{k} u_{l}\label{eqn:6}$
We now substitute Equation $\ref{eqn:5}$ into Equation $\ref{eqn:6}$ and use the $\varepsilon - \delta$ relation:
\begin{aligned} F_{m} &=\varepsilon_{k l m}\left(\varepsilon_{i j k} \frac{\partial}{\partial x_{i}} u_{j}\right) u_{l} \ &=\varepsilon_{i j k} \varepsilon_{k l m} \frac{\partial u_{j}}{\partial x_{i}} u_{l} \ &=\left(\delta_{i l} \delta_{j m}-\delta_{i m} \delta_{j l}\right) \frac{\partial u_{j}}{\partial x_{i}} u_{l} \ &=\delta_{i l} \delta_{j m} \frac{\partial u_{j}}{\partial x_{i}} u_{l}-\delta_{i m} \delta_{j l} \frac{\partial u_{j}}{\partial x_{i}} u_{l} \ &=\frac{\partial u_{m}}{\partial x_{l}} u_{l}-\frac{\partial u_{l}}{\partial x_{m}} u_{l}. \end{aligned} \nonumber
(In that last step we had two choices: we summed over the dummy indices $i$ and $j$, but we could just as well have chosen to sum over $i$ and $l$ or $j$ and $l$.) Now it is just a matter of recognizing the remaining two terms on the right-hand side as the terms in the identity we want to prove.
\begin{aligned} F_{m} &=u_{l} \frac{\partial}{\partial x_{l}} u_{m}-\frac{1}{2} \frac{\partial}{\partial x_{m}}\left(u_{l} u_{l}\right) \ &=[\vec{u} \cdot \vec{\nabla}] u_{m}-\frac{1}{2} \frac{\partial}{\partial x_{m}}(\vec{u} \cdot \vec{u}), \end{aligned} \nonumber
or $\vec{F}=\left[\vec{u}\cdot\vec{\nabla}\right]\vec{u}-\frac{1}{2}\vec{\nabla}\left(\vec{u}\cdot\vec{u}\right)$, and the identity is proven.
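For a linear velocity field $\vec{u}=\underset{\sim}{A}\vec{x}$ the derivatives are exact, $\partial u_j/\partial x_i = A_{ji}$, so identity #21 can be verified to machine precision without any differencing. A sketch (random $\underset{\sim}{A}$ and sample point, both arbitrary choices):

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))   # assumed linear velocity field u = A x
x = rng.standard_normal(3)
u = A @ x                         # u_j = A_jk x_k, so du_j/dx_i = A_ji exactly

omega = np.einsum('ijk,ji->k', eps, A)        # omega_k = eps_ijk du_j/dx_i
lhs = np.einsum('klm,k,l->m', eps, omega, u)  # (curl u) x u

# [u.grad]u_m - (1/2) d(u.u)/dx_m  =  u_l A_ml - u_l A_lm
rhs = A @ u - A.T @ u
print(np.allclose(lhs, rhs))  # True: identity #21 holds
```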
Identity-proving is not only an excellent diversion for a rainy day; it will also, like pushups for a quarterback or scales for a musician, prepare you for great things. You are therefore encouraged to see the list of vector identities in appendix E as a fine pile of puzzles awaiting your attention. Exercise 20 lists a few that you should definitely try.
The flux of a quantity is the rate at which it is transported across a surface, expressed as transport per unit surface area. A simple example is the volume flux, which we denote as $Q$.
4.2.1 Volume flux through a rectilinear surface
Consider the simple, rectilinear channel shown in Figure $1$. The flow velocity $\vec{u}$ is assumed to be uniform with magnitude $|\vec{u}| = U$, and the cross-sectional area is $A$. After a time $\delta t$, the flow through the cross-section marked (a) has travelled a distance $U\delta t$ and occupies a volume $\delta V = AU \delta t$. The volume flux is then
$Q=\frac{\delta V}{\delta t}=A U \nonumber$
Examples:
• A river 100 m wide and 2 m deep has cross-sectional area 200 m$^2$. If we take the velocity to be 1 m s$^{-1}$, then we estimate the volume flux as 200 m$^2$ × 1 m s$^{-1}$ = 200 m$^3$ s$^{-1}$.
• The total volume flux of all of Earth’s rivers is $\sim 10^6$ m$^3$ s$^{-1}$.
• The Gulf Stream, a large ocean current that flows north along the east coast of the U.S., is typically 100 km wide and 1000 m deep, so the cross-sectional area is $10^8$ m$^2$. A typical velocity is 1 m s$^{-1}$, so the corresponding volume flux is $Q = 10^8$ m$^3$ s$^{-1}$.
Oceanographers measure volume flux in units of Sverdrups1: 1 Sv = $10^6$ m$^3$ s$^{-1}$. The world’s rivers therefore carry about 1 Sv, while the Gulf Stream carries 100 Sv.
Suppose now that the surface through which we calculate the volume flux is tilted at an angle $\theta$ from the vertical (marked (b) in Figure $1$). The volume flux is, of course, the same as that through the vertical section. The area of the vertical section is $A^\prime \cos\theta$. We can therefore define the volume flux through a surface tilted at an arbitrary angle $\theta$ from the vertical as $Q = UA^\prime \cos\theta$. We can also express this flux in terms of the unit vector $\hat{n}$, drawn normal to the surface $A^\prime$. Note that the product $U \cos\theta$ is equal to $\vec{u}\cdot\hat{n}$.
4.2.2 Volume flux through a curved surface
A curved surface can be thought of as being tiled by small, flat, surface elements with area $\delta A$ and unit normal $\hat{n}$. The tiling matches the surface exactly as the tile size shrinks to zero. The volume flux through each tile is $\delta Q = \vec{u}\cdot\hat{n}\delta A$, just as in the case of the tilted surface in section 4.2.1. If we now sum over all of the tiles and take the limit as $\delta A \rightarrow 0$, we obtain the general expression
$Q=\int_{A} \vec{u} \cdot \hat{n} d A.\label{eqn:1}$
4.2.3 Volume flux through an arbitrary closed surface: the divergence theorem
Flux through an infinitesimal cube
Consider a general velocity field $\vec{u}\left(\vec{x}\right)=\left\{ u\left(\vec{x}\right), v\left(\vec{x}\right), w\left(\vec{x}\right)\right\}$, and somewhere within it a small, imaginary cube with edge dimension $\Delta$ (Figure $4$). We would like to know the net volume flux out of the cube. We define a Cartesian coordinate system aligned with the cube as shown. We will now compute the outward volume flux across each of the faces, numbered 1-6 in the figure.
We begin with face #2, highlighted in green. On this face $y = \frac{\Delta}{2}$, and the outward unit normal is $\hat{n}=\hat{e}^{(y)}$. The volume flux may be written as
$Q^{[2]}=\int_{[2]} \vec{u} \cdot \hat{n} d A=\int_{-\Delta / 2}^{\Delta / 2} d x \int_{-\Delta / 2}^{\Delta / 2} d z v(x, \Delta / 2, z).\label{eqn:2}$
We now approximate the spatial variation of v by means of a first-order Taylor series expansion about the origin:
$v(x, y, z)=v^{0}+v_{x}^{0} x+v_{y}^{0} y+v_{z}^{0} z+\ldots\label{eqn:3}$
Here, subscripts indicate partial derivatives (for brevity) and the superscript “0” specifies evaluation at the origin. The dots at the end represent higher-order terms that will vanish later when we take the limit $\Delta\rightarrow 0$; from here on we ignore these. Replacing the integrand in Equation $\ref{eqn:2}$ with Equation $\ref{eqn:3}$, we have
\begin{aligned} Q^{[2]} &=\int_{-\Delta / 2}^{\Delta / 2} d x \int_{-\Delta / 2}^{\Delta / 2} d z\left[v^{0}+v_{x}^{0} x+v_{y}^{0} \frac{\Delta}{2}+v_{z}^{0} z\right] \ &=\Delta^{2} v^{0}+0+\Delta^{2} v_{y}^{0} \frac{\Delta}{2}+0 \ &=\left(v^{0}+v_{y}^{0} \frac{\Delta}{2}\right) \Delta^{2} \end{aligned} \nonumber
Now we repeat the process for the opposite face, #5. The only differences are that the uniform value of $y$ becomes $-\Delta/2$ and the outward normal becomes $-\hat{e}^{(y)}$. The calculation therefore gives
$Q^{[5]}=-\left(v^{0}-v_{y}^{0} \frac{\Delta}{2}\right) \Delta^{2} \nonumber$
Summing the fluxes from faces 2 and 5 gives
$Q^{[2]}+Q^{[5]}=2 \times v_{y}^{0} \frac{\Delta}{2} \Delta^{2}=v_{y}^{0} \Delta^{3}. \nonumber$
Note that, if the velocity $v$ were uniform, this net outward flux would be zero, i.e., what comes in one face goes out the other. The net flux is nonzero only when the velocities through the two faces differ. We can now repeat this process for each of the other two opposite pairs of faces:
$Q^{[1]}+Q^{[4]}=u_{x}^{0} \Delta^{3}, \quad \text { and } \quad Q^{[3]}+Q^{[6]}=w_{z}^{0} \Delta^{3} \nonumber$
Adding these results, we have the net outflow:
$Q=\left(u_{x}^{0}+v_{y}^{0}+w_{z}^{0}\right) \Delta^{3} \nonumber$
At this stage we take the limit as $\Delta\rightarrow 0$ so that the higher-order terms that we have neglected vanish. Because our cube could have been placed anywhere in the velocity field, this result is true at every point, and we can drop the superscript “0”. The infinitesimal volume flux $\delta Q$ from this small cube therefore expresses the divergence of the velocity field:
$\delta Q=\vec{\nabla} \cdot \vec{u} \delta V,\label{eqn:4}$
where $\delta V$ is the limit of the volume $\Delta^3$.
In many situations, the flows into and out of a small volume balance, and therefore $\vec{\nabla}\cdot\vec{u}=0$. There are two exceptions:
1. There is a volume source, e.g., fluid is being pumped into the cube through a hose.
2. The fluid expands or contracts, e.g., as a result of heating or cooling.
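The cube calculation above can be reproduced numerically: sum $\vec{u}\cdot\hat{n}\,\delta A$ over the six faces of a small cube (midpoint rule on each face) and compare with $\vec{\nabla}\cdot\vec{u}\,\delta V$. The velocity field and cube location below are arbitrary choices:

```python
import numpy as np

def u(x):
    # an assumed sample velocity field
    return np.array([x[0]**2, x[1]*x[2], np.sin(x[2])])

def div_u(x, h=1e-6):
    # central-difference divergence
    return sum((u(x + h*e)[i] - u(x - h*e)[i]) / (2*h)
               for i, e in enumerate(np.eye(3)))

x0 = np.array([0.5, -0.3, 0.9])   # cube center
D = 1e-3                          # cube edge length Delta

# Outward flux through all six faces, midpoint rule on each face
Q = 0.0
for i in range(3):
    for s in (+1, -1):
        xf = x0.copy(); xf[i] += s*D/2      # face center
        Q += s * u(xf)[i] * D**2            # (u . n) * face area

print(np.isclose(Q, div_u(x0) * D**3, rtol=1e-4))  # True: Q ~ (div u) dV
```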
Summing the cubes
Suppose we now want to know the net outflow from two adjacent cubes. We would sum the flows through each face as before. Note, however, that the volume fluxes through the two adjacent faces exactly cancel. (The velocities are the same and the unit normals are opposite.) By Equation $\ref{eqn:4}$, this net outflow equals the divergence evaluated at the center of each cube multiplied by the volume $\delta V$ and summed over the two cubes.
We can generalize this to any assemblage of adjacent cubes: the net outflow is the sum of the outflows through the exterior faces only, because the flows through the interior faces cancel. Moreover, this is equal to the sum of the divergences in each cube times $\delta V$.
The divergence theorem
An arbitrary volume can be approximated with arbitrary precision as an assemblage of small cubes. The foregoing results regarding the flux from a small cube, in the limit as $\delta V \rightarrow 0$, give us the divergence theorem (also called Gauss’ theorem2):
Theorem: Within a given flow field $\vec{u}\left(\vec{x}\right)$, imagine a volume of space $V$ bounded by an arbitrary closed surface $A$. At each point on the surface, define the outward-pointing unit normal $\hat{n}$. Then the net volume flux out of the surface is given by the integral of its divergence throughout the volume:
$Q=\oint_{A} \vec{u} \cdot \hat{n} d A=\int_{V} \vec{\nabla} \cdot \vec{u} d V,\label{eqn:5}$
or, in index notation:
$Q=\oint_{A} u_{i} n_{i} d A=\int_{V} \frac{\partial u_{i}}{\partial x_{i}} d V.\label{eqn:6}$
In physical terms, the divergence theorem tells us that the flux out of a volume equals the sum of the sources minus the sinks within the volume.
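The theorem can be checked directly on a simple geometry. The sketch below uses the unit cube $[0,1]^3$ and the (arbitrarily chosen) field $\vec{u}=(x^2,y^2,z^2)$, for which both the surface and volume integrals equal 3:

```python
import numpy as np

def u(x, y, z):
    # assumed velocity field on the unit cube [0, 1]^3
    return np.array([x**2, y**2, z**2])

N = 100
c = (np.arange(N) + 0.5) / N      # midpoint grid on [0, 1]
dA = 1.0 / N**2
A, B = np.meshgrid(c, c)

# Surface integral: sum of u.n over all six faces
Q_surf = 0.0
for axis in range(3):
    for val, sign in ((1.0, +1), (0.0, -1)):
        coords = [None, None, None]
        coords[axis] = np.full_like(A, val)   # the fixed coordinate on this face
        others = [i for i in range(3) if i != axis]
        coords[others[0]], coords[others[1]] = A, B
        Q_surf += sign * np.sum(u(*coords)[axis]) * dA

# Volume integral of div u = 2(x + y + z)
X, Y, Z = np.meshgrid(c, c, c)
Q_vol = np.sum(2*(X + Y + Z)) / N**3

print(np.isclose(Q_surf, Q_vol, rtol=1e-6))  # True: both equal 3
```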
The divergence theorem can be generalized considerably. First, $\vec{u}$ does not have to be the flow velocity; the theorem holds for any vector field. Second, the theorem can be applied to higher-dimensional objects. Suppose, for example, that we take three separate vectors and concatenate them to form the columns of a matrix $\underset{\sim}{A} \left(\vec{x}\right) = \left\{ \vec{u}^{(1)},\vec{u}^{(2)},\vec{u}^{(3)}\right\}$, or $A_{ij} = u_i^{(j)}$. We can then apply Equation $\ref{eqn:6}$ to each column and find
$\oint_{A} A_{i j} n_{i} d A=\int_{V} \frac{\partial A_{i j}}{\partial x_{i}} d V \quad \text { for } j=1,2,3\label{eqn:7}$
One could also let the three vectors be the rows of $\underset{\sim}{A}$, in which case the dummy index in Equation $\ref{eqn:7}$ would be the second index of $\underset{\sim}{A}$ instead of the first.3
1Harald Sverdrup (1888-1957) was a Norwegian oceanographer and meteorologist. He was the scientific director for the Amundsen expedition to the North pole, and was later director of the Scripps Institute of Oceanography in San Diego. He discovered the fundamental balance between wind and the Earth’s rotation that governs the large-scale ocean currents.
2Carl Friedrich Gauss (1777-1855) was a German mathematician and physicist. He is considered one of the greatest scientists in history, and it would be an insult to try to describe his accomplishments in a footnote. However, he did not actually discover the theorem that bears his name - it was used by Lagrange fifty years before Gauss found it.
3The derivation does not rely on $\underset{\sim}{A}$ having the transformation properties of a Cartesian tensor. Indeed, if its columns transform as vectors, then it will not. Conversely, $\underset{\sim}{A}$ may transform as a second-order tensor in which case its columns $\vec{u}^{(1)}$ will not transform as vectors. The theorem works regardless.
4.03: Vorticity and Circulation
Vorticity and circulation are two related measures of the tendency of a flow to rotate. The vorticity is the curl of the velocity field:
$\vec{\omega}=\vec{\nabla} \times \vec{u}, \nonumber$
or in index form
$\omega_{k}=\varepsilon_{i j k} \frac{\partial}{\partial x_{i}} u_{j}. \nonumber$
Using our previous formula (4.1.10) for the curl, we can write the vorticity as
$\vec{\omega}=\hat{e}^{(x)}\left(w_{y}-v_{z}\right)+\hat{e}^{(y)}\left(u_{z}-w_{x}\right)+\hat{e}^{(z)}\left(v_{x}-u_{y}\right),\label{eqn:1}$
where the subscripts denote partial differentiation.
The circulation is a line integral around an arbitrarily-chosen closed curve within the space occupied by the fluid. Its value depends on the curve, the velocity of the fluid on the curve, and the direction (e.g., clockwise or counterclockwise) in which the curve is traversed. The circulation is defined as
$\Gamma=\oint \vec{u} \cdot d \vec{\ell}.\label{eqn:2}$
4.3.1 Stokes’ theorem
Stokes’ theorem4 makes the connection between the circulation around a curve and the vorticity within the curve. To derive Stokes’ theorem, we first evaluate the circulation around a small square (Figure $2$). The flow field can be three-dimensional: $\vec{u}=\left\{u,v,w\right\}$, but the coordinates are aligned so that the square lies in the $\hat{e}^{(x)}-\hat{e}^{(y)}$ plane. We now calculate the line integral of $\vec{u}\cdot d\vec{\ell}$ along each of the four edges.
On edge #1, $\vec{\ell} = \hat{e}^{(y)}$, so $\vec{u}\cdot d\vec{\ell}=vdy$, and $x = \Delta/2$. Expanding the spatial variation of the velocity field using a first-order Taylor series, the line integral becomes
\begin{align} \Gamma^{[1]} &=\int_{-\Delta / 2}^{\Delta / 2} d y\left[v^{0}+v_{x}^{0} \frac{\Delta}{2}+v_{y}^{0} y+v_{z}^{0} 0\right] \label{eqn:3}\ &=\left[\Delta v^{0}+\Delta v_{x}^{0} \frac{\Delta}{2}+0+0\right] \label{eqn:4}\ &=\Delta\left(v^{0}+v_{x}^{0} \frac{\Delta}{2}\right) \label{eqn:5} \end{align} \nonumber
On edge #3, $\vec{\ell} = -\hat{e}^{(y)}$, so $\vec{u}\cdot d\vec{\ell} = -vdy$, and $x = −\Delta/2$. Therefore
$\Gamma^{[3]}=-\Delta\left(v^{0}-v_{x}^{0} \frac{\Delta}{2}\right), \nonumber$
and
$\Gamma^{[1]}+\Gamma^{[3]}=2 \Delta v_{x}^{0} \frac{\Delta}{2}=\Delta^{2} v_{x}^{0}. \nonumber$
Similarly,
$\Gamma^{[2]}+\Gamma^{[4]}=-\Delta^{2} u_{y}^{0}. \nonumber$
Adding, we get the net circulation around the square:
$\Gamma=\Delta^{2}\left(v_{x}^{0}-u_{y}^{0}\right). \nonumber$
Referring back to Equation $\ref{eqn:1}$, we see that the quantity in parentheses is the z-component of the vorticity, so taking the limit $\Delta \rightarrow 0$ we have
$\delta \Gamma=\omega^{(z)} \delta A, \nonumber$
where $\delta A$ is the limit of $\Delta^2$ and $\omega^{(z)} = \vec{\omega}\cdot \hat{e}^{(z)}$.
We now generalize this result to the case of an arbitrary surface. We imagine the surface as an assemblage of square tiles. The circulation around each tile is given by the relation just derived, $\delta \Gamma=\omega^{(z)} \delta A$. As we found in the case of the volume flux (Figure 4.2.1), the line integrals along adjacent edges cancel. As a result, the net circulation is just the sum of the line integrals along the exterior edges. Summing over all of those edges in the limit as the tile size goes to zero, we have Stokes’ theorem:
Theorem: Let $\Gamma$ be the circulation around an arbitrary closed curve $\ell$, taken in the direction specified by the line element $d\ell$. Moreover, let $A$ be any surface bounded by that closed curve. The unit normal to the surface, $\hat{n}$, is directed in accordance with the right-hand rule (with the fingers pointed along $d\ell$). The circulation is then given by
$\Gamma=\oint \vec{u} \cdot d \vec{\ell}=\int_{A} \vec{\omega} \cdot \hat{n} d A.\label{eqn:6}$
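Stokes’ theorem lends itself to a quick numerical check. In the sketch below (Python with NumPy), the velocity field $\vec{u}=(y^2,x)$, whose vorticity is $1-2y$, is an illustrative assumption, not a flow from the text; both sides of Equation $\ref{eqn:6}$ for the unit circle come out equal to $\pi$.

```python
import numpy as np

# Illustrative flow u = (y^2, x); its z-vorticity is dv/dx - du/dy = 1 - 2y.
R = 1.0
n = 4000
th = (np.arange(n) + 0.5) * 2.0 * np.pi / n   # midpoint rule in theta
dth = 2.0 * np.pi / n

# Left side: circulation, the closed line integral of u . dl around r = R
x, y = R * np.cos(th), R * np.sin(th)
u, v = y**2, x
gamma = np.sum(u * (-R * np.sin(th)) + v * (R * np.cos(th))) * dth

# Right side: vorticity flux through the enclosed disk (polar coordinates)
m = 400
r = (np.arange(m) + 0.5) * R / m
dr = R / m
rr, tt = np.meshgrid(r, th, indexing="ij")
flux = np.sum((1.0 - 2.0 * rr * np.sin(tt)) * rr) * dr * dth

print(gamma, flux)   # both ~ pi, as Stokes' theorem requires
```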
“Flood time will repay you just to sit and watch. The river seems to be holding itself up before you like a page open to be read. There is no knowing how the currents move. They shift, and boil, and eddy. They are swifter in some places than in others. To think of a “place” on a flowing surface soon baffles your mind, for the places are ever changing and moving. The current, in all its various motions and speeds, flows along, and that flowing may be stirred again at the surface by the wind in all its various motions. Who can think of it?” - Wendell Berry in “Jayber Crow”
The study of motion can be divided into two parts. Kinematics concerns the description of motion, while dynamics inquires into its causes. In elementary mechanics we are concerned with the motion of solid bodies, e.g., orbiting planets, billiard balls and the apple that supposedly fell on Newton’s head. In these cases the motion is simple to describe, so we don’t pay much attention to kinematics. In contrast, fluid motion can be extremely complicated. The task of describing such motion, a precondition to understanding its causes, is not trivial and is the subject of this chapter.
05: Fluid Kinematics
We can observe a flow in two ways, first by focusing on the motion of a specific fluid parcel (see section 1.2), second by stepping back and looking at the pattern as a whole. These are called the Lagrangian and Eulerian descriptions of flow, respectively. Here we will seek to understand the distinction more fully, to become fluent with both points of view, and to translate between them.
Definitions
Lagrangian information concerns the nature and behavior of fluid parcels.
Eulerian information concerns fields, i.e., properties like velocity, pressure and temperature that vary in time and space.
Here are some examples:
1. Statements made in a weather forecast
• “A cold air mass is moving in from the North.” (Lagrangian)
• “Here (your city), the temperature will decrease.” (Eulerian)
2. Ocean observations
• Moorings fixed in space (Eulerian)
• Drifters that move with the current (Lagrangian)
The Lagrangian perspective is a natural way to describe the motion of solid objects. For example, suppose an apple falls from a tree. Newton taught us to describe the height and velocity of the apple as functions of time. This is a Lagrangian description. To try to describe this event in terms of Eulerian fields would be very awkward. You might define $A(z,t)$ as the “appliness”: $A$ = 1 at points in space and time occupied by the apple and 0 everywhere else, then try to derive a differential equation for $A$. Good luck.
The Eulerian perspective, while useless for solid objects, is natural for fluids. As we will see, it makes the math easier by providing partial differential equations for fields like velocity $\vec{u}\left(\vec{x},t\right)$ and temperature $T\left(\vec{x},t\right)$.
So for fluid mechanics, why do we not just stick to the Eulerian approach? The reason is that the existing laws of physics apply most naturally to fluid parcels. For example,
• if you apply heat to a fluid parcel, its temperature will increase: $dT/dt$ = heating rate;
• if you apply a force to a fluid parcel, it will accelerate: $d\vec{u}/dt = \vec{F}/m$.
So in fluid mechanics we must be bilingual. When we set out to develop the equations of motion (next chapter), we will do it in two steps:
1. apply the known laws of physics to fluid parcels
2. translate the results into Eulerian form for mathematical analysis.
But how do we accomplish this translation between Lagrangian and Eulerian perspectives? The key is an operation called the material derivative, which we discuss next.
5.1.1 The material derivative
Consider a function of time and space $\phi\left(t,\vec{x}\right)$. For specificity, this could be some ocean property such as temperature or salinity. Now suppose that we evaluate $\phi\left( t,\vec{x} \right)$ along some arbitrary trajectory $\vec{x}(t)$. That could be the course of an oceanographic vessel from which $\phi$ is measured. Let the velocity of the measurement point (i.e., the ship) be $\vec{v} = d\vec{x}/dt$. At what rate will our measured value of $\phi$ change in time? The answer is given by the chain rule:
$\frac{d \phi}{d t}=\frac{\partial \phi}{\partial t}+\frac{\partial \phi}{\partial x_{j}} \frac{d x_{j}}{d t}=\frac{\partial \phi}{\partial t}+v_{j} \frac{\partial \phi}{\partial x_{j}}. \nonumber$
(Note that derivatives of $\phi$ are partial derivatives because $\phi$ is a function of several variables, whereas derivatives of $\vec{x}$ are total derivatives because $\vec{x}$ is a function of time only.)
Now consider the special case in which $\vec{x}(t)$ is the trajectory of a fluid parcel, and the observer is following the same trajectory (e.g., a boat allowed to drift with the current). The velocity $\vec{v}$ is now the velocity of the flow at the parcel’s location at any given time: $\vec{u}(t)$. In this special case, the rate of change we measure will be
$\frac{d \phi}{d t}=\frac{\partial \phi}{\partial t}+u_{j} \frac{\partial \phi}{\partial x_{j}}. \nonumber$
The expression on the right-hand side is called the material derivative, i.e., the time derivative following a material parcel. It is a total time derivative, but is distinguished by the use of an uppercase “D”, e.g., $D\phi/Dt$. It can be written equivalently in index form or in vector form:
$\frac{D}{D t} \equiv \frac{\partial}{\partial t}+u_{j} \frac{\partial}{\partial x_{j}} \equiv \frac{\partial}{\partial t}+\vec{u} \cdot \vec{\nabla}\label{eqn:1}$
The material derivative has a dual character: it expresses Lagrangian information (the rate of change following a fluid parcel), but does so in an Eulerian way, i.e., in terms of partial derivatives with respect to space and time.
It is instructive to solve Equation $\ref{eqn:1}$ for the partial time derivative. Suppose, for example, that the field in question is air temperature. Then
$\underbrace{\frac{\partial T}{\partial t}}_{\text {thermometer reading }}=\overbrace{\frac{D T}{D t}}^{\text {heating/cooling }}-\underbrace{\vec{u} \cdot \vec{\nabla} T}_{\text {advection }}.\label{eqn:2}$
This tells us that the temperature at a given location can change for two reasons corresponding to the two terms on the right-hand side. The first term, $DT/Dt$, is nonzero only if the air parcels are actually being heated or cooled (heated by the sun, perhaps). The second term is due to the wind: if the wind is blowing from a warm place, the local temperature will rise1. This process, whereby local changes result from transport by the flow, is called advection2.
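A one-dimensional example makes this advection balance concrete. For a temperature “bump” carried unchanged by a uniform wind ($DT/Dt=0$), the local tendency is pure advection, $\partial T/\partial t = -u\,\partial T/\partial x$. The profile and wind speed below are illustrative assumptions (Python with NumPy):

```python
import numpy as np

# Pure advection: T(x, t) = f(x - U*t) is carried unchanged by a uniform
# wind U, so DT/Dt = 0 and the Eulerian tendency is -U * dT/dx.
U = 2.0                                  # assumed wind speed
f = lambda s: np.exp(-s**2)              # assumed temperature profile
T = lambda x, t: f(x - U * t)

x0, t0, h = 0.3, 1.0, 1e-5               # fixed thermometer location and time
dTdt = (T(x0, t0 + h) - T(x0, t0 - h)) / (2 * h)   # local (Eulerian) tendency
dTdx = (T(x0 + h, t0) - T(x0 - h, t0)) / (2 * h)   # spatial gradient

print(dTdt, -U * dTdx)   # equal: the thermometer changes by advection alone
```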
Test your understanding by doing exercise 21.
1The minus sign in the advection term of Equation $\ref{eqn:2}$ indicates that temperature change is determined by the direction the wind is from. This is why meteorologists (and sailors, and folksingers) traditionally name a wind by its origin, e.g., a “west wind” or “westerly wind” blows from the west. Oceanographers (such as the present author) eschew this perverse tradition; they refer instead to the direction a current flows toward, e.g., an eastward current.
2This is why we call $\vec{u}\cdot\vec{\nabla}$ the advective derivative, cf. section 4.1.5.
5.02: The Streamfunction
Many flows are approximately two-dimensional. For example, the thickness of the Earth’s atmosphere relative to the planet is comparable to the skin on an apple. Large-scale atmospheric flows are therefore nearly two-dimensional. When a flow is two-dimensional and also incompressible $\left( \vec{\nabla}\cdot\vec{u}=0 \right)$, it can be represented by means of a streamfunction $\psi$. Suppose the two dimensions are $x$ and $y$, with corresponding velocity components $u(x, y)$ and $v(x, y)$. Then $\psi$ is defined such that
$u=-\frac{\partial \Psi}{\partial y} ; \quad v=\frac{\partial \Psi}{\partial x} \label{eqn:1}$
Curves of constant $\psi$ are called streamlines. A simple example, a pair of nearby minima, is shown in Figure $1$a.
Several properties of the streamfunction are noteworthy.
1. The definition Equation $\ref{eqn:1}$ guarantees that $\vec{\nabla}\cdot\vec{u}$ will be zero. (Check this for yourself.)
2. The sign convention is arbitrary. The present convention leads to $\omega^{(z)} = +\nabla^2\psi$.
3. You can add any fixed number to $\Psi$ and it makes no difference to the resulting flow. In other words, the streamfunction is defined only up to an additive constant.
4. The direction of the flow vector $\vec{u}$ is perpendicular to $\vec{\nabla}\psi$ or, equivalently, parallel to the streamlines: $\vec{u} \cdot \vec{\nabla} \Psi=u \frac{\partial \Psi}{\partial x}+v \frac{\partial \Psi}{\partial y}=u v-v u=0, \nonumber$ using Equation $\ref{eqn:1}$.
5. The flow direction is defined more fully by noting the signs of the derivatives in Equation $\ref{eqn:1}$. These require that flow be clockwise around a maximum in $\psi$ and counterclockwise around a minimum (such as the pair of minima in Figure $1$). In (Figure $1$a), the streamfunction increases (becomes less negative) from point “A” to point “B”, hence $\partial\psi/\partial x > 0$, hence $v > 0$.
6. The speed (velocity magnitude) is $\sqrt{u^2+v^2} = |\vec{\nabla}\psi|$. Therefore, the flow is fastest where streamlines are clustered together, such as just outside the two peaks on (Figure $1$a). Flow is slow where streamlines are widely spaced.
Given the velocity components $u$, $v$, one can easily invert Equation $\ref{eqn:1}$ to obtain the streamfunction. We can start with either equation; here we’ll pick the first one. Integrating:
$\Psi=-\int u \, d y+f(x), \nonumber$
where $f$ is an unknown function independent of $y$. Substituting this into the second of Equation $\ref{eqn:1}$ gives
$\frac{\partial \Psi}{\partial x}=-\int \frac{\partial u}{\partial x} d y+f^{\prime}(x)=v. \nonumber$
This is readily solved for $f^\prime$, which we integrate to obtain $f$. Note that the constant of integration is arbitrary because of point 3 above.
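The inversion procedure just described can be carried out symbolically. Below is a SymPy sketch using an assumed flow, $u=-\sin x\cos y$ and $v=\cos x\sin y$ (which in fact derives from $\psi = \sin x \sin y$), so the answer can be checked:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Assumed incompressible flow for illustration (derived from psi = sin x sin y)
u = -sp.sin(x) * sp.cos(y)
v = sp.cos(x) * sp.sin(y)

# Step 1: integrate u = -dpsi/dy in y; an unknown f(x) may still be added
psi_partial = -sp.integrate(u, y)                   # = sin(x)*sin(y)

# Step 2: require dpsi/dx = v to determine f'(x), then integrate it
fprime = sp.simplify(v - sp.diff(psi_partial, x))   # = 0 for this flow
psi = psi_partial + sp.integrate(fprime, x)

print(psi)   # sin(x)*sin(y), up to the arbitrary additive constant
```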
Test your understanding by doing exercise 23, parts (a) and (b).
The complexity of flow can be halved, in a sense, by thinking of it as a combination of two simpler kinds of motion: rotation and strain (Figure $1$), with one or the other dominating at each point. We can then learn something useful by considering idealized flows consisting of rotation alone or strain alone.
Consider the instantaneous relative motion of two nearby fluid particles separated by the vector $\Delta \vec{x}$ (Figure $2$). Their velocities are
$\frac{D}{D t} \vec{x}=\vec{u}, \quad \text { and } \quad \frac{D}{D t}(\vec{x}+\Delta \vec{x})=\vec{u}+\Delta \vec{u}, \nonumber$
and we can then subtract to see that
$\frac{D}{D t} \Delta \vec{x}=\Delta \vec{u}.\label{eqn:1}$
The ith component of the velocity difference $\Delta\vec{u}$ can be written as
$\Delta u_{i}=\frac{\partial u_{i}}{\partial x_{j}} \Delta x_{j}.\label{eqn:2}$
The velocity gradient tensor $\partial u_i/\partial x_j$ can be decomposed into symmetric and antisymmetric parts:
$\frac{\partial u_{i}}{\partial x_{j}}=\frac{1}{2}\left(\frac{\partial u_{i}}{\partial x_{j}}+\frac{\partial u_{j}}{\partial x_{i}}\right)+\frac{1}{2}\left(\frac{\partial u_{i}}{\partial x_{j}}-\frac{\partial u_{j}}{\partial x_{i}}\right), \nonumber$
or
$\frac{\partial u_{i}}{\partial x_{j}}=e_{i j}+\frac{1}{2} r_{i j},\label{eqn:3}$
where the symmetric part
$e_{i j}=\frac{1}{2}\left(\frac{\partial u_{i}}{\partial x_{j}}+\frac{\partial u_{j}}{\partial x_{i}}\right)\label{eqn:4}$
is called the strain rate tensor1 and the antisymmetric part (times two) is called the rotation tensor2:
$r_{i j}=\frac{\partial u_{i}}{\partial x_{j}}-\frac{\partial u_{j}}{\partial x_{i}}\label{eqn:5}$
Substituting in Equation $\ref{eqn:2}$, we now write the velocity differential as
$\Delta u_{i}=\frac{\partial u_{i}}{\partial x_{j}} \Delta x_{j}=e_{i j} \Delta x_{j}+\frac{1}{2} r_{i j} \Delta x_{j}\label{eqn:6}$
Any region of a flow can be characterized as strain-dominated or rotation-dominated depending on the relative magnitudes of the two terms on the right-hand side of Equation $\ref{eqn:3}$. More specifically, we can multiply Equation $\ref{eqn:3}$ by itself and get
$\left(\frac{\partial u_{i}}{\partial x_{j}}\right)^{2}=\left(e_{i j}+\frac{1}{2} r_{i j}\right)^{2}=e_{i j} e_{i j}+\frac{1}{4} r_{i j} r_{i j}\label{eqn:7}$
The cross term $e_{ij}r_{ij}$ is the product of a symmetric and an antisymmetric matrix and therefore vanishes identically (exercise 12). The remaining two terms are positive definite measures of the degree of strain and the degree of rotation. Vortices, not surprisingly, are rotation-dominated, e.g., the pair of corotating vortices shown in Figure $3$. (The stream function for this vortex pair is shown in figure 5.2.1.) The rotation term is greatest in the two vortex cores located at $x = \pm 1$ (Figure $3$a), while strain dominates in the region around the vortices, and especially between them (Figure $3$b, near $x = 0$).
The example in Figure $3$ is highly simplified; in a real flow the strained and rotating regions are intertwined in very complex ways, but are still recognizable. Figure $4$ shows the evolution of turbulence in a shear layer. It begins (Figure $4$a) with the growth of co-rotating vortices (cf. Figure $3$). These become unstable (Figure $4$b) and break down into turbulence (Figure $4$c). The turbulence eventually decays, leaving a stable shear layer thickened by turbulent mixing. In the phase of vigorous turbulence, the strain magnitude $e^2_{ij}$ displays an intricate structure (Figure $5$). In the next two sections, we look more closely at the properties of rotation- and strain-dominated regions.
5.3.1 Rotation
The rotation tensor is closely related to a more familiar object: the vorticity vector $\vec{\omega}$:
$\underset{\sim}{r}=\left[\begin{array}{ccc} 0 & \frac{\partial u}{\partial y}-\frac{\partial v}{\partial x} & \frac{\partial u}{\partial z}-\frac{\partial w}{\partial x} \\ \frac{\partial v}{\partial x}-\frac{\partial u}{\partial y} & 0 & \frac{\partial v}{\partial z}-\frac{\partial w}{\partial y} \\ \frac{\partial w}{\partial x}-\frac{\partial u}{\partial z} & \frac{\partial w}{\partial y}-\frac{\partial v}{\partial z} & 0 \end{array}\right]=\left[\begin{array}{ccc} 0 & -\omega_{3} & \omega_{2} \\ \omega_{3} & 0 & -\omega_{1} \\ -\omega_{2} & \omega_{1} & 0 \end{array}\right]\label{eqn:8}$
Reverting to index notation, we may write this relationship in a much more compact form:
$r_{i j}=-\varepsilon_{i j k} \omega_{k}\label{eqn:9}$
The contribution of rotation to the velocity differential in Equation $\ref{eqn:6}$ is now
$\frac{1}{2} r_{i j} \Delta x_{j}=-\frac{1}{2} \varepsilon_{i j k} \omega_{k} \Delta x_{j}=\frac{1}{2}(\vec{\omega} \times \Delta \vec{x})_{i}\label{eqn:10}$
Thus, the change in velocity due to rotation is perpendicular to both the separation vector and the local vorticity. A consequence of this is that rotation does not change the distance $|\Delta \vec{x}|$ between the two particles; only strain can accomplish that. To show this explicitly, we write the equation for $|\Delta \vec{x}|$ (or, equivalently, $|\Delta \vec{x}|^2/2$) in vector form:
$\frac{D}{D t} \frac{1}{2}|\Delta \vec{x}|^{2}=\Delta \vec{x} \cdot \frac{D}{D t} \Delta \vec{x}=\Delta \vec{x} \cdot \Delta \vec{u}=\Delta \vec{x} \cdot\left(\underset{\sim}{e} \Delta \vec{x}+\frac{1}{2} \vec{\omega} \times \Delta \vec{x}\right)=\Delta \vec{x} \cdot \underset{\sim}{e} \Delta \vec{x} \nonumber$
The second step above makes use of Equation $\ref{eqn:1}$. Thus, changes in the distance between particles are caused only by strain.
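These rotation properties can be verified numerically (NumPy sketch; the gradient tensor and separation vector are arbitrary). The vorticity components are read off the matrix form of $\underset{\sim}{r}$ in Equation $\ref{eqn:8}$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))     # arbitrary velocity gradient tensor
r = A - A.T                         # rotation tensor r_ij

# Read (w1, w2, w3) off the matrix form of eqn (8): r = -eps_ijk w_k
w = np.array([r[2, 1], r[0, 2], r[1, 0]])

dx = rng.standard_normal(3)         # arbitrary separation vector
du_rot = 0.5 * r @ dx               # rotational part of Delta u

# It equals (1/2) w x dx, and is perpendicular to dx:
print(np.allclose(du_rot, 0.5 * np.cross(w, dx)))   # True
print(np.dot(du_rot, dx))                           # ~ 0: |dx| unchanged
```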
5.3.2 Axisymmetric vortex models
Vortex motion is often approximately axisymmetric, i.e., invariant with respect to rotation about the vortex axis. Here we examine some very simple, axisymmetric vortex models. These are also called cylindrical, or circular, vortices.
Until now, we have measured space using Cartesian coordinates, but in some situations, curvilinear coordinates simplify the math. All of the mathematical constructs derived up to now can be expressed in curvilinear coordinates, and these expressions are listed in appendix I. The study of axisymmetric vortices is simplified using cylindrical polar coordinates (Figure $6$). In this case, every position in space has coordinates $\{r,\theta,z\}$ corresponding to the radial, azimuthal and axial directions, respectively. The corresponding velocity components are $\{u_r,u_\theta ,u_z\}$. The vorticity is then given as the curl of the velocity vector:
$\vec{\nabla} \times \vec{u}=\left\{\frac{1}{r} \frac{\partial u_{z}}{\partial \theta}-\frac{\partial u_{\theta}}{\partial z}, \frac{\partial u_{r}}{\partial z}-\frac{\partial u_{z}}{\partial r}, \frac{1}{r} \frac{\partial\left(r u_{\theta}\right)}{\partial r}-\frac{1}{r} \frac{\partial u_{r}}{\partial \theta}\right\}. \nonumber$
In an axisymmetric vortex, the vorticity is purely axial and depends only on the radial coordinate:
$\vec{\omega}=\omega(r) \hat{e}^{(z)} ; \quad \omega(r)=\frac{1}{r} \frac{\partial\left(r u_{\theta}\right)}{\partial r}.\label{eqn:11}$
The circulation around such a vortex at any radius $r$ is just $\Gamma(r) = 2\pi ru_\theta$ (show this). We’ll look at three kinds of vortex motion in this geometrical context.
Rigid rotation
In this case the vorticity is uniform. Solving Equation $\ref{eqn:11}$ gives
$u_{\theta}=\frac{\omega}{2} r. \nonumber$
Note that the velocity grows without bound as $r \rightarrow \infty$.
An irrotational vortex
In this case the motion is circular but the vorticity is zero. Solving Equation $\ref{eqn:11}$ gives
$u_{\theta}=\frac{C}{r}, \nonumber$
where $C$ is a constant of integration. The circulation is constant: $\Gamma = 2\pi C$. This gives us a meaningful way to identify the constant:
$u_{\theta}=\frac{\Gamma}{2 \pi r}. \nonumber$
This velocity distribution is unbounded at the origin.
To understand how motion can be circular but irrotational, consider the hand motions illustrated in Figure $7$. When you wave to someone (Figure $7$a), the orientation of your hand changes. When you wipe a window (Figure $7$b), your hand moves in a circle but its orientation doesn’t change. Likewise, an object floating in an irrotational vortex would move in a circle without changing its orientation.
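Both velocity profiles can be checked against Equation $\ref{eqn:11}$ symbolically (SymPy sketch; `omega0` and `C` stand for the constants above):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
omega0, C = sp.symbols('omega0 C', positive=True)

# Axial vorticity of an axisymmetric swirl u_theta(r): (1/r) d(r u_theta)/dr
vort = lambda u_theta: sp.simplify(sp.diff(r * u_theta, r) / r)

print(vort(omega0 * r / 2))   # rigid rotation: uniform vorticity omega0
print(vort(C / r))            # irrotational vortex: zero vorticity for r > 0
```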
The Rankine vortex
The Rankine vortex3 is a useful model for localized vortices such as tornadoes. The vorticity is uniform out to a radius $r = R$, and zero (irrotational) beyond that. The azimuthal velocity is sketched in Figure $8$. It is left as an exercise for the student to work out the mathematical expressions for $u_\theta$ and $\Gamma$.
Test your understanding by doing problems 24 and 25.
An isolated vortex
Consider the following vorticity distribution, sketched in Figure $9$:
$\omega=\left\{\begin{array}{cc} 2 \dot{\theta}, & 0 \leq r<R_{1} \\ -2 \dot{\theta}, & R_{1} \leq r \leq R_{2} \\ 0, & r>R_{2} \end{array}\right.\label{eqn:12}$
where the constant $\dot{\theta}$ is the angular velocity (i.e. the time derivative of $\theta$). On the $x$ axis, the azimuthal velocity is the same as the Cartesian velocity $v$, and is a maximum at $r = R_1$. Similarly, on the $y$ axis, the azimuthal velocity is $-u$. From the signs of the derivatives of $u$ and $v$, it is easy to see that the vorticity $\omega = v_x −u_y$ is positive for $r ≤ R_1$ and negative for $R_1 < r ≤ R_2$ as in Equation $\ref{eqn:12}$.
A vortex is called isolated if its total circulation is zero. Here, the total circulation
$\Gamma_{\text {total}}=2 \pi \int_{0}^{\infty} \omega\left(r^{\prime}\right) r^{\prime} d r^{\prime} \nonumber$
is the same as the circulation $\Gamma(r)$ at any $r \geq R_2$, since there is no vorticity (hence no change in circulation) beyond $R_2$. The total circulation is zero if $R_2 = \sqrt{2}R_1$ (check for yourself).
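The $R_2 = \sqrt{2}R_1$ result is quick to verify symbolically (SymPy sketch, integrating the piecewise vorticity of Equation $\ref{eqn:12}$ region by region):

```python
import sympy as sp

r, R1, tdot = sp.symbols('r R_1 thetadot', positive=True)
R2 = sp.sqrt(2) * R1

# Total circulation 2*pi*int(w r dr): +2*thetadot inside R1, -2*thetadot
# between R1 and R2, zero beyond (so the outer region contributes nothing).
Gamma = 2 * sp.pi * (sp.integrate(2 * tdot * r, (r, 0, R1))
                     + sp.integrate(-2 * tdot * r, (r, R1, R2)))

print(sp.simplify(Gamma))   # 0: with R2 = sqrt(2)*R1 the vortex is isolated
```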
5.3.3 Strain
The strain rate tensor is symmetric by definition:
$\underset{\sim}{e}=\left[\begin{array}{ccc} u_{x} & \frac{1}{2}\left(u_{y}+v_{x}\right) & \frac{1}{2}\left(u_{z}+w_{x}\right) \\ \frac{1}{2}\left(u_{y}+v_{x}\right) & v_{y} & \frac{1}{2}\left(v_{z}+w_{y}\right) \\ \frac{1}{2}\left(u_{z}+w_{x}\right) & \frac{1}{2}\left(v_{z}+w_{y}\right) & w_{z} \end{array}\right].\label{eqn:13}$
Here, subscripts represent partial derivatives. The diagonal elements of $\underset{\sim}{e}$, namely $u_x$, $v_y$ and $w_z$, represent normal strain. These can be either extensional or compressive depending on the sign (compare Figures $10$a,b). Off-diagonal components represent transverse strain, which may also be called tangential strain or shear (Figure $10$c).
Two points about normal strains are noteworthy.
• Imagine an irrotational straining motion in which the direction of the separation vector between two particles does not change, e.g., the normal strains shown in figures $10$ a and b. The separation vector must be an eigenvector of the strain rate tensor:
$\frac{D}{D t} \Delta \vec{x}=\Delta \vec{u}=\underset{\sim}{e} \Delta \vec{x}=\lambda \Delta \vec{x}. \nonumber$
The solution of the above is
$\Delta \vec{x}=\Delta \vec{x}_{0} \exp (\lambda t), \nonumber$
i.e., the length of the separation vector grows or decays exponentially in time, and the corresponding eigenvalue gives the rate of growth/decay.
• The sum of the normal strains is the trace of the strain rate tensor, which is also equal to the divergence of the velocity field:
$\operatorname{Tr}(e)=e_{i i}=\frac{\partial u_{i}}{\partial x_{i}}=\vec{\nabla} \cdot \vec{u}.\label{eqn:14}$
In an incompressible fluid, where $\vec{\nabla}\cdot\vec{u}=0$, the normal strains must add to zero, i.e., extension and compression balance.
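Equation $\ref{eqn:14}$ is easy to confirm for a concrete field. The velocity below is an arbitrary, illustrative choice that happens to be divergence-free, so its normal strains also sum to zero (SymPy sketch):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Assumed sample velocity field (chosen to be divergence-free)
u = sp.Matrix([x * y, sp.sin(z) - y**2 / 2, sp.exp(x) * y])
X = sp.Matrix([x, y, z])

grad = u.jacobian(X)                 # velocity gradient tensor du_i/dx_j
e = (grad + grad.T) / 2              # strain rate tensor

div = sum(sp.diff(u[i], X[i]) for i in range(3))
print(sp.simplify(e.trace() - div))  # 0: Tr(e) equals the divergence
print(sp.simplify(div))              # 0: incompressible, normal strains cancel
```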
5.3.4 The principal strains
Consider the evolution of a circular distribution of fluid particles advected by a uniformly sheared flow (Figure $11$), for which the strain is purely transverse (similar to Figure $10$c). Arrows show the velocity profile, with rightward motion in the upper half of the figure changing linearly to leftward motion in the lower half. As a result, the top of the circle moves to the right, the bottom moves to the left, and the sides don’t move. After a short time, the circle becomes an ellipse with major axis tilted 45 degrees from the horizontal. When viewed in a coordinate frame tilted at the same angle (dashed lines), the circle is being expanded in one direction and compressed in the other. In other words, the transverse strain now appears as a purely normal strain.
This raises a crucial point: the distinction between normal and transverse strains depends on the choice of coordinates. In fact, we will now show that any strain is purely normal in an appropriately chosen coordinate system. Like any second-order tensor, the strain rate tensor can be expressed in a rotated reference frame using the transformation rule Equation 3.3.5:
$e_{i j}^{\prime}=e_{k l} C_{k i} C_{l j}.\label{eqn:15}$
Recall that the columns of the rotation matrix $\underset{\sim}{C}$ are the basis vectors of the rotated coordinate system. Now, suppose that we transform $\underset{\sim}{e}$ into the special reference frame whose basis vectors are the eigenvectors of $\underset{\sim}{e}$. Then the $j^{th}$ column of $\underset{\sim}{C}$ is the $j^{th}$ eigenvector:
$C_{l j}=v_{l}^{(j)},\label{eqn:16}$
and $\lambda^{(j)}$ is the corresponding eigenvalue:
$\left.e_{k l} v_{l}^{(j)}=\lambda^{(j)} v_{k}^{(j)} \quad \text { (no sum on } j\right).\label{eqn:17}$
Now assume that the eigenvectors have been chosen to be orthogonal with length equal to 1 (and therefore $v_k^{(i)}v_k^{(j)}=\delta_{ij}$) and ordered so that $\det\left(\underset{\sim}{C}\right)=+1$. In other words $\underset{\sim}{C}$ represents a proper rotation. We now reorder Equation $\ref{eqn:15}$ and substitute Equation $\ref{eqn:16}$ and Equation $\ref{eqn:17}$:
$e_{i j}^{\prime}=C_{k i} e_{k l} C_{l j}=v_{k}^{(i)} e_{k l} v_{l}^{(j)}=v_{k}^{(i)} \lambda^{(j)} v_{k}^{(j)}=\lambda^{(j)} \delta_{i j} \quad(\text { no sum on } j). \nonumber$
The result is just a diagonal matrix with the eigenvalue $\lambda^{(j)}$ as the $j^{th}$ diagonal element:
$\underset{\sim}{e^{\prime}}=\left[\begin{array}{ccc} \lambda^{(1)} & 0 & 0 \\ 0 & \lambda^{(2)} & 0 \\ 0 & 0 & \lambda^{(3)} \end{array}\right].\label{eqn:18}$
This special reference frame is called the principal frame. The basis vectors (the eigenvectors of $\underset{\sim}{e}$) are the principal axes of strain and the normal strains appearing on the main diagonal (the eigenvalues of $\underset{\sim}{e}$) are the principal strains.
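Numerically, the transformation to the principal frame is just an eigendecomposition. Below is a NumPy sketch with an arbitrary symmetric tensor standing in for $\underset{\sim}{e}$:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
e = 0.5 * (A + A.T)                  # an arbitrary symmetric strain rate tensor

lam, C = np.linalg.eigh(e)           # eigenvalues, orthonormal eigenvector columns
if np.linalg.det(C) < 0:             # make C a proper rotation (det = +1)
    C[:, 0] = -C[:, 0]

e_prime = C.T @ e @ C                # transform to the principal frame
print(np.round(e_prime, 12))         # diagonal: the principal strains
print(np.allclose(e_prime, np.diag(lam)))   # True
```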
Test your understanding by doing problem 23.
The fine print: In this discussion, we have made two implicit assumptions about the strain rate tensor. First, we have assumed that its eigenvalues and eigenvectors are all real; otherwise, the geometrical interpretation of the principal axes and strains would make no sense. Second, we have assumed that the eigenvectors can be chosen to be orthogonal. Happily, these properties are guaranteed for real, symmetric matrices, of which the strain rate tensor is one. For further details see any linear algebra text (e.g., Bronson and Costa 2009).
1Strain quantifies the net deformation of a material. The strain rate is its time derivative.
2It is only a matter of historical accident that $e_{ij}$ is defined with the factor 1/2 and $r_{ij}$ is not.
3William John Macquorn Rankine (1820-1872) was a Scottish engineer whose primary interest was the thermodynamics of steam engines.
5.04: Pancakes and noodles- the geometry of turbulence
In incompressible flow, the sum of the principal strains is zero:
$\lambda^{(1)}+\lambda^{(2)}+\lambda^{(3)}=\operatorname{Tr}(e)=\vec{\nabla} \cdot \vec{u}=0. \nonumber$
This is easily seen in the principal frame, and can be shown to be true in any frame because both the eigenvalues and the trace are scalars. (Exercise: prove this.)
If we order the eigenvalues $\left\{\lambda^{(1)},\lambda^{(2)},\lambda^{(3)} \right\}$ from smallest to largest, then $\lambda^{(1)} < 0$ and $\lambda^{(3)} > 0$, whereas $\lambda^{(2)}$ can have either sign. In any region of space where $\lambda^{(2)} > 0$ there are two extensional strains and one compressive strain. As a result, a spherical fluid parcel would be distended into an oblate spheroid or, less technically, a “pancake”. In regions where $\lambda^{(2)} < 0$, there are two compressive strains and one extensional strain, resulting in a prolate spheroid, or “noodle”.
In the 1990s, computational hardware advances allowed the simulation of turbulent flows to determine which flow geometry is dominant. The procedure was to compute the eigenvalues of the strain rate tensor at every point in space, then see whether positive or negative values of $\lambda^{(2)}$ were more common. Invariably, it was found that $\lambda^{(2)}$ has a tendency to be positive, i.e., a turbulent strain field is more likely to produce pancakes than noodles. An example from a flow similar to that shown in Figures 5.3.4 and 5.3.5 is shown in Figure 5.3.12 (middle frame).
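The diagnostic itself is straightforward to sketch. The ensemble below is purely illustrative (random trace-free symmetric tensors, not turbulence data), so unlike a real strain field it shows no pancake bias; the point is only the $\lambda^{(2)}$ bookkeeping:

```python
import numpy as np

rng = np.random.default_rng(3)

def middle_eigenvalue(e):
    """Middle principal strain of a symmetric tensor e."""
    return np.sort(np.linalg.eigvalsh(e))[1]

pancakes = noodles = 0
for _ in range(1000):
    A = rng.standard_normal((3, 3))
    e = 0.5 * (A + A.T)                 # symmetric part: strain rate tensor
    e -= np.trace(e) / 3 * np.eye(3)    # enforce incompressibility: Tr(e) = 0
    if middle_eigenvalue(e) > 0:
        pancakes += 1                   # two extensions, one compression
    else:
        noodles += 1                    # two compressions, one extension

print(pancakes, noodles)   # roughly equal here; turbulence favors pancakes
```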
The rotation field is dominated by one-dimensional regions of rapid rotation as sketched in figure 5.3.1, i.e., vortices. These tend to wrap the pancakes around themselves to form what we might think of as crepes.
Fluid dynamics (as opposed to kinematics) inquires into the causes of fluid motion. Our discussion will be based on the two great theories of classical physics: Newton’s laws of motion and the laws of thermodynamics. Newton’s laws were designed to apply to rigid objects like apples and planets. How do we apply them to an object that not only accelerates but also changes its shape in response to an applied force? That challenge will occupy us for the first half of this chapter. Newton’s laws lead naturally to a consideration of kinetic and potential energy. To these concepts we will add internal energy in accordance with the laws of thermodynamics. Finally, the addition of an equation of state (through which we describe the thermodynamic properties of the specific fluid we are interested in) results in a complete set of equations that will, we hope, describe the motion of real fluids.
Our approach to applying classical laws of physics to fluid motion involves the use of conservation laws. Specifically, we assume that
• The mass of a fluid parcel does not change.
• The momentum of a fluid parcel changes due to the applied forces in accordance with Newton’s second law.
• The total energy of a parcel changes due work done and heat added in accordance with the first law of thermodynamics.
Mathematically, conservation laws for a fluid parcel take the Lagrangian form:
$\frac{D}{D t} \int_{V_{m}(t)}(\text { something }) d V=\cdots,\label{eqn:1}$
where $V_m(t)$ is the fluid parcel: a volume of space that changes in time but always contains the same fluid. (The subscript m indicates a material volume, another term for a fluid parcel.) The equation says that the net amount of some property contained by the fluid parcel (e.g., mass, momentum, energy) changes in time according to whatever is on the right hand side. While the physical meaning of Equation $\ref{eqn:1}$ is easy to understand, the result is an integro-differential equation whose solution would be quite daunting. We’ll start by establishing Leibniz’ rule, which allows us to convert Lagrangian conservation laws of the form Equation $\ref{eqn:1}$ into Eulerian partial differential equations, which we at least have some hope of solving.
06: Fluid Dynamics
Leibniz’s rule1 allows us to take the time derivative of an integral over a domain that is itself changing in time. Suppose that $f\left( \vec{x},t \right)$ is the volumetric concentration of some unspecified property we will call “stuff”. The Leibniz rule is mathematically valid for any function $f\left(\vec{x},t\right)$, but it is easiest to interpret physically if we imagine that $f$ is something per unit volume. For a concrete example, imagine that the “stuff” is air, and $f$ is then the mass of air molecules per unit volume, i.e., the density. Now consider a closed surface that can change arbitrarily in time (not a material volume, in general). Its area is $A(t)$ and the volume it encloses is $V(t)$ (Figure $1$a). The quantity of “stuff” contained in the volume at any given time is $\int_V f\left( \vec{x},t \right) dV$. That quantity can change in time in two ways.
First, the concentration $f$ may change in time, e.g., the density of the air may change due to heating or cooling. If this were the only source of change, we could write:
$\frac{d}{d t} \int_{V} f(\vec{x}, t) d V=\int_{V} \frac{\partial f}{\partial t} d V.\label{eqn:1}$
Second, the volume itself can change, for example, the volume could grow, thereby engulfing more “stuff”. Quantifying this second contribution requires a bit more thought. At any point on the boundary we define $\hat{n}$ to be the outward-pointing normal vector (Figure $1$a). The points that make up the boundary have velocity $\vec{u}_A$, which varies over the boundary and also in time as the surface evolves. The expansion velocity, $\vec{u}_A\cdot\hat{n}$, is the component of $\vec{u}_A$ that is perpendicular to the boundary and directed outward. Now consider the motion of a small surface element over a brief time $dt$ (Figure $1$b). The surface element moves by a distance $\vec{u}_A\cdot\hat{n}dt$, thereby enclosing a small volume $dV=\vec{u}_A\cdot\hat{n}dtdA$. The amount of “stuff” contained in this small volume is $f dV$, or $f\vec{u}_A\cdot\hat{n}dtdA$. If we now integrate this quantity over the whole surface, we get the amount of “stuff” engulfed (or ejected, if $\vec{u}_A\cdot\hat{n}<0$) in time $dt$: $\int_A f\vec{u}_A\cdot\hat{n}dt dA$. Dividing by $dt$ and taking the limit $dt \rightarrow 0$, we have the second term that controls the change in the amount of “stuff” enclosed by our surface $A$:
$\frac{d}{d t} \int_{V(t)} f(\vec{x}, t) d V=\int_{V(t)} \frac{\partial f}{\partial t} d V+\int_{A(t)} f \vec{u}_{A} \cdot \hat{n} d A.\label{eqn:2}$
This is the most general form of Leibniz’s rule.
Three special cases
1. If the surface is unchanging in time, then $\vec{u}_A=0$ and Equation $\ref{eqn:2}$ is the same as $\ref{eqn:1}$.
2. Suppose that $f$ is a function of only one spatial coordinate and time: $f = f(x,t)$. The integral is then an ordinary integral from, say, $x = a$ to $x = b$, but the boundaries $a$ and $b$ can vary in time (Figure $2$). In that case Leibniz’ rule becomes
$\frac{d}{d t} \int_{a(t)}^{b(t)} f(x, t) d x=\int_{a(t)}^{b(t)} \frac{\partial f}{\partial t} d x+f(b, t) \frac{d b}{d t}-f(a, t) \frac{d a}{d t}.\label{eqn:3}$
The second and third terms on the right-hand side are the contributions due to the motion of the boundaries. This is the version of Leibniz’ rule commonly found in calculus textbooks.
3. The most important case of Equation $\ref{eqn:2}$ for fluid mechanics is that in which $A(t)$ is a material surface $A_m(t)$, always composed of the same fluid particles, and $V = V_m(t)$ is therefore a material volume (or fluid parcel). In this case $\vec{u}_A$ is just $\vec{u}\left(\vec{x},t\right)$, the velocity of the motion, and the time derivative is $D/Dt$:
$\frac{D}{D t} \int_{V_{m}(t)} f(\vec{x}, t) d V=\int_{V_{m}(t)} \frac{\partial f}{\partial t} d V+\int_{A_{m}(t)} f \vec{u} \cdot \hat{n} d A. \nonumber$
Note that the time derivative is defined as $D/Dt$ because it is evaluated in a reference frame following the motion. It does not, however, have the form (5.1.3), as it does when applied to a continuous field. Now notice that, in the final term, the integrand is the dot product of the vector $f\vec{u}$ and the outward unit normal $\hat{n}$. According to the divergence theorem2, we can convert this term to a volume integral:
$\int_{A}(f \vec{u}) \cdot \hat{n} d A=\int_{V} \vec{\nabla} \cdot(f \vec{u}) d V. \nonumber$
We now have Leibniz’ rule for a material volume:3
$\frac{D}{D t} \int_{V_{m}(t)} f(\vec{x}, t) d V=\int_{V_{m}(t)}\left(\frac{\partial f}{\partial t}+\vec{\nabla} \cdot f \vec{u}\right) d V.\label{eqn:4}$
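The one-dimensional special case (Equation $\ref{eqn:3}$) lends itself to a direct symbolic check. Below is a sketch using the sympy library; the functions $f$, $a$ and $b$ are arbitrary choices of mine, not from the text:

```python
import sympy as sp

x, t = sp.symbols('x t')

# Arbitrarily chosen test functions (illustrative only):
f = x**2 * sp.cos(t)   # f(x, t)
a = sp.sin(t)          # moving lower boundary a(t)
b = 2 + t**2           # moving upper boundary b(t)

# Left-hand side: evaluate the integral first, then differentiate in t
lhs = sp.diff(sp.integrate(f, (x, a, b)), t)

# Right-hand side of the 1-D Leibniz rule (Equation 3)
rhs = (sp.integrate(sp.diff(f, t), (x, a, b))
       + f.subs(x, b) * sp.diff(b, t)
       - f.subs(x, a) * sp.diff(a, t))

assert sp.simplify(lhs - rhs) == 0
print("1-D Leibniz rule verified for this choice of f, a, b")
```

The two boundary terms are exactly the "stuff" swept up (or released) by the moving endpoints, as in the surface-integral term of Equation $\ref{eqn:2}$.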
1Gottfried Wilhelm Leibniz (1646-1716) was a German philosopher and mathematician who invented calculus independently of Isaac Newton. The notation “$d/dx$” that we use today comes from Leibniz’s version. As a philosopher, Leibniz espoused the theory of “optimism”, which holds that the universe we inhabit is the best one that God could have created. This cheery attitude is especially admirable given that Newton got all the credit for inventing calculus.
2The divergence theorem reads $\oint_A\vec{v}\cdot\hat{n}dA=\int_V\vec{\nabla}\cdot\vec{v}dV$, where $\vec{v}$ is a general vector and $\hat{n}$ is the unit normal to the closed surface $A$. See section 4.2.3 for details.
3Note that the integral on the left-hand side of Equation $\ref{eqn:4}$ depends only on time. We write its time derivative as $D/Dt = \partial/\partial t +\vec{u} \cdot \vec{\nabla}$, even though only the term $\partial/\partial t$ is nonzero, and the derivative could be written just as accurately as $d/dt$. We choose the symbol $D/Dt$ to remind ourselves that this time derivative is measured by an observer moving with the flow.
6.02: Mass conservation
In the introduction to this chapter we listed three assumptions that our theory of flow will be based on. We now have the tools to convert those assumptions into Eulerian form.
Our first assumption is that mass is neither created nor destroyed, which appears to be true absent nuclear reactions. In that case, the time derivative of the mass of a fluid parcel, taken on the parcel’s trajectory, is
$\frac{D}{D t} \int_{V_{m}(t)} \rho(\vec{x}, t) d V=0,\label{eqn:1}$
where $V_m$ is a material volume. This statement is in Lagrangian form. It is physically clear but difficult to handle mathematically. Invoking (6.1.6), we can write this equation as a volume integral:
$\int_{V_{m}(t)}\left(\frac{\partial \rho}{\partial t}+\vec{\nabla} \cdot \rho \vec{u}\right) d V=0 \quad \forall V_{m}\label{eqn:2}$
Now here is a critical point that we will encounter repeatedly: the material volume $V_m$ is chosen arbitrarily; we could just as easily choose some different volume and the integral would still be zero. So, under what circumstances can an integral be zero over every possible domain of integration? This can happen only if the integrand itself is zero everywhere. Therefore, mass conservation requires that
$\frac{\partial \rho}{\partial t}+\vec{\nabla} \cdot \rho \vec{u}=0\label{eqn:3}$
at every point. This is commonly known as the continuity equation. Note that the product $\rho\vec{u}$ in Equation $\ref{eqn:3}$ is the mass flux. The equation therefore tells us that the density at each point increases or decreases depending on whether the mass flux converges or diverges. We have now succeeded in converting the Lagrangian statement of mass conservation Equation $\ref{eqn:1}$ into the Eulerian form Equation $\ref{eqn:3}$. In other words, we have a partial differential equation which can, in principle at least, be solved.
A second form of Equation $\ref{eqn:3}$ is equally useful. We use the product rule to split the second term into two,
$\frac{\partial \rho}{\partial t}+\vec{u} \cdot \vec{\nabla} \rho+\rho \vec{\nabla} \cdot \vec{u}=0, \nonumber$
and note that the first two terms now form the material derivative of $\rho$. Thus,
$\frac{D \rho}{D t}+\rho \vec{\nabla} \cdot \vec{u}=0.\label{eqn:4}$
An important special case is that of an incompressible fluid, for which $\vec{\nabla}\cdot\vec{u}=0$. In that case, Equation $\ref{eqn:4}$ reduces to
$\frac{D \rho}{D t}=0.\label{eqn:5}$
Density can still vary in space and time, but if you follow a fluid particle along its trajectory, its density will not change.
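The equivalence of the flux form Equation $\ref{eqn:3}$ and the material-derivative form Equation $\ref{eqn:4}$ is a pure product-rule identity, which can be checked symbolically. A sketch using sympy, with $\rho$ and $\vec{u}$ left as arbitrary functions:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
rho = sp.Function('rho')(x, y, z, t)
u = [sp.Function(f'u{i}')(x, y, z, t) for i in (1, 2, 3)]
X = (x, y, z)

# Flux form (Equation 3): d(rho)/dt + div(rho u)
flux_form = sp.diff(rho, t) + sum(sp.diff(rho*u[i], X[i]) for i in range(3))

# Material-derivative form (Equation 4): D(rho)/Dt + rho div(u)
Drho_Dt = sp.diff(rho, t) + sum(u[i]*sp.diff(rho, X[i]) for i in range(3))
mat_form = Drho_Dt + rho*sum(sp.diff(u[i], X[i]) for i in range(3))

# The two forms differ only by the product rule
assert sp.simplify(flux_form - mat_form) == 0
```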
The second assumption is that momentum changes only in accordance with Newton’s second law, i.e., the rate of change of momentum of a fluid parcel equals the sum of forces applied. This assumption must be modified for relativistic flows, which occur in solar flares and supernovae, for example, but is otherwise very nearly exact.
6.3.1 Forces acting on a fluid parcel
We can write Newton’s second law for a material volume $V_m(t)$ as an integro-differential equation analogous to Equation 6.2.1:
$\frac{D}{D t} \int_{V_{m}} \rho \vec{u} d V=\sum_n \overrightarrow{\mathscr{F}}^{[n]}.\label{eqn:1}$
The superscript “$\left[ n \right]$” is a label to enumerate the various forces that may be active. While the physical meaning of Equation $\ref{eqn:1}$ is clear, its mathematical solution is less obvious. Our strategy is to convert it to a partial differential equation, as we did with mass conservation. First, we need to specify the individual forces $\vec{\mathscr{F}}^{[n]}$ acting on the parcel $V_m$. These fall into two categories: body forces and contact forces, which we now describe.
• Body forces act throughout the fluid parcel. The only body force we’ll be concerned with here is gravity.1 We’ll assume that the gravitational field is uniform and constant, and hence can be represented by a vector $\vec{g}$ with units of acceleration (force per unit mass). The force per unit volume is then $\vec{F}=\rho \vec{g}$. To get the net gravitational force on a fluid parcel, we simply integrate the force per unit volume over the volume. The $j$-component is
$\int_{V_{m}} \rho g_{j} d V \nonumber$
We often work in gravity-aligned coordinates, where the direction of gravity is taken to be $-\hat{e}^{(z)}$ (i.e., “down”), so that $\rho\vec{g}$ becomes $-\rho g \hat{e}^{(z)}$, $g$ being the magnitude of $\vec{g}$. In this case the $j$-component of the gravitational force per unit volume is $F_j=-\rho g \delta_{j3}$. On the Earth’s surface, a typical value for $g$ is 9.81 m s$^{-2}$.
• Contact forces act only at the boundary of the fluid parcel. They are the macroscopic expression of intermolecular forces acting between molecules on opposite sides of the boundary. The mathematical description of these forces is one of the primary challenges of fluid dynamics, and we must overcome it before we can apply Equation $\ref{eqn:1}$. The next subsection is devoted to the Cauchy stress tensor, through which we incorporate contact forces into Equation $\ref{eqn:1}$.
6.3.2 The Cauchy stress tensor
Imagine pushing on a solid object with your fingertip, causing it to move. You are only actually pushing on a small part of the object, namely the part in direct contact with your finger, yet the whole object moves. How? Obviously the molecules pushed directly by your finger subsequently push their neighbors, which in turn push their neighbors, and so on. If the object is a rigid solid, the force is transmitted so that all parts of the object accelerate together, greatly simplifying the application of Newton’s second law. But if the object is able to change its shape in response to your push, application of the second law presents a highly interesting challenge!
Stress is the means by which forces are transmitted through the interior of a continuous medium. A fluid is one example of a continuous medium, but the theory applies to elastic solids as well. Stress is the macroscopic expression of forces that act directly between neighboring molecules. It is the force per unit area $\vec{f}$ acting between molecules on either side of an arbitrary surface, which may be at a physical boundary of the medium (e.g., the ocean surface) or may be an imaginary interior surface (Figure $1$).
The stress vector $\vec{f} \left( \vec{x}, t, \hat{n}\right)$ represents the force per unit area acting at a point $\vec{x}$ on a surface where the unit normal is $\hat{n}$ (Figure $1$). It is useful to think of the force as the resultant of two separate forces, one normal to the plane (Figure $2$, shown in blue) and one tangential (green). The normal component is obtained by projecting $\vec{f}$ onto the normal vector: $\left( \vec{f} \cdot \hat{n}\right)\hat{n}$. The tangential component is what remains when we subtract the normal component: $\vec{f}-\left( \vec{f} \cdot \hat{n} \right)\hat{n}$.
Some nomenclature: A normal stress may be referred to as compression (if it is directed toward the plane) or tension2 (if it is directed away from the plane, as in Figure $2$). A tangential stress may also be called a transverse, or a shear stress. At the ocean surface, for example, the atmosphere exerts a compressive normal stress (pressure) and a tangential stress (wind).
Consider the simple case of stress on a coordinate plane, a plane perpendicular to one of the basis vectors. For example, the plane labelled 1 in Figure $3$a (shown in blue) is perpendicular to the coordinate basis vector $\hat{e}^{(1)}$. The stress (force per unit area) acting on this surface is defined as the vector $\vec{f}^{(1)}$. The normal stress is $f^{(1)}_1 \hat{e}^{(1)}$; the tangential stress is $f_2^{(1)}\hat{e}^{(2)}+f_3^{(1)}\hat{e}^{(3)}$. Analogous vectors can be defined for the other two coordinate planes.
The three vectors $\vec{f}^{(i)}$ have a total of 9 components which may be collected to form an array we will call the Cauchy array3:
$\tau_{i j}=f_{j}^{(i)}.\label{eqn:2}$
Each row of the array is one of the vectors $\vec{f}^{(i)}$ (Figure $3$b). The first index of $\tau_{ij}$ denotes the coordinate plane that the force acts on. The second index denotes the direction of the force component. Each element on the main diagonal represents a normal stress, while off-diagonal elements represent the tangential stresses.
Reversing Equation $\ref{eqn:2}$, we can write the $j$’th component of the force on the coordinate plane whose unit normal is $\hat{n}$ as
$f_{j}=\tau_{i j} n_{i}.\label{eqn:3}$
For example, if the unit normal is $\hat{e}^{(1)}$, then $f_j = \tau_{ij}\delta_{i1}= \tau_{1j}$.
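Equation $\ref{eqn:3}$, together with the normal/tangential decomposition described above, is easy to exercise numerically. A numpy sketch with an arbitrary symmetric stress tensor and an arbitrary unit normal (the numbers are mine, purely illustrative):

```python
import numpy as np

# An arbitrary symmetric stress tensor (illustrative values only)
tau = np.array([[3.0, 1.0, 0.5],
                [1.0, 2.0, 0.0],
                [0.5, 0.0, 4.0]])

# Unit normal of the plane of interest (arbitrary direction, normalized)
n = np.array([1.0, 2.0, 2.0]) / 3.0

# Stress vector on the plane: f_j = tau_ij n_i
f = tau.T @ n   # equivalently tau @ n here, since tau is symmetric

# Decompose into normal and tangential parts
f_normal = (f @ n) * n
f_tangential = f - f_normal

# The tangential part is perpendicular to n, and the parts sum to f
assert abs(f_tangential @ n) < 1e-12
assert np.allclose(f_normal + f_tangential, f)
```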
Now suppose we could show that the Cauchy array actually transforms as a tensor. If that were true, then Equation $\ref{eqn:3}$ could describe the force acting on any plane, because we could rotate the coordinates so that the plane in question becomes a coordinate plane and Equation $\ref{eqn:3}$ is valid by definition. The proof that the Cauchy array is a tensor is somewhat involved. It is described in detail in appendix F. Here we give a simplified version.
Consider the net force acting on a fluid particle (a fluid parcel of infinitesimal size). In the limit as the mass becomes small, any nonzero net force results in infinite acceleration, contrary to what we observe. Therefore, the net force on a fluid particle must be zero. This is called the requirement of local equilibrium.
Consider a fluid parcel with the shape of a right triangle of uniform thickness, as shown in Figure $4$a. Look first at the left-hand face. The length of the face is $\ell_1$, and its outward unit normal (blue arrow) is $-\hat{e}^{(1)}$. Therefore, Equation $\ref{eqn:3}$ tells us that the force per unit area acting on the left-hand face is $-\tau_{1j}\hat{e}^{(j)}$. Similarly, the bottom face has length $\ell_2$, outward normal $-\hat{e}^{(2)}$, and force/area $-\tau_{2j}\hat{e}^{(j)}$. Finally, let the hypotenuse have length $h$ and force/area $\vec{f}$.
In local equilibrium, those forces balance. For the $j$th component,
$f_{j} h=\tau_{1 j} \ell_{1}+\tau_{2 j} \ell_{2}. \nonumber$
(We have assumed that the thickness of the shape is uniform and therefore cancels out.) Now, note that $\ell_1 = h\sin \theta$ and $\ell_2 = h\cos \theta$; hence
$f_{j} h=\tau_{1 j} h \sin \theta+\tau_{2 j} h \cos \theta. \nonumber$
Dividing by $h$ and noting that $\hat{n}$ has components $\{\sin \theta, \cos \theta\}$, we can write this as
$f_{j}=\tau_{1 j} n_{1}+\tau_{2 j} n_{2}, \nonumber$
which is equivalent to Equation $\ref{eqn:3}$. We conclude that the stress acting at a point obeys Equation $\ref{eqn:3}$.
Equivalently, we can say that the array $\underset{\sim}{\tau}$ transforms as a 2nd-order tensor: $\tau^\prime_{ij}= \tau_{kl}C_{ki}C_{lj}$. We call it the Cauchy stress tensor, or just the stress tensor. It describes a frame-independent relationship between two vectors: the force on a surface per unit area and the unit normal to that surface.
We can also show that the stress tensor must be symmetric. Consider the tangential stresses acting on two adjacent sides of a square (Figure $4$b). The torques exerted by these stresses act oppositely. Like the net force, the net torque must vanish as the size of the square goes to zero; otherwise there would be infinite angular acceleration. This requires that $\tau_{12}$ and $\tau_{21}$ be equal. By extending the argument to all three dimensions, we can show that the stress tensor at a point is symmetric (appendix F).
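Because the stress tensor is real and symmetric, its eigenvalues are real and its eigenvectors mutually orthogonal (the principal stresses and principal axes discussed in this section). A small numerical sketch with numpy, using an arbitrary symmetric tensor of my own choosing:

```python
import numpy as np

# Symmetric stress tensor (arbitrary illustrative values)
tau = np.array([[3.0, 1.0, 0.5],
                [1.0, 2.0, 0.0],
                [0.5, 0.0, 4.0]])

# eigh is designed for symmetric matrices: it returns real eigenvalues
# (the principal stresses) and orthonormal eigenvectors (columns of V,
# the principal axes of stress)
principal_stresses, V = np.linalg.eigh(tau)

# The principal axes are mutually orthogonal
assert np.allclose(V.T @ V, np.eye(3))

# Rotating into the principal frame diagonalizes the tensor:
# the stress there is purely normal
tau_principal = V.T @ tau @ V
assert np.allclose(tau_principal, np.diag(principal_stresses))
```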
The symmetry of the stress tensor has three important consequences:
• The stress tensor has just six independent elements: three normal, and three tangential, to the coordinate planes.
• The formula Equation $\ref{eqn:3}$ for the stress vector can be written in the equivalent form
$f_{i}=\tau_{i j} n_{j},\label{eqn:4}$

• Like the strain rate tensor (sections 5.3.3, 5.3.4), the stress tensor is a 2nd-order, real, symmetric tensor. It therefore has real eigenvalues, called the principal stresses, and real, orthogonal eigenvectors, the principal axes of stress. In the reference frame of the principal axes, the stress is entirely normal.

6.3.3 Cauchy’s equation

The net contact force on a fluid parcel is the area integral of the force per unit area (defined in Equation $\ref{eqn:3}$) over the boundary:

$\int_{A_{m}} f_{j} d A=\int_{A_{m}} \tau_{i j} n_{i} d A. \nonumber$
The generalized divergence theorem (4.2.12) shows that the right-hand side is equal to the volume integral of the divergence of the stress tensor:
$\int_{V_{m}} \frac{\partial \tau_{i j}}{\partial x_{i}} d V. \nonumber$
The sum of the forces appearing on the right-hand side of Equation $\ref{eqn:1}$ can therefore be written entirely in terms of volume integrals:
$\frac{D}{D t} \int_{V_{m}} \rho u_{j} d V=\int_{V_{m}}\left(\rho g_{j}+\frac{\partial \tau_{i j}}{\partial x_{i}}\right) d V.\label{eqn:5}$
Now how about the left-hand side? We can convert it to a volume integral using Equation 6.1.6 as we did in the case of mass conservation (section 6.2), but there is a very useful lemma that will make the result simpler.
Lemma
Let $\chi$ be any quantity. Using Leibniz’s rule applied to a material volume
$\frac{D}{D t} \int_{V_{m}} \rho \chi d V=\int_{V_{m}}\left[\frac{\partial}{\partial t} \rho \chi+\vec{\nabla} \cdot(\rho \chi \vec{u})\right] d V. \nonumber$
Expanding via the product rule, this becomes
$\int_{V_{m}}\left[\rho \frac{\partial \chi}{\partial t}+\underline{\chi \frac{\partial \rho}{\partial t}}+\underline{\chi \vec{\nabla} \cdot(\rho \vec{u})}+\rho \vec{u} \cdot \vec{\nabla} \chi\right] d V. \nonumber$
The second and third terms (underlined) add up to zero, as we can see by removing the common factor $\chi$ and recognizing that what’s left is the left-hand side of the mass equation 6.2.3. This leaves us with
$\int_{V_{m}}\left[\rho \frac{\partial \chi}{\partial t}+\rho(\vec{u} \cdot \vec{\nabla}) \chi\right] d V, \nonumber$
which you will recognize as
$\int_{V_{m}} \rho \frac{D \chi}{D t} d V. \nonumber$
Combining, we have
$\frac{D}{D t} \int_{V_{m}} \rho \chi d V=\int_{V_{m}} \rho \frac{D \chi}{D t} d V.\label{eqn:6}$
Note that this equality follows only from conservation of mass and Leibniz’s rule. Therefore, $\chi$ can be either a scalar or a component of a vector or tensor.
Note also that Equation $\ref{eqn:6}$ is most useful when $\chi$ is a specific concentration, i.e., the amount of “something” per unit mass of fluid. In that case, $\rho \chi$ is the amount per unit volume (or volumetric concentration) and the volume integral is the total amount in the fluid parcel.
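The reduction behind Equation $\ref{eqn:6}$ — that the Leibniz integrand collapses to $\rho\,D\chi/Dt$ once continuity is invoked — can be verified symbolically. A one-dimensional sympy sketch (the restriction to one dimension is mine, for brevity):

```python
import sympy as sp

x, t = sp.symbols('x t')
rho = sp.Function('rho')(x, t)
u = sp.Function('u')(x, t)
chi = sp.Function('chi')(x, t)

# Integrand produced by Leibniz's rule for D/Dt of integral(rho*chi)
expanded = sp.diff(rho*chi, t) + sp.diff(rho*chi*u, x)

# rho times the material derivative of chi
rho_Dchi_Dt = rho*(sp.diff(chi, t) + u*sp.diff(chi, x))

# The difference is exactly chi times the continuity equation,
# which vanishes for a mass-conserving flow
continuity = sp.diff(rho, t) + sp.diff(rho*u, x)
assert sp.simplify(expanded - rho_Dchi_Dt - chi*continuity) == 0
```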
Using Equation $\ref{eqn:6}$ with $u_j$ as $\chi$, we can rewrite Equation $\ref{eqn:5}$ as
$\frac{D}{D t} \int_{V_{m}} \rho u_{j} d V=\int_{V_{m}} \rho \frac{D u_{j}}{D t} d V=\int_{V_{m}} \rho g_{j} d V+\int_{V_{m}} \frac{\partial \tau_{i j}}{\partial x_{i}} d V, \nonumber$
or
$\int_{V_{m}}\left\{\rho \frac{D u_{j}}{D t}-\rho g_{j}-\frac{\partial \tau_{i j}}{\partial x_{i}}\right\} d V=0 \quad \forall V_{m}. \nonumber$
Now remember that the boundary of our fluid parcel $V_m$ is chosen arbitrarily, so this volume integral must be zero for any fluid parcel we might choose. That can only be true if the integrand is identically zero. From this, we get the partial differential equation expressing Newton’s second law for a deformable medium, called Cauchy’s equation:
$\rho \frac{D u_{j}}{D t}=\rho g_{j}+\frac{\partial \tau_{i j}}{\partial x_{i}}.\label{eqn:7}$
6.3.4 Stress and strain in a Newtonian fluid
Cauchy’s equation applies not only to fluids but also to elastic materials such as rock, metal and ice. The same theory is therefore used by seismologists, engineers, glaciologists and many others. But now we must part company with these colleagues and press on into the realm of fluids. Note that the equations we have are not closed. Cauchy’s equation represents 3 PDEs, and we already had one from mass conservation, for a total of four. But there are more unknowns than this: density, three components of velocity, plus the six independent elements of the stress tensor. To close this system, we must specify in more detail the physical nature of the material we’re talking about, and that will lead us to the idea of a “Newtonian” fluid.
We define a fluid as a material that flows in response to a shear (or transverse) stress (see also 1.2). If you apply a shear stress to a solid object it may bend or even break, but it will not flow. If it flows in response to a shear stress, it’s a fluid. This includes air, water, paint, blood, ketchup, sand, ... all of which flow in response to a shear stress, though not all in the same way. We’ll specify the stress tensor in two stages, treating first a stationary (motionless) fluid, then introducing motion.
Stress in a fluid at rest
The condition for a fluid to be motionless is that there be no shear stress. In other words, the stress tensor must be diagonal, since shear stresses appear off the main diagonal. Notice, though, that the existence of shear stress depends on the choice of coordinates. Every stress tensor is diagonal in its principal coordinate frame (like the strain rate tensor as discussed in section 5.3.4). But if we rotate away from those axes, shear terms may appear off the main diagonal.
So, for a fluid to be motionless, there can be no shear stresses in any coordinate frame. This requires that the stress tensor be diagonal in every coordinate frame. This condition is satisfied only by tensors proportional to the identity tensor $\underset{\sim}{\delta}$ (see homework problem 28), meaning that the principal stresses must all be equal. So, we’ll write the stress tensor for a motionless fluid as $\underset{\sim}{\delta}$ times some scalar, which is the common value of the principal stresses. It will turn out to be most convenient if we build a minus sign into the definition of that scalar, so $\underset{\sim}{\tau} = -p\underset{\sim}{\delta}$. We’ll soon recognize that the scalar $p$ is a familiar quantity, namely the pressure. Note that the corresponding stress vector is
$f_{j}=\tau_{i j} n_{i}=-p \delta_{i j} n_{i}=-p n_{j}, \nonumber$
or
$\vec{f}=-p \hat{n}. \nonumber$
The action of pressure on any surface is normal to the surface itself.
Substituting this stress tensor into Cauchy’s equation, we have:
$\rho \frac{D u_{j}}{D t}=\rho g_{j}-\frac{\partial p}{\partial x_{j}}, \nonumber$
and specifying $\vec{u} = 0$,
$\frac{\partial p}{\partial x_{j}}=\rho g_{j}, \quad \text { or } \quad \vec{\nabla} p=\rho \vec{g}. \nonumber$
This is the state of hydrostatic equilibrium: a balance between pressure and gravity that allows a fluid to remain motionless. In gravity-aligned coordinates, $g_j = -g\delta_{j3}$, so:
$\frac{\partial p}{\partial x_{j}}=-\rho g \delta_{j 3}. \nonumber$
The pressure cannot vary in the two horizontal directions ($j$ = 1,2, perpendicular to gravity), but varies in the vertical ($j$ = 3) as prescribed by:
$\frac{\partial p}{\partial z}=-\rho g.\label{eqn:8}$
For many applications it is useful to integrate the hydrostatic equation and thereby solve for pressure:
$p=\int_{z}^{\infty} \rho g d z.\label{eqn:9}$
This tells us that the pressure at height $z$ equals the weight of all fluid above $z$. In the atmosphere, $\rho \sim 1.2$ kg m$^{-3}$ at sea level and decreases to unmeasurably small values a few tens of kilometers up. The SI unit for pressure is newtons per square meter, i.e., force per unit area (which makes sense because it is a stress). An abbreviation for this unit is the pascal (Pa). At sea level, the mean atmospheric pressure is about $10^5$ Pa, which is sometimes referred to as one bar. Meteorologists express pressure in millibars, so mean surface pressure is 1000 mbar. The old unit for pressure, which is still in common use, is pounds per square inch, or psi. Mean atmospheric pressure at the surface is about 14.7 psi.
A well-inflated car tire holds about 28-30 psi in excess of the external (atmospheric) pressure, while pressures in bike tires can be as much as 130 psi. Pressurized tires maintain their shape thanks to Pascal’s Law: pressure acts equally in all directions. This is just another way to say that, in hydrostatic equilibrium, the stress tensor is isotropic. The force (per unit area) is $f_j = n_i\tau_{ij} = n_i(-p\delta_{ij}) = -pn_j$, so it acts opposite to the outward normal from any surface (real or fictitious) regardless of its orientation.
In oceanography, we usually define a Cartesian coordinate system such that $z$ = 0 is at the ocean surface, and $z$-values in the ocean are therefore negative. The pressure is given by Equation $\ref{eqn:9}$, but we can now break the range of integration into two parts: the integral from $z$ to 0 (the ocean surface) and the integral from 0 to infinity, which is just atmospheric pressure at the surface. So pressure beneath the ocean surface is the sum of atmospheric pressure and the weight of the water above a given depth:
$p=\int_{z}^{0} \rho_{w a t e r} g d z+\int_{0}^{\infty} \rho_{a i r} g d z.\label{eqn:10}$
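As a rough numerical illustration of Equation $\ref{eqn:10}$, with both densities treated as constant (the seawater density and surface pressure below are typical values I have assumed, not figures from the text):

```python
# Hydrostatic pressure beneath the ocean surface, constant-density sketch
g = 9.81            # m s^-2
rho_water = 1025.0  # kg m^-3, a typical seawater density (assumed)
p_atm = 1.013e5     # Pa, mean sea-level atmospheric pressure

def pressure(depth_m):
    """Pressure (Pa) at a given depth below the ocean surface."""
    return p_atm + rho_water * g * depth_m

# At roughly 10 m depth, pressure is about double its surface value
print(pressure(10.0) / p_atm)
```

This reproduces the diver's rule of thumb that every 10 m of seawater adds about one atmosphere of pressure.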
Stress in a moving fluid
“Air, I should explain, becomes wind when it is agitated.” - Titus Lucretius Carus, On the Nature of Things
We now allow our fluid to flow. To begin, we’ll define a new quantity, a tensor expressing the added stress due to the fact that the fluid is in motion, i.e.
$\tau_{i j}=-p \delta_{i j}+\sigma_{i j}.\label{eqn:11}$
This new tensor $\sigma$ is called the “deviatoric” stress tensor. Despite the complicated name, this stress is familiar; it’s basically just friction.4
And how do we write the deviatoric stress tensor? To do that, we need to specify the kind of fluid we’re talking about, because deviatoric stress is different in different fluids. Remembering that deviatoric stress is just friction, you can imagine that it would be different in gooey maple syrup than in thin air, and the differences actually get more complicated than that.
We will now make a sequence of three assumptions that define a hypothetical substance called Newtonian fluid. These will be labelled N1, N2 and N3.
Assumption N1: The deviatoric stress depends only on the velocity gradient tensor. Is this assumption plausible? Yes, it makes intuitive sense that friction should depend on differences of velocity between different fluid parcels.
Assumption N2: Deviatoric stress is a linear function of the velocity gradients. This is best understood in terms of a counterexample: paint (Figure $5$). Paint is manufactured so that, when you apply stress to it using your paintbrush, it flows easily. But after it’s on the wall, the stress applied by gravity tends to make it run (i.e., strain), and you want to minimize that. So good quality paint has a nonlinear relationship between stress and strain, maximizing strain under the strong stress of the brush, but minimizing it under the weak stress of gravity. The same is true of ketchup. In an old-fashioned glass bottle, mild shaking causes the ketchup to stay right where it is. But if you shake harder, you reach a threshold where suddenly the ketchup flows, typically onto your shirt. So the assumption we’re making here is that we’re NOT talking about paint or ketchup, we’re talking about a fluid in which strain increases with stress at a uniform rate. In this case we can write the stress-strain relationship using a 4-dimensional tensor:
$\sigma_{i j}=K_{i j k l} \frac{\partial u_{k}}{\partial x_{l}}.\label{eqn:12}$
You can also think of this as the first term in a Taylor series approximation of a more general function consistent with N1. In that case, we expect Equation $\ref{eqn:12}$ to be valid for velocity gradients that are not too intense.
But how do we write $K$? Note that we seem to be going in the wrong direction: we started with 10 unknowns, and we have just added another 81! Bear with me.
Assumption N3: The fluid is “frictionally isotropic”, i.e., friction acts the same in every direction. A counterexample to this would be blood, which has platelets in it that cause the fluid to flow more easily in some directions than in others. So, if we are not dealing with paint, ketchup, blood or the like, we can assume that the deviatoric stress and the strain are connected by a 4-dimensional tensor $K$ that is isotropic. In that case, the most general form for $K$ is
$K_{i j k l}=\lambda \delta_{i j} \delta_{k l}+\mu \delta_{i k} \delta_{j l}+\gamma \delta_{i l} \delta_{j k},\label{eqn:13}$
where $\lambda$, $\mu$ and $\gamma$ are scalars (appendix C).
So a Newtonian fluid is defined by the assumption that the deviatoric stress tensor is a linear function of the velocity gradient tensor, and that the 4-dimensional tensor of proportionality constants is isotropic. How does this make life better? Well, we’ve just reduced 81 unknowns to 3: $\lambda$, $\mu$ and $\gamma$. But we can do even better.
We already know that the stress tensor is symmetric, and therefore so is the deviatoric stress tensor. From Equation $\ref{eqn:12}$, it follows that $\underset{\sim}{K}$ is symmetric in its first and second indices. Knowing this, we can now show that $\gamma$ and $\mu$ are the same. Working from Equation $\ref{eqn:13}$, we write
\begin{aligned} K_{i j k l}-K_{j i k l} &=0 \\ &=\lambda\left(\delta_{i j}-\delta_{j i}\right) \delta_{k l}+\mu\left(\delta_{i k} \delta_{j l}-\delta_{j k} \delta_{i l}\right)+\gamma\left(\delta_{i l} \delta_{j k}-\delta_{j l} \delta_{i k}\right). \end{aligned} \nonumber
The first term on the right-hand side is zero because $\delta$ is symmetric. The products of $\delta$’s in the second term are the same as those in the third, just reversed; hence:
$K_{i j k l}-K_{j i k l}=(\mu-\gamma)\left(\delta_{i k} \delta_{j l}-\delta_{j k} \delta_{i l}\right)=0.\label{eqn:14}$
Note that this equation holds for every combination $ijkl$. The expression $\delta_{ik}\delta_{jl}-\delta_{jk}\delta_{il}$ on the right hand side will be zero for some such combinations, but not for all. The only way that Equation $\ref{eqn:14}$ can hold for all $ijkl$ is if $\mu - \gamma = 0$.
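The claim that $\delta_{ik}\delta_{jl}-\delta_{jk}\delta_{il}$ vanishes for some index combinations but not for all can be confirmed by brute-force enumeration. A quick numpy sketch:

```python
import numpy as np

d = np.eye(3)  # the Kronecker delta as a 3x3 identity matrix

# Evaluate delta_ik delta_jl - delta_jk delta_il for every i, j, k, l
vals = np.array([[[[d[i, k]*d[j, l] - d[j, k]*d[i, l]
                    for l in range(3)] for k in range(3)]
                  for j in range(3)] for i in range(3)])

# Zero for some combinations (any i = j forces it to vanish) ...
assert vals[0, 0, 0, 0] == 0
# ... but not for all, e.g. i=1, j=2, k=1, l=2 (0-based indices 0,1,0,1):
assert vals[0, 1, 0, 1] == 1
# Hence Equation 14 can hold for every ijkl only if mu - gamma = 0.
```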
As a result, Equation $\ref{eqn:13}$ becomes
$K_{i j k l}=\lambda \delta_{i j} \delta_{k l}+\mu\left(\delta_{i k} \delta_{j l}+\delta_{i l} \delta_{j k}\right),\label{eqn:15}$
eliminating one more unknown.
We can now substitute Equation $\ref{eqn:15}$ in Equation $\ref{eqn:12}$:
\begin{aligned} \sigma_{i j} &=K_{i j k l} \frac{\partial u_{k}}{\partial x_{l}} \\ &=\lambda \delta_{i j} \delta_{k l} \frac{\partial u_{k}}{\partial x_{l}}+\mu\left(\delta_{i k} \delta_{j l}+\delta_{i l} \delta_{j k}\right) \frac{\partial u_{k}}{\partial x_{l}} \\ &=\lambda \delta_{i j} \frac{\partial u_{k}}{\partial x_{k}}+\mu\left(\frac{\partial u_{i}}{\partial x_{j}}+\frac{\partial u_{j}}{\partial x_{i}}\right) \\ &=\lambda \delta_{i j} e_{k k}+2 \mu e_{i j}. \end{aligned} \nonumber
We now substitute this result into Equation $\ref{eqn:11}$, arriving at
$\tau_{i j}=-p \delta_{ij}+\lambda \delta_{ij} e_{k k}+2 \mu e_{i j} \label{eqn:16}$
This is an example of a constitutive relation. A fluid that obeys this particular constitutive relation is called a Newtonian fluid. Air and water are Newtonian to a very close approximation; ketchup is not. Before we substitute our stress tensor back into the Cauchy equation, let us look at the two new parameters that have appeared: $\mu$ and $\lambda$. Each represents a different kind of friction. The term involving $\mu$ contains $e_{ij}$, which you will recall is the symmetric part of the velocity gradient tensor. The quantity $\mu$ is called the dynamic viscosity. Dynamic viscosity depends on temperature, but in many real-life situations it is nearly constant. Typical values are
$\mu=\left\{\begin{array}{ll} 10^{-3} \mathrm{kg} \mathrm{m}^{-1} \mathrm{s}^{-1}, & \text {in water } \ 1.7 \times 10^{-5} \mathrm{kg} \mathrm{m}^{-1} \mathrm{s}^{-1}, & \text {in air.} \end{array}\right.\label{eqn:17}$
The quantity $\lambda$ is called the “second viscosity’’. As with pressure, the stress involving $\lambda$ is proportional to the identity tensor, and therefore involves purely normal stresses. Moreover, it is proportional to $e_{kk} = \vec{\nabla}\cdot\vec{u}$. This friction responds to either expansion or contraction of the fluid, and is zero in an incompressible fluid. The second viscosity term is negligible in most applications to the atmosphere and oceans. Before long we are going to assume that it is zero and forget about it, but we can easily go back and retrieve it should we ever want to.
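As a concrete illustration, the sketch below evaluates the Newtonian stress tensor of Equation $\ref{eqn:16}$ for a simple shear flow $\vec{u}=(\gamma y, 0, 0)$. The pressure and shear rate are made-up values; the viscosity is the typical water value quoted above:

```python
import numpy as np

# Illustrative values: water's dynamic viscosity, a made-up pressure,
# and a made-up shear rate gamma = 2 s^-1.
mu, lam = 1.0e-3, 0.0      # kg m^-1 s^-1; second viscosity neglected
p = 101325.0               # Pa

# Velocity gradient of the simple shear flow u = (gamma*y, 0, 0)
grad_u = np.zeros((3, 3))
grad_u[0, 1] = 2.0         # du/dy = gamma

e = 0.5 * (grad_u + grad_u.T)            # strain rate tensor e_ij
tau = (-p * np.eye(3)                    # isotropic pressure part
       + lam * np.trace(e) * np.eye(3)   # second-viscosity part (zero here)
       + 2.0 * mu * e)                   # dynamic-viscosity part

print(tau[0, 1])   # shear stress mu*gamma = 0.002 Pa
```

The diagonal entries are just $-p$, while the only frictional stress is the off-diagonal shear component $\mu\gamma$, and the tensor is symmetric, as required.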
6.3.5 The Navier-Stokes equation
Now we can substitute the Newtonian stress tensor Equation $\ref{eqn:16}$ into Cauchy’s equation Equation $\ref{eqn:7}$. For the special case of uniform $\lambda$ and $\mu$, the result is
\begin{align} \rho \frac{D u_{j}}{D t} &=\rho g_{j}+\frac{\partial}{\partial x_{i}} \tau_{i j} \ &=\rho g_{j}+\frac{\partial}{\partial x_{i}}\left[-p \delta_{i j}+\lambda \delta_{i j} e_{k k}+\mu\left(\frac{\partial u_{i}}{\partial x_{j}}+\frac{\partial u_{j}}{\partial x_{i}}\right)\right] \ &=\rho g_{j}-\frac{\partial p}{\partial x_{j}}+\lambda \frac{\partial}{\partial x_{j}} e_{k k}+\mu \frac{\partial^{2} u_{j}}{\partial x_{i}^{2}}+\mu \frac{\partial^{2} u_{i}}{\partial x_{j} \partial x_{i}}\label{eqn:18} \end{align} \nonumber
This is called the Navier-Stokes equation, and it’s the central equation of fluid dynamics. In vector form it looks like this:
$\rho \frac{D \vec{u}}{D t}=\rho \vec{g}-\vec{\nabla} p+\mu \nabla^{2} \vec{u}+(\lambda+\mu) \vec{\nabla}(\vec{\nabla} \cdot \vec{u})\label{eqn:19}$
The final two terms on the right-hand side have been interchanged for clarity. For incompressible flow, the final term vanishes.
To summarize, the Navier-Stokes equation Equation $\ref{eqn:19}$ describes how the velocity at a point evolves in response to four forces, corresponding to the four terms on its right-hand side:
• Gravity acts just as it does on solids.
• The pressure gradient force acts oppositely to the gradient of the pressure, pushing fluid from high- to low-pressure regions.
• The viscosity term represents the effect of friction as neighboring molecules collide. This tends to smooth out gradients in the velocity.
• The fourth term, nonzero only in compressible flow, describes viscous resistance to either expansion or contraction.
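The balance among these terms can be checked numerically. For steady plane Poiseuille flow between two parallel walls (a standard exact solution, with made-up channel parameters here), only the pressure-gradient and viscous terms survive, and a finite-difference sketch verifies that they cancel:

```python
import numpy as np

# Steady plane Poiseuille flow between walls at y = 0 and y = h:
# u(y) = (G/(2*mu)) * y * (h - y), driven by a pressure gradient dp/dx = -G.
# For this steady, incompressible flow the Navier-Stokes balance reduces to
# 0 = -dp/dx + mu * d2u/dy2, verified here by finite differences.
mu, G, h = 1.0e-3, 0.5, 0.01          # illustrative channel parameters
y = np.linspace(0.0, h, 201)
u = G / (2.0 * mu) * y * (h - y)

dudy = np.gradient(u, y)
d2udy2 = np.gradient(dudy, y)
dpdx = -G
residual = -dpdx + mu * d2udy2        # ~0 at interior grid points
```

The residual vanishes (to rounding error) away from the boundary points, where one-sided differences are less accurate.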
Test your understanding: It is important that you understand the assumptions that had to be made to arrive at Equation $\ref{eqn:19}$. Go back through section 6.3 and identify the parts that were essential for this argument. Then list as many circumstances as you can think of in which Equation $\ref{eqn:19}$ would not be valid.
6.3.6 Eddy viscosity
We think of viscosity as being due to molecular forces but it is also useful to think of it as being due to turbulence. Turbulence can be considered a phenomenon separate from the “main” flow. For example, flow over an airplane wing is smooth up to a certain speed, after which it develops little swirls and eddies. The “main” flow still goes over the wing, but it has this added, small-scale component. Turbulent eddies exert a drag on the plane, very similar to friction. To the pilot, it feels like the air has become more viscous. Models of this effect are essential in the design of airplane wings. Turbulence modeling is also necessary in every atmospheric and ocean model.
To use such a model, we define an “effective” viscosity (or eddy viscosity) due to turbulence. This “viscosity” is usually much greater than the molecular viscosity. In the simplest models, eddy viscosity is assumed to be a constant. But it is much more accurate to let eddy viscosity vary in space and time in response to changes in the large-scale flow. This requires re-deriving Equation $\ref{eqn:18}$ and Equation $\ref{eqn:19}$ with derivatives of $\mu$ and $\lambda$ retained. A further generalization recognizes that turbulence may not be isotropic. In that case, assumption N3 must be revisited. We will normally assume that eddy viscosity is a scalar constant, like molecular viscosity.
1Electromagnetic forces are another example of a body force. Their inclusion leads one into the fascinating realm of magnetohydrodynamics, which is unfortunately beyond our scope here (but see Choudhuri 1996).
2This is the origin of the word “tensor”.
3Augustin-Louis Cauchy (1789-1857) was a French mathematician who pioneered group theory and complex analysis, as well as many areas of mathematical physics including the one we are interested in here: the mechanics of deformable materials. He is reputed to have more concepts and theorems named after him than any other mathematician, and you will see evidence of this.
4Note also that, in this definition, the pressure $p$ need not be in hydrostatic balance.
By assuming that mass and momentum are conserved, we have developed equations for density and flow velocity. To arrive at a closed set of equations, we must also invoke conservation of energy. We’ll do this in a rather roundabout fashion. The discussion of energy conservation leads us to an intuitively appealing summary of the factors affecting the motion and evolution of a fluid parcel which we’ll take some time to explore. It also frustrates our attempt at closure by introducing new variables, necessitating some additional assumptions about the nature of the fluid and the changes that it undergoes. Finally, we arrive at a closed system of equations that we can, in principle, solve to predict fluid behavior in a wide variety of situations.
We can often gain greater understanding of a physical system by identifying its evolution as an exchange of energy among two or more “reservoirs’’, or kinds of energy. In a Newtonian fluid, energy is exchanged between kinetic, potential and internal forms through various identifiable processes. Recall that a fluid is in fact made of molecules (section 1.2). What we call the flow velocity is really the average velocity of many molecules occupying a small space. Besides this average velocity, each molecule is doing its own complicated dance, whizzing around, spinning, oscillating, and colliding randomly with its neighbors. The kinetic energy of these microscopic motions is manifested macroscopically as the temperature of the fluid. In energetic terms, it is regarded as part of the internal energy. In a compressible flow, squeezing molecules together requires that work be done against intermolecular forces. The result is analogous to the storage of potential energy in a compressed spring, and is treated as part of the internal energy.
6.4.1 Kinetic energy
We begin by recalling some basic concepts from solid body mechanics. A solid object of mass $m$ (see Figure $1$), moving at speed $v$, has kinetic energy
$K E=\frac{1}{2} m v^{2}. \nonumber$
The object also has momentum $mv$, which changes in time according to Newton’s second law when a force $F$ is applied:
$\frac{d}{d t} m v=F.\label{eqn:1}$
The connection between momentum and kinetic energy is made by multiplying both sides of Equation $\ref{eqn:1}$ by $v$:
$v \frac{d}{d t} m v=\frac{d}{d t} \frac{1}{2} m v^{2}=v F. \nonumber$
The product $vF$ is called the rate of working of the force $F$ upon the object. If $vF > 0$, i.e., if the force acts in the direction that the object is already moving, it tends to increase the object’s kinetic energy. The opposite is true if the force is opposite to the motion.
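This bookkeeping can be demonstrated with a simple numerical integration (values are illustrative): a constant force accelerates a mass from rest, and the accumulated rate of working $vF$ matches the kinetic energy gained.

```python
# A constant force F acts on a mass m starting from rest (illustrative values).
# Forward-Euler integration of Newton's second law, accumulating the rate of
# working v*F, shows the work done matches the kinetic energy gained.
m, F, dt = 2.0, 3.0, 1.0e-4
v, work = 0.0, 0.0
for _ in range(10000):      # integrate for one second
    work += v * F * dt      # accumulate the rate of working v*F
    v += (F / m) * dt       # dv/dt = F/m
ke = 0.5 * m * v**2
print(ke, work)             # both close to F**2 * t**2 / (2*m) = 2.25 at t = 1 s
```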
The kinetic energy of a fluid parcel is given by
$K E=\int_{V_{m}} \frac{1}{2} \rho u_{j}^{2} d V. \nonumber$
The analogue of Newton’s second law is Cauchy’s equation Equation 6.3.18. Multiplying both sides of Equation 6.3.18 by $u_j$, we have
$\rho u_{j} \frac{D u_{j}}{D t}=\rho u_{j} g_{j}+u_{j} \frac{\partial \tau_{i j}}{\partial x_{i}}.\label{eqn:2}$
The left-hand side of Equation $\ref{eqn:2}$ is easily transformed using the product rule of differentiation (omitting the factor $\rho$ for simplicity):
\begin{align} u_{j} \frac{D u_{j}}{D t} &=u_{j} \frac{\partial}{\partial t} u_{j}+u_{j} u_{i} \frac{\partial}{\partial x_{i}} u_{j} \label{eqn:3}\ &=\frac{\partial}{\partial t} \frac{1}{2} u_{j}^{2}+u_{i} \frac{\partial}{\partial x_{i}} \frac{1}{2} u_{j}^{2}=\frac{D}{D t}\left(\frac{1}{2} u_{j}^{2}\right)\label{eqn:4} \end{align} \nonumber
Restoring $\rho$ and integrating over the fluid parcel then gives
$\int_{V_{m}} \rho u_{j} \frac{D u_{j}}{D t} d V=\int_{V_{m}} \rho \frac{D}{D t}\left(\frac{1}{2} u_{j}^{2}\right) d V=\frac{D}{D t} \int_{V_{m}} \rho \frac{1}{2} u_{j}^{2} d V=\frac{D}{D t} K E, \nonumber$
where Cauchy’s lemma Equation 6.3.15 has been used for the second step. We therefore have an evolution equation for the kinetic energy of the fluid parcel:
$\frac{D}{D t} K E=\int_{V_{m}} \rho u_{j} g_{j} d V+\int_{V_{m}} u_{j} \frac{\partial \tau_{i j}}{\partial x_{i}} d V. \nonumber$
The terms on the right hand side represent the rates of working by gravity and by contact forces, respectively. The contact term is worth a closer look. Using the product rule, we can rewrite its integrand in two parts,
$u_{j} \frac{\partial \tau_{i j}}{\partial x_{i}}=\frac{\partial}{\partial x_{i}}\left(u_{j} \tau_{i j}\right)-\tau_{i j} \frac{\partial}{\partial x_{i}} u_{j},\label{eqn:5}$
which we will investigate separately. The volume integral of the first term can be converted to a surface integral using the generalized divergence theorem (section 4.2.3)
$\int_{V_{m}} \frac{\partial}{\partial x_{i}}\left(u_{j} \tau_{i j}\right) d V=\oint_{A_{m}} u_{j} \tau_{i j} n_{i} d A, \nonumber$
where $\hat{n}$ is the outward normal to the parcel boundary $A_m$. Noting that $\tau_{ij}n_i=f_j$, we can write this area integral as
$\oint_{A_{m}} \vec{u} \cdot \vec{f} d A. \nonumber$
Analogous in form to the product $vF$ in the solid-body case, this is the rate of working by contact forces at the parcel boundary.
The second term in Equation $\ref{eqn:5}$ is the rate of working by contact forces in the interior of the parcel. The integrand is split into two parts by recalling Equation 5.3.5 , the symmetric-antisymmetric decomposition of the deformation tensor:
$\frac{\partial u_{j}}{\partial x_{i}}=e_{j i}+\frac{1}{2} r_{j i}=e_{i j}-\frac{1}{2} r_{i j}. \nonumber$
Therefore
$-\tau_{i j} \frac{\partial u_{j}}{\partial x_{i}}=-\tau_{i j}\left(e_{i j}-\frac{1}{2} r_{i j}\right)=-e_{i j} \tau_{i j}, \nonumber$
where the final equality results from the fact that $\underset{\sim}{r}$ is antisymmetric while $\underset{\sim}{\tau}$ is symmetric. This term can be further subdivided by substituting Equation 6.3.32:
$-e_{i j} \tau_{i j}=-e_{i j}\left(-p \delta_{i j}+\lambda \delta_{i j} e_{k k}+2 \mu e_{i j}\right)=p e_{j j}-2 \mu e_{i j} e_{i j}-\lambda e_{k k}^{2} \nonumber$
The three terms on the right-hand side represent distinct physical processes.
• The first term, $pe_{jj}$, represents the rate of working by pressure, or the expansion work. If the fluid is expanding ($e_{jj}>0$), the outward force of pressure accelerates the expansion, increasing kinetic energy, whereas contraction is opposed by pressure.
• The second term is negative definite and is important enough to have its own symbol:
$-2 \mu e_{i j} e_{i j}=-\rho \varepsilon. \nonumber$
This term represents the action of ordinary viscosity, which decreases kinetic energy whenever strain is nonzero. This process is called dissipation, and $\varepsilon$ is called the kinetic energy dissipation rate.1 It is most commonly written as
$\varepsilon=2 \nu e_{i j}^{2}, \nonumber$
where $\nu = \mu/\rho$ is the kinematic viscosity. Typical values are
$\nu=\left\{\begin{array}{ll} 10^{-6} m^{2} s^{-1}, & \text {in water } \ 1.4 \times 10^{-5} m^{2} s^{-1}, & \text {in air. } \end{array}\right.\label{eqn:6}$
• The final term represents the action of the second viscosity. The term is negative semidefinite: zero if the divergence is zero, negative if the divergence is nonzero. Therefore, the second viscosity opposes any divergent motion, either expansion or contraction. The second viscosity term is small in most naturally-occurring flows and will be neglected from here on, but it is easily retrieved if needed.
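As a quick numerical example, the dissipation rate in a simple shear flow of water can be evaluated directly from its definition; the shear rate of $1\ \mathrm{s^{-1}}$ is an assumed value:

```python
import numpy as np

nu = 1.0e-6                  # kinematic viscosity of water, m^2 s^-1

# Simple shear du/dy = 1 s^-1 (assumed): the only nonzero strain
# components are e_xy = e_yx = 0.5.
grad_u = np.zeros((3, 3))
grad_u[0, 1] = 1.0
e = 0.5 * (grad_u + grad_u.T)

eps = 2.0 * nu * np.sum(e * e)   # epsilon = 2 nu e_ij e_ij
print(eps)                       # 2e-6 * (0.25 + 0.25) = 1e-6 W/kg
```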
We can now assemble these various terms to make the evolution equation for the kinetic energy of the fluid parcel:
$\frac{D}{D t} K E=\underbrace{\int_{V_{m}} \rho \vec{u} \cdot \vec{g} d V}_{\text {gravity }}+\underbrace{\oint_{A_{m}} \vec{u} \cdot \vec{f} d A}_{\text {surface contact }}+\underbrace{\int_{V_{m}} p \vec{\nabla} \cdot \vec{u} d V}_{\text {expansion work }}-\underbrace{\int_{V_{m}} \rho \varepsilon d V}_{\text {viscous dissipation }}\label{eqn:7}$
6.4.2 Mechanical energy
Further insight into the gravity term can be gained by working in gravity-aligned coordinates. The gravity vector is $\vec{g}=-g\hat{e}^{(z)}$, where $g$ is taken to be a constant and $\hat{e}^{(z)}$ defines the vertical direction. We can now write
$\vec{u} \cdot \vec{g}=-g \vec{u} \cdot \hat{e}^{(z)}=-g w, \nonumber$
where $w$ is the vertical component of velocity. Now note that, as a parcel moves, $w$ is the time derivative of its vertical coordinate:
$\frac{D z}{D t}=\frac{\partial z}{\partial t}+u \frac{\partial z}{\partial x}+v \frac{\partial z}{\partial y}+w \frac{\partial z}{\partial z}=0+0+0+w. \nonumber$
Since $g$ is a constant, we have
$\vec{u} \cdot \vec{g}=-\frac{D}{D t} g z. \nonumber$
The gravity term in Equation $\ref{eqn:7}$ now becomes
$-\int_{V_{m}} \rho \frac{D}{D t}(g z) d V=-\frac{D}{D t} \int_{V_{m}} \rho g z d V, \nonumber$
where Cauchy’s lemma Equation 6.3.15 has been used. The volume integral on the right hand side represents the potential energy of the fluid parcel; hence, the gravity term represents an exchange between kinetic and potential energies. We now have an equation for the sum of kinetic and potential energy, called the mechanical energy:
$\frac{D}{D t} \int_{V_{m}}\left(\frac{1}{2} \rho|\vec{u}|^{2}+\rho g z\right) d V=\oint_{A_{m}} \vec{u} \cdot \vec{f} d A+\int_{V_{m}} p \vec{\nabla} \cdot \vec{u} d V-\int_{V_{m}} \rho \varepsilon d V.\label{eqn:8}$
The concept of potential energy is equally valid in other coordinate frames. For the general case, we define $\Phi$ as the specific2 potential energy such that the net potential energy of a fluid parcel is $PE = \int_{V_m}\rho \Phi dV$ and
$\vec{u} \cdot \vec{g}=-\frac{D}{D t} \Phi, \nonumber$
so that $\Phi = gz$ in the special case of gravity-aligned coordinates. We can write this as a Lagrangian evolution equation for the potential energy of a fluid parcel:
$\frac{D}{D t} \int_{V_{m}} \rho \Phi d V=-\int_{V_{m}} \rho \vec{u} \cdot \vec{g} d V.\label{eqn:9}$
6.4.3 Internal energy and heat
The first law of thermodynamics can be postulated as follows:
The change of energy stored in a physical object equals the work done on the object by its environment plus the heat added.
Before we explore the consequences of this for a fluid parcel, we represent the heat flux through a material as the vector field
$\vec{q}(\vec{x}, t)=-k \vec{\nabla} T+\vec{q}_{r a d},\label{eqn:10}$
where the two terms on the right hand side represent conduction and radiation, respectively. The scalar $k$ is called the thermal conductivity. We define $\mathscr{I}$ as the internal energy per unit mass, so that $\rho\mathscr{I}$ is the internal energy per unit volume. We can now write the first law of thermodynamics as:
$\frac{D}{D t} \int_{V_{m}}\left(\frac{1}{2} \rho|\vec{u}|^{2}+\rho g z+\rho \mathscr{I}\right) d V=\oint_{A_{m}} \vec{u} \cdot \vec{f} d A-\oint_{A_{m}} \vec{q} \cdot \hat{n} d A.\label{eqn:11}$
Subtracting Equation $\ref{eqn:8}$, we obtain an equation for the internal energy of the fluid parcel:
$\frac{D}{D t} \int_{V_{m}} \rho \mathscr{I} d V=\underbrace{-\oint_{A_{m}} \vec{q} \cdot \hat{n} d A}_{\text {heat input }}-\underbrace{\int_{V_{m}} p \vec{\nabla} \cdot \vec{u} d V}_{\text {loss to expansion }}+\underbrace{\int_{V_{m}} \rho \varepsilon d V}_{\text {viscous heating }}.\label{eqn:12}$
• The first term represents a gain of internal energy if heat is being absorbed by the parcel and a loss if heat is being released.
• If the parcel is expanding, the second term describes a conversion of the potential energy stored in the intermolecular forces to kinetic energy of expansion, and vice versa if the parcel is contracting.
• The third term is the conversion of kinetic energy to heat via viscous dissipation, and is always positive (an expression of the second law of thermodynamics).
6.4.4 Summary
The Lagrangian equations for kinetic, potential and internal energy, collected below, can be summarized in the form of an energy budget diagram (Figure $2$). The boundary stress represents an interaction with the external environment, as does the heat flux term. These occur only once in the three equations. The remaining terms each occur twice with opposite signs; they therefore represent conversions between energy types within the parcel.
$\frac{D}{D t} \int_{V_{m}} \frac{1}{2} \rho u_{i}^{2} d V=\underbrace{\int_{V_{m}} \rho \vec{u} \cdot \vec{g} d V}_{\text {gravity }}+\underbrace{\oint_{A_{m}} \vec{u} \cdot \vec{f} d A}_{\text {boundary stress }}+\underbrace{\int_{V_{m}} p \vec{\nabla} \cdot \vec{u} d V}_{\text {expansion }}-\underbrace{\int_{V_{m}} \rho \varepsilon d V}_{\text {dissipation }}.\label{eqn:13}$
$\frac{D}{D t} \int_{V_{m}} \rho \Phi d V=\underbrace{-\int_{V_{m}} \rho \vec{u} \cdot \vec{g} d V}_{\text {gravity }}.\label{eqn:14}$
$\frac{D}{D t} \int_{V_{m}} \rho \mathscr{I} d V=-\underbrace{\oint_{A_{m}} \vec{q} \cdot \hat{n} d A}_{\text {heat output }}-\underbrace{\int_{V_{m}} p \vec{\nabla} \cdot \vec{u} d V}_{\text {expansion }}+\underbrace{\int_{V_{m}} \rho \varepsilon d V}_{\text {dissipation }}.\label{eqn:15}$
1This should not be confused with the Levi-Civita tensor $\underset{\sim}{\epsilon}$ defined in section 3.3.7.
2per unit mass
We now resume our quest for a closed set of equations to describe the flow of a Newtonian fluid. We previously assumed mass and momentum conservation, resulting in the density equation and the Navier-Stokes momentum equation Equation 6.3.37. This collection of equations totals 4, but involves 5 unknowns: $ρ$, $p$, and the three components of $\vec{u}$.
We have now invoked a new assumption, namely energy conservation in the form of the first law of thermodynamics. This will allow us to add a new equation to the set. We first convert Equation 6.4.29 to Eulerian form. Two terms require conversion to volume integrals.
• We apply Cauchy’s lemma to the left -hand side, resulting in:
$\frac{D}{D t} \int_{V_{m}} \rho \mathscr{I} d V=\int_{V_{m}} \rho \frac{D \mathscr{I}}{D t} d V. \nonumber$
• The heat loss can be converted using the divergence theorem (section 4.2.3):
$\oint_{A_{m}} \vec{q} \cdot \hat{n} d A=\int_{V_{m}} \vec{\nabla} \cdot \vec{q} d V \nonumber$
We now have
$\int_{V_{m}}\left(\rho \frac{D \mathscr{I}}{D t}+\vec{\nabla} \cdot \vec{q}+p \vec{\nabla} \cdot \vec{u}-\rho \varepsilon\right) d V=0. \nonumber$
We conclude as usual that the integrand must be zero everywhere, resulting in:
$\rho \frac{D \mathscr{I}}{D t}=k \nabla^{2} T-\vec{\nabla} \cdot \vec{q}_{r a d}-p \vec{\nabla} \cdot \vec{u}+\rho \varepsilon,\label{eqn:1}$
where Equation 6.4.27 has been used for the heat flux. We have gained a new equation, but have also introduced two new unknowns, the internal energy $\mathscr{I}$ and the temperature $T$. (Note that neither $\varepsilon$ nor $\vec{q}_{rad}$ counts as an unknown. The former is determined by the velocity field via the strain, $\varepsilon = 2\nu e^2_{ij}$, while we assume that the latter is specified independently.) This leaves us short two pieces of information.
6.5.1 Specific heat capacity
Our next goal is to eliminate internal energy from the problem by establishing a relationship between it and temperature. We will consider two possible approaches, each based on an assumption about the nature of the temperature changes that can be illustrated with a simple lab experiment.
Incompressible heating
Suppose that a fluid sample is contained in a closed, rigid vessel wherein it is slowly heated. We keep track of the heat input and the resulting temperature rise, and the two turn out to be approximately proportional:
$\left(\frac{\partial Q}{\partial T}\right)_{\upsilon}=C_{\upsilon}.\label{eqn:2}$
Here $Q$ is the specific heat content, i.e., heat content per unit mass, typically measured in joules per kilogram. The subscript $\upsilon$ on the partial derivative specifies that the specific volume $\upsilon=\rho^{-1}$ is held fixed while $Q$ and $T$ are changing. $C_\upsilon$ is called the specific heat capacity at constant volume, and can be regarded as constant if the range of temperatures is not too wide. Typical values are
$C_{\upsilon}=\left\{\begin{array}{l} 4200 \mathrm{J} \mathrm{kg}^{-1} \mathrm{K}^{-1}, \text {in water } \ 700 \mathrm{J} \mathrm{kg}^{-1} \mathrm{K}^{-1}, \text {in air. } \end{array}\right.\label{eqn:3}$
Now because this incompressible fluid does not expand or contract, changes in internal energy are due entirely to changes in heat content, so
$\frac{D \mathscr{I}}{D t}=\frac{D Q}{D t}=C_{\upsilon} \frac{D T}{D t}. \nonumber$
We can now rewrite Equation $\ref{eqn:1}$ as
$\rho C_{\upsilon} \frac{D T}{D t}=k \nabla^{2} T-\vec{\nabla} \cdot \vec{q}_{r a d}+\rho \varepsilon,\label{eqn:4}$
keeping in mind that $\vec{\nabla}\cdot\vec{u}=0$. We have now succeeded in eliminating $\mathscr{I}$, but the result only holds if $\vec{\nabla}\cdot\vec{u}=0$. Compressibility effects are not always negligible, especially in gases. To allow for that possibility, we imagine a slightly different experiment.
Isobaric heating
Suppose that we once again apply heat to a sample of fluid, but that the fluid is enclosed not in a rigid container but rather in a flexible membrane, like a balloon. As a result, the fluid can expand or contract freely, but the pressure does not change. (We assume that our balloon does not change altitude significantly, as would a weather balloon.) The process is therefore designated as isobaric.
For this process we define a new thermodynamic variable called the specific enthalpy, $H$. When a system changes slowly, the change in enthalpy is given by $\Delta H = \Delta \mathscr{I} + \Delta(p\upsilon)$. In an isobaric process, this becomes $\Delta H = \Delta \mathscr{I} + p\Delta \upsilon$. For a given change in temperature, the change in enthalpy is given by
$\left(\frac{\partial H}{\partial T}\right)_{p}=C_{p}.\label{eqn:5}$
$C_p$ is called the specific heat capacity at constant pressure, and is approximately constant for small temperature changes. Typical values are
$C_{p}=\left\{\begin{array}{l} 4200 \mathrm{J} \mathrm{kg}^{-1} \mathrm{K}^{-1}, \text {in water } \ 1000 \mathrm{J} \mathrm{kg}^{-1} \mathrm{K}^{-1}, \text {in air. } \end{array}\right.\label{eqn:6}$
For a fluid parcel undergoing this isobaric change, the material derivative of the enthalpy is
$\frac{D H}{D t}=\frac{D \mathscr{I}}{D t}+p \frac{D \upsilon}{D t}=C_{p} \frac{D T}{D t}.\label{eqn:7}$
The material derivative of $\upsilon$ can be written as
$\frac{D \upsilon}{D t}=\frac{D \rho^{-1}}{D t}=-\rho^{-2} \frac{D \rho}{D t}=-\rho^{-2}(-\rho \vec{\nabla} \cdot \vec{u})=\rho^{-1} \vec{\nabla} \cdot \vec{u},\label{eqn:8}$
where mass conservation Equation 6.2.5 has been invoked. Multiplying Equation $\ref{eqn:7}$ by density and substituting Equation $\ref{eqn:8}$, we have
$\rho \frac{D H}{D t}=\rho \frac{D \mathscr{I}}{D t}+p \vec{\nabla} \cdot \vec{u}=\rho C_{p} \frac{D T}{D t} \nonumber$
which, together with Equation $\ref{eqn:1}$, gives
$\rho C_{p} \frac{D T}{D t}=k \nabla^{2} T-\vec{\nabla} \cdot \vec{q}_{r a d}+\rho \varepsilon.\label{eqn:9}$
6.5.2 The heat equation
The two temperature equations that hold in the incompressible and isobaric approximations, Equation $\ref{eqn:4}$ and $\ref{eqn:9}$, differ only in the choice of the specific heat capacity:
$\rho\left(C_{\upsilon} \text { or } C_{p}\right) \frac{D T}{D t}=k \nabla^{2} T-\vec{\nabla} \cdot \vec{q}_{r a d}+\rho \varepsilon. \nonumber$
Which approximation is better? In water, there is virtually no difference, because $C_\upsilon$ is nearly equal to $C_p$. In air, compressibility can be important, so the isobaric approximation is preferable. For geophysical applications, then, we choose the isobaric version:
$\rho C_{p} \frac{D T}{D t}=k \nabla^{2} T-\vec{\nabla} \cdot \vec{q}_{r a d}+\rho \varepsilon.\label{eqn:10}$
This is a generalization of the “heat equation’’ often discussed in physics texts: neglecting the radiation and dissipation terms, it becomes
$\frac{D T}{D t}=\kappa_{T} \nabla^{2} T.\label{eqn:11}$
Here, $\kappa_T$ is the thermal diffusivity, given by
$\kappa_{T}=\frac{k}{\rho C_{p}}=\left\{\begin{array}{ll} 1.4 \times 10^{-7} m^{2} s^{-1}, & \text {in water } \ 1.9 \times 10^{-5} m^{2} s^{-1}, & \text {in air } \end{array}\right.\label{eqn:12}$
By using Equation $\ref{eqn:10}$ instead of $\ref{eqn:1}$ we are able to impose energy conservation while adding only one new unknown (instead of two), so at least we are no worse off in terms of closure. We now have 5 equations for 6 unknowns.
In the special case of incompressible fluid, the condition $\vec{\nabla}\cdot\vec{u}=0$ represents an additional equation and the set is closed (i.e., no more equations are needed). For the more general case, we need to make a further assumption about the nature of the fluid. This will take the form of an equation of state. Again, there is more than one possibility.
Equations of state are laws that relate changes in density to changes in other thermodynamic variables. Each equation of state is an approximation. In a given situation, we want to choose the simplest approximation consistent with the level of accuracy we need. Here we will list several possibilities grouped into four categories of increasing complexity and (one hopes) accuracy.
6.6.1 Homogeneous fluid
The simplest approximation we can make is to assume that density is uniform. We refer to such a fluid as homogeneous. In this case, the equation of state is simply
$\rho=\rho_{0}, \nonumber$
and our set of equations is closed.
6.6.2 Barotropic fluid
A slightly more general choice is to assume that density varies only due to changes in pressure:
$\rho=\rho(p). \nonumber$
A fluid for which this is true is called barotropic. It can be shown that the variation of density with pressure supports the propagation of sound waves, and
$\left(\frac{\partial \rho}{\partial p}\right)_{T}=\frac{1}{c^{2}},\label{eqn:1}$
where $c$ is the speed of sound. The subscript $T$ reminds us that the partial derivative is to be evaluated at fixed temperature. Typical values are
$c=\left\{\begin{array}{l} 1500 m s^{-1}, \quad \text { in water } \ 300 m s^{-1}, \text {in air. } \end{array}\right. \nonumber$
6.6.3 Temperature-dependent: $\rho = \rho \left(p,T\right)$
To increase realism, we can allow for density to be governed by both pressure and temperature. We’ll look at two examples:
1. Perhaps the best-known equation of state is the ideal gas law:
$\rho(p, T)=\frac{p}{R T},\label{eqn:2}$
where $R$ is the gas constant, equal to $287 \mathrm{J} \mathrm{kg}^{-1} \mathrm{K}^{-1}$ for dry air. The ideal gas law can be derived from the assumption that molecular collisions conserve kinetic energy (Curry and Webster 1998).
2. For liquids, the equation of state can only be determined empirically, i.e., density is measured over a range of pressures and temperatures and the results are fitted to some mathematical function by adjusting values of constants. For example, a useful approximation for liquid water has the form
$\rho(p, T)=\frac{\sum_{n=0}^{5} A_{n} T^{n}}{1-p / p_{0}} \nonumber$
with $A_0 = 999.842594$, $A_1 = 6.793952 \times 10^{-2}$, $A_2 = -9.09529 \times 10^{-3}$, $A_3 = 1.001685 \times 10^{-4}$, $A_4 = -1.120083 \times 10^{-6}$, $A_5 = 6.536332 \times 10^{-9}$, $p_0 = 19652$, $T$ in Celsius and $p$ in bars (Gill 1982). (If you’re serious about computing the density of water, I suggest downloading a software package such as seawater, described in subsection 6.6.4).
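The fit is straightforward to evaluate in code. A minimal sketch (the coefficient signs follow Gill 1982, where the quadratic and quartic coefficients are negative):

```python
# Evaluate the empirical fresh-water density fit quoted above (Gill 1982).
# Note the signs: the quadratic and quartic coefficients are negative.
A = [999.842594, 6.793952e-2, -9.09529e-3, 1.001685e-4, -1.120083e-6, 6.536332e-9]
p0 = 19652.0                                # bars

def rho_water(p_bar, T_celsius):
    poly = sum(a * T_celsius**n for n, a in enumerate(A))
    return poly / (1.0 - p_bar / p0)

print(rho_water(0.0, 20.0))   # ~998.2 kg/m^3: fresh water at 20 C, surface pressure
```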
The thermal expansion coefficient quantifies the tendency for a material to expand when heated.
$\alpha=-\frac{1}{\rho_{0}}\left(\frac{\partial \rho}{\partial T}\right)_{p, S}, \nonumber$
with typical values
$\alpha=\left\{\begin{array}{l} 1 \times 10^{-4} K^{-1}, \quad \text { in water } \ 3 \times 10^{-3} K^{-1}, \quad \text {in air. } \end{array}\right. \nonumber$
The thermal expansion coefficient can vary significantly and should only be treated as constant in nearly-uniform conditions.
6.6.4 Composition effects
The next step toward increased realism is to allow density to depend not only on pressure and temperature but also on the chemical composition of the fluid. We consider two examples.
1. The air in Earth’s atmosphere has almost uniform composition except for a variable fraction of water vapor. We define the specific humidity $q$ to be the mass of water per unit mass of air, measured in parts per thousand or $g/kg$. Accounting for humidity, the ideal gas law becomes
$\rho(p, T)=\frac{p}{R T} \frac{1}{1+0.61 q}.\label{eqn:3}$
The constant 0.61 is determined by the molecular masses of air and water (Gill 1982).
2. In salt water, density is affected by salinity, defined as the mass of salt per unit mass of water. Salinity can be measured in parts per thousand, $g/kg$, commonly called practical salinity units (psu). Values range from zero in fresh water to 41 psu in the Red Sea; 35 psu is typical. The equation of state for seawater is entirely empirical and is too complicated to reproduce here. You can look it up in (for example) Gill (1982), or you can evaluate it using standard software such as the “seawater” package, currently available at
www.cmar.csiro.au/datacentre/ext_docs/seawater.htm
6.6.5 Linearized equations of state
The empirical equations of state for liquids are far too cumbersome for use in analytical calculations. Because many problems involve only small variations in density, it is useful to work with the equation of state in a linearized form. In the case of seawater, for example, we can assume that $p$, $T$ and $S$ vary only slightly from some fixed values $p_0$, $T_0$ and $S_0$ at which the density is $\rho_0$. We can then represent the density dependence, $\rho = \rho(p,T,S)$, as a first-order Taylor series expansion
$\rho=\rho_{0}+\left(\frac{\partial \rho}{\partial p}\right)_{T, S}\left(p-p_{0}\right)+\left(\frac{\partial \rho}{\partial T}\right)_{p, S}\left(T-T_{0}\right)+\left(\frac{\partial \rho}{\partial S}\right)_{p, T}\left(S-S_{0}\right). \nonumber$
The partial derivatives are taken to be constants. One of these is the inverse square of the sound speed as discussed above:
$\left(\frac{\partial \rho}{\partial p}\right)_{T, S}=\frac{1}{c^{2}}, \nonumber$
where $c=c(p_0,T_0,S_0)$. A second is the thermal expansion coefficient $\alpha$. We also use the saline density coefficient
$\beta=\frac{1}{\rho_{0}}\left(\frac{\partial \rho}{\partial S}\right)_{p, T}, \nonumber$
whose value in seawater remains fairly close to $7 \times 10^{-4} \mathrm{psu}^{-1}$. With these definitions, the linearized equation of state can be written as
$\frac{\rho-\rho_{0}}{\rho_{0}}=-\alpha\left(T-T_{0}\right)+\beta\left(S-S_{0}\right)+\frac{1}{c^{2}}\left(p-p_{0}\right).\label{eqn:4}$
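The linearized equation of state is simple enough to evaluate directly. The sketch below uses the typical coefficient values quoted in this section, with an illustrative reference state:

```python
# The linearized seawater equation of state, using the typical coefficient
# values quoted in this section and an illustrative reference state.
rho0, T0, S0, p0 = 1025.0, 10.0, 35.0, 0.0
alpha = 1.0e-4      # thermal expansion coefficient, K^-1
beta = 7.0e-4       # saline density coefficient, psu^-1
c = 1500.0          # sound speed in water, m s^-1

def rho_linear(p, T, S):
    return rho0 * (1.0 - alpha * (T - T0) + beta * (S - S0) + (p - p0) / c**2)

# Warming by 5 K lowers density; adding 1 psu of salt raises it:
print(rho_linear(0.0, 15.0, 35.0) - rho0)   # approx -0.51 kg/m^3
print(rho_linear(0.0, 10.0, 36.0) - rho0)   # approx +0.72 kg/m^3
```

Note how weak the pressure dependence is: with $c = 1500\ \mathrm{m\,s^{-1}}$, even a large pressure change produces only a small fractional density change, which is why incompressibility is often a good approximation for liquids.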
The composition-dependent equations of state all require adding a new variable to the equations of motion: $q$ in the case of moist air, $S$ in the case of seawater. Closure then requires that one more equation be added to account for the new variable. In the case of moist air, the new equation must account for the complex thermodynamics and chemistry of water vapor, and we will not go into it here (for a detailed discussion see Curry and Webster 1998). In the case of salinity, the solution is much simpler.
Salinity $S$ is the mass of salt, in grams, per unit mass of salt water, in kilograms. The mass of salt in a fluid parcel is therefore given by $\int_{V_m}\rho S dV$. That mass of salt can change only by means of exchanges with the environment, represented by a salt flux $\vec{J}_S$:
$\frac{D}{D t} \int_{V_{m}} \rho S d V=-\oint_{A_{m}} \vec{J}_{S} \cdot \hat{n} d A=-\int_{V_{m}} \vec{\nabla} \cdot \vec{J}_{S} d V, \nonumber$
where the divergence theorem has been used for the final step. This Lagrangian statement can be converted to Eulerian form in the usual way, resulting in
$\rho \frac{D S}{D t}=-\vec{\nabla} \cdot \vec{J}_{S}. \nonumber$
The salt flux is well approximated by Fick’s law:
$\overrightarrow{J_{S}}=-\rho \kappa_{S} \vec{\nabla} S, \nonumber$
where $\kappa_S$ is the molecular diffusivity of salt, with typical value $10^{-9}\ \mathrm{m^2\,s^{-1}}$. Neglecting small variations in $\rho\kappa_S$, we have a diffusion equation for salinity:
$\frac{D S}{D t}=\kappa_{S} \nabla^{2} S\label{eqn:1}$
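The salinity diffusion equation above can be stepped forward with a simple explicit finite-difference scheme. The following is a minimal sketch; the grid, domain size, initial condition and periodic boundaries are arbitrary choices made for illustration:

```python
import numpy as np

# Explicit finite differences for dS/dt = kappa_S * d2S/dx2 (a sketch).
# Domain, grid and initial condition are illustrative choices only.
kappa_S = 1e-9           # molecular diffusivity of salt [m^2/s]
nx, L = 101, 1e-3        # a 1 mm periodic domain, so molecular diffusion acts quickly
dx = L / nx
dt = 0.4 * dx**2 / kappa_S   # satisfies the stability limit dt < dx^2 / (2 kappa_S)

S = np.zeros(nx)
S[nx // 2] = 35.0        # a salty spike in otherwise fresh water

for _ in range(200):
    # centered second derivative with periodic wraparound via np.roll
    S = S + dt * kappa_S * (np.roll(S, 1) - 2*S + np.roll(S, -1)) / dx**2

# Diffusion smooths the spike while conserving the total salt content.
```
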
6.08: The advection-diffusion equation for a scalar concentration
We now have the tools to specify a closed set of equations for most geophysical flows we are likely to encounter. Here we will list the seven equations applicable to seawater:
$\frac{D \rho}{D t}=-\rho \vec{\nabla} \cdot \vec{u},\label{eqn:1}$
$\rho \frac{D \vec{u}}{D t}=\rho \vec{g}-\vec{\nabla} p+\mu \nabla^{2} \vec{u}+\mu \vec{\nabla}(\vec{\nabla} \cdot \vec{u}),\label{eqn:2}$
$\rho C_{p} \frac{D T}{D t}=k \nabla^{2} T-\vec{\nabla} \cdot \vec{q}_{r a d}+\rho \varepsilon ; \quad \varepsilon=2 \nu e_{i j}^{2},\label{eqn:3}$
$\rho \frac{D S}{D t}=-\vec{\nabla} \cdot \vec{J}_{S} ; \quad \vec{J}_{S}=-\rho \kappa_{S} \vec{\nabla} S,\label{eqn:4}$
$\rho=\rho(p, T, S) \quad \text{(the equation of state)}.\label{eqn:5}$
This set involves seven unknowns:
$\rho, \vec{u}, p, T, \text { and } S \nonumber$
and the following input parameters:
$\vec{g}, \mu, C_{p}, k, \vec{q}_{rad}, \text { and } \kappa_{S}. \nonumber$
The second viscosity $\lambda$ is neglected here as it is usually negligible in geophysical applications.
In atmospheric flows, the salinity equation is not needed, and the equation of state is the ideal gas law Equation 6.6.5. If moisture effects are important, an advection-diffusion equation for the humidity $q$ is added (section 6.7), the ideal gas law is modified as in Equation 6.6.9 and the temperature equation Equation $\ref{eqn:3}$ is extended to include latent heating effects (Curry and Webster 1998).
We have also neglected the effects of the Earth’s rotation. The flows we deal with here happen on scales of distance and time that we can witness directly, say a few km or less and a few hours or less, and planetary rotation is unimportant for these motions. To deal with larger and/or slower flow phenomena (e.g., synoptic weather systems or ocean currents), we would have to account for the Coriolis and centrifugal accelerations. In the most common approximation, this requires adding a new term to the right-hand side of Equation $\ref{eqn:2}$: $f \hat{e}^{(z)}\times\vec{u}$. Here $\hat{e}^{(z)}$ is the local vertical unit vector and $f$ = 1.46×10$^{-4}$ s$^{-1}$ is proportional to the Earth’s rotation rate (e.g., Vallis 2006). We will not go further into these effects in this book, but if you want to experience them directly, try playing catch on a merry-go-round.
6.8.1 Unpacking the equations of motion
The equations summarized above contain several vector differential operators of the sort described in section 4.1. Each of these is an abbreviation for one or more partial derivatives acting on one or more components of a vector. It is worthwhile to spend some time looking at more explicit versions of these equations to make sure we understand the operations involved.
The mass equation Equation 6.2.5 can be written in index notation as
$\left(\frac{\partial}{\partial t}+u_{i} \frac{\partial}{\partial x_{i}}\right) \rho=-\rho \frac{\partial u_{i}}{\partial x_{i}}. \nonumber$
Note that $i$ is a dummy variable to be summed over. Using the familiar notation $\vec{u}=(u,v,w)$;$\vec{x}=(x,y,z)$, this can be expanded as
$\frac{\partial \rho}{\partial t}+u \frac{\partial \rho}{\partial x}+v \frac{\partial \rho}{\partial y}+w \frac{\partial \rho}{\partial z}=-\rho\left(\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}+\frac{\partial w}{\partial z}\right). \nonumber$
Similarly, we can write the momentum equation Equation $\ref{eqn:2}$ as
$\rho\left(\frac{\partial}{\partial t}+u_{i} \frac{\partial}{\partial x_{i}}\right) u_{j}=\rho g_{j}-\frac{\partial p}{\partial x_{j}}+\mu \frac{\partial^{2}}{\partial x_{i}^{2}} u_{j}+\mu \frac{\partial}{\partial x_{j}}\left(\frac{\partial u_{i}}{\partial x_{i}}\right). \nonumber$
Here $i$ is once again a dummy index, while $j$ identifies the direction of the velocity component. For the case $j = 1$ we can write:
$\rho\left(\frac{\partial u}{\partial t}+u \frac{\partial u}{\partial x}+v \frac{\partial u}{\partial y}+w \frac{\partial u}{\partial z}\right)=\rho g_{1}-\frac{\partial p}{\partial x}+\mu\left(\frac{\partial^{2} u}{\partial x^{2}}+\frac{\partial^{2} u}{\partial y^{2}}+\frac{\partial^{2} u}{\partial z^{2}}\right)+\mu \frac{\partial}{\partial x}\left(\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}+\frac{\partial w}{\partial z}\right). \nonumber$
Exercise: Write out the corresponding equations for the cases $j = 2$ and $j = 3$, then repeat the exercise for Equation $\ref{eqn:3}$ and Equation $\ref{eqn:4}$. Also, familiarize yourself with the cylindrical and spherical versions summarized in appendix I.
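The dummy-index summations above can also be expanded mechanically. Here is a short symbolic sketch (using sympy, purely as an illustration, not the book's method) that performs the sum over $i$ and checks the result against the explicit $j = 1$ advection terms written above; looping over $j$ produces the other two components asked for in the exercise:

```python
import sympy as sp

# Expand rho*(d/dt + u_i d/dx_i) u_j by summing the dummy index i,
# then check the j = 1 case against the explicit form written above.
t, x, y, z, rho = sp.symbols('t x y z rho')
X = (x, y, z)
U = [sp.Function(name)(x, y, z, t) for name in ('u', 'v', 'w')]

def momentum_lhs(j):
    """rho*(du_j/dt + u_i du_j/dx_i), with the sum over i written out."""
    return rho*(sp.diff(U[j], t) + sum(U[i]*sp.diff(U[j], X[i]) for i in range(3)))

u, v, w = U
explicit_j1 = rho*(sp.diff(u, t) + u*sp.diff(u, x) + v*sp.diff(u, y) + w*sp.diff(u, z))
assert sp.simplify(momentum_lhs(0) - explicit_j1) == 0
```
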
“Land and water are not really separate things, but they are separate words, and we perceive through words.” - David Rains Wallace
Boundaries, in the real universe, are figments of the human imagination. We simplify the task of understanding reality by dividing it into parts (e.g., the atmosphere, the ocean) and trying to understand them individually. But our “parts” are not really separate - one blends smoothly into the other. For example, at the ocean surface the properties of the fluid change dramatically over a small distance, but a close look reveals a complex mixture of air, spray, foam, bubbles and water.
The equations of fluid motion remain valid at each point, and therefore any variable that appears differentiated with respect to space must be continuous to avoid infinities. Because the stress tensor is differentiated (as seen explicitly in Equation 6.3.18), it must be continuous everywhere, and in particular at the surface of a body of water. One result of this is that the pressure immediately below the ocean surface must equal atmospheric pressure1. Another is that the transverse stress immediately below the surface must match that exerted by the wind.
We usually model the surface of a lake or ocean as a material surface (i.e., always composed of the same molecules) located where fluid properties change most rapidly. This surface is represented by its height at each horizontal location: $z=\eta(x,y,t)$, where the average of $\eta$ is zero. Because the surface is material, we can write
$\frac{D}{D t}(z-\eta)=0, \text { or }\left.w\right|_{z=\eta}=\frac{D \eta}{D t}=\frac{\partial \eta}{\partial t}+u \frac{\partial \eta}{\partial x}+v \frac{\partial \eta}{\partial y}.\label{eqn:1}$
A special case of this is a fixed boundary, an example being the lower boundary of the atmosphere at the ground. This boundary can be modeled by Equation $\ref{eqn:1}$, but with the vertical location $z = h(x,y)$ representing the Earth’s surface:
$\frac{D}{D t}(z-h)=0, \text { or } w=\frac{D h}{D t}=u \frac{\partial h}{\partial x}+v \frac{\partial h}{\partial y} \text { at } z=h. \nonumber$
At a solid boundary in a viscous fluid, we imagine fluid molecules becoming intermingled with boundary molecules (or crystals) and therefore require that the velocity itself be continuous. In the Earth’s reference frame, the fluid velocity must approach zero at the boundary. This is called a “no-slip” boundary condition.
A useful idealization is the frictionless boundary, at which the velocity is not necessarily zero but the tangential (shear) stress must vanish. This minimizes the effect of friction on the flow and is realistic in cases where viscosity is unimportant. For example, frictionless flow over a horizontal boundary would obey
$\frac{\partial u}{\partial z}=\frac{\partial v}{\partial z}=0. \nonumber$
This is often called a “free-slip” boundary condition.
1neglecting the effect of surface tension; see exercise 33.
6.10: Solution methods
There is no general solution for the equations summarized in section 6.8, because they are inherently nonlinear. The main source of nonlinearity is in the advective part of the material derivative, e.g., $[\vec{u}\cdot\vec{\nabla}]\vec{u}$, which stymies standard solution methods as well as accounting for many of the most fascinating aspects of fluid motion. To make analytical progress, we must restrict our attention to very simple flow geometries. In recent decades numerical methods of solution have become increasingly important. While allowing progress on complex flows, numerical solutions have an important limitation. Each numerical solution pertains only to a single set of assumed parameter values. If we want to know how a flow varies with some parameter, we must create many such solutions, and we can never be sure that we’ve captured all of the variability.
For example, suppose we want to know how the wind speed over a mountain depends on the mountain’s height. We could construct numerical solutions for mountains of height 1000 m, 2000 m, 3000 m, etc., plot the results on a graph and draw a smooth curve connecting them. But what if something completely different happens for a mountain of height 1500 m? No matter how closely we space our heights, we can never be certain that we are seeing the real picture. At what height is the speed a maximum? We can simulate forever and never be sure. The task is further complicated because wind speed over a mountain depends on many other parameters such as the width of the mountain and the upstream velocity. We can easily find ourselves doing thousands of simulations to describe one fairly simple flow geometry. Laboratory experiments, incidentally, suffer exactly the same limitation.
An analytical solution, even if it requires an extreme simplification of the physics, provides us with a mathematical description that we can examine in as much detail as we wish. For example, we can find the mountain height that maximizes wind speed simply by differentiating the solution. In the mountain example, the most useful solution follows from assumptions of this sort: "The flow varies mainly in the streamwise $(x)$ direction and in height $z$, so derivatives with respect to $t$ and $y$ can be discarded."
In practice, progress in understanding fluids results from a combination of numerical solutions, analytical solutions and laboratory experiments, all of which must be compared with real-world observations to assess the validity of the underlying assumptions.
In what follows we will construct analytical solutions for a few very simple flow geometries that model phenomena we witness in everyday life. We do this to gain insight into the workings of these phenomena, but more importantly to test the validity of our model of Newtonian fluid mechanics by comparing its predictions with the behavior we observe.
Even in the most chaotic, turbulent flow, long-lived coherent vortices can be identified (e.g., Figure $1$). In this section we will establish three theorems that tell us why vortices have such a remarkable tendency to stay together. To begin with, we will study the basic aspects of vorticity in the simplest possible form, by neglecting complications due to viscosity and inhomogeneity, i.e., we will assume that $\mu=\nu=0$ and $\rho=\rho_0$. Later, we will add the effects of viscosity and allow $\rho$ to vary.
7.02: Vortex dynamics in a homogeneous, inviscid fluid
7.2.1 The vorticity equation
Assume $\rho = \rho_0$ and $\mu = \nu = 0$. The mass and momentum equations Equations 6.2.5, 6.8.2 are then
$\vec{\nabla} \cdot \vec{u}=0\label{eqn:1}$
$\frac{D \vec{u}}{D t}=\vec{g}-\vec{\nabla} \frac{p}{\rho_{0}}.\label{eqn:2}$
Vorticity is defined as the curl of velocity:
$\vec{\omega}=\vec{\nabla} \times \vec{u}.\label{eqn:3}$
To get an equation for $\vec{\omega}$, we take the curl of Equation $\ref{eqn:2}$. Here are four vector identities that will be useful in that task:
1. $\vec{\nabla} \cdot(\vec{\nabla} \times \vec{u}) \equiv 0$
2. $\vec{\nabla} \times(\vec{\nabla} \phi) \equiv 0$
3. $[\vec{u} \cdot \vec{\nabla}] \vec{u} \equiv(\vec{\nabla} \times \vec{u}) \times \vec{u}+\frac{1}{2} \vec{\nabla}(\vec{u} \cdot \vec{u})$
4. $\vec{\nabla} \times(\vec{u} \times \vec{v}) \equiv[\vec{v} \cdot \vec{\nabla}] \vec{u}-\vec{v}(\vec{\nabla} \cdot \vec{u})-[\vec{u} \cdot \vec{\nabla}] \vec{v}+\vec{u}(\vec{\nabla} \cdot \vec{v})$
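Identities like these are easy to verify symbolically. The sketch below (using sympy's vector module, purely as an illustration) checks identity [3] for an arbitrary smooth velocity field; the others can be verified the same way:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, gradient

# Check identity [3]: [u.grad]u = (curl u) x u + (1/2) grad(u.u),
# for an arbitrary smooth velocity field u = (u1, u2, u3).
C = CoordSys3D('C')
u1, u2, u3 = [sp.Function(n)(C.x, C.y, C.z) for n in ('u1', 'u2', 'u3')]
u = u1*C.i + u2*C.j + u3*C.k

# Left side, [u . grad] u, built component by component.
adv = (u.dot(gradient(u1))*C.i
       + u.dot(gradient(u2))*C.j
       + u.dot(gradient(u3))*C.k)

# Right side of identity [3].
rhs = curl(u).cross(u) + gradient(u.dot(u)/2)

diff = adv - rhs
assert all(sp.simplify(diff.dot(e)) == 0 for e in (C.i, C.j, C.k))
```
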
An immediate consequence of [1] is that the vorticity field is solenoidal:
$\vec{\nabla} \cdot \vec{\omega}=0.\label{eqn:4}$
This is true for every flow, not just inviscid and homogeneous. Next, we apply the curl operator to Equation $\ref{eqn:2}$, starting with the left hand side.
\begin{aligned} \vec{\nabla} \times \frac{D \vec{u}}{D t} &=\vec{\nabla} \times\left(\frac{\partial \vec{u}}{\partial t}+[\vec{u} \cdot \vec{\nabla}] \vec{u}\right) \\ &=\vec{\nabla} \times\left(\frac{\partial \vec{u}}{\partial t}+\vec{\omega} \times \vec{u}+\frac{1}{2} \vec{\nabla}(\vec{u} \cdot \vec{u})\right) \\ &=\frac{\partial \vec{\omega}}{\partial t}+\vec{\nabla} \times(\vec{\omega} \times \vec{u})+0 \\ &=\frac{\partial \vec{\omega}}{\partial t}+[\vec{u} \cdot \vec{\nabla}] \vec{\omega}-0-[\vec{\omega} \cdot \vec{\nabla}] \vec{u}+0 \\ &=\frac{D \vec{\omega}}{D t}-[\vec{\omega} \cdot \vec{\nabla}] \vec{u} \end{aligned} \nonumber
where identity [3] was used in the second line and identity [4] in the fourth; the vanishing terms follow from identity [2], the solenoidal property Equation $\ref{eqn:4}$ and the incompressibility condition Equation $\ref{eqn:1}$. The right hand side is simpler:
$\vec{\nabla} \times\left(\vec{g}-\vec{\nabla} \frac{p}{\rho_{0}}\right)=0, \nonumber$
because the first term is a constant and the second is a gradient (see [2]). The curl of Equation $\ref{eqn:2}$ is therefore
$\frac{D \vec{\omega}}{D t}=[\vec{\omega} \cdot \vec{\nabla}] \vec{u}.\label{eqn:5}$
Aside: An interesting consequence of Equation $\ref{eqn:5}$ is that, in homogeneous, inviscid flow, if the vorticity of a fluid parcel is zero at some initial time, it remains zero for all time. Therefore if the vorticity is zero everywhere at some initial time, it remains zero everywhere, even if the flow is accelerated by gravity or pressure. This leads to the idealization of irrotational flow, in which the math is simplified by assuming that $\vec{\omega}$ is identically zero. That idealization is useful in technological applications but less so in geophysical flows, so we will not pursue it here (for a discussion see Kundu et al. 2016).
7.2.2 Vortex filaments
Consider, as we did previously, the relative motion of two nearby fluid particles separated by a vector $\Delta\vec{x}$ [Figure $1$, also see Equation 5.3.2 (5.3.1)]. The material derivative of $\Delta\vec{x}$ is the velocity differential:
$\frac{D}{D t} \Delta x_{i}=\Delta u_{i}=\frac{\partial u_{i}}{\partial x_{j}} \Delta x_{j}=\left[\Delta x_{j} \frac{\partial}{\partial x_{j}}\right] u_{i} \nonumber$
or
$\frac{D}{D t} \Delta \vec{x}=[\Delta \vec{x} \cdot \vec{\nabla}] \vec{u}.\label{eqn:6}$
Note the correspondence in form between Equation $\ref{eqn:5}$ and Equation $\ref{eqn:6}$: the vorticity vector $\vec{\omega}$ and the particle separation vector $\Delta\vec{x}$ obey the same equation! Therefore, if $\vec{\omega}$ and $\Delta\vec{x}$ are parallel at some initial time, they will remain parallel for all future times. Consider the two points labelled $\vec{x}$ and $\vec{x}+\Delta\vec{x}$ at the upper left of Figure $2$. These are chosen so that the separation between them, $\Delta\vec{x}$, is parallel to the local vorticity vector $\vec{\omega}$. After a time interval $\Delta t$, the two points have moved to new positions $\vec{x}^\prime$ and $\vec{x}^\prime + \Delta\vec{x}^\prime$, and the local vorticity vector has become $\vec{\omega}^\prime$, but the separation vector is still parallel to the vorticity.
A vortex filament is a curve within the fluid that is everywhere parallel to the local vorticity, e.g., the upper yellow curve in figure $2$. After the time interval $\Delta t$, each point on the vortex filament has moved to its new position, resulting in the lower yellow curve. Because the separation vector between each pair of points on the curve remains parallel to the local vorticity vector, the yellow curve remains a vortex filament. We therefore have the following theorem:
Helmholtz’s theorem #1
In an inviscid, homogeneous fluid, vortex filaments move with the flow.
One may say that vortex filaments are “frozen” into the flow.
Remember that the time derivative of the separation vector can also be written in terms of the strain rate and rotation tensors (cf. 5.3.8):
$\frac{D}{D t} \Delta x_{i}=\Delta u_{i}=\frac{\partial u_{i}}{\partial x_{j}} \Delta x_{j}=\left(e_{i j}+\frac{1}{2} r_{i j}\right) \Delta x_{j}.\label{eqn:7}$
It now follows from Equation $\ref{eqn:5}$ that the same is true of the vorticity vector:
$\frac{D}{D t} \omega_{i}=\left(e_{i j}+\frac{1}{2} r_{i j}\right) \omega_{j}.\label{eqn:8}$
Remembering that $r_{ij}=-\varepsilon_{ijk}\omega_k$, we note that the second term on the right-hand side must be zero:
$r_{i j} \omega_{j}=-\varepsilon_{i j k} \omega_{j} \omega_{k}=0, \nonumber$
as is easily seen by interchanging $j$ and $k$. As a result, Equation $\ref{eqn:8}$ becomes
$\frac{D}{D t} \vec{\omega}=\underset{\sim}{e} \vec{\omega}.\label{eqn:9}$
Equation $\ref{eqn:9}$ is equivalent to Equation $\ref{eqn:5}$, but it clearly shows the effect of strain on the vorticity.
• If the strain is extensional, it lengthens the vorticity vector just as it does the separation vector. In the ideal case where the vorticity is aligned with a principal strain, the vorticity grows exponentially, with growth rate equal to the corresponding strain rate eigenvalue. This exponential amplification of vorticity by extensional strain is called vortex stretching. Examples include tornadoes, where local rotation is amplified by stretching due to rising air, and a bathtub drain, where stretching is due to plunging water. If the local strain is compressive, the vorticity vector is compressed and its amplitude is correspondingly reduced.
• The off-diagonal elements of $\underset{\sim}{e}$ induce a tilting motion, converting one component of vorticity into another.
Vortex stretching is most efficient for vortices lying parallel to the principal axis of extensional strain. However, for a slowly-varying strain field, almost all vortices are rotated by the transverse strains so that they ultimately point in that optimal direction, as we now illustrate. Figure $3$a shows a vorticity vector (red), with material particles delineating its ends, marked “1’’. In the background is a strain field, drawn in two dimensions for simplicity. The vortex is oriented nearly parallel to the principal axis of compression. As a result, the vortex is compressed and its magnitude (speed of rotation) decreases, indicated by the thinner arrow at position 2. In the next stage of evolution (position 3), the particles are drawn toward the principal axis of extension. The vorticity vector is now stretched, and its magnitude increases accordingly. Ultimately (position 4), the vortex approaches perfect alignment with the principal axis of extension. In this limit the magnitude of the vorticity grows exponentially, with growth rate equal to the largest eigenvalue of the strain rate tensor.2
While the initial orientation of the vorticity vector shown in Figure $3$a is arbitrary, the end state is virtually always the same: the vector winds up growing exponentially in the direction of maximum extensional strain. (The only exception is if the initial vector is exactly perpendicular to the extensional strain axis, in which case it is compressed to zero.)
Vortex stretching leads to a phenomenon often seen in nature - the creation of vorticity by a strain field. The flow may be almost entirely strain, but because stretched vorticity grows exponentially (in this simple geometry, at least), even a very weak vortical disturbance can rapidly become dominant. Figure $3$b shows a strain field between two corotating vortices. In the region between the vortices, small-amplitude vortices are stretched by the process shown in Figure $3$a, soon becoming aligned with the extensional strain. If you observe waves breaking on a beach, you often see that the crest of the wave just before it breaks is riven with small ripples, only an inch or so wide, aligned perpendicular to the wave crests. These are a manifestation of vortex stretching by the strain created around the wave crest as the wave grows.
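The alignment argument above can be checked numerically. The sketch below (strain rate, time step and initial vector are assumed for illustration) integrates $D\vec{\omega}/Dt = \underset{\sim}{e}\,\vec{\omega}$ for a steady plane strain, starting with a vorticity vector oriented mostly along the compressive axis:

```python
import numpy as np

# Forward-Euler integration of d(omega)/dt = e @ omega for a steady
# plane strain: extension along x, compression along y (rates assumed).
s = 1.0                              # strain rate [1/s]
e = np.array([[ s, 0.0, 0.0],
              [0.0, -s, 0.0],
              [0.0, 0.0, 0.0]])      # symmetric, trace-free strain tensor

omega = np.array([0.1, 1.0, 0.0])    # starts mostly along the compressive axis
dt, nsteps = 1e-3, 5000              # integrate to t = 5 strain times
for _ in range(nsteps):
    omega = omega + dt * (e @ omega)

unit = omega / np.linalg.norm(omega)
# The vector has rotated into the extensional (x) axis and amplified.
print(unit, np.linalg.norm(omega))
```

The compressive component decays like $e^{-st}$ while the extensional component grows like $e^{st}$, so the direction converges to the extension axis regardless of the (nonzero) initial projection onto it.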
7.2.3 Vorticity conservation in planar flow.
Consider a flow that lies entirely in the $x$−$y$ plane, so that the vorticity is $\omega(x,y)\hat{e}^{(z)}$. In this case the “3” component of Equation $\ref{eqn:9}$ is
$\frac{D}{D t} \omega=e_{3 j} \omega_{j}=\omega e_{33}. \nonumber$
But the only nonzero components of the strain rate tensor correspond to $i=1,2$ and $j=1,2$ (see exercise 23, for example), so that $e_{33}=0$. As a result, $D\omega/Dt=0$. This result is true for all 2D flows: i.e., vorticity is conserved following the motion. There is no vortex stretching, tilting or compression in a 2D flow.
7.2.4 Vortex tubes
A vortex filament is infinitely thin, while vortices we observe have thickness. To model these, we define a vortex tube:
Definition: A vortex tube is a bundle of vortex filaments (Figure $4$).
The measure of the strength of a vortex tube is its circulation, taken around any cross section. This would not be a useful measure if it depended on the cross section chosen. Helmholtz’s theorem #2 assures us that it does not:
Helmholtz theorem #2
Circulation is the same for every cross-section through a vortex tube.
This theorem has two important corollaries:
1. Thinner parts of a vortex tube have stronger vorticity. This is because $\Gamma = \int_A \vec{\omega}\cdot \hat{n}\,dA$, so for $\Gamma$ to be uniform a reduction of area must be compensated by an increase in vorticity.
2. A vortex tube cannot end within the fluid. It can either:
• end at a boundary (e.g., tornado)
• form a loop (e.g., smoke ring, as in Figure 7.1.1).
Proof
We will integrate the vorticity over the surface of the vortex tube segment (Figure $4$) in two ways. First, we will perform the whole integral at once using the divergence theorem, then we will integrate over the two cross-sections and the sidewall separately. Finally we will use Stokes’ theorem to express the result in terms of circulation.
The complete integral is
$\oint_{A} \vec{\omega} \cdot \hat{n} d A=\int_{V} \vec{\nabla} \cdot \vec{\omega} d V=0.\label{eqn:10}$
The first equality is due to the divergence theorem; the second follows from Equation $\ref{eqn:4}$.
Alternatively, we can perform the integral over the three surfaces separately:
$\oint_{A} \vec{\omega} \cdot \hat{n} d A=\underbrace{\int_{\alpha} \vec{\omega} \cdot \hat{n}_{\alpha} d A}_{=\Gamma_{\alpha}}+\underbrace{\int_{S} \vec{\omega} \cdot \hat{n}_{S} d A}_{=0}+\underbrace{\int_{\beta} \vec{\omega} \cdot \hat{n}_{\beta} d A}_{=-\Gamma_{\beta}}=0.\label{eqn:11}$
The first term is equal to the circulation around the cross-section $\alpha$,
$\Gamma_{\alpha}=\int_{\alpha} \vec{\omega} \cdot \hat{n}_{\alpha} d A, \nonumber$
by Stokes’ theorem Equation 4.3.13. The integral over the sidewall (the second term) must vanish because that surface is composed of vortex filaments, and therefore its unit normal vector is everywhere perpendicular to $\vec{\omega}$.
Stokes’ theorem requires that the unit normal be directed according to the right-hand rule, which in the third term would be into the vortex tube segment (Figure $4$), opposite to the outward normal $\hat{n}_{\beta}$. Therefore the third term in Equation $\ref{eqn:11}$ equals $-\Gamma_{\beta}$. Because the three terms must add to zero, we have $\Gamma_\beta - \Gamma_\alpha = 0$ and the theorem is proven.
Kelvin’s circulation theorem3
If the fluid is inviscid and homogeneous, then circulation around a vortex tube does not change in time.
This explains the extraordinary longevity of vortex tubes even as they twist and turn in a turbulent flow.
Proof
As preparation, consider a scalar function of space: $\phi = \phi(\vec{x})$. If we move through a small displacement $d\vec{x}$, the change in $\phi$ is
$d \varphi=\frac{\partial \varphi}{\partial x_{i}} d x_{i}. \nonumber$
We call this a perfect differential. Now consider the following integral
$\int_{\vec{x}^{(1)}}^{\vec{x}^{(2)}} \frac{\partial \varphi}{\partial x_{i}} d x_{i}=\int_{\vec{x}^{(1)}}^{\vec{x}^{(2)}} d \varphi=\varphi\left(\vec{x}^{(2)}\right)-\varphi\left(\vec{x}^{(1)}\right). \nonumber$
For a closed circuit,$\vec{x}^{(1)}=\vec{x}^{(2)}$, and therefore
$\oint \frac{\partial \varphi}{\partial x_{i}} d x_{i}=0. \nonumber$
So, the integral of a perfect differential around a closed circuit is zero. Our strategy here is to get things into the form of perfect differentials.
OK, here goes. Consider the circulation around an arbitrary cross-section. By the definition of the integral, we can write this as the sum over many increments, as illustrated in Figure $5$, in the limit as the size of each increment goes to zero:
$\Gamma=\oint \vec{u} \cdot d \vec{x}=\lim _{|\Delta \vec{x}| \rightarrow 0} \sum_{n} \vec{u}^{(n)} \cdot \Delta \vec{x}^{(n)} \nonumber$
We now take the material derivative:
\begin{aligned} \frac{D \Gamma}{D t} &=\lim _{|\Delta \vec{x}| \rightarrow 0} \frac{D}{D t} \sum_{n} \vec{u}^{(n)} \cdot \Delta \vec{x}^{(n)} \\ &=\lim _{|\Delta \vec{x}| \rightarrow 0} \sum_{n} \frac{D}{D t}\left(\vec{u}^{(n)} \cdot \Delta \vec{x}^{(n)}\right) \\ &=\lim _{|\Delta \vec{x}| \rightarrow 0} \sum_{n}\left(\frac{D \vec{u}^{(n)}}{D t} \cdot \Delta \vec{x}^{(n)}+\vec{u}^{(n)} \cdot \frac{D \Delta \vec{x}^{(n)}}{D t}\right) \end{aligned} \nonumber
Now we substitute from Equation $\ref{eqn:2}$ and return the terms to integral form
\begin{aligned} \frac{D \Gamma}{D t} &=\lim _{|\Delta \vec{x}| \rightarrow 0} \sum_{n}\left[\left(\vec{g}-\frac{1}{\rho_{0}} \vec{\nabla} p^{(n)}\right) \cdot \Delta \vec{x}^{(n)}+\vec{u}^{(n)} \cdot \Delta \vec{u}^{(n)}\right] \\ &=\lim _{|\Delta \vec{x}| \rightarrow 0} \sum_{n}\left[\vec{g} \cdot \Delta \vec{x}^{(n)}-\frac{1}{\rho_{0}} \vec{\nabla} p^{(n)} \cdot \Delta \vec{x}^{(n)}+\vec{u}^{(n)} \cdot \Delta \vec{u}^{(n)}\right] \\ &=\oint \vec{g} \cdot d \vec{x}-\frac{1}{\rho_{0}} \oint d p+\oint \vec{u} \cdot d \vec{u} \end{aligned} \nonumber
Each of these three integrands is a perfect differential and therefore integrates to zero around a closed circuit. For clarity, we will look at the terms individually.
• The first term is the line integral of the constant vector $\vec{g}$ around the closed curve. We could rewrite the integrand as $g_i dx_i$, or $d(g_ix_i)$ since $g_i$ is a constant. So the integrand is a perfect differential.
• In the second term, $\vec{\nabla}p^{(n)}\cdot\Delta\vec{x}^{(n)}$ has been recognized as the small change in $p$ over the spatial increment $\Delta \vec{x}^{(n)}$, i.e., another perfect differential.
• The third term is the perfect differential of $\vec{u}\cdot\vec{u}/2$: $\vec{u} \cdot d \vec{u}=u_{i} d u_{i}=u_{i} \frac{\partial u_{i}}{\partial x_{j}} d x_{j}=\frac{\partial}{\partial x_{j}}\left(\frac{u_{i}^{2}}{2}\right) d x_{j}=d\left(\frac{u_{i}^{2}}{2}\right)=d\left(\frac{\vec{u} \cdot \vec{u}}{2}\right) \nonumber$ and therefore integrates to zero.
So we have $D\Gamma/Dt=0$, and the proof is complete.4
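The fact on which the whole proof rests, that a perfect differential integrates to zero around a closed circuit, can also be illustrated numerically. The sketch below (the scalar field and the circuit are arbitrary choices) integrates $\vec{\nabla}\varphi\cdot d\vec{x}$ around a closed loop by the midpoint rule:

```python
import numpy as np

# Line integral of grad(phi) . dx around a closed circuit, for the
# arbitrary choice phi = x^2*y + z; the result should vanish.
def grad_phi(x, y, z):
    return np.array([2.0*x*y, x**2, 1.0])

# A closed circle of radius 1 at height z = 0.3, traversed once.
theta = np.linspace(0.0, 2.0*np.pi, 2001)
pts = np.stack([np.cos(theta), np.sin(theta), 0.3*np.ones_like(theta)], axis=1)

total = 0.0
for a, b in zip(pts[:-1], pts[1:]):
    mid = 0.5*(a + b)                 # midpoint rule on each segment
    total += grad_phi(*mid) @ (b - a)
# total is zero up to discretization error
```
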
Summary: In an inviscid, homogeneous fluid we have the following:
• Helmholtz #1: Vortex tubes move with the fluid (because vortex filaments do).
• Helmholtz #2: A vortex tube cannot end within the fluid.
• Kelvin: Circulation around a vortex tube does not vary in time.
Taken together, these three theorems explain the extraordinary coherence and persistence of vortex tubes.
7.2.5 Pressure drop within a vortex tube
Inside a vortex tube, the outward centrifugal force is balanced by a reduction of pressure. Here, we will quantify this effect.
Consider a Rankine vortex with radius $R$ in a homogeneous fluid (Figure $6$a; also section 5.3.2). In a cylindrical coordinate system, the velocity has radial component $u_r = 0$ and azimuthal component $u_\theta = \dot{\theta}r$ for $r<R$. The equation for radial velocity in this coordinate system is (appendix I):
$\frac{D u_{r}}{D t}=\frac{u_{\theta}^{2}}{r}-\frac{\partial}{\partial r} \frac{p}{\rho_{0}}.\label{eqn:12}$
The first term on the right-hand side represents the centrifugal acceleration. To maintain $u_r=0$, the right-hand side must vanish, i.e., the radial pressure gradient must balance the centrifugal acceleration:
$\frac{\partial}{\partial r} \frac{p}{\rho_{0}}=\frac{u_{\theta}^{2}}{r}=\frac{(\dot{\theta} r)^{2}}{r}=\dot{\theta}^{2} r. \nonumber$
Integrating, we have
$\frac{p}{\rho_{0}}=\frac{1}{2} \dot{\theta}^{2} r^{2}+\frac{p_{0}}{\rho_{0}},\label{eqn:13}$
where $p_0$ is the pressure at $r = 0$. The pressure profile therefore has the form of a parabola (Figure $6$b). The total pressure change from the center to the radius of maximum velocity ($r = R$) is
$p-p_{0}=\frac{1}{2} \rho_{0} u_{\theta}^{2}, \nonumber$
where $p$ and $u_\theta$ are values at $r = R$. It is left as an exercise for the reader to determine the pressure distribution for $r > R$. Note that $p− p_0$ is positive definite, i.e., the pressure inside a vortex tube is always reduced.
Example: pressure drop in a tornado
A strong tornado (as defined for the purpose of nuclear reactor design) has maximum azimuthal velocity 130 m/s. Using $\rho_0$ = 1.2 kg/m$^3$, the mean density of air at sea level, the pressure drop is 1.4 psi, or about 1/10 of normal atmospheric pressure. This is why a tornado can cause windows to burst: on a 1 m$^2$ window, 1/10 of atmospheric pressure is about the weight of a small car.
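The arithmetic of this example is worth a one-line check (using the conversion 1 psi = 6894.76 Pa):

```python
# Check the tornado numbers: p - p0 = (1/2) * rho0 * u_theta^2 at r = R.
rho0 = 1.2            # air density [kg/m^3]
u_theta = 130.0       # maximum azimuthal velocity [m/s]

dp = 0.5 * rho0 * u_theta**2    # pressure drop [Pa]
dp_psi = dp / 6894.76           # 1 psi = 6894.76 Pa
force = dp * 1.0                # net force on a 1 m^2 window [N]
print(dp, dp_psi, force)        # 10140.0 Pa, about 1.47 psi, ~10 kN
```
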
Exercise:
Think about all the ways in which a real tornado differs from the simple model considered here.
Surface elevation in rotating flow
Consider a vertical, axisymmetric vortex in homogeneous fluid with a free surface, e.g., a stirred mug of coffee. Let the height of the free surface be $\eta(r)$. With no vertical motion, the fluid is in hydrostatic equilibrium, i.e., the pressure at each point is the weight of the fluid above it:
$p(r, z)=\int_{z}^{\eta(r)} \rho_{0} g d z=\rho_{0} g(\eta-z). \nonumber$
Substitution into Equation $\ref{eqn:13}$ gives
$\eta=\eta_{0}+\frac{\dot{\theta}^{2}}{2 g} r^{2},\label{eqn:14}$
where $\eta_0$ is the surface elevation at $r = 0$. The surface is therefore a paraboloid of revolution.5
Suppose we stir a cup of coffee at a rate of two revolutions per second, so that $\dot{\theta} = 4\pi\ \mathrm{s}^{-1}$. If the radius of the cup is 4 cm and $g$ = 9.81 m s$^{-2}$, then the elevation difference should be about 1.3 cm.
You will get to play with these concepts in exercises 30 and 31.
1Hermann von Helmholtz (1821-1894) was a German physicist and physician. He is widely known among physiologists for his work on the physics of vision. He also wrote on philosophy, particularly the relationship of perception to physical laws. A well-known partial differential equation is named for him.
2You can show this more generally by approximating $\underset{\sim}{e}$ as a constant matrix and writing the general solution of Equation $\ref{eqn:9}$ as a linear combination of its eigenvectors. Each coefficient grows or decays exponentially at a rate given by the corresponding strain rate eigenvalue. Eventually, the term with the largest positive eigenvalue will dominate, i.e., $\vec{\omega}$ will approach the direction of greatest extensional strain.
3William Thompson, 1st Baron Kelvin, (1824-1907) was born in Belfast but spent most of his career at the University of Glasgow. He established the lower limit of temperature, absolute zero, and as a result the Kelvin temperature scale is named in his honor. He led the design of the first trans-Atlantic telegraph cable, for which he was knighted by Queen Victoria. He was the first British scientist elevated to the House of Lords.
4Note that this proof does not actually require that the path of integration enclose a vortex tube; it applies to any closed circuit in an inviscid, homogeneous fluid. The theorem also holds if $\rho$ is not uniform provided that $\rho$ is a function of $p$ only, i.e., if the fluid is barotropic (see section 6.6.2).
5As a fluid dynamicist, always stir your coffee first and add cream second, so you can see the circulation patterns.
7.03: Viscous Effects
Here we will extend the vorticity equation to cover viscous effects, then use the result to develop a simple model of a vortex in a viscous fluid.
7.3.1 The vorticity equation for a viscous fluid
Assume $\rho=\rho_0$ and that $ν$ is uniform but nonzero. The momentum equation Equation 6.8.2 is then
$\frac{D \vec{u}}{D t}=\vec{g}-\vec{\nabla} \frac{p}{\rho_{0}}+v \nabla^{2} \vec{u}.\label{eqn:1}$
As before, we obtain the vorticity equation by taking the curl of the momentum equation, in this case Equation $\ref{eqn:1}$. Happily we have already done most of the work; we need only add the viscous term. This is simply
$\vec{\nabla} \times\left(v \nabla^{2} \vec{u}\right)=v \nabla^{2}(\vec{\nabla} \times \vec{u})=v \nabla^{2} \vec{\omega}. \nonumber$
The curl of Equation $\ref{eqn:1}$ is therefore
$\frac{D \vec{\omega}}{D t}=[\vec{\omega} \cdot \vec{\nabla}] \vec{u}+v \nabla^{2} \vec{\omega}.\label{eqn:2}$
The final term tells us that vorticity is diffused by viscosity in the same manner as velocity is (cf. Equation $\ref{eqn:1}$).
7.3.2 The Burgers vortex
In the presence of viscosity, Kelvin's theorem no longer holds, and vortex tubes are no longer immortal. They can, however, be maintained against viscous diffusion by vortex stretching. The Burgers1 vortex is a simple model of an axisymmetric vortex that is simultaneously amplified by extensional strain and diffused by viscosity (Figure $1$a) such that equilibrium is maintained, i.e., $\partial\vec{\omega}/\partial t = 0$. The flow has two components: a strain field with expansion in the axial ($z$) direction balanced by compression in the radial ($r$) direction and a vortex whose motion is entirely azimuthal. We will define these fields in turn, and then test the resulting solution as a model for naturally occurring vortices.
We begin by defining the extensional strain component in the simplest possible way:
$w=\lambda z.\label{eqn:3}$
To deduce the corresponding radial velocity, we invoke incompressibility using the divergence in cylindrical coordinates Equation I.1.2:
$\vec{\nabla} \cdot \vec{u}=\frac{1}{r} \frac{\partial}{\partial r}\left(r u_{r}\right)+\frac{1}{r} \frac{\partial u_{\theta}}{\partial \theta}+\frac{\partial w}{\partial z}=0. \nonumber$
Axisymmetry requires that $\partial/\partial\theta$ be zero, so the second term on the right-hand side vanishes, while the third term is just $\lambda$. This leads to a simple equation for $u_r$
$\frac{\partial}{\partial r}\left(r u_{r}\right)=-\lambda r, \nonumber$
which integrates to give
$u_{r}=-\frac{1}{2} \lambda r.\label{eqn:4}$
So we have the assumed vertical extension Equation $\ref{eqn:3}$ balanced by a radial inflow Equation $\ref{eqn:4}$, as illustrated in Figure $1$a.
We next solve for the vorticity. Recall that, for an axisymmetric vortex,
$\vec{\omega}=\omega(r) \hat{e}^{(z)}, \nonumber$
so we only have to determine the scalar function $\omega(r)$. We do this using the $z$-component of Equation $\ref{eqn:2}$:
$\frac{D \omega}{D t}=\left[\omega \frac{\partial}{\partial z}\right] w+v \nabla^{2} \omega.\label{eqn:5}$
The first term on the right-hand side is just $\omega\lambda$, but there is complexity hidden in the cylindrical forms of the material derivative and the Laplacian. The material derivative of $\omega$ is Equation I.1.6
\begin{align} \frac{D \omega}{D t} &=\left(\frac{\partial}{\partial t}+u_{r} \frac{\partial}{\partial r}+\frac{u_{\theta}}{r} \frac{\partial}{\partial \theta}+u_{z} \frac{\partial}{\partial z}\right) \omega, \\ &=-\frac{1}{2} \lambda r \frac{d \omega}{d r}.\label{eqn:6} \end{align} \nonumber
The total derivative is written because $\omega$ depends only on $r$. The Laplacian of $\omega$ Equation I.1.4 is
\begin{align} \nabla^{2} \omega &=\left(\frac{1}{r} \frac{\partial}{\partial r} r \frac{\partial}{\partial r}+\frac{1}{r^{2}} \frac{\partial^{2}}{\partial \theta^{2}}+\frac{\partial^{2}}{\partial z^{2}}\right) \omega, \\ &=\frac{1}{r} \frac{d}{d r}\left(r \frac{d \omega}{d r}\right).\label{eqn:7} \end{align} \nonumber
Substituting Equation $\ref{eqn:6}$ and Equation $\ref{eqn:7}$ into Equation $\ref{eqn:5}$ we obtain an ordinary differential equation for $\omega(r)$:
$-\frac{1}{2} \lambda r \frac{d \omega}{d r}=\omega \lambda+\frac{\mathrm{v}}{r} \frac{d}{d r}\left(r \frac{d \omega}{d r}\right). \nonumber$
Multiplying through by $r$ and integrating yields
$vr \frac{d \omega}{d r}+\lambda \frac{1}{2} r^{2} \omega=0, \nonumber$
or
$\frac{d \omega}{d r}=-\frac{\lambda}{2 \mathrm{v}} r \omega. \nonumber$
The solution is a Gaussian function:
$\omega=\omega_{0} e^{-\frac{\lambda}{4 v} r^{2}}, \nonumber$
where $\omega_0$ is an arbitrary constant representing the maximum vorticity. This is more commonly written as
$\omega(r)=\frac{\Gamma}{2 \pi r_{0}^{2}} \exp \left(-\frac{r^{2}}{2 r_{0}^{2}}\right), \quad \text { where } r_{0}=\sqrt{\frac{2 v}{\lambda}}.\label{eqn:8}$
The balance between stretching and viscosity is expressed in the radial scale $r_0$ (Figure $1$b). Stronger stretching gives a thinner, more intense vortex; stronger viscosity gives a thicker, weaker vortex.
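As a check on the derivation, the Gaussian profile can be verified against the steady balance in Equation $\ref{eqn:5}$ by finite differences. A Python sketch with arbitrary illustrative parameters:

```python
import math

def burgers_vorticity(r, Gamma, nu, lam):
    """Burgers-vortex vorticity omega(r) = Gamma/(2 pi r0^2) exp(-r^2/(2 r0^2))."""
    r0 = math.sqrt(2 * nu / lam)
    return Gamma / (2 * math.pi * r0**2) * math.exp(-r**2 / (2 * r0**2))

# Verify the steady balance: -(lam/2) r omega' = lam omega + (nu/r)(r omega')'
Gamma, nu, lam = 1.0, 1e-3, 0.5       # illustrative values
w = lambda r: burgers_vorticity(r, Gamma, nu, lam)
h = 1e-4                              # finite-difference step
for r in (0.02, 0.05, 0.1):
    dw = (w(r + h) - w(r - h)) / (2 * h)            # omega'
    d2w = (w(r + h) - 2 * w(r) + w(r - h)) / h**2   # omega''
    stretching = -0.5 * lam * r * dw
    balance = lam * w(r) + nu * (d2w + dw / r)
    assert abs(stretching - balance) < 1e-3 * lam * w(r)
```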
Vortices at the dissipation scale In a turbulent flow, the vorticity field has the form of a spaghetti-like tangle of vortex tubes (e.g., Moffatt et al. 1994). Is the Burgers model applicable to these structures? The smallest turbulent motions are of the order of the Kolmogorov scale, $L_K=(v^3/\varepsilon)^\frac{1}{4}$, where $\varepsilon$ is the kinetic energy dissipation rate $\varepsilon = 2ve_{ij}e_{ij}$. In geophysical turbulence, $\varepsilon$ varies greatly, but $L_K$ varies less because of the power 1/4; a typical value is ∼ 1 cm. In the Burgers model, the only nonzero strain rate component is $e_{33}=\lambda$, and therefore $\varepsilon = 2v\lambda^2$. The Burgers model predicts $r_0 = 1.7 L_K$. The agreement is reasonable in an order-of-magnitude sense.
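The factor 1.7 follows directly from the two definitions, and is independent of the particular values of $ν$ and $\lambda$. A Python sketch with arbitrary values:

```python
import math

nu, lam = 1e-6, 0.3                 # arbitrary illustrative values
eps = 2 * nu * lam**2               # dissipation for the Burgers strain field
L_K = (nu**3 / eps) ** 0.25         # Kolmogorov scale
r0 = math.sqrt(2 * nu / lam)        # Burgers radius
print(round(r0 / L_K, 2))           # → 1.68, i.e. 2**(3/4), for any nu and lam
```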
Test your understanding: Of the simplifying assumptions that underlie the Burgers model, which are most likely to be wrong in this case?
A tornado as a Burgers vortex Suppose that a tornado is driven by a vertical expansion $dw/dz = \lambda = 0.1\ s^{-1}$. This would correspond, for example, to an updraft of 50 m/s at a height of 500 m. For air, $ν$ = 10⁻⁵ m²/s. These values lead to $r_0$ ∼ 10⁻² m, far too small to be realistic. Clearly at least one simplifying assumption is wrong. For example, tornados are not usually cylindrical but are more funnel-shaped. A more extreme discrepancy, though, is in the very simple, symmetric form of the velocity field. In reality, tornadoes are intensely turbulent. The effect of turbulence on the overall flow is similar to that of a greatly increased viscosity. To get $r_0$ = 30 m, a reasonable value, we must assume that this “turbulent” viscosity (or “eddy” viscosity, see section 6.3.6) is 10² m²/s.
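The numbers in this paragraph can be reproduced in a few lines of Python, using the strain rate and target radius assumed above:

```python
import math

lam = 0.1                 # vertical strain rate dw/dz, 1/s
nu_mol = 1e-5             # molecular kinematic viscosity of air, m^2/s

r0_mol = math.sqrt(2 * nu_mol / lam)    # ~0.014 m: unrealistically thin
# Invert r0 = sqrt(2 nu / lam) for the eddy viscosity that gives r0 = 30 m
nu_eddy = 30.0**2 * lam / 2             # 45 m^2/s, of order 10^2
print(round(r0_mol, 3), nu_eddy)        # → 0.014 45.0
```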
1Johannes Martinus (Jan) Burgers was a Dutch physicist best known for the Burgers equation, which describes nonlinear-diffusive systems.
7.04: Buoyancy Effects: The Baroclinic Torque
We now assume that the fluid is inviscid ($ν = 0$) but not homogeneous ($\rho$ is allowed to vary). The momentum equation Equation 7.2.2 becomes
$\frac{D \vec{u}}{D t}=\vec{g}-\frac{1}{\rho} \vec{\nabla} p,\label{eqn:1}$
and once again we take its curl to form the vorticity equation. The first two terms are already computed:
$\vec{\nabla} \times \frac{D \vec{u}}{D t}=\frac{D \vec{\omega}}{D t}-[\vec{\omega} \cdot \vec{\nabla}] \vec{u} \quad \text { and } \vec{\nabla} \times \vec{g}=0. \nonumber$
The curl of the new term (obtained using identities 11 and 15 listed in appendix E) is:
\begin{aligned} \vec{\nabla} \times\left(-\rho^{-1} \vec{\nabla} p\right) &=-\left(\vec{\nabla} \rho^{-1} \times \vec{\nabla} p+\rho^{-1} \vec{\nabla} \times \vec{\nabla} p\right) \\ &=-\vec{\nabla} \rho^{-1} \times \vec{\nabla} p \\ &=\frac{1}{\rho^{2}} \vec{\nabla} \rho \times \vec{\nabla} p. \end{aligned} \nonumber
This is called the baroclinic torque:
$\vec{B}=\frac{1}{\rho^{2}} \vec{\nabla} \rho \times \vec{\nabla} p,\label{eqn:2}$
and the vorticity equation is
$\frac{D \vec{\omega}}{D t}=[\vec{\omega} \cdot \vec{\nabla}] \vec{u}+\vec{B}. \nonumber$
The baroclinic torque $\vec{B}$ is the mechanism by which density variations influence vorticity. This torque is zero in the case of a barotropic fluid (section 6.6.2), where $\rho = \rho(p)$, because $\vec{\nabla}\rho$ and $\vec{\nabla}p$ are then parallel.
To understand the mechanism physically, we first separate the pressure into two parts: hydrostatic and nonhydrostatic:
$p=p_{H}+p^{*}. \nonumber$
The hydrostatic part is defined by the condition of hydrostatic balance:
$\vec{\nabla} p_{H}=\rho \vec{g}, \nonumber$
so that
$\vec{\nabla} p=\rho \vec{g}+\vec{\nabla} p^{*}.\label{eqn:3}$
Substituting this decomposition into the momentum equation Equation $\ref{eqn:1}$ gives
$\frac{D \vec{u}}{D t}=-\frac{1}{\rho} \vec{\nabla} p^{*}.\label{eqn:4}$
So the hydrostatic pressure simply holds up the weight of the water at each point while the nonhydrostatic pressure actually causes motion.
Now we substitute the decomposition Equation $\ref{eqn:3}$ into Equation $\ref{eqn:2}$:
\begin{align} \vec{B} &=\frac{1}{\rho^{2}} \vec{\nabla} \rho \times \vec{\nabla} p \\ &=\frac{1}{\rho^{2}} \vec{\nabla} \rho \times\left(\rho \vec{g}+\vec{\nabla} p^{*}\right)\label{eqn:5} \\ &=\underbrace{\frac{1}{\rho} \vec{\nabla} \rho \times \vec{g}}_{\text {buoyancy }}+\underbrace{\frac{1}{\rho^{2}} \vec{\nabla} \rho \times \vec{\nabla} p^{*}}_{\text {inertia }}.\label{eqn:6} \end{align} \nonumber
Evidently, the baroclinic torque has two parts. Illustrated in Figure ($1$), these correspond to two distinct mechanisms by which density differences can alter the vorticity. The buoyancy term quantifies the tendency for dense fluid to sink and light fluid to rise. It is nonzero if the density gradient has a component perpendicular to gravity, as in Figure $1$a. Using the right-hand rule, verify for yourself that the cross product $\vec{\nabla}\rho \times\vec{g}$ is directed out of the page. The buoyancy term therefore imparts a counterclockwise rotation to the flow, consistent with the vertical motion of the fluid parcels.
The inertia term results from the fact that a force (e.g., $-\vec{\nabla}p^*$), acting on fluid parcels of different density, produces different rates of acceleration (Figure $1$b). The term is nonzero if the density gradient has a component perpendicular to the pressure gradient force. Again, use the right-hand rule to check that the inertial torque is directed into the page and therefore generates clockwise vorticity.
Which term is more important? Inspection of Equation $\ref{eqn:5}$ shows that the inertial term is small compared to the buoyancy term if $|\vec{\nabla}p^*|/\rho\ll g$. The left-hand side of this inequality is just the net acceleration of the flow, as is shown by Equation $\ref{eqn:4}$. Therefore, the buoyancy term dominates if the net acceleration is much less than the gravitational acceleration (“1 gee”, in aeronautical lingo). This is true of the slow motions caused by weak density gradients in the interior of the ocean or the atmosphere. For that reason, we often neglect the inertial term when describing such motions. This leads to the Boussinesq approximation (see Appendix G).
In contrast, air-water interfaces have large density gradients. Accelerations are therefore comparable to gravity (imagine a breaking wave, for example), and buoyancy and inertia are both important. These motions are the topic of the upcoming chapter.
“I spin on the circle of wave upon wave of the sea.” - Pablo Neruda
08: Waves
Our second example of the application of the Navier-Stokes equation to natural flows is surface gravity waves, as would occur at the surface of an ocean or lake. We will make several simplifying assumptions:
• The fluid is inviscid: $ν$ = 0.
• The fluid is homogeneous: $\rho=\rho_0$.
• The flow is confined to the $x$−$z$ plane, so there is no dependence on $y$ and no motion in the $y$ direction.
• The amplitude of the waves is small enough to allow neglect of nonlinearity.
The last assumption is new, and is tremendously important in the analysis of geophysical flows. By linearizing the equations, we filter out some very interesting phenomena such as large-amplitude breaking waves, but the resulting simplification allows us to understand the dynamics in detail. This provides a foundation for more sophisticated theories that include large-amplitude phenomena. In the next application, hydraulic flows, the restriction to small-amplitude motions will be removed.
Hyperbolic functions review1
In this section we will make frequent use of the hyperbolic functions
$\sinh x=\frac{e^{x}-e^{-x}}{2} ; \quad \cosh x=\frac{e^{x}+e^{-x}}{2} ; \quad \text { and } \tanh x=\frac{\sinh x}{\cosh x}. \nonumber$
These obey the relations
$\frac{d}{d x} \sinh x=\cosh x ; \quad \frac{d}{d x} \cosh x=\sinh x ; \quad \frac{d^{2}}{d x^{2}} \sinh x=\sinh x ; \quad \frac{d^{2}}{d x^{2}} \cosh x=\cosh x, \nonumber$
and have the following Taylor series approximations:
$\sinh x \approx x ; \quad \cosh x \approx 1 ; \quad \tanh x \approx x \nonumber$
These are valid for $|x|\ll 1$ and become exact in the limit $|x|\rightarrow 0$. As $x\rightarrow\pm\infty$,
$\sinh x \rightarrow \pm \frac{e^{|x|}}{2} ; \quad \cosh x \rightarrow \frac{e^{|x|}}{2} ; \quad \tanh x \rightarrow \pm 1. \nonumber$
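These small- and large-argument limits are easy to confirm numerically (a Python check):

```python
import math

# Small-argument approximations: sinh x ~ x, cosh x ~ 1, tanh x ~ x
for x in (0.1, 0.01):
    assert abs(math.sinh(x) - x) < x**3      # error is O(x^3/6)
    assert abs(math.cosh(x) - 1) < x**2      # error is O(x^2/2)
    assert abs(math.tanh(x) - x) < x**3      # error is O(x^3/3)

# Large-argument limits: cosh x -> e^|x|/2, tanh x -> +-1
x = 10.0
assert abs(math.cosh(x) - math.exp(x) / 2) < 1e-4
assert abs(math.tanh(x) - 1) < 1e-8
```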
1Also see problem 7.
8.02: The Dispersion Relation
With $\rho = \rho_0$ and $\nu=0$, the mass and momentum equations (Equations 6.2.5 and 6.8.2) are
$\rho_{0} \frac{D \vec{u}}{D t}=-\rho_{0} g \hat{e}^{(z)}-\vec{\nabla} p.\label{eqn:1}$
$\vec{\nabla} \cdot \vec{u}=0\label{eqn:2}$
The velocity field has components $u$ and $w$, and the independent variables are $x$, $z$ and $t$. As in our previous discussion of the baroclinic torque, we separate the pressure into two parts:
$p=p_{H}+p^{*}, \quad \text { where } \vec{\nabla} p_{H}=-\rho_{0} g \hat{e}^{(z)}, \quad \text { or } p_{H}=-\rho_{0} g z.\label{eqn:3}$
Note that $p_H$ is the hydrostatic pressure in the absence of surface deflections. Substituting this decomposition into the momentum equation Equation $\ref{eqn:1}$ gives
$\rho_{0} \frac{D \vec{u}}{D t}=-\vec{\nabla} p^{*}.\label{eqn:4}$
8.2.1 Linearizing the equations of motion
In the absence of motion, the fluid is in an equilibrium state defined by
$\vec{u}=0, p^{*}=0, \eta=0. \nonumber$
We will assume that the system remains close to this equilibrium state. As a result of this assumption, any term that involves a product of two or more of $\vec{u}$, $p^*$ and $\eta$ will be treated as negligible. For example:
$\frac{D \vec{u}}{D t}=\frac{\partial \vec{u}}{\partial t}+\underbrace{[\vec{u} \cdot \vec{\nabla}] \vec{u}}_{\approx 0} \approx \frac{\partial \vec{u}}{\partial t}.\label{eqn:5}$
As a result, Equation $\ref{eqn:4}$ becomes
$\rho_{0} \frac{\partial \vec{u}}{\partial t}=-\vec{\nabla} p^{*}.\label{eqn:6}$
This equation is linear, and as a result is vastly easier to solve than the nonlinear version Equation $\ref{eqn:4}$. We must bear in mind, though, that the solution becomes invalid if the amplitude is large enough that the advective term in Equation $\ref{eqn:5}$ is not negligible. (How large is this? We will take up this important question in section 8.2.4.)
The next step is to derive an equation that we can solve for $p^*$. Taking the divergence of Equation $\ref{eqn:6}$ and using $\ref{eqn:2}$, we find
$\rho_{0} \vec{\nabla} \cdot \frac{\partial \vec{u}}{\partial t}=\rho_{0} \frac{\partial}{\partial t} \vec{\nabla} \cdot \vec{u}=0=-\vec{\nabla} \cdot \vec{\nabla} p^{*}=-\nabla^{2} p^{*}. \nonumber$
The nonhydrostatic pressure is therefore a solution of Laplace’s equation:
$\frac{\partial^{2} p^{*}}{\partial x^{2}}+\frac{\partial^{2} p^{*}}{\partial z^{2}}=0.\label{eqn:7}$
Combining this with the horizontal and vertical components of Equation $\ref{eqn:6}$:
$\rho_{0} \frac{\partial u}{\partial t}=-\frac{\partial p^{*}}{\partial x};\label{eqn:8}$
$\rho_{0} \frac{\partial w}{\partial t}=-\frac{\partial p^{*}}{\partial z},\label{eqn:9}$
we have three equations for the three unknowns $u$, $w$, and $p^*$. Notice further that $u$ appears only in Equation $\ref{eqn:8}$, so Equation $\ref{eqn:7}$ and Equation $\ref{eqn:9}$ form a fully-determined system for the two unknowns $p^*$ and $w$.
8.2.2 Boundary conditions
Boundary conditions are imposed at the surface and the bottom. We will begin with the latter as it is simpler. We assume that the bottom is flat and impermeable:
$w=0 \quad \text { at } \quad z=-H.\label{eqn:10}$
Since this is true for all time, it is also true that $\partial w/\partial t = 0$ at the lower boundary, in which case Equation $\ref{eqn:9}$ provides a boundary condition on the nonhydrostatic pressure:
$\frac{\partial p^{*}}{\partial z}=0 \quad \text { at } \quad z=-H.\label{eqn:11}$
The surface is assumed to be material, i.e., it is always composed of the same fluid particles. As a result:
$w=\frac{D \eta}{D t} \quad \text { at } \quad z=\eta. \nonumber$
This boundary condition is both nonlinear and self-referential, i.e., you can’t apply it without knowing $\eta(x,t)$, which requires knowing the solution. We know how to remove the nonlinearity:
$\frac{D \eta}{D t}=\frac{\partial \eta}{\partial t}+\underbrace{u \frac{\partial \eta}{\partial x}}_{\approx 0} \approx \frac{\partial \eta}{\partial t}. \nonumber$
The self-referential nature of the boundary condition is removed via another kind of linearization:
$\left.w\right|_{z=\eta}=\left.w\right|_{z=0}+\left.\underbrace{\left.\frac{\partial w}{\partial z}\right|_{z=0} \eta}_{\approx 0} \approx w\right|_{z=0} .\label{eqn:12}$
The result is a linearized boundary condition:
$w=\frac{\partial \eta}{\partial t} \quad \text { at } \quad z=0.\label{eqn:13}$
A second surface condition originates with the requirement that pressure be continuous. The total pressure ($p_H$ + $p^*$) immediately below the water surface must therefore be equal to atmospheric pressure $p_A$.1 We assume that $p_A$ is uniform and, since only gradients of pressure matter, we set it to zero. The total pressure $p$ must therefore approach zero at the surface and, from Equation $\ref{eqn:3}$,
$p^{*}=-p_{H}=\rho_{0} g \eta \quad \text { at } \quad z=\eta. \nonumber$
By the same logic employed in Equation $\ref{eqn:12}$, we can apply this boundary condition at $z$ = 0:
$\left.p^{*}\right|_{z=\eta}=\rho_{0} g \eta=\left.p^{*}\right|_{z=0}+\left.\underbrace{\left.\frac{\partial p^{*}}{\partial z}\right|_{z=0} \eta}_{\approx 0} \approx p^{*}\right|_{z=0}, \nonumber$
and therefore
$p^{*}=\rho_{0} g \eta \quad \text { at } \quad z=0.\label{eqn:14}$
Taking inventory: Our problem now consists of the equations Equation $\ref{eqn:7}$ and Equation $\ref{eqn:9}$ and the boundary conditions Equation $\ref{eqn:10}$, Equation $\ref{eqn:11}$, Equation $\ref{eqn:13}$ and Equation $\ref{eqn:14}$. Once these are solved for $p^∗$ and $w$, the horizontal momentum equation Equation $\ref{eqn:8}$ can be solved separately to obtain $u$.
8.2.3 The normal mode solution
We are looking for solutions for the nonhydrostatic pressure $p^∗$ and the vertical velocity $w$. To begin with, we will seek a solution in which $p^∗$ has the following form:
$p^{*}=P(z) \cos (k x-\omega t).\label{eqn:15}$
This is called a normal mode or a plane wave solution. Several features of this important functional form should be noted.
• $P(z)$ is a function yet to be determined
• The $x$-dependence is determined by the $x$-wavenumber $k$, which is also equal to $2\pi/\lambda$, $\lambda$ being the wavelength.
• The time-dependence is described by the radian frequency $\omega$, which is related to the period $T$ by $\omega = 2\pi/T$. It is often more convenient to use the cyclic frequency $f=\omega/2\pi$, with units of cycles per second (cps) or hertz (Hz).
• The pattern moves with phase speed $c=\omega/k$.
The assumption Equation $\ref{eqn:15}$ is not as restrictive as it may appear at first. By virtue of Fourier’s theorem (see any applied math text), an arbitrary dependence on $x$ and $t$ can be expressed as a superposition of trigonometric functions of the form Equation $\ref{eqn:15}$. Moreover, because the equations are linear, if Equation $\ref{eqn:15}$ is a solution, any such superposition will also be a solution.
Substituting Equation $\ref{eqn:15}$ into Equation $\ref{eqn:7}$, we have
$\frac{\partial^{2} p^{*}}{\partial x^{2}}+\frac{\partial^{2} p^{*}}{\partial z^{2}}=\left[-k^{2} P+P^{\prime \prime}\right] \cos (k x-\omega t)=0, \nonumber$
where primes denote derivatives with respect to $z$. Because the last equality must be valid for all $x$ and $t$, the quantity in square brackets must be zero:
$P^{\prime \prime}=k^{2} P. \nonumber$
This is a very common ordinary differential equation whose solutions can be expressed in terms of hyperbolic functions. To satisfy the bottom boundary condition Equation $\ref{eqn:11}$, we choose
$P=P_{0} \cosh k(z+H)\label{eqn:16}$
where $P_0$ is an undetermined constant
Next we need a corresponding solution for $w$. In the vertical momentum equation Equation $\ref{eqn:9}$ we try a solution of the form
$w=W(z) \sin (k x-\omega t).\label{eqn:17}$
This results in
$-\rho_{0} \omega W \cos (k x-\omega t)=-P^{\prime} \cos (k x-\omega t). \nonumber$
Since this must be true for all $x$ and $t$, the coefficients of the cosine functions must be equal. Therefore Equation $\ref{eqn:17}$ is a solution provided that:
$W=\frac{P^{\prime}}{\rho_{0} \omega}, \nonumber$
or, substituting Equation $\ref{eqn:16}$,
$W=\frac{P_{0} k}{\rho_{0} \omega} \sinh k(z+H).\label{eqn:18}$
Note that Equation $\ref{eqn:18}$ automatically satisfies Equation $\ref{eqn:10}$, the bottom boundary condition on $w$. In summary, we now have solutions for $p^*$ and $w$ in the form of Equation $\ref{eqn:15}$, Equation $\ref{eqn:16}$, Equation $\ref{eqn:17}$, and Equation $\ref{eqn:18}$, and these satisfy the bottom boundary conditions Equation $\ref{eqn:11}$ and Equation $\ref{eqn:10}$.
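It is worth confirming numerically that the vertical structure functions behave as claimed: $P \propto \cosh k(z+H)$ satisfies $P''=k^2P$, and $W \propto \sinh k(z+H)$ vanishes at the bottom. A Python sketch with arbitrary values of $k$ and $H$:

```python
import math

k, H = 0.5, 10.0                    # illustrative wavenumber and depth
P = lambda z: math.cosh(k * (z + H))
h = 1e-4                            # finite-difference step
for z in (-8.0, -4.0, 0.0):
    P2 = (P(z + h) - 2 * P(z) + P(z - h)) / h**2   # finite-difference P''
    assert abs(P2 - k**2 * P(z)) < 1e-4 * P(z)     # P'' = k^2 P
assert math.sinh(k * (-H + H)) == 0.0              # W(-H) = 0
```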
To satisfy the surface boundary conditions Equation $\ref{eqn:13}$ and Equation $\ref{eqn:14}$, we must consider the surface deflection $\eta(x,t)$. Equation $\ref{eqn:14}$ requires $p^*=\rho_0 g\eta$ at $z$ = 0. We solve for $\eta$ and substitute Equation $\ref{eqn:15}$ and Equation $\ref{eqn:16}$ to get
$\eta=\frac{1}{\rho_{0} g} P_{0} \cosh k H \cos (k x-\omega t). \nonumber$
Since the surface deflection is easier to observe than the pressure, we will rewrite this solution as
$\eta=\eta_{0} \cos (k x-\omega t),\label{eqn:19}$
and note that $P_0=\rho_0g\eta_0/\cosh kH$. Our solutions Equation $\ref{eqn:16}$ and $\ref{eqn:18}$ for $P$ and $W$ become
$P=\rho_{0} g \eta_{0} \frac{\cosh k(z+H)}{\cosh k H};\label{eqn:20}$
and
$W=\frac{g k}{\omega} \eta_{0} \frac{\sinh k(z+H)}{\cosh k H}.\label{eqn:21}$
We now have a complete solution for $p^*$, $w$ and $\eta$, but we have not yet satisfied Equation $\ref{eqn:13}$, the surface condition on $w$, i.e., we have an overdetermined system. The solution can only work for certain combinations of the wave parameters $k$ and $\omega$. Substituting our solutions Equation $\ref{eqn:19}$ and Equation $\ref{eqn:21}$ into Equation $\ref{eqn:13}$, we have
$\frac{g k}{\omega} \eta_{0} \frac{\sinh k H}{\cosh k H} \sin (k x-\omega t)=\eta_{0} \omega \sin (k x-\omega t) \quad \forall x, t, \nonumber$
and thus
$\frac{g k}{\omega} \eta_{0} \frac{\sinh k H}{\cosh k H}=\eta_{0} \omega, \nonumber$
or
$\omega^{2}=g k \tanh k H.\label{eqn:22}$
This is called the dispersion relation. The normal mode expressions Equation $\ref{eqn:15}$, Equation $\ref{eqn:17}$, and Equation $\ref{eqn:19}$ can satisfy the equations and all of the boundary conditions only if $\omega$ and $k$ satisfy Equation $\ref{eqn:22}$.
We can gain some insight into the propagation of surface gravity waves by writing the dispersion relation in terms of the phase speed $c=\omega/k$:
$c^{2}=\frac{g}{k} \tanh k H=g H \frac{\tanh k H}{k H},\label{eqn:23}$
as shown in Figure $2$. The fastest propagation occurs when $kH$ is small, i.e., in the limit of low wavenumber, or wavelength long compared to the water depth. This is in contrast to sound waves, in which the short waves travel fastest. (Imagine, for example, the sound of thunder. It begins with a high-frequency clap, then decays away to a low-frequency rumble.) In the ocean, surface waves are most often generated by storms. An observer some distance from the storm will see long, low-frequency waves first.
Test your understanding: In exercise 33 you will extend this derivation to include the effects of surface tension.
To complete the picture of small-amplitude surface waves, we now go back and add the horizontal velocity $u$. This is easily done by substituting our solution Equation $\ref{eqn:15}$ and Equation $\ref{eqn:20}$ into the horizontal momentum equation Equation $\ref{eqn:8}$. This results in
$u=U(z) \cos (k x-\omega t),\label{eqn:24}$
where
$U(z)=\frac{g k}{\omega} \eta_{0} \frac{\cosh k(z+H)}{\cosh k H}.\label{eqn:25}$
We can write this in a slightly more intuitive form if we note that (1) the dispersion relation Equation $\ref{eqn:22}$ can be rearranged to give $kg/\omega=\omega/\tanh kH$ and (2) $\cosh x\tanh x = \sinh x$, and therefore
$U(z)=\omega \eta_{0} \frac{\cosh k(z+H)}{\sinh k H}.\label{eqn:26}$
The velocity profile rises from a minimum value at the bottom to a maximum at the surface. Similarly, Equation $\ref{eqn:21}$ can be rewritten as
$W(z)=\omega \eta_{0} \frac{\sinh k(z+H)}{\sinh k H},\label{eqn:27}$
which rises from zero at the bottom to the maximum value $\omega\eta_0$ at the surface.
8.2.4 How small is small?
The solution derived above depends on the assumption that the amplitude of the waves is “small”. How large can the amplitude be before this assumption is violated? It depends on how much inaccuracy we can tolerate. As a general introduction to the way theorists think about such questions, let us make order-of-magnitude estimates of the two terms in the material derivative of $\vec{u}$ cf. Equation $\ref{eqn:5}$. Suppose we are dealing with waves having period $T$, wavelength $\lambda$ and velocity amplitude $u_0$. We estimate the time derivative as $u_0/T$ and the space derivative as $u_0/\lambda$, leading to:
$\frac{\partial \vec{u}}{\partial t} \approx \frac{u_{0}}{T} ; \quad[\vec{u} \cdot \vec{\nabla}] \vec{u} \approx \frac{u_{0}^{2}}{\lambda}.\label{eqn:28}$
We then require that the second term be much smaller than the first, which is equivalent to
$u_{0} \ll \lambda / T \nonumber$
In other words, the maximum fluid velocity should be much smaller than the phase speed of the waves.
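As a quick illustration (the wave parameters here are hypothetical):

```python
# Linearity criterion: velocity amplitude u0 must be << phase speed c = lambda/T
wavelength, period = 100.0, 8.0     # hypothetical ocean swell, m and s
u0 = 0.5                            # hypothetical velocity amplitude, m/s
c = wavelength / period             # phase speed, 12.5 m/s
print(u0 / c)                       # → 0.04, small: linear theory applies
```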
8.2.5 Particle paths
Suppose that, in the absence of waves, a fluid particle is located at $\vec{x}_0$, and in the presence of waves its position is $\vec{x}_0+\vec{x}^\prime(t)$. The particle’s motion is described by
$\frac{d \vec{x}^{\prime}}{d t}=\vec{u}\left(\vec{x}_{0}, t\right) \nonumber$
recognising that the difference between $\vec{u}(\vec{x}_0,t)$ and $\vec{u}(\vec{x},t)$ is negligible.
Having solved for the components of $\vec{u}$ Equation $\ref{eqn:24}$, Equation $\ref{eqn:26}$, Equation $\ref{eqn:17}$, and Equation $\ref{eqn:27}$, we can substitute the results and integrate in time to obtain the trajectory of the particle. The result is
$x^{\prime}=-\frac{U\left(z_{0}\right)}{\omega} \sin \left(k x_{0}-\omega t\right) ; \quad z^{\prime}=\frac{W\left(z_{0}\right)}{\omega} \cos \left(k x_{0}-\omega t\right).\label{eqn:29}$
We can rewrite this as
$\frac{x^{\prime 2}}{L_{x}^{2}}+\frac{z^{\prime 2}}{L_{z}^{2}}=1,\label{eqn:30}$
where
$L_{x}=\eta_{0} \frac{\cosh k\left(z_{0}+H\right)}{\sinh k H} ; \quad L_{z}=\eta_{0} \frac{\sinh k\left(z_{0}+H\right)}{\sinh k H}.\label{eqn:31}$
We can now make three observations about the particle paths.
• Each path is an ellipse with radii $L_x$ and $L_z$ (Figure $3$). Both $L_x$ and $L_z$ decrease with depth, with $L_z$ = 0 at the bottom (where $z_0$ = −H).
• The vertical excursion $z^\prime$ is proportional to the surface deflection cf. Equation $\ref{eqn:19}$:$z^{\prime}=\eta(x, t) \frac{\sinh k\left(z_{0}+H\right)}{\sinh k H}, \nonumber$ and therefore reaches a maximum directly beneath the wave crests for all $z_0$.
• The horizontal particle velocity is also proportional to $\eta$: $u=\frac{g}{c} \eta(x, t) \frac{\cosh k\left(z_{0}+H\right)}{\cosh k H}, \nonumber$ where Equation $\ref{eqn:25}$ and the definition of the phase velocity $c=\omega/k$ have been used. Its value at $z$ = 0 is $u=\frac{g}{c} \eta(x, t). \nonumber$ Directly beneath the wave crests, the horizontal motion has the same sign as the phase speed.
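The observations above can be verified from Equation $\ref{eqn:31}$. A Python sketch with hypothetical wave parameters:

```python
import math

def ellipse_radii(z0, eta0, k, H):
    """Radii L_x, L_z of the particle orbit at rest depth z0, Eq. (31)."""
    Lx = eta0 * math.cosh(k * (z0 + H)) / math.sinh(k * H)
    Lz = eta0 * math.sinh(k * (z0 + H)) / math.sinh(k * H)
    return Lx, Lz

eta0, k, H = 0.5, 0.1, 20.0     # hypothetical amplitude, wavenumber, depth
Lx_s, Lz_s = ellipse_radii(0.0, eta0, k, H)    # at the surface
Lx_b, Lz_b = ellipse_radii(-H, eta0, k, H)     # at the bottom
assert abs(Lz_s - eta0) < 1e-12   # vertical excursion matches surface amplitude
assert Lz_b == 0.0                # flat ellipse: pure horizontal motion at bottom
assert Lx_s > Lx_b > 0            # both radii shrink with depth
```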
If we extend the theory to take nonlinear effects into account, we find that there is actually a slight drift in the direction of the wave propagation. Called the Stokes drift, it results from the fact that the particle speed at the top of each ellipse is slightly greater than the speed at the bottom of the ellipse. Appendix J gives a more detailed discussion.
1If centimeter-scale waves are of interest, we must also take account of surface tension. The force that holds water molecules together is stronger on the water side than on the air side, creating an additional pressure at the surface given by $p=-\sigma\nabla^2\eta$, where $\sigma$ is a constant and small amplitude is assumed. You will examine this effect in homework problem 33.
8.3.1 Beats
The “sneaker waves” that beach visitors are warned about, the spectacular spring tides (which have nothing to do with the season), and a musician’s ability to hear when two strings are exactly in tune are all examples of the phenomenon of beats (Figure $1$). When two oscillations with slightly different frequencies $\omega_1$ and $\omega_2$ occur together, what we perceive is a single oscillation with frequency equal to the average $(\omega_1 +\omega_2)/2$, modulated in amplitude by a slower frequency equal to the difference $|\omega_1 −\omega_2|$.
Here, we will see how these results follow from the addition rule for the sine function (e.g., exercise 10f). Consider the sum of two waves having equal amplitude but different wavenumbers and frequencies:
\begin{align} \frac{\eta(x, t)}{\eta_{0}} &=\frac{1}{2} \cos \left(k_{1} x-\omega_{1} t\right)+\frac{1}{2} \cos \left(k_{2} x-\omega_{2} t\right) \\ &=\cos \left(\frac{k_{1}+k_{2}}{2} x-\frac{\omega_{1}+\omega_{2}}{2} t\right) \cos \left(\frac{k_{1}-k_{2}}{2} x-\frac{\omega_{1}-\omega_{2}}{2} t\right) \\ &=\underbrace{\cos (\bar{k} x-\bar{\omega} t)}_{\text {average wave }} \underbrace{\cos \left(\frac{\Delta k}{2} x-\frac{\Delta \omega}{2} t\right)}_{\text {envelope }}\label{eqn:1} \end{align}
The first cosine function describes the “average wave”, an oscillation with wavenumber and frequency equal to the means of the individual values. The amplitude of this oscillation is modulated by a slower oscillation called the envelope, shown by the dashed curve in Figure $1$.
Let us now assume for simplicity that we are observing the waves at a fixed location, $x$ = 0, so the surface deflection is
$\frac{\eta(0, t)}{\eta_{0}}=\cos (\bar{\omega} t) \cos \left(\frac{\Delta \omega}{2} t\right)=\cos (2 \pi \bar{f} t) \cos (\pi \Delta f t) \nonumber$
(The second form employs the cyclic frequency $f=\omega / 2 \pi$.) A subtlety to notice is that the oscillation has maximum amplitude when the envelope $\cos\left( 2\pi\frac{f_1-f_2}{2}t \right)$ is equal to either +1 or -1. Beats therefore occur with double the frequency of the envelope:
$f_{b}=2 \times\left|\left(f_{1}-f_{2}\right) / 2\right|=\left|f_{1}-f_{2}\right|; \nonumber$
just the absolute difference between the two frequencies.
Example $1$
To tune a string, one begins by playing the string together with another oscillation having the correct pitch. If the string is slightly out of tune, the ear perceives beats. The tension in the string is then adjusted in whichever direction makes the beats slower. As the string approaches the correct frequency, the beat frequency approaches zero. You know the string is in tune when the beats are no longer heard.
Example $2$
Suppose that the waves from two ocean storms reach the beach at the same time with periods 10 s and 12 s. The beat frequency is then $f_{b}=\frac{1}{10 s}-\frac{1}{12 s}=\frac{1}{60 s}. \nonumber$ There will be a big wave every 60 s, or about every 5th wave.1
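The beat period in Example $2$ is easy to verify numerically. The following Python sketch (an illustration, not part of the text) sums two half-amplitude cosines with periods 10 s and 12 s and confirms that the combined signal repeats after one beat period of 60 s, the interval between big waves.

```python
import numpy as np

def beat_period(T1, T2):
    """Beat period 1/|f1 - f2| for oscillations with periods T1 and T2."""
    return 1.0 / abs(1.0 / T1 - 1.0 / T2)

Tb = beat_period(10.0, 12.0)   # ~60 s between big waves

# The summed wave signal repeats after one beat period.
t = np.linspace(0.0, 120.0, 2401)
eta = 0.5 * np.cos(2 * np.pi * t / 10.0) + 0.5 * np.cos(2 * np.pi * t / 12.0)
eta_later = 0.5 * np.cos(2 * np.pi * (t + Tb) / 10.0) \
          + 0.5 * np.cos(2 * np.pi * (t + Tb) / 12.0)
assert np.allclose(eta, eta_later)
```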
8.3.2 Group velocity
In Equation $\ref{eqn:1}$ we added two waves and found the average wave modulated by the envelope function
$\cos \left(\frac{\Delta k}{2} x-\frac{\Delta \omega}{2} t\right), \nonumber$
which can also be written
$\cos \left[\frac{\Delta k}{2}\left(x-\frac{\Delta \omega}{\Delta k} t\right)\right]. \nonumber$
The amplitude of the envelope is therefore constant in a reference frame travelling at the speed $\Delta\omega/\Delta k$. More generally, a wave field consists of a continuous spectrum of components with different frequencies, wavenumbers and amplitudes. Wave pulses travel at the group velocity, given by $c_g =\partial\omega/\partial k$.
In the still more general case where motion is allowed in both $x$ and $y$, i.e., $\eta = \eta_0\cos\left( kx+ly-\omega t \right)$,
$\vec{c}_{g}=\left\{\frac{\partial \omega}{\partial k}, \frac{\partial \omega}{\partial l}\right\}.\label{eqn:2}$
1In Henri Charriere’s classic adventure novel “Papillon”, the hero escapes an island prison by hurling himself from a cliff into the ocean, having first determined that every 7th wave is large enough to carry him safely past the rocks and out to sea. The image was also used as a metaphor in the popular song “Love is the Seventh Wave”, by Sting. To the contrary, Russian sailors traditionally believe that the 9th wave is the big one (Figure $2$, or the album “Hounds of Love” by Kate Bush). In reality, the ratio depends on the wind field and is as likely on any given day to be 5 or 10 as 7 or 9, or to not be discernible at all.
The nondimensional parameter $kH$ is the ratio of water depth to wavelength (times 2$\pi$). Two important limiting cases are $kH\rightarrow\infty$ (short waves or deep water) and $kH \rightarrow 0$ (long waves or shallow water).
8.4.1 Deep water (short) waves
In the short wave limit $kH\rightarrow\infty$ we can simplify the dispersion relation using:
$\lim _{k H \rightarrow \infty} \tanh k H=1. \nonumber$
Substituting this into Equation 8.2.35 or Equation 8.2.36 gives
$\omega=(g k)^{1 / 2}, \quad \text { or } \quad c=\left(\frac{g}{k}\right)^{1 / 2}. \nonumber$
The group velocity is obtained from the first of these:
$c_{g}=\frac{\partial \omega}{\partial k}=\frac{1}{2}(g k)^{-1 / 2} g=\frac{1}{2}\left(\frac{g}{k}\right)^{1 / 2}=\frac{1}{2} c. \nonumber$
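This factor of two is easy to check numerically. The sketch below (illustrative; it assumes $g$ = 9.8 m/s$^2$ and a 100 m wavelength) differentiates the deep-water dispersion relation $\omega = (gk)^{1/2}$ by centered finite differences and compares the result with half the phase speed.

```python
import math

g = 9.8                      # gravitational acceleration, m/s^2
k = 2 * math.pi / 100.0      # wavenumber of a 100 m deep-water wave

c = math.sqrt(g / k)         # phase speed (g/k)^(1/2)

# Group velocity c_g = d(omega)/dk by centered finite difference
dk = 1e-8
cg = (math.sqrt(g * (k + dk)) - math.sqrt(g * (k - dk))) / (2 * dk)

assert abs(cg - 0.5 * c) < 1e-5 * c   # c_g = c/2 for deep-water waves
```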
This relationship between group and phase velocities is often visible in the wake of a small boat or canoe. The wake propagates to either side of the boat’s trajectory as a wave pulse, or envelope. If you look carefully, the envelope is constantly changing its shape as individual wave crests appear at the back, propagate to the front, and disappear. This is because they propagate with the phase velocity while the pulse as a whole propagates at the group velocity, only half as fast (see Figure 8.3.2).
8.4.2 Shallow water (long) waves
We now consider the opposite limit, $kH\rightarrow 0$. When $kH \ll 1$, the $\tanh$ function is well approximated by its argument:
$\tanh k H \approx k H. \nonumber$
Using this approximation in Equation 8.2.36 leads to
$c=(g H)^{1 / 2}.\label{eqn:1}$
Note that this phase speed is the same for all frequencies. We therefore say that shallow water waves are nondispersive. An arbitrary wave shape made of many such waves will retain its shape rather than dispersing, because all of its Fourier components travel with the same speed (e.g., Figure $1$a). Bores and tsunamis are examples. Another indication of the nondispersive character of shallow water waves is the group velocity, computed from Equation $\ref{eqn:1}$ as
$c_{g}=\frac{\partial}{\partial k}(k c)=c, \nonumber$
i.e., the phase and group velocities are the same. In this regime a wave superposition such as that shown in Figure 8.3.1 retains its shape.
In contrast, deep-water waves do not retain their shape but instead have an irregular, choppy appearance (Figure $1$b). This is because they are dispersive, i.e., their different Fourier components travel at different speeds.
Referring to Figure 8.2.2 we see that $\left( gH\right)^{1/2}$ is the fastest speed possible for small-amplitude surface waves. In most of the ocean, $H$ is about 4000 m, so $\left( gH\right)^{1/2}$ = 200 m/s. This is the speed of a tsunami crossing the Pacific, for example. The shallow water limit applies, even though the depth is great, because the wavelength of a tsunami is greater still. We will refer to this speed as the linear long wave speed.
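The 200 m/s figure follows in one line from $c=(gH)^{1/2}$, as in this minimal sketch (illustrative; the function name is invented here, and $g$ = 9.8 m/s$^2$ is assumed):

```python
import math

def long_wave_speed(H, g=9.8):
    """Linear long wave (shallow water) phase speed c = sqrt(g*H)."""
    return math.sqrt(g * H)

c_tsunami = long_wave_speed(4000.0)   # open-ocean depth ~4000 m: about 198 m/s
```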
The Froude number
Suppose that something disturbs the surface of a channel so as to radiate waves both upstream and downstream. The fastest waves will travel at $c =\pm c_0$, where $c_0=\left( gH\right)^{1/2}$ is the linear long wave speed. Now suppose that the water in the channel is flowing at speed $u$. If $u < c_0$, then long waves will still radiate both upstream and downstream, but the upstream waves will move more slowly (to a stationary observer). But if $u = c_0$ the motion of the upstream wave will be arrested, and if $u > c_0$ there will be no propagation upstream. This distinction is quantified by the Froude number1:
$F=\frac{|u|}{(g H)^{1 / 2}}.\label{eqn:2}$
The flow is called critical if $F$ = 1, supercritical (no upstream propagation) if $F$ > 1 and subcritical (propagation in both directions) if $F$ < 1.
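This classification is simple enough to encode directly. The sketch below (illustrative; function names are invented here, with $g$ = 9.8 m/s$^2$) computes the Froude number and reports the flow regime.

```python
import math

def froude(u, H, g=9.8):
    """Froude number F = |u| / sqrt(g*H) for flow of speed u in depth H."""
    return abs(u) / math.sqrt(g * H)

def regime(F):
    """Classify the flow state by its Froude number."""
    if F > 1.0:
        return "supercritical"   # no upstream propagation
    elif F < 1.0:
        return "subcritical"     # waves radiate both up- and downstream
    return "critical"

assert regime(froude(0.5, 1.0)) == "subcritical"
assert regime(froude(5.0, 1.0)) == "supercritical"
```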
If you turn back to Figure 1.1.1 and look at the upper right frame, you will see a rather dramatic transition from smooth to turbulent flow in the Nile River. Upstream of the transition, the flow is smooth, almost glassy, apparently unaffected by the violent churning of the water only a short distance away. The upstream flow is supercritical, so that disturbances from the turbulent region downstream cannot reach it. At the transition, the flow changes from supercritical to subcritical. This happens because the flow slows down and gets deeper. (Verify by inspection of Equation $\ref{eqn:2}$ that both of these changes act to reduce the Froude number.) This transition is called a hydraulic jump. In the downstream, subcritical region disturbances can propagate in all directions, and they clearly do. We will have much more to say about hydraulic jumps and the Froude number in chapter 9.
Vertical structure
We turn next to an examination of the vertical structure of the velocity and pressure fields in the shallow water regime. The vertical structure function for the horizontal velocity component is
$U(z)=\omega \eta_{0} \frac{\cosh k(z+H)}{\sinh k H}, \nonumber$
reproduced from Equation 8.2.39 . The limit of $\cosh kH$ as $kH\rightarrow 0$ is 1. The same is true of $\cosh k(z+H)$ because $|z+H| \leq H$. On the other hand, $\sinh kH \approx kH$ for $|kH| \ll 1$. Therefore, in the shallow water limit, we find that the horizontal velocity is independent of depth:
$U=\frac{\omega \eta_{0}}{k H}=c \frac{\eta_{0}}{H}. \nonumber$
For the vertical velocity, we begin with
$W(z)=\omega \eta_{0} \frac{\sinh k(z+H)}{\sinh k H}, \nonumber$
reproduced from Equation 8.2.32. As $kH\rightarrow 0$, $\sinh kH \approx kH$, and also $\sinh k(z+H) \approx k(z+H)$ because $|z+H| \leq H$. Therefore:
$W(z)=\omega \eta_{0} \frac{k(z+H)}{k H}=\omega \eta_{0}\left(1+\frac{z}{H}\right). \nonumber$
Vertical velocity is a linear function of depth with a nonzero surface value $W(0) = \omega\eta_0$.
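This limiting form can be checked numerically. The sketch below (with illustrative values for $\omega$, $\eta_0$ and $H$, not taken from the text) compares the exact vertical structure function for $W(z)$ with its shallow-water limit when $kH$ is very small.

```python
import numpy as np

H = 10.0                 # water depth
k = 1.0e-3 / H           # wavenumber chosen so that kH = 1e-3 (shallow limit)
omega, eta0 = 0.1, 0.5   # illustrative frequency and surface amplitude

z = np.linspace(-H, 0.0, 51)
W_exact = omega * eta0 * np.sinh(k * (z + H)) / np.sinh(k * H)
W_limit = omega * eta0 * (1.0 + z / H)   # linear profile of the text

assert np.allclose(W_exact, W_limit, rtol=1e-5, atol=1e-12)
```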
The vertical distribution of the nonhydrostatic pressure is
$P=\rho_{0} g \eta_{0} \frac{\cosh k(z+H)}{\cosh k H}; \nonumber$
(reproduced from Equation 8.2.31). Again, the limit of $\cosh kH$ as $kH \rightarrow 0$ is 1, and the same is true of $\cosh k(z+H)$. The nonhydrostatic pressure is independent of depth:
$P=\rho_{0} g \eta_{0}. \nonumber$
With the $x$ and $t$ dependence restored,
$p^{*}=\rho_{0} g \eta_{0} \cos (k x-\omega t)=\rho_{0} g \eta. \nonumber$
This is just the weight of the water between $z$ = 0 and $z$ = $\eta$ (negative if $η$ < 0), i.e., what we have been calling the “nonhydrostatic pressure” actually becomes hydrostatic in the long-wave limit. This is because vertical accelerations are extremely small, so the vertical pressure gradient almost balances gravity. Adding this to the hydrostatic pressure for the motionless state $−\rho_0 gz$, we obtain the total pressure
$p=\rho_{0} g(\eta-z). \nonumber$
The total pressure is therefore hydrostatic even in the presence of long waves.
1William Froude (1810-1879) was a British naval architect concerned with designing an efficient hull shape for naval vessels. As a ship moves, it radiates waves, and those waves rob the ship of its momentum. The resulting drag increases when the ship speed exceeds the maximum wave speed (i.e., the motion becomes supercritical), and it is therefore dependent on what we now call the Froude number.
Our third example of the application of the Navier-Stokes equations to natural flows is hydrostatic flows over topography. These occur in many natural settings such as downslope windstorms (Figure $1$b), tidal ocean currents and flow over dams. They are perhaps most recognizable in the flow of a small stream over a rocky bottom (Figure $1$a). For this discussion, we abandon the assumption that the amplitude of the disturbance is small. Instead, we assume that the flow is in hydrostatic balance, similar to the long-wave limit discussed above.
Assumptions:
1. The fluid is inviscid: $ν$ = 0.
2. The fluid is homogeneous: $\rho = \rho_0$.
3. The flow is confined to the $x$−$z$ plane, so there is no dependence on $y$ and no motion in the $y$ direction.
4. The channel width $W$ is uniform and the walls are vertical.
5. The flow has the character of gravity waves in the long-wave limit:
1. The horizontal (streamwise) velocity is independent of depth: $\vec{u}=u(x,t)\hat{e}^{(x)}$.
2. The pressure is hydrostatic: $p = \rho_0g(\eta −z)$.
Some of these assumptions will be relaxed as we go along.
9.02: Hydraulic control
Here we will look at the phenomenon of hydraulic control, which constrains the state of flow over an obstacle. For this initial look we will assume that the flow takes place in a channel of rectangular cross-section.
9.2.1 Equations of motion for flow in a rectangular channel
We now use these assumptions to simplify the equations of motion. The momentum equation for inviscid, homogeneous flow is
$\rho_{0} \frac{D \vec{u}}{D t}=-\rho_{0} g \hat{e}^{(z)}-\vec{\nabla} p. \nonumber$
Substituting assumption 5.2, this becomes
$\rho_{0} \frac{D \vec{u}}{D t}=-\rho_{0} g \hat{e}^{(z)}-\rho_{0} g\left(\vec{\nabla} \eta-\hat{e}^{(z)}\right)=-\rho_{0} g \vec{\nabla} \eta, \nonumber$
and the streamwise ($x$) component is
$\frac{D u}{D t}=u_{t}+u u_{x}+\cancelto{0}{v u_{y}}+\cancelto{0}{w u_{z}}=-g \eta_{x}, \nonumber$
where subscripts indicate partial derivatives. Our streamwise momentum equation is now
$u_{t}+u u_{x}=-g \eta_{x}.\label{eqn:1}$
Mass conservation is expressed by $\vec{\nabla}\cdot\vec{u}=0$. Integrating this over $z$ gives
$\int_{-H+h(x)}^{\eta(x, t)}\left(u_{x}+w_{z}\right) d z=0. \nonumber$
Because $u$ is independent of $z$ (assumption 5.1), we can take the first term out of the integral, while the second term integrates trivially:
$u_{x}(\eta+H-h)+\left.w\right|_{-H+h} ^{\eta}=0.\label{eqn:2}$
Both the surface and the bottom boundary are material surfaces, so the boundary conditions on $w$ are:
\begin{aligned} \left.w\right|_{z=\eta} &=\frac{D \eta}{D t}=\eta_{t}+u \eta_{x} \\ \left.w\right|_{z=-H+h(x)} &=\frac{D}{D t}(-H+h)=\underbrace{h_{t}}_{=0}+u h_{x}=u h_{x}. \end{aligned} \nonumber
Combining these, we have
$\left.w\right|_{-H+h} ^{\eta}=\eta_{t}+u \eta_{x}-u h_{x}. \nonumber$
Substituting this into Equation $\ref{eqn:2}$ gives
$\eta_{t}+u \eta_{x}-u h_{x}+u_{x}(\eta+H-h)=0.\label{eqn:3}$
Insight into Equation $\ref{eqn:3}$ may be gained by interpreting it in terms of the volume flux:
$Q=u A=u W \underbrace{(\eta+H-h)}_{\text {total depth }}. \nonumber$
In Equation $\ref{eqn:3}$, note that the 2nd and 3rd terms are equivalent to $u(\eta+H-h)_{x}$, since $H$ is a constant. Therefore:
$\eta_{t}+u(\eta+H-h)_{x}+u_{x}(\eta+H-h)=\eta_{t}+[u(\eta+H-h)]_{x}=0, \nonumber$
or,
$\eta_{t}=-\frac{Q_{x}}{W}.\label{eqn:4}$
So Equation $\ref{eqn:3}$ tells us that the surface moves up and down so as to balance convergences and divergences in the volume flux.
9.2.2 Steady flow and the Froude number
In a steady flow, time derivatives are zero and Equation $\ref{eqn:1}$ and Equation $\ref{eqn:3}$ can be written as
$u u_{x}=-g \eta_{x},\label{eqn:5}$
$u \eta_{x}+u_{x}(\eta+H-h)=u h_{x}.\label{eqn:6}$
We now multiply Equation $\ref{eqn:6}$ by $u$:
$u^{2} \eta_{x}+(\eta+H-h) u u_{x}=u^{2} h_{x} \nonumber$
and substitute Equation $\ref{eqn:5}$:
$u^{2} \eta_{x}-(\eta+H-h) g \eta_{x}=u^{2} h_{x}. \nonumber$
Note that we now have $\eta_x$ as a factor in both terms on the left-hand side. Dividing by $g(\eta +H −h)$, we have
$[\underbrace{\frac{u^{2}}{g(\eta+H-h)}}_{F^{2}}-1] \eta_{x}=\underbrace{\frac{u^{2}}{g(\eta+H-h)}}_{F^{2}} h_{x}. \nonumber$
The quantity $u^2/g(\eta +H −h)$ that appears on both sides is the squared velocity over $g$ times the total water depth. We recognize this as the square of the Froude number (cf. section 8.4.2). In the case of small-amplitude waves over a flat bottom ($\eta\rightarrow 0$;$h$ = 0), $F^2$ matches our previous definition $u^2/gH$. Recall that flow is supercritical (i.e., no information can propagate upstream) if $F$ > 1 and subcritical if $F$ < 1.
In summary, steady flow over an obstacle requires the following relation between the surface and bottom slopes:
$\left(F^{2}-1\right) \eta_{x}=F^{2} h_{x},\label{eqn:7}$
where
$F=\frac{|u|}{\sqrt{g(\eta+H-h)}}.\label{eqn:8}$
An obstacle of small amplitude
If the amplitudes of the surface deflection and the bottom topography are small compared to the water depth, the Froude number can be treated as a constant, allowing us to integrate Equation $\ref{eqn:7}$. To see this, suppose that $u$ is close to its upstream value $u_0$ and $|\eta| \ll H$ and $|h| \ll H$. In that case the Froude number is nearly constant:
$F=F_{0}+F^{\prime}, \nonumber$
where $F_0 = |u_0|/ \sqrt{gH}$ and $F^\prime$ is the perturbation caused by nonzero (but small) perturbations in $u$, $\eta$ and $h$. Then
$F^{2}=F_{0}^{2}+2 F_{0} F^{\prime}+F^{\prime 2}. \nonumber$
As we did in section 8.2.1, we discard the term that is a product of small quantities, giving
$F^{2}=F_{0}^{2}+2 F_{0} F^{\prime}. \nonumber$
Substituting into Equation $\ref{eqn:7}$, we now have
$\left(F_{0}^{2}+2 F_{0} F^{\prime}-1\right) \eta_{x}^{\prime}=\left(F_{0}^{2}+2 F_{0} F^{\prime}\right) h_{x}^{\prime}, \nonumber$
where primes have been placed on $\eta$ and $h$ to indicate that they are small quantities. Once again we discard the products of primes, and arrive at
$\left(F_{0}^{2}-1\right) \eta_{x}^{\prime}=F_{0}^{2} h_{x}^{\prime},\label{eqn:9}$
i.e., fluctuations in $F$ can be neglected in the limit of small amplitude. We now dispense with the subscripts and primes as they have served their purpose.
Integrating Equation $\ref{eqn:9}$, we obtain
$\left(F^{2}-1\right) \eta=F^{2} h+C.\label{eqn:10}$
We assume that, far upstream, the bottom is flat and there is no surface deflection, so the constant of integration is zero. Hence:
$\eta=\frac{F^{2}}{F^{2}-1} h.\label{eqn:11}$
So the surface deflection over a small bump depends on the flow speed, as shown in Figure $2$. If the flow is fast enough to make $F$ > 1, then the surface is elevated in proportion to the topography. If $F$ < 1, though, the surface deflection is opposite to the topography: low over a bump (dashed curve on Figure $2$); raised over a deep spot.
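The small-amplitude result $\eta = F^2 h/(F^2-1)$ can be sketched in a few lines of Python (illustrative; the function name is invented here) to make the sign reversal explicit:

```python
def surface_deflection(h, F):
    """Small-amplitude surface response eta = F^2 * h / (F^2 - 1).

    Valid only away from F = 1, where the linear theory fails."""
    if abs(F - 1.0) < 1e-9:
        raise ValueError("critical flow: linear theory breaks down at F = 1")
    return F**2 / (F**2 - 1.0) * h

# Over a 0.1 m bump (h > 0):
assert surface_deflection(0.1, F=0.5) < 0.0   # subcritical: surface dips
assert surface_deflection(0.1, F=2.0) > 0.0   # supercritical: surface rises
```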
An obstacle of arbitrary amplitude: hydraulic control
From Equation $\ref{eqn:7}$ we can infer a critical fact about flow over an obstacle:
If $h_x$ = 0, then either $\eta_x$ = 0 or $F$ = 1.
So at a high or low spot in the bottom topography, either the surface is flat or the flow is critical.
In the small-amplitude limit, we have seen that the first condition is satisfied; $\eta_x$ = 0 at the crest of the obstacle (Figure $2$). But in nature, we often observe plunging flow over an obstacle, in which case $\eta_x$ is not zero (Figure $3$). In that case, the Froude number can only be 1. This restriction on the flow state is called hydraulic control. Since plunging flow does not happen in the small-amplitude limit, we identify it as a fundamentally nonlinear effect. To explore it we must consider disturbances of arbitrary amplitude.
If the amplitudes are not small, we can no longer integrate Equation $\ref{eqn:7}$ to obtain a quantitative relationship between $\eta$ and $h$. Instead, we analyze Equation $\ref{eqn:7}$ qualitatively. We will do this one step at a time for the case of flow over a bump, with careful reference to Figure $3$.
1. At the crest of the bump, $F$ = 1.
2. We therefore expect that $F$ < 1 upstream of the bump, where the flow is deeper and slower.
3. When we first encounter the bump, $|\eta|$ and $|h|$ are small, and we therefore expect $\eta$ to decrease as in the small-amplitude case with $F$ < 1 (Figure $2$).
4. Approaching the crest of the bump, the disturbance is no longer small, but $F$ is still less than 1 and $h_x$ is still positive, so Equation $\ref{eqn:7}$ tells us that $\eta$ must continue to drop.
5. On the lee side of the bump, $F$ > 1 and $h_x$ < 0, so we see from Equation $\ref{eqn:7}$ that $\eta_x$ must remain negative.
6. Beyond the bump, the increase in $F$ from subcritical to supercritical values produces relatively shallow, fast flow.
9.2.3 Generalization: a channel of arbitrary cross-section
We now relax assumption #4 of the previous section, namely that the channel has rectangular cross-section. Instead, we let the channel have arbitrary shape. The width of the stream is now variable, both in $x$ as the channel narrows and widens and in $t$ as the surface elevation changes. We also define the “wetted” cross-sectional area $A$, which varies in both $x$ and $t$ for the same reasons. The only restriction is that we assume the existence of an “upstream” region where the channel shape is uniform in $x$, the velocity is uniform and $\eta$ = 0. We would like to see how $u$ and $\eta$ vary in response to changes in the channel shape.
The streamwise momentum equation is now
$\frac{D u}{D t}=u_{t}+u u_{x}+v u_{y}+\underbrace{w u_{z}}_{=0}=-g \eta_{x}. \nonumber$
We add a new assumption, namely that the flow is “mostly” streamwise, in the sense that $|u| \gg |v|$, and/or $|u_x| \gg |u_y|$. As a result $|vu_y| \ll |uu_x|$, i.e., the spanwise advection term is negligible, leaving
$u_{t}+u u_{x}=-g \eta_{x} \nonumber$
as before. We now assume that the flow is steady ($u_t$ = 0), and substitute $u = Q/A$ to obtain
$\frac{Q}{A} \frac{d}{d x}\left(\frac{Q}{A}\right)=-g \frac{d \eta}{d x},\label{eqn:12}$
or
$-\frac{Q^{2}}{A^{3}} \frac{d A}{d x}=-g \frac{d \eta}{d x}.\label{eqn:13}$
Can we solve this for $\eta$, and thereby predict the surface response to a given change in channel shape? No, because $A$ is determined in part by $\eta$. Considering $A$ as a function of $x$ and $\eta(x)$, the total $x$-derivative of $A$ has two parts:
$\frac{d A}{d x}=\underbrace{\left(\frac{\partial A}{\partial x}\right)_{\eta}}_{\text {channel shape }}+\underbrace{\left(\frac{\partial A}{\partial \eta}\right)_{x} \frac{d \eta}{d x}}_{\text {water depth }} \nonumber$
The first term describes variations due only to the shape of the walls, the second only to the surface elevation. Note that the change in $A$ due to a small change in surface elevation is $\delta A = W\delta\eta$, so
$\left(\frac{\partial A}{\partial \eta}\right)_{x}=W, \nonumber$
and we are left with
$\frac{u^{2}}{A}\left[\left(\frac{\partial A}{\partial x}\right)_{\eta}+W \frac{d \eta}{d x}\right]=g \frac{d \eta}{d x}. \nonumber$
We now divide by $g$ and collect terms proportional to $d\eta /dx$:
$\left(\frac{W u^{2}}{g A}-1\right) \frac{d \eta}{d x}=-\frac{u^{2}}{g A}\left(\frac{\partial A}{\partial x}\right)_{\eta}. \nonumber$
Defining the Froude number as
$F=\frac{|u|}{\sqrt{g A / W}},\label{eqn:14}$
we finally have
$\left(F^{2}-1\right) \frac{d \eta}{d x}=-\frac{F^{2}}{W}\left(\frac{\partial A}{\partial x}\right)_{\eta}.\label{eqn:15}$
This is a generalization of our previous results Equation $\ref{eqn:7}$ and Equation $\ref{eqn:8}$ for the rectilinear channel. In that previous case $W$ is constant and $A = W(\eta +H-h)$, giving
$\left(\frac{\partial A}{\partial x}\right)_{\eta}=-W h_{x} \quad \text { and } \quad \frac{A}{W}=\eta+H-h, \nonumber$
from which we recover Equation $\ref{eqn:7}$ and Equation $\ref{eqn:8}$.
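The generalized Froude number Equation $\ref{eqn:14}$ uses the mean depth $A/W$ in place of the total depth. As a quick sketch (illustrative; the function name and channel dimensions are invented here), consider a V-shaped channel:

```python
import math

def froude_general(u, A, W, g=9.8):
    """Froude number for a channel of arbitrary cross-section,
    F = |u| / sqrt(g * A / W), where A is the wetted area and W the width."""
    return abs(u) / math.sqrt(g * A / W)

# Hypothetical V-shaped (triangular) channel: surface width 4 m, depth 2 m
W, d = 4.0, 2.0
A = 0.5 * W * d                  # wetted area of the triangle
F = froude_general(2.0, A, W)    # u = 2 m/s; mean depth A/W = 1 m
assert F < 1.0                   # this flow is subcritical
```

Note that for a rectangle the mean depth equals the total depth, recovering the earlier definition.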
We can understand plunging flow through an arbitrary constriction in the same way we did in the rectilinear case. Approaching the constriction (Figure $5$a), the wetted area decreases. If $F$ < 1, the elevation must also decrease ($d\eta/dx < 0$; Figure $5$b). At the throat of the constriction, $F$ = 1. Leaving the constriction, the wetted area increases but $F$ > 1, so the surface continues to descend. Downstream, the flow is supercritical, i.e., shallower and faster.
Figure $6$ shows an example from the Smith River in California. Here, rafters are crossing a hydraulic control caused by a constriction. They will now enjoy (one hopes) an exciting few seconds as they negotiate a hydraulic jump, the subject of the next section.
9.3.1 A stationary hydraulic jump in a rectilinear channel
Downstream of a hydraulic control, the supercritical flow state is unstable. As a result it becomes turbulent and returns to a subcritical state (slower, deeper flow). This transition is called a hydraulic jump. As the name suggests, it can be quite sudden (Figure $1$).
The hydraulic jump is a situation where we cannot ignore turbulence, but we will model its effects in the simplest way possible. The effect of turbulence on a unidirectional channel flow is to divert some of the mean downstream motion into chaotic swirls and eddies. These motions carry no net momentum (the swirling motion is as likely to go in one direction as another), but they do carry kinetic energy. Energy diverted into turbulence does not usually return to the mean flow. Instead, it is converted into progressively smaller eddies (e.g., Figures 5.3.4, $2$) and ultimately converted to internal energy via frictional dissipation (sections 6.4.2, 6.4.3).
Our goals here are to make testable predictions of the downstream flow state. Is it deeper or shallower than upstream? Slower or faster? And by how much? Our tools will be the familiar equations of momentum and mass conservation plus a new energy conservation law that accounts for turbulence.
To simplify the analysis, we return to the case of a rectilinear channel. We retain the assumptions that the fluid is inviscid and homogeneous, and we assume further that, in the vicinity of the hydraulic jump, the bottom is flat ($h$ = 0). The momentum and mass equations describing the mean flow are
$u_{t}=-u u_{x}-g \eta_{x}\label{eqn:1}$
$\eta_{t}=-[u(H+\eta)]_{x}.\label{eqn:2}$
Now consider the vertically-integrated streamwise momentum
$M(x, t)=\int_{-H}^{\eta} u d z=u(\eta+H). \nonumber$
The evolution of $M$ is governed by:
\begin{aligned} M_{t} &=u_{t}(\eta+H)+u \eta_{t} \\ &=\left(-u u_{x}-g \eta_{x}\right)(\eta+H)-u[u(H+\eta)]_{x} \\ &=-u_{x} u(\eta+H)-g(H+\eta)_{x}(\eta+H)-u[u(H+\eta)]_{x}. \end{aligned} \nonumber
We have used the fact that $\eta_x = (H +\eta)_x$, because $H$ is constant. The first and third terms combine to form a complete derivative, as does the middle term:
$M_{t}=-\left[u^{2}(\eta+H)\right]_{x}-g \frac{1}{2}\left[(H+\eta)^{2}\right]_{x}, \nonumber$
or
$M_{t}=-\mathscr{F}_{x}^{(m)}, \quad \text { where } \quad \mathscr{F}^{(m)}=u^{2}(\eta+H)+\frac{g}{2}(H+\eta)^{2}.\label{eqn:3}$
This tells us that the vertically-integrated momentum is governed by the convergence of the momentum flux $\mathscr{F}^{(m)}$.
Now consider a steady hydraulic jump, as shown in Figure $1$. Equation $\ref{eqn:2}$ and Equation $\ref{eqn:3}$ provide two constraints that determine the change in depth and velocity across the jump. First, the mass flux exiting the jump must be the same as that entering it:
$[u(H+\eta)]_{x_{u}}^{x_{d}}=0.\label{eqn:4}$
Second, the momentum flux must be the same exiting as entering:
$\left[u^{2}(\eta+H)+\frac{g}{2}(H+\eta)^{2}\right]_{x_{u}}^{x_{d}}=0.\label{eqn:5}$
To apply these constraints, we first relabel the upstream and downstream velocities as $u_u$ and $u_d$ (see Figure $1$). The upstream depth is $H$ and the downstream depth is
$H+\eta=r H, \nonumber$
where the depth ratio $r$ is a constant. If the flow deepens across the jump as shown in Figure $1$, then $r$ > 1. Substituting into Equation $\ref{eqn:4}$ and Equation $\ref{eqn:5}$,
$u_{u} H=u_{d} r H,\label{eqn:6}$
$H u_{u}^{2}+\frac{g}{2} H^{2}=r H u_{d}^{2}+\frac{g}{2} r^{2} H^{2}.\label{eqn:7}$
From Equation $\ref{eqn:6}$, we have
$u_{d}=\frac{u_{u}}{r}.\label{eqn:8}$
Substituting this in Equation $\ref{eqn:7}$ and cancelling a factor $H$, we obtain
$u_{u}^{2}+\frac{g}{2} H=\frac{u_{u}^{2}}{r}+\frac{g}{2} r^{2} H. \nonumber$
This is easily solved for $u_u$:
$u_{u}=\sqrt{\frac{g H}{2} r(r+1)}.\label{eqn:9}$
Using Equation $\ref{eqn:8}$, we also have
$u_{d}=\sqrt{\frac{g H}{2} \frac{r+1}{r}}.\label{eqn:10}$
To predict the downstream flow state, assume that we know the upstream velocity $u_u$ and depth $H$, which gives the upstream Froude number:
$F_{u}^{2}=\frac{u_{u}^{2}}{g H}=\frac{r(r+1)}{2}. \nonumber$
This is a quadratic equation for $r$ with a single positive solution:
$r=\frac{\sqrt{1+8 F_{u}^{2}}-1}{2}.\label{eqn:11}$
We are now able to predict both the downstream velocity $u_d = u_u/r$ and the downstream depth $rH$. To complete the picture we add the downstream Froude number:
$F_{d}^{2}=\frac{u_{d}^{2}}{g(H+\eta)}=\frac{u_{u}^{2} / r^{2}}{g r H}=\frac{F_{u}^{2}}{r^{3}}=\frac{r+1}{2 r^{2}}. \nonumber$
Example $1$
Consider a dam spillway with water depth $H$ = 1 m and velocity $u_u$ = 6 m/s. The Froude number is easily computed:
$F_{u}=\frac{6 \mathrm{m} / \mathrm{s}}{\sqrt{9.8 \mathrm{m} / \mathrm{s}^{2} \times 1 \mathrm{m}}}=1.92. \nonumber$
Because the flow is supercritical, we expect to see a hydraulic jump. From Equation $\ref{eqn:11}$ we compute $r$ = 2.26. The downstream flow is therefore slower and deeper, with depth $rH$ = 2.26 m and velocity $u_u/r$ = 2.65 m/s. The downstream Froude number is subcritical: $F_d$ = $F_u/r^{3/2}$ = 0.57.
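The arithmetic of Example $1$ is easy to automate. The following Python sketch (illustrative; the function name is invented here, with $g$ = 9.8 m/s$^2$ as in the example) packages Equation $\ref{eqn:11}$ and the relations above to predict the full downstream state from the upstream one.

```python
import math

def jump_downstream(u_u, H, g=9.8):
    """Downstream state of a stationary hydraulic jump.

    Given upstream velocity u_u and depth H, returns the depth ratio r,
    the downstream depth r*H, the downstream velocity u_u/r, and the
    downstream Froude number F_u / r**1.5."""
    F_u = u_u / math.sqrt(g * H)
    r = (math.sqrt(1.0 + 8.0 * F_u**2) - 1.0) / 2.0   # positive root
    return r, r * H, u_u / r, F_u / r**1.5

# The spillway of Example 1: H = 1 m, u_u = 6 m/s
r, H_d, u_d, F_d = jump_downstream(6.0, 1.0)
```

With these inputs $r \approx 2.26$, the downstream depth is about 2.26 m, and the downstream flow is subcritical, matching the example.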
An important question remains to be addressed. So far, we have assumed that $r$ > 1, in which case
$F_{u}^{2}=\frac{r(r+1)}{2}>1, \quad \text { and } \quad F_{d}^{2}=\frac{r+1}{2 r^{2}}<1. \nonumber$
Now, suppose that $r$ < 1. The results are opposite: $F_u$ < 1 and $F_d$ > 1, i.e., the transition is from a subcritical to a supercritical state, with flow becoming shallower and faster. This is a perfectly valid solution of the momentum and mass equations as developed above. Yet, it is a phenomenon never seen in nature (Figure $3$). Why? This paradox exists because we have not yet considered the conservation of energy.
9.3.2 Energy flux and the effects of turbulence
At a given location, the specific mechanical energy of the mean flow (kinetic plus potential, per unit mass) is $\frac{1}{2}u^2 +gz$. Integrating this over the vertical extent of the flow, and remembering that $u$ is independent of $z$, we have
\begin{aligned} E(x, t) &=\int_{-H}^{\eta}\left(\frac{1}{2} u^{2}+g z\right) d z \\ &=\left[\frac{1}{2} u^{2} z+\frac{1}{2} g z^{2}\right]_{-H}^{\eta} \\ &=\frac{1}{2} u^{2}(\eta+H)+\frac{1}{2} g\left(\eta^{2}-H^{2}\right). \end{aligned} \nonumber
Our task now is to find the equation that governs $E(x,t)$. Differentiating with respect to time, we obtain
$E_{t}=u u_{t}(\eta+H)+\frac{1}{2} u^{2} \eta_{t}+g \eta \eta_{t}. \nonumber$
Now from Equation 9.2.4,
$u_{t}=-u u_{x}-g \eta_{x}=-\left(\frac{1}{2} u^{2}+g \eta\right)_{x} \nonumber$
and from 9.2.11,
$\eta_{t}=-\left(\frac{Q}{W}\right)_{x}, \nonumber$
where $Q/W = u(\eta +H)$. Substituting, we have
\begin{aligned} E_{t} &=-u\left(\frac{1}{2} u^{2}+g \eta\right)_{x}(\eta+H)-\left(\frac{1}{2} u^{2}+g \eta\right)\left(\frac{Q}{W}\right)_{x} \\ &=-\frac{Q}{W}\left(\frac{1}{2} u^{2}+g \eta\right)_{x}-\left(\frac{Q}{W}\right)_{x}\left(\frac{1}{2} u^{2}+g \eta\right) \\ &=-\left[\frac{Q}{W}\left(\frac{1}{2} u^{2}+g \eta\right)\right]_{x}, \end{aligned} \nonumber
or
$E_{t}=-\mathscr{F}_{x}^{(e)},\label{eqn:12}$
in which
$\mathscr{F}^{(e)}=\frac{Q}{W}\left(\frac{1}{2} u^{2}+g \eta\right) \nonumber$
is the downstream flux of mechanical energy in the mean flow.
Now suppose we wanted to account for the fact that some mechanical energy is diverted into turbulent motions. The equations of motion would then include some very complicated additional terms, and Equation $\ref{eqn:12}$ would take the form
$E_{t}=-\mathscr{F}_{x}^{(e)}-\mathscr{E},\label{eqn:13}$
where the new term $\mathscr{E}$ represents the rate of energy loss to turbulence. Explicit calculation of $\mathscr{E}$ is not practical. Happily, all we need to know here is that $\mathscr{E}$ is positive, i.e., turbulent dissipation only works one way, and that is to reduce the mechanical energy of the flow.
In steady state, then, $E_t$ = 0 and Equation $\ref{eqn:13}$ becomes
$\mathscr{F}_{x}^{(e)}=-\mathscr{E}<0. \nonumber$
This inequality simply states that the flux of mechanical energy exiting a turbulent region is less than the flux entering it. To apply this to the hydraulic jump shown in Figure $1$, we integrate from $x_u$ to $x_d$ and obtain
$\int_{x_{u}}^{x_{d}} \mathscr{F}_{x}^{(e)} d x=\left.\mathscr{F}^{(e)}\right|_{x_{u}} ^{x_{d}}=\frac{Q}{W}\left(\frac{1}{2} u_{d}^{2}+g \eta-\frac{1}{2} u_{u}^{2}\right)<0. \nonumber$
We have oriented our coordinates such that $Q$ > 0, and therefore
$\frac{1}{2} u_{d}^{2}+g \eta-\frac{1}{2} u_{u}^{2}<0. \nonumber$
Substitution from Equation $\ref{eqn:8}$, Equation $\ref{eqn:9}$ and Equation $\ref{eqn:11}$ gives
$\frac{1}{2} \frac{g H}{2} \frac{r+1}{r}+g(r-1) H-\frac{1}{2} \frac{g H}{2} r(r+1)<0. \nonumber$
With a little algebra, this becomes
$(1-r)^{3}<0, \quad \text { therefore } \quad r>1. \nonumber$
If $r$ were less than 1, this condition would be violated, meaning that the mean flow would gain energy from the turbulence, an impossibility. As a result, hydraulic jumps in the real world always carry the flow from a supercritical to a subcritical state, as in Figure $3$a.
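The algebra leading to this conclusion can be spot-checked numerically: the left-hand side of the inequality above equals $(gH/4)(1-r)^{3}/r$, which is negative exactly when $r>1$. A minimal sketch (the value of $H$ is an arbitrary sample):

```python
# The jump energy inequality reduces algebraically to
# (g H / 4) (1 - r)^3 / r < 0. Check that reduction numerically
# for several depth ratios r (H is an arbitrary sample value).
g, H = 9.8, 2.0

def energy_expr(r):
    # Left-hand side of the inequality before simplification.
    return (0.5 * (g * H / 2) * (r + 1) / r
            + g * (r - 1) * H
            - 0.5 * (g * H / 2) * r * (r + 1))

for r in (0.5, 0.9, 1.1, 2.0, 5.0):
    compact = (g * H / 4) * (1 - r)**3 / r
    assert abs(energy_expr(r) - compact) < 1e-9
    assert (energy_expr(r) < 0) == (r > 1)   # negative only for r > 1
```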
9.3.3 A hydraulic bore
Until now we have assumed that our hydraulic jump is stationary, as it would be downstream of a fixed obstruction. A very similar phenomenon is a bore, which is basically a moving hydraulic jump. A good example is swash, i.e., a beach wave after it breaks.
Mathematical analysis requires only that we subtract the appropriate velocity from Equation $\ref{eqn:8}$ and Equation $\ref{eqn:9}$. In the stationary case considered previously, we have upstream velocity $u_u$ and downstream velocity $u_d$, while the velocity of the jump, which we will call $u_J$, is zero (Figure $4$a). Now suppose the jump is moving into a quiescent region (Figure $4$b). To use our previous results, we subtract $u_u$ from all velocities, so that the new upstream velocity $u^\prime_u$ = 0. Similarly, the downstream velocity $u^\prime_d$ = $u_d −u_u$ and the jump velocity $u^\prime_J$ = $−u_u$, where $u_u$ and $u_d$ are still as given by Equation $\ref{eqn:9}$ and Equation $\ref{eqn:8}$.
Two features are worth noting:
• The bore propagates leftward relative to the upstream fluid at a velocity $u_{J}^{\prime}=-u_{u}=-\sqrt{\frac{g H}{2} r(r+1)}.\label{eqn:14}$ Its speed is then $\left|u_{J}^{\prime}\right|=\sqrt{g H r} \sqrt{\frac{(r+1)}{2}}>\sqrt{g H r} \text { for } r>1. \nonumber$ This speed is greater than the linear long wave speed, the “speed limit” for small-amplitude waves. This is an example of nonlinear speedup, the tendency for nonlinear effects to increase the speed of a disturbance. The greater the height of the bore (i.e., r), the greater the speedup.
• The flow behind the bore moves to the left with speed $\left|u_{d}^{\prime}\right|=\left|u_{d}-u_{u}\right|=\sqrt{\frac{g H}{2} r(r+1)}\left(1-\frac{1}{r}\right)<\sqrt{\frac{g H}{2} r(r+1)} \nonumber$ This means that the current behind the bore moves more slowly than the bore itself. More specifically, the ratio of the speed of the water behind the bore to that of the bore itself is $\frac{u_{d}^{\prime}}{u_{J}^{\prime}}=1-\frac{1}{r}. \nonumber$ Two limiting cases are of interest:
• In the limit $r\rightarrow 1$, the water behind the bore is nearly stationary ($u^\prime_d\rightarrow 0$). This is similar to the case of small-amplitude waves, where the surrounding water does not travel with the wave.
• In the limit $r \rightarrow \infty$, $u^\prime_d \rightarrow u^\prime_J$, i.e. when a large bore travels into shallow water, it becomes a “wall of water”, traveling almost as a solid object.
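These bore properties are easy to verify numerically. In the sketch below (the depth $H$ and the ratios $r$ are arbitrary sample values), the bore speed exceeds the linear long-wave speed and the trailing flow lags the bore by the factor $1-1/r$:

```python
from math import sqrt

# Bore kinematics for sample depth ratios. H and the r values are
# arbitrary illustrative choices; the formulas are those derived above.
g, H = 9.8, 2.0
for r in (1.5, 3.0, 10.0):
    speed_bore = sqrt(g * H / 2 * r * (r + 1))   # |u'_J|, bore speed
    speed_flow = speed_bore * (1 - 1 / r)        # |u'_d|, flow behind bore
    assert speed_bore > sqrt(g * H * r)          # nonlinear speedup
    assert speed_flow < speed_bore               # bore outruns trailing flow
```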
In some cases, both the bore and the upstream water are moving, e.g., a tidal bore advancing up a river. In that case, Equation $\ref{eqn:14}$ still gives the speed of the wave relative to the upstream flow that it is propagating into. If you know the speed of the upstream (or the downstream) flow, you can calculate the speed of the wave relative to the shore, or relative to any other reference frame that may be relevant.
9.4 Flood waves in a turbulent river
In homework exercise 29, we predict river speed using the stress-strain relation for a Newtonian fluid, and find that the speed is ∼ 8 km/s! The result is rendered reasonable only by imposing an eddy viscosity, much greater than the true viscosity, to represent the retarding effect of turbulent fluctuations. Here we will do that calculation again with turbulence represented in a more realistic fashion. We will also allow for a variable surface elevation $\eta(x,t)$ and hence the possibility of waves.
We begin with Cauchy’s equation Equation 6.3.18, with the stress tensor expanded into pressure and deviatoric parts as per Equation 6.3.27:
$\rho \frac{D u_{j}}{D t}=\rho g_{j}-\frac{\partial p}{\partial x_{j}}+\frac{\partial \sigma_{i j}}{\partial x_{i}}.\label{eqn:1}$
We assume that the flow is homogeneous ($\rho=\rho_0$) and two-dimensional ($v=0$, $\partial/\partial y=0$). We also assume that the stress varies primarily in $z$, and therefore neglect its $x$-derivatives. Finally, we assume that the motion is nearly hydrostatic and therefore $\partial u/\partial z=0$.
Writing out the components of Equation $\ref{eqn:1}$, we have
$\frac{D u}{D t}=g \sin \theta-\frac{1}{\rho_{0}} \frac{\partial p}{\partial x}+\frac{1}{\rho_{0}} \frac{\partial \sigma_{31}}{\partial z}\label{eqn:2}$
$\frac{D w}{D t}=-g \cos \theta-\frac{1}{\rho_{0}} \frac{\partial p}{\partial z}+\frac{1}{\rho_{0}} \frac{\partial \sigma_{33}}{\partial z}.\label{eqn:3}$
Assuming that the vertical acceleration $Dw/Dt$ is negligible, the vertical momentum equation gives us an altered form of hydrostatic balance:
$\frac{1}{\rho_{0}} \frac{\partial p}{\partial z}=-g \cos \theta+\frac{1}{\rho_{0}} \frac{\partial \sigma_{33}}{\partial z}. \nonumber$
Integrating over $z$ and assuming $p = 0$ at $z = \eta$, we have
$p=\rho_{0} g \cos \theta(\eta-z)-\left.\sigma_{33}\right|_{z} ^{\eta}. \nonumber$
We now differentiate with respect to $x$ (remembering that stress is independent of $x$) to get
$\frac{\partial p}{\partial x}=\rho_{0} g \cos \theta \frac{\partial \eta}{\partial x}. \nonumber$
Substituting this into the $x$-momentum equation, we have
$\frac{D u}{D t}=g \sin \theta-g \cos \theta \frac{\partial \eta}{\partial x}+\frac{1}{\rho_{0}} \frac{\partial \sigma_{31}}{\partial z}. \nonumber$
In summary, the along-stream flow is driven by three forces: the downhill pull of gravity, pressure gradients due to surface deflection, and the stress divergence. The tangential stress $\sigma_{31}$ is exerted on the fluid by the solid bottom boundary and by the air above the surface, then transmitted through the fluid interior by turbulent eddies.
Assuming again that $u$ is independent of $z$ (the long-wave approximation), we can integrate in the vertical to obtain
$(H+\eta) \frac{D u}{D t}=(H+\eta)\left(g \sin \theta-g \cos \theta \frac{\partial \eta}{\partial x}\right)+\left.\frac{\sigma_{31}}{\rho_{0}}\right|_{-H} ^{\eta}\label{eqn:4}$
Here we see a major advantage of working directly from Cauchy’s equation: all we need to know about the stress tensor is the values of $\sigma_{31}$ at the surface and the bottom. The bottom stress is given by a very well-tested empirical relationship:
$\left.\sigma_{31}\right|_{z=-H}=\rho_{0} C_{D} u^{2}, \nonumber$
where $C_D$ is the drag coefficient. Typically, $C_D$ is in the range $10^{-3}$ to $10^{-2}$. Here we treat $C_D$ as a constant. A similar relationship holds at the surface:
$\left.\sigma_{31}\right|_{z=\eta}=\rho_{A} C_{D}\left(u_{A}-u\right)^{2}, \nonumber$
where $\rho_A$ is the density of air and $u_A$ is the wind speed. For a typical river flow the effect of wind is negligible, so we will consider only the bottom stress. Substituting in Equation $\ref{eqn:4}$ and dividing by $H +\eta$, we have
$\frac{\partial u}{\partial t}+u \frac{\partial u}{\partial x}=\underbrace{g \sin \theta}_{\text {gravity }}-g \cos \theta \frac{\partial \eta}{\partial x}-\underbrace{\frac{C_{D} u^{2}}{H+\eta}}_{\text {friction }}.\label{eqn:5}$
We now assume that the terms involving $\partial/\partial t$ and $\partial/\partial x$ in Equation $\ref{eqn:5}$ are negligible. In other words, the dominant balance is between the downhill pull of gravity and the retarding effect of bottom friction. This assumption makes it easy to solve for the flow velocity$^1$:
$u=\sqrt{\frac{g(H+\eta) \sin \theta}{C_{D}}}.\label{eqn:6}$
For example, consider the river described in homework exercise 29, with $H$ = 2 m, $\eta$ = 0, $g$ = 9.8 m/s$^2$, $\sin\theta = 4.3 \times 10^{-4}$, and set $C_D = 3\times 10^{-3}$ (Li et al. 2004). The result is $u$ = 1.7 m/s, a reasonable river speed.
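This estimate can be reproduced in a few lines; the parameter values below are those quoted in the text:

```python
from math import sqrt

# Chezy-type speed estimate u = sqrt(g (H + eta) sin(theta) / C_D)
# for the example river; parameter values are from the text.
g = 9.8             # m/s^2
H = 2.0             # m, undisturbed depth
eta = 0.0           # m, surface displacement
sin_theta = 4.3e-4  # slope
C_D = 3e-3          # drag coefficient (Li et al. 2004)

u = sqrt(g * (H + eta) * sin_theta / C_D)
assert abs(u - 1.7) < 0.05   # approximately 1.7 m/s
```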
Exercise $1$
Derive an expression for the Froude number based on Equation $\ref{eqn:6}$. What is its value for the river parameters given above? Now derive a simple formula for the slope angle at which the flow becomes supercritical. If $C_D$ = 3×10$^{-3}$, what is this critical slope angle in meters per kilometer? Based on this model, could a river become supercritical as a result of increased flow rate (in flood conditions, for example)?$^2$
We now allow for small but nonzero variations in $x$ and $t$. Note that we have not yet had to invoke mass conservation: with such simple flow geometry mass is conserved automatically. To enforce mass conservation in the presence of variations in $x$ and $t$, we write Equation 9.2.6 with bottom topography omitted ($h$ = 0):
$\frac{\partial \eta}{\partial t}+\frac{\partial}{\partial x}[u(H+\eta)]=0.\label{eqn:7}$
Substituting for $u$ using Equation $\ref{eqn:6}$, this becomes$^3$:
$\frac{\partial \eta}{\partial t}+\frac{\partial}{\partial x} \sqrt{\frac{g(H+\eta)^{3} \sin \theta}{C_{D}}}=0. \nonumber$
After carrying out the differentiation with respect to $x$, this becomes
$\frac{\partial \eta}{\partial t}+\frac{3}{2} u \frac{\partial \eta}{\partial x}=0. \nonumber$
What this tells us is that $\eta(x,t)$ is constant on trajectories $x(t)$ given by
$\left.\frac{d x}{d t}\right|_{\eta=\text { const. }}=-\frac{\partial \eta / \partial t}{\partial \eta / \partial x}=\frac{3}{2} u,\label{eqn:8}$
i.e., signals propagate 50% faster than the current itself. In the river described in exercise 29, the current travels 1.7 m/s. Suppose now that a rain event upstream causes a sudden increase in flow rate. Our model predicts that increase will propagate downstream at 2.6 m/s.
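The factor of 3/2 can be checked by differentiating the volume flux numerically. In this sketch (using the example river's parameters; here $q$ denotes the flux per unit width, written $Q/W$ earlier), a centered finite difference of $q(\eta)=u(H+\eta)$ recovers the signal speed $(3/2)u$:

```python
from math import sqrt

# Signal-speed check: with u = sqrt(g (H + eta) sin(theta) / C_D),
# the volume flux per unit width is q(eta) = u (H + eta), and its
# derivative dq/deta should equal (3/2) u. Example river parameters.
g, H, sin_theta, C_D = 9.8, 2.0, 4.3e-4, 3e-3

def u(eta):
    return sqrt(g * (H + eta) * sin_theta / C_D)

def q(eta):
    return u(eta) * (H + eta)

d = 1e-6
c = (q(d) - q(-d)) / (2 * d)        # centered difference at eta = 0
assert abs(c - 1.5 * u(0.0)) < 1e-4  # signal speed is (3/2) u
```

The computed speed is about 2.5 m/s, matching the ~2.6 m/s quoted above when $u$ is rounded to 1.7 m/s.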
$^1$This is called the Chézy formula for river speed. It was developed by the French engineer Antoine de Chézy (1718-1798) and tested using measurements of the River Seine.
$^2$In fact, measurements show that the drag coefficient decreases with increasing depth, approximately as depth$^{-1/3}$, so the Froude number is proportional to depth$^{1/6}$ (e.g., White 2003).
$^3$Does it bother you that we discarded terms involving $\partial/\partial t$ and $\partial/\partial x$ in Equation $\ref{eqn:5}$ but retain such terms in Equation $\ref{eqn:7}$? In Equation $\ref{eqn:5}$, we neglected the terms involving $\partial/\partial t$ and $\partial/\partial x$ not on the grounds that these terms are zero (i.e., that nothing varies in $x$ or $t$), but rather because the remaining terms are much bigger. In the mass equation Equation $\ref{eqn:7}$, both terms involve partial derivatives, so there are no larger terms to dominate the balance. It is therefore permissible to retain the partial derivatives in Equation $\ref{eqn:7}$.
Postface
We have developed skills in mathematics and advanced scientific reasoning that allow us to take a set of assumptions (hypotheses) and develop from them a testable prediction in the form of a set of partial differential equations. The result of this is the Navier-Stokes equations and the accompanying mass equation, heat equation, etc.
To test these predictions is not easy, for one must not only solve the equations but measure a real flow with precision sufficient to tell whether the solution matches reality or not. We have played at this by extracting some extremely simple solutions for idealized model flows and comparing them with reality on scales that we can perceive easily without specialized equipment. In some cases, the match is good. When it is not, the culprit has most commonly been a failure to account for turbulence. This is not a failing of the Navier-Stokes equations but rather of the unrealistically simple flow geometries for which we are able to solve them.
Even in nature, where flows are invariably turbulent, we see things like waves, vortices, and hydraulic jumps, and they behave much as our simple idealizations predict. If predicted scales are seriously inaccurate, inserting a simple model of the turbulent energy cascade often gives realistic results.
So where do we go from here? The student of oceanography will go on to study the effects of density stratification and planetary rotation. The atmospheric physicist will need these and also the thermodynamics of water vapor. Plasmas found in the ionosphere and stellar atmospheres may be understood by adding Maxwell’s equations for electromagnetism (Choudhuri 1996). In smaller systems like lakes, rivers and beaches, often of concern to civil and environmental engineers, density stratification is important but planetary rotation is less so. In all geophysical systems, turbulence must be accounted for to achieve a realistic level of understanding and a predictive capacity.
For further exploration, check out Fluid Mechanics (Kundu et al. 2016). It contains concise summaries of most of these advanced aspects of the discipline and many more (and I do mean many) not listed here.
As you walk through the world, you are surrounded by flow. You contain flow. You now carry with you a conceptual understanding developed over centuries by humans like you who also walked through, and were part of, this world of flow. Remember the words of Bruce Lee: Be water, my friend.
Bill Smyth
Bibliography
Aris, R., 1962: Vectors, Tensors and the Basic Equations of Fluid Mechanics. Cambridge Univ. Press.
Bronson, R. and G. Costa, 2009: Matrix Methods: Applied Linear Algebra (3rd ed.). Academic Press, New York.
Choudhuri, A., 1996: The Physics of Fluids and Plasmas, an Introduction for Astrophysicists. Cambridge Univ. Press.
Curry, J. A. and P. J. Webster, 1998: Thermodynamics of atmospheres and oceans, volume 65. Academic Press.
Gill, A., 1982: Atmosphere-Ocean Dynamics. Academic Press, San Diego, 662 pp.
Kundu, P., I. Cohen, and D. Dowling, 2016: Fluid Mechanics (6th ed.). Academic Press, San Diego.
Li, C., A. Valle-Levinson, L. Atkinson, K. Wong, and K. Lwiza, 2004: Estimation of drag coefficient in James River Estuary using tidal velocity data from a vessel-towed ADCP. J. Geophys. Res., 109, C03034, doi:10.1029/2003JC001991.
Marion, J., 2013: Classical dynamics of particles and systems. Academic Press, New York.
Moffatt, H., S. Kida, and K. Ohkitani, 1994: Stretched vortices - the sinews of turbulence; large Reynolds number asymptotics. J. Fluid Mech., 259, 241–264.
Smyth, W., 1999: Dissipation range geometry and scalar spectra in sheared, stratified turbulence. J. Fluid Mech., 401, 209–242.
Smyth, W. and S. Thorpe, 2012: Glider measurements of overturning in a Kelvin-Helmholtz billow train. J. Mar. Res., 70, 119–140.
Vallis, G., 2006: Atmospheric and Oceanic Fluid Dynamics. Cambridge Univ. Press.
White, F., 2003: Fluid Mechanics. McGraw-Hill, New York.
All matrices are 3$\times$3 and all vectors are 3-vectors, unless otherwise specified.
1. Combining vectors Let $\vec{u}=\left[\begin{array}{l} 0 \\ 1 \\ 2 \end{array}\right] ; \quad \vec{v}=\left[\begin{array}{l} 1 \\ 2 \\ 3 \end{array}\right] \nonumber$ Compute the following:
1. $\vec{u}\cdot\vec{v}$
2. $|\vec{u}|$
3. $|\vec{v}|$
4. $\theta$, the angle between $\vec{u}$ and $\vec{v}$
5. component of $\vec{u}$ in the direction of $\vec{v}$
6. component of $\vec{v}$ in the direction of $\vec{u}$
2. Self-consistency of free indices 1 In which of the following equations are the free indices consistent?
1. $A_{ij}B_j=\Gamma_i$
2. $A_{ij}B_i=\Gamma_i$
3. $A_{ij}B_j=\Gamma_i$
3. Matrix multiplication Which of the following expressions are equivalent to $A_{ij}B_{jk}$?
1. $A_{im}B_{mk}$
2. $A^T_{mi}B_{mk}$
3. $B_{jk}A_{ij}$
4. Self-consistency of free indices 2 Fill in the indices on Γ to make the equation self-consistent:
1. $A_{ij}B_{j}=\Gamma_?$
2. $A_{kj}B_{j}=\Gamma_?$
3. $A_{ik}B_k=\Gamma_?$
4. $A_{iq}B_q=\Gamma_?$
5. $A_{ij}B_{jk}=\Gamma_?$
6. $A_{ik}B_{kj}=\Gamma_?$
7. $A_{lm}B_{ma}=\Gamma_?$
8. $A_{ij}B_{kj}C_{kl}=\Gamma_?$
5. Using the identity matrix Simplify the following by summing over the dummy indices. Assume that vectors have 3 elements and matrices are 3$\times$3.
1. $\delta_{3k}p_k$
2. $\delta_{3i}\delta_{ij}$
3. $\delta_{i2}\delta_{i2}$
4. $\delta_{ij}\delta_{ij}$
5. $\delta_{i2}\delta_{ik}\delta_{3k}$
6. $\delta_{ij}v_iv_j$
6. Vector and matrix properties For each statement in the first list, choose the appropriate property from the second list.
Statements:
• $u_{i}v_{i}=0$
• $A_{pq}B_{qr}=\delta_{pr}$
• $A_{ij}+A_{ji}=0$
• $u_{i} v_{i}=\pm|\vec{u}||\vec{v}|$
• $|\vec{v}|=1$
• $A_{ij}^{T} A_{jk}=\delta_{ik}$
Properties:
• $\underset{\sim}{A}$ is an orthogonal matrix.
• $\vec{u}$ is an eigenvector of $\underset{\sim}{A}$.
• $\underset{\sim}{B}$ is the inverse of $\underset{\sim}{A}$.
• $\vec{u}$ and $\vec{v}$ are orthogonal vectors.
• $\underset{\sim}{A}$ is a symmetric matrix.
• $\vec{v}$ is a unit vector.
• $\underset{\sim}{A}$ is an antisymmetric matrix.
• $\vec{u}$ is parallel to $\vec{v}$.
7. Hyperbolic functions review The hyperbolic sine and cosine functions are defined as: $\sinh x=\frac{e^{x}-e^{-x}}{2} ; \quad \cosh x=\frac{e^{x}+e^{-x}}{2} \nonumber$
1. Show that $\frac{d}{d x} \sinh x=\cosh x ; \quad \frac{d}{d x} \cosh x=\sinh x ; \quad \frac{d^{2}}{d x^{2}} \sinh x=\sinh x. \nonumber$
2. Calculate the Taylor series expansions about $x$ = 0 of $f(x)$ = $\sinh x$ and $f(x)$ = $\cosh x$ up to (and including) the term proportional to $x^2$. [See Appendix A if you need a refresher on Taylor series.]
3. Sketch the $\sinh$ and $\cosh$ functions.
4. The hyperbolic tangent function is $\tanh x=\frac{\sinh x}{\cosh x} \nonumber$ Using your results from (a) and (b) above, show that $\tanh x \simeq x$ for $x$ close to zero.
5. Determine the limiting values of $\tanh x$ as $x \rightarrow \infty$ and as $x \rightarrow -\infty$. (Hint: start by writing $\tanh x$ in terms of $e^x$ and $e^{-x}$.)
6. Sketch the $\tanh$ function.
8. Matrix transformations 1 For each matrix, compute the determinant and describe in words the effect the matrix has on a general column vector $\vec{v}$. Is the effect reversible?
1. $\left[\begin{array}{lll} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{array}\right] \nonumber$
2. $\left[\begin{array}{lll} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right] \nonumber$
3. $\left[\begin{array}{ccc} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{array}\right]\left[\begin{array}{lll} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right] \nonumber$
9. Matrix transformations 2 Let $\underset{\sim}{A} = \left[\begin{array}{lll} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{array}\right]. \nonumber$
1. Compute the determinant. Is $\underset{\sim}{A}$ singular?
2. Describe in words the effect the matrix has on a general column vector $\vec{v}$.
3. Consider a set of unit vectors emanating from the origin in all possible directions. Their tips comprise a sphere of radius 1, the unit sphere. Now suppose that each unit vector is multiplied by $\underset{\sim}{A}$. What becomes of the sphere?
4. Compute eigenvalues and eigenvectors of $\underset{\sim}{A}$. Guided by these results, describe in words the class (or classes) of vectors whose direction is unchanged after multiplication by $\underset{\sim}{A}$.
10. Rotation exercise
1. Compute $\underset{\sim}{C}(\theta)$ for rotation around $\hat{e}^{(3)}$ by a nonzero angle $\theta$.
2. Verify that $\underset{\sim}{C}(-\theta)=\underset{\sim}{C}^T=\underset{\sim}{C}^{-1}$.
3. Compute the eigenvalues of $\underset{\sim}{C}$. You should find that one eigenvalue is 1 and the other two are complex.
4. Show that $\hat{e}^{(3)}$ is the eigenvector with eigenvalue 1. [Hint: You don’t have to compute the eigenvector, just show that it works, i.e. that Equation 2.4.1 is satisfied by $\hat{e}^{(3)}$ with $\lambda^{(m)}$ = 1.]
5. Express the tensor $\underset{\sim}{A}=\left(\begin{array}{lll} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{array}\right) \nonumber$ in the rotated frame defined by $\underset{\sim}{C}$. What special property does $\underset{\sim}{A}$ have when $a$ = $b$ (i.e. how does the transformed matrix depend on $\theta$ when $a$ = $b$)?
6. Now suppose we rotate the coordinates around $\hat{e}^{(3)}$ by an additional angle $\phi$. Write down the corresponding rotation matrix $\underset{\sim}{C}(\phi)$, then multiply onto the matrix $\underset{\sim}{C}(\theta)$ that you derived in (10.1) to get a matrix that describes the net rotation. From this, deduce the standard trigonometric identities for $\cos(\theta +\phi)$ and $\sin(\theta +\phi)$.
11. Index notation exercise Assuming that $\underset{\sim}{S}$ is symmetric and $\underset{\sim}{A}$ is antisymmetric, prove that $S_{ij}A_{ij}=0$.
12. Symmetric-antisymmetric decomposition Let an arbitrary matrix $T_{ij}$ be decomposed into a sum of symmetric and antisymmetric parts: $T_{i j}=S_{i j}+A_{i j} \nonumber$ where $S_{i j}=\frac{1}{2}\left(T_{i j}+T_{j i}\right) ; \quad A_{i j}=\frac{1}{2}\left(T_{i j}-T_{j i}\right). \nonumber$
1. Show that $T_{ij}T_{ij}=S_{ij}S_{ij}+A_{ij}A_{ij}$. [Hint: Your result from 11 will be useful.]
2. Show that $u_iT_{ij}u_j=u_iS_{ij}u_j$, for any vector $\vec{u}$.
13. Tensor order 1 Each of the following expressions represents a tensor (or a component of a tensor). Give the order of the tensor.
1. $A_{ij}v_j$
2. $A_{ij}B_{jk}$
3. $v_iA_{ij}$
4. $A_{ij}v_k$
5. $A_{ij}B_{kl}$
6. $B_{kl}A_{lm}$
7. $\underset{\sim}{A}\vec{v}$
8. $\vec{u}\cdot\vec{v}$
9. $\vec{u}\cdot\underset{\sim}{A}\vec{v}$
10. $u_iA_{ij}v_j$
14. Tensor order 2 Each of the following expressions represents a component of a tensor that varies in space. Give the order of the tensor.
1. $\partial\phi/\partial x_i$
2. $\partial v_i/\partial x_i$
3. $\partial v_i/\partial x_j$
4. $\partial A_{ij}/\partial x_j$
15. Is it a tensor? Assume that $\vec{u}$ is a vector. Show that $\partial u_i/\partial x_j$ transforms as a second order tensor under coordinate rotations.
16. Rotation rule for a 3rd-order tensor Assume that $\vec{u}$ is a vector and $\underset{\sim}{A}$ is a 2nd-order tensor. Derive the forward transformation rule for a 3rd-order tensor $Z_{ijk}$ such that the relation $u_{i}=Z_{i j k} A_{j k}, \nonumber$ remains valid after a coordinate rotation.
17. The $\varepsilon$-$\delta$ relation Rewrite the following using the $\varepsilon$-$\delta$ relation:
1. $\varepsilon_{i k j} \varepsilon_{m j l}$
2. $\varepsilon_{t o m} \varepsilon_{j i m}$
18. Properties of the cross product
1. Using the properties of alternating tensor, prove that $\vec{u}\times\vec{v}$ is perpendicular to both $\vec{u}$ and $\vec{v}$. (Hint: One way to show that two vectors are perpendicular is to show that their dot product is zero.)
2. Consider 3 vectors $\vec{u}$, $\vec{v}$ and $\vec{w}$ related by $\vec{w}=\vec{u}\times\vec{v}$. Using the $\varepsilon$ - $\delta$ relation, prove that $|\vec{w}|=|\vec{u}||\vec{v}||\sin\theta|$, where $\theta$ is the angle between $\vec{u}$ and $\vec{v}$. (Hint: Start by computing $\vec{w}\cdot\vec{w}$.)
19. Transforming matrix properties Suppose a 2nd-order tensor $\underset{\sim}{A}$ has elements $A_{ij}$ in one coordinate frame and $A^\prime_{ij}$ in a second frame which is related to the first by the rotation matrix $\underset{\sim}{C}$. Prove the following.
1. If $\underset{\sim}{A}$ is symmetric, it remains so after the coordinate rotation.
2. If $\underset{\sim}{A}$ is antisymmetric, it remains so after the coordinate rotation.
3. The trace is unchanged, i.e., $\operatorname{Tr}(\underset{\sim}{A})$ is a scalar.
4. The eigenvalues of $\underset{\sim}{A}$ are scalars.
20. Proving vector identities Assume that $\phi$ and $\psi$ are scalars and $\vec{u}$, $\vec{v}$ and $\vec{w}$ are vectors, all of which vary smoothly in space. Prove the following:
1. $\vec{\nabla} \cdot(\vec{\nabla} \times \vec{u})=0$
2. $\vec{\nabla}(\phi \psi)=\psi \vec{\nabla} \phi+\phi \vec{\nabla} \psi$
3. $\vec{\nabla} \cdot(\phi \vec{u})=\vec{u} \cdot \vec{\nabla} \phi+\phi \vec{\nabla} \cdot \vec{u}$
4. $\vec{\nabla} \times(\phi \vec{u})=\vec{\nabla} \phi \times \vec{u}+\phi \vec{\nabla} \times \vec{u}$
5. $\vec{u} \times(\vec{v} \times \vec{w})=(\vec{u} \cdot \vec{w}) \vec{v}-(\vec{u} \cdot \vec{v}) \vec{w}$
6. $\vec{\nabla} \times(\vec{u} \times \vec{v})=(\vec{v} \cdot \vec{\nabla}) \vec{u}-(\vec{\nabla} \cdot \vec{u}) \vec{v}-(\vec{u} \cdot \vec{\nabla}) \vec{v}+(\vec{\nabla} \cdot \vec{v}) \vec{u}.$
21. Advection A rainstorm, moving from the west, has passed over Philomath and is just reaching Corvallis$^1$. The storm moves eastward at a constant speed $V$ > 0. The distribution of the rain rate, $R$, remains steady as the storm moves, i.e., $R(x,t)$ = $R(s)$, where $s$ = $x-Vt$. ($R$ is the amount of rain per unit time, e.g., 10 mm/hour, as would be measured using a rain gauge. $R(s)$ is a maximum at $s$ = 0 and drops off to zero as $s \rightarrow \pm \infty$.)
1. Using the chain rule, show that the rain rate at a fixed point evolves as:$\frac{\partial R}{\partial t}=-V \frac{\partial R}{\partial x}. \nonumber$
2. Describe in words the meaning of each of the expressions $V$, $\partial R/\partial x$, and $\partial R/\partial t$ in this equation. Referring to the diagram, give and interpret the signs of these quantities, both at Philomath and at Corvallis.
22. Proving more identities Prove the following:
1. $(\vec{u} \times \vec{v}) \cdot(\vec{w} \times \vec{x})=(\vec{u} \cdot \vec{w})(\vec{v} \cdot \vec{x})-(\vec{u} \cdot \vec{x})(\vec{v} \cdot \vec{w})$
2. $\vec{\nabla} \cdot(\vec{u} \times \vec{v})=\vec{v} \cdot(\vec{\nabla} \times \vec{u})-\vec{u} \cdot(\vec{\nabla} \times \vec{v})$
23. Streamlines and strains in a 2D flow A 2D steady flow has velocity components $u = y$, $v = x$.
1. Compute the streamfunction and show that the streamlines are hyperbolas: $x^2 −y^2 = const$.
2. Sketch a few representative streamlines, showing the direction of the flow.
3. Compute the vorticity vector $\vec{\omega}$ and the strain tensor $\underset{\sim}{e}$.
4. Compute the eigenvalues and eigenvectors of $\underset{\sim}{e}$. Choose the eigenvectors to have unit length.
5. Confirm that the eigenvectors are orthogonal.
6. Form the rotation matrix that diagonalizes $\underset{\sim}{e}$, ensuring that its determinant is $\pm 1$ (reorder the eigenvalues and eigenvectors if necessary).
7. Describe the rotation that this matrix represents. Give both angle and axis of rotation. (If this is not clear, try reordering the eigenvalues and eigenvectors.)
8. Use your rotation matrix to diagonalize the strain tensor.
9. Identify the principal axes and principal strains. Indicate these with arrows on your sketch from part 2.
10. Characterize the principal strains as extensional or compressive.
24. An isolated vortex An isolated vortex is one such that the circulation in the limit $r\rightarrow\infty$ is zero. Consider an axisymmetric vortex (no dependence on the azimuthal angle $\theta$) for which the azimuthal velocity $u_\theta$ is proportional to $r^{-\alpha}$. What is the range of possible values of the exponent $\alpha$ such that the circulation $\Gamma(r)$ is finite as $r\rightarrow\infty$?
25. The Rankine vortex A Rankine vortex has angular velocity $\dot{\theta}$ for $0\leq r \leq R$, and is irrotational for larger $r$.
1. Compute and sketch the azimuthal velocity $u_\theta(r)$ for all $r$. Require that $u_\theta(r)$ be continuous.
2. Compute and sketch the vorticity $\omega(r)$ and the circulation $\Gamma(r)$.
3. Interpret the result of 25.2 in terms of your result for the previous problem 24.
26. Conservation of property $X$ Suppose that $X$ is the concentration of “something” per unit mass of fluid.
1. Show that the concentration per unit volume is $\rho X$, where $\rho$ is the density of the fluid.
2. Now suppose that the flux of $X$ is given by $\vec{F}_{X}=-\rho \gamma \vec{\nabla} X, \nonumber$ where $\gamma$ is a scalar. Write down a Lagrangian conservation equation stating that the net amount of $X$ in a fluid parcel changes according to the flux normal to its surface. (More specifically, the net $X$ increases if the total flux through the surface is inward, and vice versa.)
3. Assuming that mass is conserved, show that $X$ must obey this equation: $\frac{\partial X}{\partial t}+\vec{u} \cdot \vec{\nabla} X=\frac{1}{\rho} \vec{\nabla} \cdot(\rho \gamma \vec{\nabla} X). \nonumber$
27. Force on a small cube
1. Compute the net contact force (per unit volume) on a cube with edge length $\Delta$ in the limit $\Delta\rightarrow 0$. Do this by integrating the stress vector $\vec{f}=\hat{n}\cdot\underset{\sim}{\tau}$ over all 6 surfaces. Follow the sequence of steps we used in section 4.2.3 to compute the net volume flux out of a cube.
2. Obtain the same result using the generalized divergence theorem.
28. Preserving diagonality Under what conditions does a diagonal tensor remain diagonal after an arbitrary coordinate rotation $\underset{\sim}{C}$? [Hint: First assume that $A_{11}$ = $A_{22}$ = $A_{33}$, and show that the tensor remains diagonal after an arbitrary rotation. Now, are there any other diagonal matrices that remain diagonal after an arbitrary rotation? Pick an easy rotation, one for which you can calculate $\underset{\sim}{A}^\prime$ explicitly. Identify the conditions under which it is diagonal. (You may want to review your answer to exercise 10.5.) Now pick a different simple rotation and repeat the argument. If you can show that diagonality is preserved for these two particular rotations only if $A_{11}$ = $A_{22}$ = $A_{33}$, then you have shown that diagonality is preserved for every rotation only if $A_{11}$ = $A_{22}$ = $A_{33}$.]
29. River flow In this problem you will use the Navier-Stokes equation, together with a list of simplifying assumptions, to predict the flow profile of the Willamette River as it flows past Corvallis (Figure $2$).
1. Set up the equation for the flow velocity parallel to the bottom ($u$) as a function of perpendicular distance from the surface ($z$). Apply the appropriate boundary conditions and solve for $u(z)$. [Hint: Two coordinate systems are useful here: gravity-aligned and tilted so that $\hat{e}^{(x)}$ is parallel to the bottom. Although the gravity vector is simplest in gravity-aligned coordinates, everything else is simpler in tilted coordinates. So, solve the equations in the tilted coordinate system after performing the appropriate rotation to express $\vec{g}$ in those coordinates.] Assume the following:
1. The flow obeys the Navier-Stokes equation for an incompressible flow ($\vec{\nabla}\cdot\vec{u}=0$).
2. The current is steady (independent of time), so that $\partial/\partial t$ of anything is zero.
3. The density is uniform ($\rho=\rho_0$).
4. The current is a parallel shear flow $\vec{u}=u(z)\hat{e}^{(x)}$ in a coordinate frame parallel to the bottom.
5. No fluid property varies in the direction parallel to the bottom ($\partial/\partial x =0$).
6. No fluid property varies in the cross-stream ($y$) direction.
7. At the river bottom, $u = 0$.
8. At the surface, $du/dz = 0$. (This is true when wind effects are not important.)
2. What is the velocity at the surface? Give a numerical value based on the following parameter values:
• The mean grade (slope) of the Willamette River is 43 cm per km.
• The kinematic viscosity $\nu$ = $\mu/\rho$ = $10^{-6}$ m$^2$/s.
• The water depth $h$ = 2 m.
• The vertical gravitational acceleration $g=9.8\ \mathrm{m/s^2}$.
• Does your result seem consistent with the observed velocity?
3. Suppose that frictional effects are supplied not by molecular viscosity but by a much larger “effective” viscosity due to turbulence (see section 6.3.5). If this turbulent viscosity is $10^{-2}\ \mathrm{m^2\,s^{-1}}$, what is the maximum flow speed? Verify that this velocity is more consistent with observations than the value you obtained in 29.2.
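Parts 2 and 3 can be sanity-checked numerically. The sketch below assumes the parabolic profile $u(z)=(gS/\nu)(hz-z^2/2)$ (with $z$ measured up from the bottom and $S$ the slope), which is what the listed assumptions lead to; the function name and this closed form are ours, not given in the exercise.

```python
def surface_velocity(slope, h, nu, g=9.8):
    """Speed at z = h (the surface, where du/dz = 0) of the assumed
    parabolic profile u(z) = (g*slope/nu)*(h*z - z**2/2)."""
    return g * slope * h**2 / (2 * nu)

slope = 0.43 / 1000   # mean grade: 43 cm per km
h = 2.0               # water depth, m

print(surface_velocity(slope, h, nu=1e-6))  # molecular viscosity: ~8400 m/s, absurdly fast
print(surface_velocity(slope, h, nu=1e-2))  # turbulent viscosity: ~0.84 m/s, plausible
```

The absurd first number is the point of part 3: molecular viscosity alone cannot supply the friction a real river feels.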
30. Pressure drop and surface deflection in a Rankine vortex Consider the Rankine vortex, a steady, axisymmetric vortex whose velocity is purely azimuthal and is given by $u_{\theta}(r)=\left\{\begin{array}{cl} \dot{\theta} r & \text { for } r \leq R \ \dot{\theta} R^{2} r^{-1} & \text {for } r \geq R \end{array}\right. \nonumber$ (see exercise 25).
1. Beginning with the radial velocity equation for inviscid, homogeneous flow (Appendix I.1), $\frac{D u_{r}}{D t}=\frac{u_{\theta}^{2}}{r}-\frac{1}{\rho_{0}} \frac{\partial p}{\partial r}, \nonumber$ write down a differential equation whose solution is the r-dependence of the pressure p(r) in a Rankine vortex.
2. Solve the equation, requiring that $p(r)$ be a continuous function. Sketch a graph of your solution. (We did the inner part, $r < R$, in class. Your solution should include all $r$.)
3. Show that the pressure drop between $r=\infty$ and $r=0$ is $\rho_0V^2$, where $\rho_0$ is the density and $V$ is the maximum value of the azimuthal velocity $u_\theta$.
4. Suppose that the Rankine vortex in 30.1 exists in a body of water with surface $z = \eta(r)$, and that the vertical pressure gradient is hydrostatic. Compute the profile of surface elevation $\eta(r)$. You can assume that, as $r\rightarrow\infty$, both the surface pressure and the surface displacement approach zero. Show that $\eta(0) = −V^2/g$, and sketch $\eta(r)$.
5. Consider the angular frequency $\dot{\theta}$. How does the surface deflection $\eta(0)$ vary with this frequency, all else being equal?
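A sketch of the answer to 30.4 and 30.5, assuming the piecewise profile obtained by integrating the radial momentum equation and dividing the hydrostatic surface pressure by $\rho_0 g$: $\eta=-(V^2/g)(1-r^2/2R^2)$ for $r\le R$ and $\eta=-V^2R^2/(2gr^2)$ for $r\ge R$, with $V=\dot{\theta}R$. The code only checks the advertised properties of that profile; treat it as a check on your own derivation, not a substitute for it.

```python
import math

def eta(r, omega, R, g=9.8):
    """Surface elevation of a Rankine vortex with rotation rate omega (sketch)."""
    V = omega * R  # maximum azimuthal speed, attained at r = R
    if r <= R:
        return -(V**2 / g) * (1 - r**2 / (2 * R**2))
    return -(V**2) * R**2 / (2 * g * r**2)

omega, R, g = 2.0, 0.5, 9.8
V = omega * R
assert math.isclose(eta(0.0, omega, R), -V**2 / g)  # central depression is -V^2/g
assert math.isclose(eta(R, omega, R), eta(R + 1e-9, omega, R), rel_tol=1e-6)  # continuous at R
print(eta(0.0, omega, R), eta(0.0, 2 * omega, R))  # doubling omega quadruples the depression
```

Since $V=\dot{\theta}R$, the central depression scales as the square of the rotation rate, which is the answer 30.5 is driving at.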
31. Kitchen sink experiment Obtain a clear cylindrical container such as a big measuring cup. A glass coffee mug will do, but the bigger the better. You’ll also need a watch (or other way to measure seconds), a ruler, and something to stir with. Fill the container about 3/4 full with water. Stir the water until it is rotating evenly, and notice that the water surface is depressed at the center and elevated at the outside. Notice also that the faster you stir, the greater the depression/elevation. Your objective is to determine how the depression (or elevation) depends on the frequency of stirring. It will be a power-law dependence, i.e., depression/elevation will be proportional to frequency, or frequency$^2$, or frequency$^3$ or something, and you’re going to determine the exponent. Submit a clear description of your method, results and interpretation.
1. Devise some way to measure the depression/elevation to within, say, 1 mm. This could be the depression, the elevation, or the difference between the two, whichever you find easiest to measure. Call this distance $h$. [Here is one way: with the fluid motionless, mark the level on the outside of the container. Then, with the fluid turning, mark the elevation again and measure the difference.]
2. Devise a way to measure the frequency of stirring (e.g., 1 stir per second). You can probably use your watch or some app on your phone. Call that frequency $f$.
3. Make the measurement at several frequencies and plot a graph of $\log h$ versus $\log f$. Try at least 4 frequencies (more is better) covering a wide range (a factor of 4 or more, say).
4. If $h = a f^n$, where $a$ and $n$ are constants, then this graph should give a straight line. Fit a straight line through your measurements, determine its slope and intercept, and from that information estimate values for $a$ and $n$. (Do not forget to include units!)
5. Compare your estimate of n with the theoretical prediction developed in exercise 30 for the Rankine vortex. List possible reasons why this comparison may be imperfect. (This should include both measurement inaccuracies and physical assumptions that may be wrong.) Say which you think is most likely to be important.
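The straight-line fit in step 4 takes only a few lines of code. The measurements below are made up solely to illustrate the mechanics; substitute your own.

```python
import math

def fit_power_law(fs, hs):
    """Least-squares line through (log f, log h); returns (a, n) for h = a * f**n."""
    X = [math.log(f) for f in fs]
    Y = [math.log(h) for h in hs]
    xbar, ybar = sum(X) / len(X), sum(Y) / len(Y)
    n = sum((x - xbar) * (y - ybar) for x, y in zip(X, Y)) / \
        sum((x - xbar)**2 for x in X)
    a = math.exp(ybar - n * xbar)   # intercept of the log-log line is log(a)
    return a, n

fs = [0.5, 1.0, 2.0, 4.0]          # stirring frequencies, Hz (synthetic)
hs = [0.005 * f**2 for f in fs]    # depressions, m (synthetic, exact power law)
a, n = fit_power_law(fs, hs)
print(a, n)   # recovers a = 0.005, n = 2 for these synthetic data
```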
32. The Froude number The Froude2 number is important in hydraulic channel flows. It is defined as $F=\frac{|u|}{\sqrt{g H}}, \nonumber$ where $u(x)$ is the flow velocity in a unidirectional current (a river, say), $g$ is gravity and $H(x)$ is the depth of the fluid. Consider a channel of uniform width where the flow velocity varies inversely with depth so as to conserve the volume flux: $uH = const$.3 Compute the Froude number for each of the following situations. Approximate $g$ as $10\ \mathrm{m\,s^{-2}}$ for simplicity.
1. $u=1 m s^{-1}, H=0.1 m$
2. Depth $H$ increases to 0.4 m (with the same volume flux).
3. Depth $H$ decreases to 0.025 m.
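These three cases take one formula each; a direct evaluation with $uH$ held constant, as specified:

```python
import math

def froude(u, H, g=10.0):
    """F = |u| / sqrt(g*H)."""
    return abs(u) / math.sqrt(g * H)

q = 1.0 * 0.1   # volume flux per unit width, fixed: u*H = const
for H in (0.1, 0.4, 0.025):
    u = q / H
    print(f"H = {H} m: u = {u} m/s, F = {froude(u, H)}")
    # F = 1.0 (critical), 0.125 (subcritical), 8.0 (supercritical)
```

Deep water (case 2) is slow and subcritical, shallow water (case 3) fast and supercritical — compare footnote 3.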
33. Effects of surface tension on surface waves
1. Derive the dispersion relation $\omega = \omega(k)$ for surface waves in the presence of surface tension. Surface tension imposes an effective pressure at the surface given by $p = −\sigma \nabla^2\eta$, where $\sigma$ is a constant. This alters the surface boundary condition on pressure. Aside from this, make the same assumptions we made in class: the fluid is homogeneous and inviscid, and there is no variation in the $y$-direction.
2. Show that the frequency $\omega$ is increased by a factor $\sqrt{1+\sigma k^2/\rho_0g}$ over the case without surface tension.
3. Compute the wavelength of waves for which the second term in the dispersion relation equals the first (i.e., the contribution from surface tension equals that from gravity). Give a numerical value for this wavelength for the water/air interface ($\sigma/\rho_0 = 7.4\times 10^{−5}m^3s^{−2}$). Waves shorter than this are called capillary waves.
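For part 3: the two terms are equal when $\sigma k^2/\rho_0 g = 1$, i.e. $k=\sqrt{\rho_0 g/\sigma}$. A quick evaluation for the water/air value quoted above:

```python
import math

g = 9.8                   # m/s^2
sigma_over_rho = 7.4e-5   # m^3/s^2, water/air interface

k = math.sqrt(g / sigma_over_rho)   # wavenumber where surface tension equals gravity
wavelength = 2 * math.pi / k
print(wavelength)   # about 0.017 m: waves shorter than ~2 cm are capillary waves
```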
34. Mass conservation in a channel flow
1. Consider inviscid, incompressible, 2-dimensional flow (independent of $y$) in a channel with a corrugated, impermeable bottom $z = −H +h(x)$ and an undulating surface $z = \eta(x,t)$ (Figure $3$). Integrate the divergence vertically over the water column: $\int_{-H+h(x)}^{\eta(x, t)}\left(\frac{\partial u}{\partial x}+\frac{\partial w}{\partial z}\right) d z, \nonumber$ simplify where possible and set the result equal to zero. Use the result to show that $\frac{\partial \eta}{\partial t}=-\frac{\partial}{\partial x} \int_{-H+h(x)}^{\eta(x, t)} u d z. \nonumber$ [Hints: For the first term, use the 1-dimensional form of Leibniz’ rule, Equation 6.1.3, to bring the partial derivative outside the integral. Simplify the second term by using the appropriate boundary conditions. Do not assume that the disturbance is small-amplitude.]
2. What difference does it make to 34.1 if the fluid is viscous, i.e., if the current goes to zero at the bottom boundary?
1Corvallis, Oregon is the home of Oregon State University, and Philomath is a town a few miles to the west. A few miles further to the west is the Pacific ocean, which makes for a lot of rainstorms.
2Rhymes with “food”.
3Hence the saying “Still waters run deep.”
Taylor’s theorem (which we will not prove here) gives us a way to take a complicated function $f(x)$ and approximate it by a simpler function $\tilde{f}(x)$. The price of this simplification is that $\tilde{f} \approx f$ only in a small region surrounding some point $x = x_0$ (Figure $1$).
The formula is:
$\tilde{f}(x)=f\left(x_{0}\right)+f^{\prime}\left(x_{0}\right)\left(x-x_{0}\right)+\frac{1}{2} f^{\prime \prime}\left(x_{0}\right)\left(x-x_{0}\right)^{2}+\frac{1}{6} f^{\prime \prime \prime}\left(x_{0}\right)\left(x-x_{0}\right)^{3}+\dots\label{eqn:1}$
The sequence goes on forever, but we typically use only the first few terms. To apply Equation $\ref{eqn:1}$, we choose a point $x_0$ where we need the approximation to be accurate. We then compute the derivatives at that point, $f^\prime(x_0)$, $f^{\prime\prime}(x_0)$, $f^{\prime\prime\prime}(x_0)$, etc., for as far as we want to take it. For accuracy, use many terms; for simplicity, use only a few.
For example, if
$f(x)=(1-x)^{-1} \nonumber$
and we choose $x_0$ = 0, then the Taylor series is just the well-known expression
$\tilde{f}(x)=1+x+x^2+x^3+\dots. \nonumber$
Figure A.2 shows the expansion with successively larger numbers of terms retained. Near $x_0$, good accuracy can be achieved with only a few terms. The further you get from $x_0$, the more terms must be retained for a given level of accuracy.
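The convergence pattern in Figure A.2 is easy to reproduce numerically with the partial sums of this series:

```python
def taylor_geom(x, nterms):
    """Partial sum 1 + x + x**2 + ... + x**(nterms-1) of the series for 1/(1-x) about x0 = 0."""
    return sum(x**k for k in range(nterms))

for x in (0.1, 0.5):                 # close to x0 = 0, then farther away
    exact = 1.0 / (1.0 - x)
    errors = [abs(taylor_geom(x, n) - exact) for n in (2, 4, 8)]
    print(x, errors)                 # errors shrink with more terms, faster near x0
```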
The real benefit of Taylor series is evident when working with more complicated functions. For example, $f(x) = \sin(\tan(x))$ can be approximated by $\tilde{f}$ = $x$ for $x$ close to zero.
Note that the first - order terms in Equation $\ref{eqn:1}$:
$\tilde{f}(x)=f(x_0)+f^\prime(x_0)(x-x_0), \nonumber$
give a valid approximation of $f(x)$ in the limit $x \rightarrow x_0$, and can be rearranged to form the familiar definition of the first derivative:
$f^\prime(x_0)=\lim_{x\to x_0}\frac{f(x)-f(x_0)}{x-x_0} \nonumber$
The Taylor series expansion technique can be generalized for use with multivariate functions. Suppose that $f = f(\vec{x})$, where $\vec{x} = (x, y)$. Then
$\tilde{f}(\vec{x})=f(\vec{x_0})+f_x(\vec{x_0})\delta x+ f_y(\vec{x_0})\delta y + \frac{1}{2} f_{xx}(\vec{x_0})\delta x^2 + f_{xy}(\vec{x_0})\delta x\,\delta y+\frac{1}{2}f_{yy}(\vec{x_0})\delta y^2+\dots,\label{eqn:2}$
where $\delta x=x-x_0$, $\delta y=y-y_0$ and subscripts denote partial derivatives. Note that the first-order terms in Equation $\ref{eqn:2}$ can be written using the directional derivative:
$f(\vec{x})=f(\vec{x_0})+\vec{\nabla}f(\vec{x_0})\cdot\delta \vec{x}. \nonumber$
You will notice that $\tilde{f}$ has been replaced by $f$; this is valid in the limit $\vec{x} \rightarrow \vec{x_0}$, or $\delta \vec{x} \rightarrow 0$.
13: Appendix B- Torque and the Moment of Inertia
In this section we take a brief excursion into solid-body mechanics, specifically rotational motion. This will give us an example to use in the next section when we define a tensor, and also a simple result that we will need later to understand forces acting within a fluid.
13.01: B.1- Torque
Newton’s second law $\vec{F}=m\vec{a}$ has a rotational analogue. When a force $\vec{F}$ is exerted at a location $\vec{r}$ measured from some axis of rotation (e.g., the bolt in Figure $1$), then the cross product $\vec{r}\times\vec{F}$ is called the torque, $\vec{T}$. The cross product is defined in Equation 2.1.1, and is derived in detail in section D.3.1. For now, it is a vector perpendicular to both $\vec{F}$ and $\vec{r}$, with direction given by the right-hand rule. The magnitude is
$|\vec{r}\times\vec{F}|=|\vec{r}||\vec{F}||\sin\phi| \nonumber$
where $\phi$ is the angle between $\vec{r}$ and $\vec{F}$.
13.02: B.2- The moment of inertia tensor
The angle $\theta$ increases in time (if you push hard enough1) in accordance with
$\vec{T}=\underset{\sim}{I}\vec{\alpha},\label{eqn:1}$
in which $\vec{\alpha}$ is the angular acceleration and $\underset{\sim}{I}$ is a matrix called the moment of inertia. For the simple case shown in Figure 13.1.1, $\underset{\sim}{I}$ is proportional to the identity matrix $\underset{\sim}{\delta}$, $\vec{\alpha}$ is parallel to the axis of rotation (the bolt), and its magnitude $|\vec{\alpha}|$ is $d^2\theta/dt^2$.
The general definition of the moment of inertia matrix is
$I_{i j}=\int_{V} d V \rho(\vec{x})\left(x_{k} x_{k} \delta_{i j}-x_{i} x_{j}\right),\label{eqn:2}$
where $\rho(\vec{x})$ is the density (mass per unit volume). Details can be found in most classical mechanics texts, e.g., Marion (2013).
Example $1$
The particular case illustrated in Figure $1$ is the rotation of a rectangular prism, with uniform density and edge dimensions $a$, $b$ and $c$, about the $\hat{e}^{(1)}$ axis. In this case both torque and angular acceleration are parallel to $\hat{e}^{(1)}$, and the only nonzero component of $\underset{\sim}{I}$ is $I_{11}$, computed as follows:
\begin{aligned} I_{11} &=\int_{V} d V \rho\left(x_{2}^{2}+x_{3}^{2}\right) \ &=\rho \int_{-a / 2}^{a / 2} d x_{1} \int_{-b / 2}^{b / 2} d x_{2} \int_{-c / 2}^{c / 2} d x_{3}\left(x_{2}^{2}+x_{3}^{2}\right) \ &=\rho \frac{a b c\left(b^{2}+c^{2}\right)}{12}. \end{aligned} \nonumber
For the simple case of a cube with $a$ = $b$ = $c$ = $\Delta$,
$I_{11}=\rho \frac{\Delta^{5}}{6}.\label{eqn:3}$
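The cube result $I_{11}=\rho\Delta^5/6$ is easy to confirm by brute-force integration; the midpoint rule below is a rough numerical check, with the grid size N chosen arbitrarily.

```python
def I11_cube(delta, rho=1.0, N=40):
    """Midpoint-rule estimate of I11 = integral of rho*(x2**2 + x3**2) dV
    over a cube of edge delta centered at the origin."""
    h = delta / N
    pts = [-delta / 2 + (i + 0.5) * h for i in range(N)]
    total = sum(x2**2 + x3**2 for x2 in pts for x3 in pts)
    return rho * total * h * h * delta   # the x1 integral just contributes a factor delta

delta = 1.0
print(I11_cube(delta), delta**5 / 6)   # both approximately 0.1667
```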
Is the moment of inertia matrix $\underset{\sim}{I}$ a tensor? We would expect so, since it connects two physically real vectors via Equation $\ref{eqn:1}$. We can also establish this directly from Equation $\ref{eqn:2}$, the general formula for $\underset{\sim}{I}$. Like any other integral, $\underset{\sim}{I}$ can be written as the limit of a sum:
$I_{i j}=\sum \Delta V \rho\left(x_{k} x_{k} \delta_{i j}-x_{i} x_{j}\right), \nonumber$
where each term in the sum is evaluated at the center of a volume element $\Delta V$. Now $\Delta V$ and $\rho$ are scalars, and so is the dot product $x_k x_k$ (section 3.2). Moreover, we know that both $\delta_{ij}$ and the dyad $x_ix_j$ transform according to Equation 3.3.8. Each term in the sum is therefore a tensor, and so then is the sum itself. Taking the limit as $\Delta V\rightarrow 0$, we conclude that $\underset{\sim}{I}$ transforms according to Equation 3.3.8. We therefore refer to $\underset{\sim}{I}$ as the moment of inertia tensor.
1In the case shown here, $\vec{F}$ is really the sum of the force exerted by the person and the opposing force exerted by friction, and similarly for $\vec{T}$.
Isotropic tensors play a fundamental role in tensor algebra as well as in fluid mechanics. We can easily test whether or not a given tensor is isotropic, but is there a way to identify all isotropic tensors?
Because an isotropic tensor is invariant under under every rotation, it must in particular be invariant under very slight rotations. In the limit of an infinitesimal rotation, the rotation matrix is just the identity plus a small added matrix. This allows us to simplify the algebra needed to derive requirements for isotropy.
14: Appendix C- Isotropic Tensors
Any rotation matrix can be written as the identity plus something:
$\underset{\sim}{C}=\underset{\sim}{\delta}+\underset{\sim}{r} \nonumber$
The orthogonality requirement $\underset{\sim}{C}^T \underset{\sim}{C} = \underset{\sim}{\delta}$ can then be expressed as:
\begin{aligned} \underset{\sim}{C}^{T} \underset{\sim}{C} &=\left(\underset{\sim}{\delta}^{T}+\underset{\sim}{r}^{T}\right)(\underset{\sim}{\delta}+\underset{\sim}{r}) \ &=\underset{\sim}{\delta}^{T} \underset{\sim}{\delta}+\underset{\sim}{\delta}^{T} \underset{\sim}{r}+\underset{\sim}{r}^{T} \underset{\sim}{\delta}+\underset{\sim}{r}^{T} \underset{\sim}{r} \ &=\underset{\sim}{\delta}+\underset{\sim}{r}+\underset{\sim}{r}^{T}+\underset{\sim}{r}^{T} \underset{\sim}{r}=\underset{\sim}{\delta}, \end{aligned} \nonumber
(using the identities $\underset{\sim}{\delta}^T$ = $\underset{\sim}{\delta}$ and $\underset{\sim}{\delta}\underset{\sim}{\delta}$ = $\underset{\sim}{\delta}$), and therefore
$\underset{\sim}{r}+\underset{\sim}{r}^{T}+\underset{\sim}{r}^{T} \underset{\sim}{r}=0.\label{eqn:1}$
Now suppose that the rotation is through a very small angle1. In this case, all components of $\underset{\sim}{r}$ are $\ll 1$, and the third term on the left-hand side of Equation $\ref{eqn:1}$ is therefore negligible, leaving us with
$\underset{\sim}{r}+\underset{\sim}{r}^{T}=0. \nonumber$
So, for an infinitesimal rotation, the rotation matrix equals the identity plus an antisymmetric matrix whose elements are $\ll 1$.
1More precisely, we take the limit as the angle goes to zero.
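To see this concretely, subtract the identity from an explicit small rotation and test the remainder for antisymmetry. The rotation about the 3-axis below is one convenient example, not the general case.

```python
import math

def rot_z(theta):
    """Rotation matrix about the 3-axis (one easy concrete example)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]

theta = 1e-4
C = rot_z(theta)
r = [[C[i][j] - (1.0 if i == j else 0.0) for j in range(3)] for i in range(3)]

# r + r^T vanishes to O(theta**2), confirming near-antisymmetry:
worst = max(abs(r[i][j] + r[j][i]) for i in range(3) for j in range(3))
print(worst)   # on the order of theta**2 = 1e-8
```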
14.02: C.2- 1st-order isotropic tensors
A 1st-order tensor is a vector. If the vector $\vec{v}$ is isotropic, then under an infinitesimal rotation
$v_{i}^{\prime}=v_{j} C_{j i}=v_{j}\left(\delta_{j i}+r_{j i}\right)=v_{i}+v_{j} r_{j i}=v_{i}. \nonumber$
Therefore,
$v_{j} r_{j i}=0. \nonumber$
This represents three algebraic equations, one for each value of $i$:
$\begin{array}{l} v_{1} r_{11}+v_{2} r_{21}+v_{3} r_{31}=0 \ v_{1} r_{12}+v_{2} r_{22}+v_{3} r_{32}=0 \ v_{1} r_{13}+v_{2} r_{23}+v_{3} r_{33}=0. \end{array}\label{eqn:1}$
Because $\underset{\sim}{r}$ is antisymmetric, $r_{11}$ = $r_{22}$ = $r_{33}$ = 0, removing one term from each equation.
Now consider the first equation of Equation $\ref{eqn:1}$:
$v_{2} r_{21}+v_{3} r_{31}=0.\label{eqn:2}$
Here is a crucial point: if $\vec{v}$ is isotropic, then Equation $\ref{eqn:1}$ must be true for all antisymmetric matrices $\underset{\sim}{r}$, i.e., regardless of the values of $r_{21}$ and $r_{31}$. The only way this can be true is if $v_2$ = 0 and $v_3$ = 0. The same considerations applied to the second equation of Equation $\ref{eqn:1}$ tell us that $v_1$ must also be zero, hence the only isotropic 1st-order tensor is the trivial case
$\vec{v}=0. \nonumber$
14.03: C.3- 2nd-order isotropic tensors
Let a 2nd-order isotropic tensor $\underset{\sim}{A}$ be subjected to an infinitesimal rotation $\underset{\sim}{\delta} + \underset{\sim}{r}$. Then
\begin{aligned} A_{i j}^{\prime}=A_{k l} C_{k i} C_{l j} &=A_{k l}\left(\delta_{k i}+r_{k i}\right)\left(\delta_{l j}+r_{l j}\right)=A_{k l}\left(\delta_{k i} \delta_{l j}+\delta_{k i} r_{l j}+r_{k i} \delta_{l j}+r_{k i} r_{l j}\right) \ &=A_{i j}+A_{i l} r_{l j}+A_{k j} r_{k i}=A_{i j} \end{aligned} \nonumber
where we have neglected the product of the infinitesimal matrices $\underset{\sim}{r}$. This leaves us with
$A_{i k} r_{k j}+A_{k j} r_{k i}=0,\label{eqn:1}$
which must be true for all antisymmetric matrices $\underset{\sim}{r}$ whose elements are $\ll 1$. (Note that we have renamed the dummy index $l$ as $k$ for tidiness. There is no potential for confusion because the pairs of $k$s are in separate terms.)
Equation $\ref{eqn:1}$ represents nine equations, one for each combination of the free indices $i$ and $j$. It will be enough to consider three of these.
Case 1: $i$ = 1, $j$ = 1
$A_{11} r_{11}+A_{12} r_{21}+A_{13} r_{31}+A_{11} r_{11}+A_{21} r_{21}+A_{31} r_{31}=0. \nonumber$
Remembering that $r_{11}$ = 0, we can write this as
$\left(A_{12}+A_{21}\right) r_{21}+\left(A_{13}+A_{31}\right) r_{31}=0. \nonumber$
Because this must be true for all values of $r_{21}$ and $r_{31}$, the coefficients of those quantities must vanish separately:
$A_{12}+A_{21}=0 ; \quad A_{13}+A_{31}=0.\label{eqn:2}$
Case 2: $i$ = 1, $j$ = 2
$A_{11} r_{12}+A_{12} r_{22}+A_{13} r_{32}+A_{12} r_{11}+A_{22} r_{21}+A_{32} r_{31}=0. \nonumber$
Because $r_{11}$ = $r_{22}$ = 0 and $r_{21}$ = $−r_{12}$, we can write this as
$\left(A_{11}-A_{22}\right) r_{12}+A_{13} r_{32}+A_{32} r_{31}=0. \nonumber$
Because this must be true for all $\underset{\sim}{r}$,
$A_{11}=A_{22} ; \quad A_{13}=0 ; \quad A_{32}=0.\label{eqn:3}$
Case 3: $i$ = 1, $j$ = 3
The same reasoning leads to
$A_{11}=A_{33} ; \quad A_{12}=0 ; \quad A_{23}=0.\label{eqn:4}$
Combining Equation $\ref{eqn:2}$, $\ref{eqn:3}$ and $\ref{eqn:4}$, we have
$A_{11}=A_{22}=A_{33} ; \quad A_{12}=A_{21}=A_{32}=A_{23}=A_{13}=A_{31}=0.\label{eqn:5}$
If $A_{11}$ has the value $a$, then $\underset{\sim}{A}$ must therefore be proportional to the identity matrix:
$\underset{\sim}{A}=\left(\begin{array}{lll} a & 0 & 0 \ 0 & a & 0 \ 0 & 0 & a \end{array}\right)=a \underset{\sim}{\delta} \nonumber$
We conclude that the only isotropic 2nd order tensors are those that are proportional to the identity.
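This conclusion can be checked numerically with the transformation rule $A^{\prime}_{ij}=A_{kl}C_{ki}C_{lj}$: a multiple of the identity survives a rotation unchanged, while a diagonal matrix with unequal entries does not. (The specific rotation used below is an arbitrary choice.)

```python
import math

def rotate2(A, C):
    """A'_ij = A_kl C_ki C_lj (2nd-order tensor transformation)."""
    return [[sum(A[k][l] * C[k][i] * C[l][j] for k in range(3) for l in range(3))
             for j in range(3)] for i in range(3)]

def rot_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]

C = rot_z(0.7)
A = [[2.5 if i == j else 0.0 for j in range(3)] for i in range(3)]   # a*delta, a = 2.5
Ap = rotate2(A, C)
assert all(abs(Ap[i][j] - A[i][j]) < 1e-12 for i in range(3) for j in range(3))

B = [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 3.0]]   # diagonal but unequal
Bp = rotate2(B, C)
print(Bp[0][0], Bp[0][1])   # a diagonal entry changes and an off-diagonal entry appears
```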
Let a 3rd-order isotropic tensor $\underset{\sim}{A}$ be subjected to an infinitesimal rotation $\underset{\sim}{\delta} +\underset{\sim}{r}$. Then
$A_{i j k}^{\prime}=A_{l m n} C_{l i} C_{m j} C_{n k}=A_{i j k}. \nonumber$
Reasoning as in the last section (see derivation of Equation 14.3.1), we obtain
$A_{i l k} r_{l j}+A_{l j k} r_{l i}+A_{i j l} r_{l k}=0.\label{eqn:1}$
Again, we have changed all dummy indices to $l$ for tidiness. (And again, this is safe only because they appear in separate terms!) Equation $\ref{eqn:1}$ represents 27 equations, one for each combination of the index values 1, 2 and 3. It will simplify things if we classify those 27 combinations as follows:
• all values equal (111, 222 and 333)
• two values equal and one different (e.g., 223)
• all values different (123, 231, 312, 213, 321 and 132).
Case 1: $i$ = 1, $j$ = 1, $k$ = 2
With {$i, j, k$} = {1,1,2}, Equation $\ref{eqn:1}$ becomes
\begin{aligned} A_{112} r_{11} &+A_{122} r_{21}+A_{132} r_{31} \ +A_{112} r_{11} &+A_{212} r_{21}+A_{312} r_{31} \ +A_{111} r_{12} &+A_{112} r_{22}+A_{113} r_{32}=0. \end{aligned} \nonumber
Using the antisymmetry of $\underset{\sim}{r}$, we can write this as
$\left(A_{122}+A_{212}-A_{111}\right) r_{21}+\left(A_{132}+A_{312}\right) r_{31}+A_{113} r_{32}=0. \nonumber$
Since the coefficients must vanish separately, we have three equations:
\begin{align} A_{122}+A_{212} &=A_{111} \label{eqn:2}\ A_{132} &=-A_{312} \label{eqn:3}\ A_{113} &=0 \label{eqn:4} \end{align} \nonumber
The third equation tells us that an element with two equal indices is zero. Let us guess that this is true for all such elements. If we have guessed right, then the first equation tells us that an element with all three indices equal is zero. Finally, in the second equation, interchanging two indices changes the sign.
Are these patterns generally true? Let us try another case to check.
Case 2: $i$ = 2, $j$ = 3, $k$ = 2
Repeating the previous case with {$i, j, k$} changed to {2,3,2}, we have
\begin{aligned} A_{212} r_{13} &+A_{222} r_{23}+A_{232} r_{33} \ +A_{132} r_{12} &+A_{232} r_{22}+A_{332} r_{32} \ +A_{231} r_{12} &+A_{232} r_{22}+A_{233} r_{32}=0 \end{aligned} \nonumber
or
\begin{align} A_{332}+A_{233} &=A_{222}\label{eqn:5} \ A_{132} &=-A_{231}\label{eqn:6} \ A_{212} &=0.\label{eqn:7} \end{align} \nonumber
The pattern is the same as in case 1. The third equation tells us that an element with two equal indices is zero. If this is generally true, then the first equation tells us that an element with three equal indices is zero. Finally, interchanging two indices changes the sign.
You can check as many cases as you like; the results are always the same. Note that the only “rule” that matters here is the second one: interchanging two indices changes the sign. The other two rules follow from this one, because they both involve elements with two or more equal indices. Interchanging two equal indices makes no difference, but it also changes the sign. That only works if the value is zero.
We conclude that a 3rd-order tensor can be isotropic only if it is completely antisymmetric, i.e., interchanging any two indices changes the sign. In section D.1, we show that the only completely antisymmetric 3rd-order tensor is, to within a multiplicative constant, the Levi-Civita alternating tensor $\underset{\sim}{\varepsilon}$. The most general isotropic 3rd-order tensor is therefore
$\underset{\sim}{A}=a \underset{\sim}{\varepsilon} \nonumber$
where $a$ is any scalar.
14.05: C.5 4th-order isotropic tensors
To identify the isotropic 4th-order tensors, one uses the same logic as in the 3rd-order case (section C.4) but, as you might guess, there is considerably more of it. The details may be found, for example, in Aris (1962). Here we will just quote the result. The most general isotropic 4th-order tensor is a bilinear combination of 2nd-order isotropic tensors:
$A_{i j k l}=\lambda \delta_{i j} \delta_{k l}+\mu \delta_{i k} \delta_{j l}+\gamma \delta_{i l} \delta_{j k},\label{eqn:1}$
where $\lambda$, $\mu$ and $\gamma$ are scalars.
The importance of the alternating tensor follows from the fact that it is the only isotropic, 3rd-order tensor1. Appendix C shows that a 3rd-order tensor can be isotropic only if it is completely antisymmetric, meaning it changes sign when any two of its three indices are interchanged. In this appendix, we will see how this property of complete antisymmetry defines the Levi-Civita tensor. We will then see how some fundamental operations in algebra follow from it.
1up to a multiplicative constant.
15: Appendix D- The Levi-Civita Alternating Tensor
We will start by recalling a few important aspects of antisymmetry in ordinary matrices. The matrix shown in Figure $1$ illustrates the idea of antisymmetry. If you flip the matrix about its main diagonal, you get the transpose. Now notice that every element in the transpose is the negative of the corresponding element in the original matrix. So $\underset{\sim}{A}^T=-\underset{\sim}{A}$, or in index notation, $A_{ji}=-A_{ij}$.
A property shared by every antisymmetric matrix is that the elements on the main diagonal are all zero. That is obvious if you think about it: transposing the matrix does not change those elements, so if transposing the matrix changes the sign, then those elements can only be zero (because only zero is its own negative). So there are an infinite number of antisymmetric matrices, but $A_{11}$ = $A_{22}$ = $A_{33}$ = 0 in all of them.
Now imagine that there exists a three-dimensional array that has the property of complete antisymmetry. We will call it $\underset{\sim}{\varepsilon}$. Two additional properties of $\underset{\sim}{\varepsilon}$ follow from complete antisymmetry.
1. First, any element with two equal indices must be zero. The logic is the same as in the case of the two-dimensional matrix. Interchanging two indices changes the sign, but if the two indices are identical, the interchange makes no difference. So the value of the element equals its own negative and must therefore be zero.
2. A second property that results from complete antisymmetry is that cyclic permutations1 of the indices have no effect. For example, starting with $\varepsilon_{ijk}$ (where $i$, $j$ and $k$ are any combination of 1,2 and 3), move the third index back to the first position and shift the other two indices one place to the right. The result is $\varepsilon_{kij}$. Now, is the element $\varepsilon_{kij}$ related to the original element, $\varepsilon_{ijk}$? Yes. We can tell this because we can recover the original ordering of the indices by making two successive interchanges, each of which changes the sign: $\varepsilon_{kij}=-\varepsilon_{ikj}=\varepsilon_{ijk} \nonumber$ The result is $\varepsilon_{ijk}$, just what we started with. This shows that $\varepsilon_{ijk}$ is invariant under any cyclic permutation of its indices.
At this stage we know some things that must be true about this hypothetical array if it exists, but does it? If so what does it look like? We will now deduce the specific form of $\underset{\sim}{\varepsilon}$ in three steps.
• The integers 1, 2 and 3 have 27 combinations, only six of which are non-repeating: 123, 312, 231, 213, 321, and 132. The corresponding six elements of $\underset{\sim}{\varepsilon}$ are the only ones that can be nonzero (by property 1 above).
• The first three combinations, 123, 312 and 231, are cyclic permutations, and so are the remaining three. Because cyclic permutations make no difference (property 2 above), $\varepsilon_{123}=\varepsilon_{312}=\varepsilon_{231},\label{eqn:1}$ and $\varepsilon_{213}=\varepsilon_{321}=\varepsilon_{132},\label{eqn:2}$ So there are only two different nonzero values that elements of $\underset{\sim}{\varepsilon}$ can have.
• Finally, note that these two values must be additive inverses. For example, consider the first member of each triplet: 123 and 213. These are related by an exchange of the first and second indices, and therefore $\varepsilon_{213}=-\varepsilon_{123}.\label{eqn:3}$ So, if we choose a value for $\varepsilon_{123}$, we can deduce the values of all the other elements.
To obtain the Levi-Civita tensor, we make the simplest choice $\varepsilon_{123}$ = 1. We then have
$\varepsilon_{123}=\varepsilon_{312}=\varepsilon_{231}=1,\label{eqn:4}$
$\varepsilon_{213}=\varepsilon_{321}=\varepsilon_{132}=-1,\label{eqn:5}$
and all other elements are zero. In summary:
$\varepsilon_{i j k}=\left\{\begin{array}{cc} 1, & \text { if } i j k=123,312,231, \ -1, & \text { if } i j k=213,321,132, \ 0, & \text { otherwise. } \end{array}\right.\label{eqn:6}$
Note that every completely antisymmetric, three-dimensional array must be proportional to $\underset{\sim}{\varepsilon}$, i.e., equal to $\underset{\sim}{\varepsilon}$ times some scalar. To see this, recall that we chose $\varepsilon_{123}$ = 1 arbitrarily. What if we had chosen $\varepsilon_{123}$ = 2? The resulting array would be exactly the same except multiplied by 2.
1Recall that cyclic permutations are accomplished by moving the final value to the beginning, or the first value to the end, as shown in Figure 3.3.3. Interchanging any two values changes a cyclic to a non-cyclic permutation, or vice versa.
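The definition is straightforward to tabulate in code; note that the indices below are 0-based, so (0,1,2) plays the role of 123.

```python
from itertools import permutations

def levi_civita():
    """Build the 3x3x3 alternating tensor: +1 on cyclic orderings of (0,1,2),
    -1 on the other non-repeating orderings, 0 whenever an index repeats."""
    eps = [[[0] * 3 for _ in range(3)] for _ in range(3)]
    for i, j, k in permutations(range(3)):
        eps[i][j][k] = 1 if (i, j, k) in [(0, 1, 2), (1, 2, 0), (2, 0, 1)] else -1
    return eps

eps = levi_civita()
print(eps[0][1][2], eps[2][0][1], eps[1][0][2], eps[0][0][1])   # 1 1 -1 0
```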
15.02: D.2 The ε-δ relation
As was stated without proof in section 3.3.7, the alternating tensor is related to the 2nd-order identity tensor by
$\varepsilon_{i j k} \varepsilon_{k l m}=\delta_{i l} \delta_{j m}-\delta_{i m} \delta_{j l}.\label{eqn:1}$
The easiest way to convince yourself of this is to try a few tests. First, set $i$ = $j$ and verify that the right-hand side is zero, as it should be. Then try interchanging $i$ and $j$ and check that the right-hand side changes sign, as it should. The same tests work with $l$ and $m$. To remember Equation $\ref{eqn:1}$, note that the first $\delta$ on the right-hand side has subscripts $i$ and $l$; these are the first free indices of the two $\varepsilon$’s on the left-hand side. After this, the remaining pairs of indices fall into place naturally.
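Instead of spot tests, the identity can be verified exhaustively: there are only 81 combinations of the free indices (0-based below).

```python
from itertools import product

def eps(i, j, k):
    if (i, j, k) in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        return 1
    if (i, j, k) in [(1, 0, 2), (2, 1, 0), (0, 2, 1)]:
        return -1
    return 0

def delta(i, j):
    return 1 if i == j else 0

# check eps_ijk eps_klm = delta_il delta_jm - delta_im delta_jl for all i, j, l, m
for i, j, l, m in product(range(3), repeat=4):
    lhs = sum(eps(i, j, k) * eps(k, l, m) for k in range(3))
    rhs = delta(i, l) * delta(j, m) - delta(i, m) * delta(j, l)
    assert lhs == rhs
print("identity holds for all 81 index combinations")
```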
In this section we will look at three fundamental algebraic operations that are based on $\underset{\sim}{\varepsilon}$: the cross product, the triple product and the determinant. We will see how each can be interpreted geometrically in terms of areas and volumes, and derive their most useful properties.
D.3.1 The cross product
One function of a 3rd-order tensor is to define a mathematical relationship between three vectors, e.g., a way to multiply two vectors to get a third vector. To be generally useful, such an operation should work the same in every reference frame, i.e., the tensor should be isotropic. Because $\underset{\sim}{\varepsilon}$ is the only isotropic 3rd-order tensor, there is only one such operation:
$z_{k}=\varepsilon_{i j k} u_{i} v_{j}.\label{eqn:1}$
This is the $k$th component of the familiar cross (or vector) product: $\vec{z}$ = $\vec{u}\times\vec{v}$ whose properties were discussed in section 3.3.7.
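Evaluating the sum explicitly reproduces the familiar component formula for the cross product (0-based indices):

```python
def eps(i, j, k):
    if (i, j, k) in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        return 1
    if (i, j, k) in [(1, 0, 2), (2, 1, 0), (0, 2, 1)]:
        return -1
    return 0

def cross(u, v):
    """z_k = eps_ijk u_i v_j, summed over i and j."""
    return [sum(eps(i, j, k) * u[i] * v[j] for i in range(3) for j in range(3))
            for k in range(3)]

print(cross([1, 0, 0], [0, 1, 0]))   # e1 x e2 = e3: [0, 0, 1]
print(cross([1, 2, 3], [4, 5, 6]))   # [-3, 6, -3]
```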
D.3.2 The triple product
The triple product of three vectors $\vec{u}$, $\vec{v}$ and $\vec{w}$ is a trilinear scalar product given by
$T=(\vec{u} \times \vec{v}) \cdot \vec{w}=\varepsilon_{i j k} u_{i} v_{j} w_{k}.\label{eqn:2}$
Because $\underset{\sim}{\varepsilon}$ is the only isotropic 3rd-order tensor, the triple product is the only trilinear scalar product that is computed the same way in all reference frames. We now derive some algebraic and geometrical properties of the triple product.
1. The triple product is equal to the volume $V$ of the parallelepiped enclosed by the three vectors (Figure $1$), give or take a minus sign. The plane that encloses $\vec{u}$ and $\vec{v}$ divides space into two half-spaces, and the cross product $\vec{u}\times\vec{v}$ extends into one or the other of those half-spaces, determined by the right-hand rule. If $\vec{w}$ points into the same half-space as the cross product (i.e., the angle $\phi$, defined in Figure $1$, is acute), $\vec{u}$, $\vec{v}$ and $\vec{w}$ are called a right-handed triple, a distinction that will be of great use to us. In this case, the triple product $T$ is positive and is equal to the volume $V$. Conversely, if $\vec{u}$, $\vec{v}$ and $\vec{w}$ form a left-handed triple ($\phi$ obtuse), $T$ = $-V$.
2. The triple product is unchanged by a cyclic permutation of $\vec{u}$, $\vec{v}$ and $\vec{w}$, i.e. $(\vec{u} \times \vec{v}) \cdot \vec{w}=(\vec{w} \times \vec{u}) \cdot \vec{v}=(\vec{v} \times \vec{w}) \cdot \vec{u}. \nonumber$ This follows from Equation $\ref{eqn:2}$ and the invariance of $\underset{\sim}{\varepsilon}$ to cyclic permutations. Geometrically, a cyclic permutation corresponds to a rotation of the parallelepiped, which obviously leaves its volume unchanged.
3. Interchanging any two of $\vec{u}$, $\vec{v}$ and $\vec{w}$ changes the sign of the triple product. This is seen easily as a consequence of Equation $\ref{eqn:2}$ and the complete antisymmetry of $\underset{\sim}{\varepsilon}$. Geometrically, it means that the parity of the set {$\vec{u}$,$\vec{v}$,$\vec{w}$} changes from right-handed to left-handed, and hence the triple product changes from $+V$ to $-V$.
4. Now imagine that the parallelepiped is squashed so that $\vec{w}$ lies in the plane of $\vec{u}$ and $\vec{v}$ (Figure $1$). Clearly the volume of the parallelepiped goes to zero; hence, the triple product must also go to zero. A special case is if $\vec{w}$ is equal to either $\vec{u}$ or $\vec{v}$. In fact, if any two of $\vec{u}$, $\vec{v}$ and $\vec{w}$ are equal, the triple product is zero.
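Properties 1-4 can all be confirmed with a few lines of NumPy. This sketch (not part of the original text; the vectors are illustrative, chosen to form a right-handed triple) checks the sign conventions directly:

```python
import numpy as np

u = np.array([1.0, 0.2, 0.0])
v = np.array([0.1, 1.5, 0.0])
w = np.array([0.3, -0.4, 2.0])

def triple(a, b, c):
    """Triple product (a x b) . c"""
    return np.dot(np.cross(a, b), c)

T = triple(u, v, w)
assert T > 0                                   # right-handed triple: T = +V
assert np.isclose(triple(w, u, v), T)          # cyclic permutation: unchanged
assert np.isclose(triple(v, w, u), T)
assert np.isclose(triple(v, u, w), -T)         # interchange: sign flips
assert np.isclose(triple(u, u, w), 0.0)        # repeated vector: zero
```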
Properties 2-4 above can be expressed succinctly using $\underset{\sim}{\varepsilon}$. Suppose we relabel our three vectors $\vec{u}$, $\vec{v}$ and $\vec{w}$ as $\vec{u}^{(1)}$, $\vec{u}^{(2)}$ and $\vec{u}^{(3)}$, and let $T$ stand for their triple product: $T = (\vec{u}^{(1)} \times \vec{u}^{(2)} )\cdot\vec{u}^{(3)}$. Next, let the indices $i$, $j$ and $k$ be any combination of 1, 2 and 3 (including combinations with repeated values). The aforementioned properties of the triple product (namely that cyclic permutations change nothing, interchanging two vectors changes the sign, and setting two vectors equal makes the triple product zero) are equivalent to
$\left(\vec{u}^{(i)} \times \vec{u}^{(j)}\right) \cdot \vec{u}^{(k)}=T \varepsilon_{i j k}.\label{eqn:3}$
As a special case, suppose that the $\vec{u}^{(i)}$ are the basis vectors $\hat{e}^{(i)}$, $i$ = 1, 2, 3, of a right-handed Cartesian coordinate system. In that case the parallelepiped is just a unit cube and
$\left(\hat{e}^{(i)} \times \hat{e}^{(j)}\right) \cdot \hat{e}^{(k)}=\varepsilon_{i j k}. \nonumber$
This compact statement encodes all of the familiar cross products of the basis vectors, e.g., $\hat{e}^{(1)}\times\hat{e}^{(2)} = \hat{e}^{(3)}$.
D.3.3 The determinant
Definition and elementary properties
The determinant of a $3\times 3$ matrix $\underset{\sim}{A}$ can be written as
$\operatorname{det}(\underset{\sim}{A})=\varepsilon_{i j k} A_{i 1} A_{j 2} A_{k 3},\label{eqn:4}$
i.e., if we regard the columns as vectors, the determinant is their triple product.1 As taught in elementary algebra classes, the determinant is calculated as a sum of products of three elements, one from each column (Figure $2$). Three of those products are added, three are subtracted, and the rest have a coefficient of zero. Referring back to Equation 15.1.6, you can check that the coefficients of this sum are just the corresponding elements of $\underset{\sim}{\varepsilon}$.
Referring again to Figure $2$, turn the book sideways and verify that the determinant can just as easily be written as the triple product of rows:
$\operatorname{det}(\underset{\sim}{A})=\varepsilon_{i j k} A_{1 i} A_{2 j} A_{3 k}\label{eqn:5}$
The expressions Equation $\ref{eqn:4}$ and Equation $\ref{eqn:5}$ lead immediately to several elementary properties of the determinant:
1. Transposing the matrix does not change the determinant: $\operatorname{det}\left(\underset{\sim}{A}^{T}\right)=\operatorname{det}(\underset{\sim}{A}). \nonumber$
2. Interchanging two columns of a matrix changes the sign of the determinant. This corresponds to the fact that $\underset{\sim}{\varepsilon}$ is antisymmetric with respect to every pair of indices.
3. Remembering that transposing does not change the determinant, we can make the same statement about rows: interchanging any two rows changes the sign of the determinant.
4. If any two columns (or two rows) are the same, the determinant is zero. This is because elements of $\underset{\sim}{\varepsilon}$ are zero if two of their indices are the same.
5. Finally, because cyclic permutations of the indices of $\underset{\sim}{\varepsilon}$ make no difference, cyclic permutations of either the rows or the columns of a matrix leave the determinant unchanged.
6. As an alternative to the pattern shown in Figure $2$, Equation $\ref{eqn:4}$ can also be arranged into a formula commonly called the method of cofactors. One chooses a row or column to expand along, then forms a linear combination of $2\times 2$ subdeterminants. Here is the formula for expanding along the top row: $\operatorname{det}(\underset{\sim}{A})=A_{11} \operatorname{det}\left(\begin{array}{cc} A_{22} & A_{23} \ A_{32} & A_{33} \end{array}\right)-A_{12} \operatorname{det}\left(\begin{array}{cc} A_{21} & A_{23} \ A_{31} & A_{33} \end{array}\right)+A_{13} \operatorname{det}\left(\begin{array}{cc} A_{21} & A_{22} \ A_{31} & A_{32} \end{array}\right).\label{eqn:6}$ The other expansions work similarly. The minus sign applied to the middle term results from the antisymmetry of $\underset{\sim}{\varepsilon}$.
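The identification of the determinant with a triple product can be verified numerically. This sketch (Python/NumPy; not part of the original text, with an arbitrary example matrix) contracts $\underset{\sim}{\varepsilon}$ against the columns and against the rows, confirming properties 1 and 2 above:

```python
import numpy as np

# Levi-Civita tensor
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

A = np.array([[2.0, 1.0, 0.5],
              [0.0, 3.0, 1.0],
              [1.0, -1.0, 2.0]])

# det as the triple product of the columns, and of the rows
det_cols = np.einsum('ijk,i,j,k->', eps, A[:, 0], A[:, 1], A[:, 2])
det_rows = np.einsum('ijk,i,j,k->', eps, A[0, :], A[1, :], A[2, :])

assert np.isclose(det_cols, np.linalg.det(A))   # matches the library value
assert np.isclose(det_rows, det_cols)           # transpose leaves det unchanged
# Interchanging two columns flips the sign:
assert np.isclose(np.linalg.det(A[:, [1, 0, 2]]), -det_cols)
```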
Easy mnemonics for the triple and cross products.
The identification of the triple product with $\underset{\sim}{\varepsilon}$ provides an easy way to calculate it. Just arrange the three vectors $\vec{u}$, $\vec{v}$ and $\vec{w}$ into a matrix:
$(\vec{u} \times \vec{v}) \cdot \vec{w}=\varepsilon_{i j k} u_{i} v_{j} w_{k}=\operatorname{det}\left(\begin{array}{ccc} u_{1} & v_{1} & w_{1} \ u_{2} & v_{2} & w_{2} \ u_{3} & v_{3} & w_{3} \end{array}\right),\label{eqn:7}$
and calculate the determinant using either the elementary formula shown in Figure $2$ or the method of cofactors (e.g., Equation $\ref{eqn:6}$).
The cross product of two vectors can be computed by arranging the components of the vectors, together with the three basis vectors, to form a matrix.
$\vec{u} \times \vec{v}=\varepsilon_{i j k} u_{i} v_{j} \hat{e}^{(k)}=\operatorname{det}\left(\begin{array}{ccc} u_{1} & v_{1} & \hat{e}^{(1)} \ u_{2} & v_{2} & \hat{e}^{(2)} \ u_{3} & v_{3} & \hat{e}^{(3)} \end{array}\right), \quad \text { or } \operatorname{det}\left(\begin{array}{ccc} u_{1} & u_{2} & u_{3} \ v_{1} & v_{2} & v_{3} \ \hat{e}^{(1)} & \hat{e}^{(2)} & \hat{e}^{(3)} \end{array}\right).\label{eqn:8}$
This construct makes no sense as a matrix, because three of its elements are vectors. That is ok; it is just a device that allows us to compute the cross product using a pattern we have already memorized. We now calculate the determinant using the method of cofactors, expanding along the row or column that contains the unit vectors:
$\vec{u} \times \vec{v}=\hat{e}^{(1)}\left(u_{2} v_{3}-u_{3} v_{2}\right)-\hat{e}^{(2)}\left(u_{1} v_{3}-u_{3} v_{1}\right)+\hat{e}^{(3)}\left(u_{1} v_{2}-u_{2} v_{1}\right) \nonumber$
(cf. Equation 4.1.10).
Geometrical meaning of the determinant
Consider the volume $V$ of the parallelepiped formed by three arbitrary vectors $\vec{u}$, $\vec{v}$ and $\vec{w}$ (Figure $3$). Assume for now that $\vec{u}$, $\vec{v}$ and $\vec{w}$ form a right-handed triple, so that their triple product is positive. In that case, the triple product is equal to $V$. Now suppose that the three vectors are transformed by the same matrix $\underset{\sim}{A}$:
$u_{i}^{\prime}=A_{i j} u_{j} ; \quad v_{i}^{\prime}=A_{i j} v_{j} ; \quad w_{i}^{\prime}=A_{i j} w_{j}. \nonumber$
The volume of the new parallelepiped is
\begin{aligned} V^{\prime} &=\varepsilon_{i j k} u_{i}^{\prime} v_{j}^{\prime} w_{k}^{\prime} \&=\varepsilon_{i j k} A_{i l} u_{l} A_{j m} v_{m} A_{k n} w_{n}\&= \varepsilon_{i j k} A_{i l} A_{j m} A_{k n} u_{l} v_{m} w_{n}\quad\text{(reordering)}\&= B_{l m n} u_{l} v_{m} w_{n}\end{aligned} \nonumber
where we have defined a new three-dimensional array $B_{lmn} = \varepsilon_{ijk}A_{il}A_{jm}A_{kn}$.
Now we explore the meaning of $\underset{\sim}{B}$. $B_{lmn}$ is the triple product of columns $l$, $m$ and $n$ of $\underset{\sim}{A}$, where $l$, $m$ and $n$ can be any of the integers 1, 2 or 3. In the special case $lmn = 123$, $B_{123}$ is just the determinant of $\underset{\sim}{A}$ (cf. Equation $\ref{eqn:4}$). But what is $B_{lmn}$ for arbitrary $lmn$? This situation is familiar; the triple product of three arbitrary vectors $\vec{u}^{(i)}$, $\vec{u}^{(j)}$ and $\vec{u}^{(k)}$ is:
$\left(\vec{u}^{(i)} \times \vec{u}^{(j)}\right) \cdot \vec{u}^{(k)}=T \varepsilon_{i j k}, \nonumber$
where $T = (\vec{u}^{(1)} \times \vec{u}^{(2)} )\cdot \vec{u}^{(3)}$ (cf. Equation $\ref{eqn:3}$). By the same reasoning,
$B_{l m n}=\operatorname{det}(\underset{\sim}{A}) \varepsilon_{l m n}. \nonumber$
Assembling these results, we find that $V^\prime$ is $|\underset{\sim}{A}|$ times the triple product of the original vectors $\vec{u}$, $\vec{v}$ and $\vec{w}$. Equivalently,
$V^{\prime}=\operatorname{det}(\underset{\sim}{A}) V.\label{eqn:9}$
This is our geometrical interpretation of $\operatorname{det}(\underset{\sim}{A})$: it is the factor by which the volume of the parallelepiped changes after transformation by $\underset{\sim}{A}$. This interpretation also works for left-handed triples. But, if $\operatorname{det}(\underset{\sim}{A})$ < 0, a right-handed triple will be converted into a left-handed triple, and vice versa.
Incidentally, this tells us something interesting about matrix transformations in general. You can transform any three vectors by the same matrix, and the parallelepiped they form will always expand by the same factor! Therefore, you can think of a matrix transformation as expanding all of space by a uniform factor.
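Equation $\ref{eqn:9}$ is straightforward to verify numerically. The following sketch (Python/NumPy; not part of the original text, using randomly generated vectors and a random matrix) transforms three vectors by the same matrix and checks that their signed parallelepiped volume expands by exactly $\operatorname{det}(\underset{\sim}{A})$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))        # an arbitrary transformation matrix
u, v, w = rng.normal(size=(3, 3))  # three arbitrary vectors

def triple(a, b, c):
    """Signed volume of the parallelepiped formed by a, b, c."""
    return np.dot(np.cross(a, b), c)

V  = triple(u, v, w)                # signed volume before the transformation
Vp = triple(A @ u, A @ v, A @ w)    # signed volume after

# V' = det(A) V: the determinant is the volume expansion factor
assert np.isclose(Vp, np.linalg.det(A) * V)
```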
Further properties of the determinant
Having identified the determinant as an expansion factor, we can now understand more of its properties.
1. The determinant of a 2nd-order tensor is a scalar, because the expansion factor is invariant to rotations: if you double the volume of a parallelepiped in one coordinate frame, you double it in all coordinate frames.
2. The product rule says that the determinant of the product of two matrices is the product of the determinants. This can be understood by considering two successive transformations. Let us say a vector $\vec{v}$ is transformed by the matrix $\underset{\sim}{B}$, and then the result is transformed by the matrix $\underset{\sim}{A}$. This can also be expressed as a single transformation by the product $\underset{\sim}{A}\underset{\sim}{B}$. The net expansion has to be the same, whether you impose the expansions sequentially or together. For example, suppose that in successive transformations by the two matrices, you expand by a factor 2 and then by a factor 3, so the net expansion is 2$\times$3=6. When you apply the two transformations together, the expansion factor must also be 6.
3. The inverse rule states that the determinant of the inverse is the inverse of the determinant. The inverse matrix reverses the initial transformation, giving you back your original vectors. So the net expansion factor, which is the product of the expansion factors for $\underset{\sim}{A}$ and $\underset{\sim}{A}^{-1}$, must be one.
4. How about the fact that a matrix with zero determinant has no inverse? Well if $\underset{\sim}{A}$ flattens all vectors onto the same plane, then that transformation is not reversible. This corresponds to the fact that $\underset{\sim}{A}^{-1}$ doesn’t exist.
5. In section 3.1.3 it was stated without proof that the determinant of an orthogonal matrix, i.e., a matrix whose inverse equals its transpose, is $\pm$1. We can now see why that is true. The determinant of the inverse equals the determinant of the transpose, which we know from property 1 is the same as the determinant of the original matrix. But by the inverse rule (property 3 above), the determinant of the inverse is also the reciprocal of the determinant. Therefore $\operatorname{det}(\underset{\sim}{A})$ is a number that equals its own reciprocal, and the only two such numbers are +1 and -1. Every orthogonal matrix therefore represents a rotation. Those with determinant +1 represent proper rotations; those with determinant -1 change the sign of the volume of the parallelepiped, effectively turning it inside-out: improper rotations.
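The product rule, the inverse rule, and the $\pm$1 property of orthogonal matrices can all be confirmed in a few lines. This sketch (Python/NumPy; not part of the original text, using random matrices and obtaining an orthogonal matrix from a QR factorization) checks each one:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3))

# Product rule: det(AB) = det(A) det(B)
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))

# Inverse rule: det(A^{-1}) = 1 / det(A)
assert np.isclose(np.linalg.det(np.linalg.inv(A)),
                  1.0 / np.linalg.det(A))

# An orthogonal matrix (here from QR factorization) has determinant +/- 1.
Q, _ = np.linalg.qr(A)
assert np.allclose(Q.T @ Q, np.eye(3))          # inverse equals transpose
assert np.isclose(abs(np.linalg.det(Q)), 1.0)
```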
1This definition does not require that the columns actually transform as vectors; in general they do not.
The following are true for all vectors $\vec{u}$, $\vec{v}$, $\vec{w}$, and $\vec{x}$ and scalars $\phi$ and $\psi$ that vary continuously in space.

Algebraic identities:

1. $(\vec{u} \times \vec{v}) \cdot \vec{w}=(\vec{w} \times \vec{u}) \cdot \vec{v}=(\vec{v} \times \vec{w}) \cdot \vec{u}$
2. $\vec{u} \times(\vec{v} \times \vec{w})=(\vec{u} \cdot \vec{w}) \vec{v}-(\vec{u} \cdot \vec{v}) \vec{w}$
3. $(\vec{u} \times \vec{v}) \cdot(\vec{w} \times \vec{x})=(\vec{u} \cdot \vec{w})(\vec{v} \cdot \vec{x})-(\vec{u} \cdot \vec{x})(\vec{v} \cdot \vec{w})$

Identities involving the gradient:

4. $\vec{\nabla}(\phi+\psi)=\vec{\nabla} \phi+\vec{\nabla} \psi$
5. $\vec{\nabla}(\phi \psi)=\psi \vec{\nabla} \phi+\phi \vec{\nabla} \psi$
6. $\vec{\nabla}(\vec{u} \cdot \vec{v})=[\vec{v} \cdot \vec{\nabla}] \vec{u}+\vec{v} \times(\vec{\nabla} \times \vec{u})+[\vec{u} \cdot \vec{\nabla}] \vec{v}+\vec{u} \times(\vec{\nabla} \times \vec{v})$

Identities involving the divergence:

7. $\vec{\nabla} \cdot(\vec{u}+\vec{v})=\vec{\nabla} \cdot \vec{u}+\vec{\nabla} \cdot \vec{v}$
8. $\vec{\nabla} \cdot(\phi \vec{u})=\vec{u} \cdot \vec{\nabla} \phi+\phi \vec{\nabla} \cdot \vec{u}$
9. $\vec{\nabla} \cdot(\vec{u} \times \vec{v})=\vec{v} \cdot(\vec{\nabla} \times \vec{u})-\vec{u} \cdot(\vec{\nabla} \times \vec{v})$

Identities involving the curl:

10. $\vec{\nabla} \times(\vec{u}+\vec{v})=\vec{\nabla} \times \vec{u}+\vec{\nabla} \times \vec{v}$
11. $\vec{\nabla} \times(\phi \vec{u})=\vec{\nabla} \phi \times \vec{u}+\phi \vec{\nabla} \times \vec{u}$
12. $\vec{\nabla} \times(\vec{u} \times \vec{v})=[\vec{v} \cdot \vec{\nabla}] \vec{u}-\vec{v}(\vec{\nabla} \cdot \vec{u})-[\vec{u} \cdot \vec{\nabla}] \vec{v}+\vec{u}(\vec{\nabla} \cdot \vec{v})$
13. $\vec{\nabla} \times(\vec{\nabla} \times \vec{u})=\vec{\nabla}(\vec{\nabla} \cdot \vec{u})-\nabla^{2} \vec{u}$
14. $\vec{\nabla} \cdot(\vec{\nabla} \times \vec{u})=0$
15. $\vec{\nabla} \times(\vec{\nabla} \phi)=0$

Identities involving the Laplacian:

16. $\nabla^{2}(\phi \psi)=\psi \nabla^{2} \phi+\phi \nabla^{2} \psi+2 \vec{\nabla} \phi \cdot \vec{\nabla} \psi$
17. $\nabla^{2}(\phi \vec{u})=\vec{u} \nabla^{2} \phi+\phi \nabla^{2} \vec{u}+2(\vec{\nabla} \phi) \cdot \vec{\nabla} \vec{u}$

Identities involving the advective derivative:

18. $[\vec{u} \cdot \vec{\nabla}](\phi \vec{v})=(\vec{u} \cdot \vec{\nabla} \phi) \vec{v}+\phi([\vec{u} \cdot \vec{\nabla}] \vec{v})$
19. $[\vec{u} \cdot \vec{\nabla}](\vec{v} \cdot \vec{w})=([\vec{u} \cdot \vec{\nabla}] \vec{v}) \cdot \vec{w}+\vec{v} \cdot([\vec{u} \cdot \vec{\nabla}] \vec{w})$
20. $[\vec{u} \cdot \vec{\nabla}](\vec{v} \times \vec{w})=([\vec{u} \cdot \vec{\nabla}] \vec{v}) \times \vec{w}+\vec{v} \times([\vec{u} \cdot \vec{\nabla}] \vec{w})$
21. $[\vec{u} \cdot \vec{\nabla}] \vec{u} \equiv(\vec{\nabla} \times \vec{u}) \times \vec{u}+\frac{1}{2} \vec{\nabla}(\vec{u} \cdot \vec{u})$
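The purely algebraic identities at the top of the list involve no derivatives and so can be verified numerically with arbitrary vectors. This sketch (Python/NumPy; not part of the original text, using random vectors) checks the cyclic triple product, the "BAC-CAB" expansion, and Lagrange's identity:

```python
import numpy as np

rng = np.random.default_rng(3)
u, v, w, x = rng.normal(size=(4, 3))

# Cyclic invariance of the triple product
assert np.isclose(np.dot(np.cross(u, v), w),
                  np.dot(np.cross(w, u), v))

# "BAC-CAB" rule for the vector triple product
assert np.allclose(np.cross(u, np.cross(v, w)),
                   np.dot(u, w) * v - np.dot(u, v) * w)

# Lagrange's identity for the dot product of two cross products
assert np.isclose(np.dot(np.cross(u, v), np.cross(w, x)),
                  np.dot(u, w) * np.dot(v, x) - np.dot(u, x) * np.dot(v, w))
```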
17: Appendix F- The Cauchy Stress Tensor
In section 6.3.2 we defined the Cauchy array, whose elements are the components of the stress vector $\vec{f}$ acting on each of the three coordinate planes:
$\tau_{i j}=f_{j}^{(i)}. \nonumber$
In this appendix we will demonstrate three additional properties of this array:
• The stress vector acting on any plane is given by $f_j=\tau_{ij}n_i$, where $\hat{n}$ is the unit normal to the plane in question.
• The array $\underset{\sim}{\tau}$ transforms as a 2nd-order tensor.
• $\underset{\sim}{\tau}$ is symmetric.
We will do this by applying Newton’s second law to carefully chosen fluid parcels and imagining the result as we take the size of the parcel to zero.
17.01: F.1 Local Equilibrium
Recall the famous story of Galileo dropping cannon balls from a tower to demonstrate that balls of different sizes fall at the same rate. This showed that gravity pulls more strongly on a big ball than on a small one. Suppose this was not so, and the force acting on both balls was in fact the same. In that case, by Newton’s second law \(a = F/m\), the smaller ball would accelerate faster. If it were light enough, it could zip to the ground like a bullet! (Conversely, a sufficiently massive ball would seem to hang in mid-air, a prisoner of its own inertia.)
In our universe that is not how it works: as the size of an object goes to zero, the force acting on it must go to zero also, so that the acceleration remains finite. This is true not only of gravity but of any force. It is also true of rotational motion: as an object’s size shrinks to zero, the net torque acting on its surface must also vanish, or else it would spin infinitely fast.
These observations support the principle of local equilibrium, which goes a long way towards specifying how force is transmitted within a fluid. Picture an imaginary closed surface within the fluid, with intermolecular forces and torques acting on it, and now imagine that the closed surface shrinks to infinitesimal size. Both the net force and the net torque acting on the surface must go to zero. In the next two sections we will see how these requirements lead to the three statements about the stress tensor listed at the beginning of this appendix.
Consider a plane of arbitrary orientation as shown in Figure $1$. The intersection of the tilted plane with the coordinate planes forms a tetrahedron. The tilted surface, outlined in red, is called the hypotenuse of the tetrahedron. We will first find an expression for the force acting on each face of the tetrahedron. We will then argue that this total force must vanish as the tetrahedron shrinks to infinitesimal size. Therefore, the force on the hypotenuse must balance the sum of the forces on the other surfaces.
Each of the coordinate faces has outward unit normal opposite to the corresponding basis vector. For example, the face labelled $A^{(1)}$ has outward unit normal $−\hat{e}^{(1)}$. Therefore, the stresses acting on those faces are minus the corresponding elements of the Cauchy array. The net force is the sum of the stress vectors multiplied by the areas of the faces they act upon:
$\vec{F}=\vec{f} A-\tau_{i j} \hat{e}^{(j)} A^{(i)}.\label{eqn:1}$
We now assume that the tetrahedon is filled with a continuous medium of density $\rho$. By Newton’s second law, the sum of forces equals the product of the mass of the tetrahedron, $m$, and resultant acceleration $\vec{a}$. Taking the $j$ component, we have
$f_{j} A-\tau_{i j} A^{(i)}=m a_{j}.\label{eqn:2}$
To proceed, we need two facts from three-dimensional geometry that may be unfamiliar:
1. The volume of a tetrahedron is $Ah/3$, where $A$ is the area of the hypotenuse and $h$ is the perpendicular distance from the hypotenuse to the origin (Figure $2$a). This is analogous to the formula for the area of a triangle: one half the base times the height. The mass is therefore $m=\rho A h / 3.\label{eqn:3}$
2. The area of each of the orthogonal coordinate planes is proportional to the area of the hypotenuse: $A^{(i)}=A \cos \theta^{(i)}. \nonumber$ Here, $\theta^{(i)}$ is the angle between $\hat{e}^{(i)}$ and $\hat{n}$, $\hat{n}$ being the outward unit normal to the tilted surface. (Figure $2$ shows the example $i$ = 3.) This may be understood by recognizing that $\theta^{(i)}$ is the angle between the two planes, and the projection of the hypotenuse onto the coordinate plane is $A$ times the cosine of that angle. The cosine can also be expressed as the component of $\hat{n}$ perpendicular to the coordinate plane. To see this, note that the dot product of $\hat{n}$ and $\hat{e}^{(i)}$ is the product of their magnitudes (both equal to one) times $\cos\theta^{(i)}$. Moreover, that dot product is also equal to the $i$ component of $\hat{n}$, so $\cos\theta^{(i)}$ = $n_i$. As a result, we have $A^{(i)}=A n_{i}.\label{eqn:4}$
Substituting Equation $\ref{eqn:3}$ and Equation $\ref{eqn:4}$ into Equation $\ref{eqn:2}$, we obtain
$f_{j} A-\tau_{i j} A n_{i}=\rho A \frac{h}{3} a_{j}, \nonumber$
or
$f_{j}-\tau_{i j} n_{i}=\rho \frac{h}{3} a_{j}. \nonumber$
We now shrink the size of the tetrahedron to zero by taking the limit $h\rightarrow 0$, resulting in
$f_{j}=\tau_{i j} n_{i}.\label{eqn:5}$
From the foregoing, we can conclude that $\underset{\sim}{\tau}$ is a tensor. This can be seen in two ways.
1. In deriving Equation $\ref{eqn:5}$, we did not assume a specific coordinate system; any Cartesian coordinate system would do. Referring to Figure $2$b, if we rotated the coordinate axes (keeping the tilted plane fixed), the areas $A$ and $A^{(i)}$ would change, as would the values of the $\tau_{i j}$, but we would still arrive at Equation $\ref{eqn:5}$.
2. Both $\vec{f}$ and $\vec{n}$ are vectors, since they exist in physical space independently of the coordinate system. Therefore $\underset{\sim}{\tau}$, as a relationship between two vectors, must transform as a second order tensor (section 3.3.1).
We therefore refer to $\underset{\sim}{\tau}$ as the Cauchy stress tensor, or just the stress tensor. While the stress vector $\vec{f}(\vec{x},t,\hat{n})$ depends on the orientation of the plane it acts on, the stress tensor depends only on location and time: $\underset{\sim}{\tau} = \underset{\sim}{\tau}(\vec{x},t)$. It contains the information needed to evaluate the stress vector acting on any plane passing through $\vec{x}$.
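The relation $f_j = \tau_{ij}n_i$ is a simple matrix-vector contraction. The following sketch (Python/NumPy; not part of the original text, with a made-up symmetric stress tensor for illustration) computes the stress vector on a tilted plane and splits it into normal and tangential parts:

```python
import numpy as np

# A symmetric stress tensor at some point (values are illustrative only).
tau = np.array([[-2.0, 0.5, 0.0],
                [ 0.5, -2.0, 0.3],
                [ 0.0, 0.3, -1.5]])

# Unit normal to a tilted plane
n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)

# f_j = tau_ij n_i  (contraction over the first index)
f = n @ tau

# On a coordinate plane the formula reduces to a row of tau:
e2 = np.array([0.0, 1.0, 0.0])
assert np.allclose(e2 @ tau, tau[1])

# Split the traction into normal and tangential parts:
f_normal = np.dot(f, n) * n
f_tangential = f - f_normal
assert np.isclose(np.dot(f_tangential, n), 0.0)
```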
The symmetry of the stress tensor will be demonstrated in two ways. The first is fairly intuitive. We argue that stress components located above and below the main diagonal represent torques that are equal but opposite. If the tensor is symmetric, then, those torques add up to zero. This simple argument is deficient in that (1) it does not show why the torques should add to zero, and (2) it neglects the spatial variability of the stress tensor. We will address these issues in the more rigorous version that follows.
F.3.1 The hand-waving argument
Consider a cube with edge length $\Delta$, as shown in Figure $1$a, and the distribution of forces that act to rotate the cube counterclockwise about $\hat{e}^{(1)}$ (blue arrow). Assume that the stress tensor is uniform in space.
Consider the point labelled “A”, which is located above the $x_2$-axis at $x_2$ = $\Delta/2$. The force (per unit area) $\vec{f}$ acting on the right hand face at this point can be resolved into two components vectors $\tau_{23}\hat{e}^{(3)}$ and $\tau_{22}\hat{e}^{(2)}$ as shown by the dashed arrows. As drawn here, $\tau_{23}$ > 0 and $\tau_{22}$ < 0, and each component exerts a counterclockwise torque about $\hat{e}^{(1)}$.
Now, consider the force acting at point B, which is located the same distance below the $x_2$-axis. Because the stress tensor is uniform, the force is the same, but now the normal component $\tau_{22}\hat{e}^{(2)}$ exerts a clockwise torque, equal but opposite to that exerted by the same component at point A. The torque exerted by the tangential component $\tau_{23}\hat{e}^{(3)}$ is unchanged. You can now imagine that, if we integrate over the right-hand face, the net torque exerted by the normal force component $\tau_{22}\hat{e}^{(2)}$ will vanish by symmetry, i.e., the net torque is due entirely to the tangential force component.
Now examine Figure $1$b. On the left-hand face, the applied force is reversed because the unit normal is $-\hat{e}^{(2)}$. The tangential component is directed oppositely to that on the right-hand face, but the torque it exerts is the same.
Next we consider the upper face. The force acting there is $\tau_{3j}\hat{e}^{(j)}$, and its tangential component is $\tau_{32}\hat{e}^{(2)}$. As drawn here, $\tau_{32}$ > 0, and the torque is clockwise. On the bottom face, the force is again opposite but the torque is the same. We now conclude that the net torque about $\hat{e}^{(1)}$ is proportional to the difference between $\tau_{23}$ and $\tau_{32}$. If $\tau_{23}$ = $\tau_{32}$, the net torque about $\hat{e}^{(1)}$ is zero.
Repeating this analysis for rotations about the other two axes, we find that the condition for equilibrium is $\tau_{ij}$ = $\tau_{ji}$.
F.3.2 The quantitative argument
As before, consider a cube with edge length $\Delta$, as shown in Figure $2$a, and the distribution of stresses that act to rotate the cube about $\hat{e}^{(1)}$. Our plan is to compute the torque on each face and add the results.
On the right-hand face, the unit normal is $\hat{e}^{(2)}$. The force component $\tau_{23}$ acts in the “3” direction. The torque per unit area at any point is the moment arm $\vec{r}$ (Figure $2$b), the vector extending perpendicularly from the “1” axis to the point where the force acts, crossed with the stress vector. The magnitude of that cross product is just $|\tau_{23}|$ times $|\vec{r}| \cos \theta$, where $\theta$ is the angle between $\vec{r}$ and the horizontal, and this in turn is equal to $|\tau_{23}|\Delta/2$. If $\tau_{23}$ is positive as shown, then the torque is positive (i.e., counterclockwise).
Now expand $\tau_{23}$ in a first-order Taylor series about the origin:
$\tau_{23}\left(x, \frac{\Delta}{2}, z\right)=\tau_{23}^{0}+\frac{\partial \tau_{23}^{0}}{\partial x} x+\frac{\partial \tau_{23}^{0}}{\partial y} \frac{\Delta}{2}+\frac{\partial \tau_{23}^{0}}{\partial z} z+O\left(\Delta^{2}\right).\label{eqn:1}$
The superscript “0” denotes the value of $\tau_{23}$ or one of its derivatives evaluated at the origin. The y coordinate has been set to the uniform value $\Delta/2$ corresponding to the right-hand face. Integrating $\tau_{23}\Delta/2$ over the right-hand face, we find
\begin{aligned} T_{1}^{[r i g h t]} &=\int_{-\Delta / 2}^{\Delta / 2} d x \int_{-\Delta / 2}^{\Delta / 2} d z \frac{\Delta}{2} \tau_{23}\left(x, \frac{\Delta}{2}, z\right) \ &=\Delta^{2} \frac{\Delta}{2}\left(\tau_{23}^{0}+\frac{\partial \tau_{23}^{0}}{\partial y} \frac{\Delta}{2}\right). \end{aligned} \nonumber
Note that the terms in Equation $\ref{eqn:1}$ proportional to $x$ and $z$ have integrated to zero.
The torque on the left-hand face is calculated similarly. The stress vector acts oppositely (downward, if $\tau_{23}$ > 0), but the sign of $\cos\theta$ is reversed, and those two changes cancel. Ultimately, the only difference is that the stress is evaluated at $y$ = $-\Delta/2$ rather than $y$ = $\Delta/2$, so that
$T_{1}^{[l e f t]}=\Delta^{2} \frac{\Delta}{2}\left(\tau_{23}^{0}-\frac{\partial \tau_{23}^{0}}{\partial y} \frac{\Delta}{2}\right). \nonumber$
The net torque on the right and left faces is
$T_{1}^{[r i g h t]}+T_{1}^{[l e f t]}=\Delta^{3} \tau_{23}^{0}. \nonumber$
The torques on the top and bottom faces are calculated in the same manner, and give $-\Delta^3\tau^0_{32}$, so that the net torque about $\hat{e}^{(1)}$ is
$T_{1}=\Delta^{3}\left(\tau_{23}^{0}-\tau_{32}^{0}\right). \nonumber$
Now the rotational form of Newton’s second law states that this torque equals $I_{11}\alpha_1$, where $I_{11}$ is the moment of inertia for rotation about $\hat{e}^{(1)}$ and $\alpha_1$ is the corresponding angular acceleration. For this cube, $I_{11}$ = $\rho\Delta^5/6$ (appendix B), hence
$\Delta^{3}\left(\tau_{23}^{0}-\tau_{32}^{0}\right)=\rho \frac{\Delta^{5}}{6} \alpha_{1}, \nonumber$
or
$\tau_{23}^{0}-\tau_{32}^{0}=\rho \frac{\Delta^{2}}{6} \alpha_{1}. \nonumber$
We now take the limit as $\Delta\rightarrow 0$. The right-hand side goes to zero (provided $\alpha_1$ is finite) while the superscripts on the left-hand side become superfluous, leaving us with
$\tau_{23}-\tau_{32}=0. \nonumber$
The same calculation can be repeated for rotation about $\hat{e}^{(2)}$ and $\hat{e}^{(3)}$ with analogous results, so that
$\tau_{i j}-\tau_{j i}=0,\label{eqn:2}$
i.e., the stress tensor is symmetric at every point in space.
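The face-by-face torque calculation for a uniform stress tensor can be checked by direct numerical integration. This sketch (Python/NumPy; not part of the original text, using a random, deliberately non-symmetric stress tensor and midpoint quadrature, which is exact here because the integrand is linear) confirms that the net torque about $\hat{e}^{(1)}$ is $\Delta^3(\tau_{23}-\tau_{32})$:

```python
import numpy as np

rng = np.random.default_rng(4)
tau = rng.normal(size=(3, 3))   # uniform but NOT symmetric stress tensor
d = 2.0                          # cube edge length, cube centered on the origin

n = 8
s = (np.arange(n) + 0.5) * d / n - d / 2   # midpoint quadrature nodes
dA = (d / n) ** 2                           # area element

T1 = 0.0
for axis in range(3):                        # pairs of faces normal to e^(axis)
    others = [i for i in range(3) if i != axis]
    for sign in (1.0, -1.0):
        f = sign * tau[axis]                 # traction on this face (uniform stress)
        for a in s:
            for b in s:
                r = np.zeros(3)
                r[axis] = sign * d / 2       # position on the face
                r[others[0]], r[others[1]] = a, b
                T1 += np.cross(r, f)[0] * dA # e^(1) component of r x f

# Net torque about e^(1): vanishes if and only if tau_23 = tau_32.
assert np.isclose(T1, d**3 * (tau[1, 2] - tau[2, 1]))
```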
18: Appendix G- Boussinesq Approximation
The neglect of the inertial effect of inhomogeneity (section 7.4) permits a powerful simplification of the Navier-Stokes momentum equation.
Let us look again at Equation 6.3.37, this time in the form
$\rho \frac{D \vec{u}}{D t}=-\rho g \hat{e}^{(z)}-\vec{\nabla} p+\mu \nabla^{2} \vec{u}+\mu \vec{\nabla}(\vec{\nabla} \cdot \vec{u}).\label{eqn:1}$
Here we are working in gravity-aligned coordinates, so that the gravity vector $\vec{g} = -g\hat{e}^{(z)}$.1 The left-hand side is nonlinear in two respects. First, the material derivative contains a quadratic combination of unknown fields, $[\vec{u}\cdot \vec{\nabla}]\vec{u}$. In addition, the factor $\rho$ multiplying the material derivative adds another layer of nonlinearity. In geophysical fluids, density variations are often small enough that this additional nonlinearity can be removed.
To begin, we write the density $\rho$ as the sum of a uniform “background” value $\rho_0$ and a fluctuating part $\rho^\prime$:
$\rho=\rho_{0}+\rho^{\prime}.\label{eqn:2}$
In addition, we decompose the pressure as
$p=p_{0}+p^{*}, \nonumber$
where the “background” part is in hydrostatic balance with the background density:
$\vec{\nabla} p_{0}=-\rho_{0} g \hat{e}^{(z)}, \nonumber$
so that
$\vec{\nabla} p=-\rho_{0} g \hat{e}^{(z)}+\vec{\nabla} p^{*}.\label{eqn:3}$
With the substitution of Equations $\ref{eqn:2}$ and $\ref{eqn:3}$, Equation $\ref{eqn:1}$ becomes
$\left(\rho_{0}+\rho^{\prime}\right) \frac{D \vec{u}}{D t}=-\rho^{\prime} g \hat{e}^{(z)}-\vec{\nabla} p^{*}+\mu \nabla^{2} \vec{u}+\mu \vec{\nabla}(\vec{\nabla} \cdot \vec{u}).\label{eqn:4}$
Now assume that $|\rho^\prime| \ll \rho_0$, and neglect $\rho^\prime$ in favor of $\rho_0$ on the left-hand side. Dividing through by the constant $\rho_0$, we then have
$\frac{D \vec{u}}{D t}=b \hat{e}^{(z)}-\vec{\nabla} \frac{p^{*}}{\rho_{0}}+\nu \nabla^{2} \vec{u}+\nu \vec{\nabla}(\vec{\nabla} \cdot \vec{u}).\label{eqn:5}$
Here,
$b=-\frac{\rho^{\prime}}{\rho_{0}} g \nonumber$
is the buoyancy and
$\nu=\frac{\mu}{\rho_{0}} \nonumber$
is the kinematic viscosity. By replacing $\rho$ with the constant $\rho_0$ everywhere except in the buoyancy, we simplify the solution of the momentum equation considerably. Equation $\ref{eqn:5}$ is called the Boussinesq approximation of the momentum equation.
Note that we have yet to define the background density $\rho_0$. The only guidance we have for this choice is that the accuracy of the approximation depends on fluctuations about the background density being small. It therefore makes sense to define $\rho_0$ so as to minimize those fluctuations, e.g., by using the volume average.
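The decomposition and the choice of $\rho_0$ can be illustrated with a few lines of code. This sketch (Python/NumPy; not part of the original text, with a hypothetical density profile and a typical seawater viscosity) takes $\rho_0$ as the mean density, so the fluctuations $\rho^\prime$ average to zero, and computes the buoyancy and kinematic viscosity:

```python
import numpy as np

g = 9.81                                            # gravitational acceleration [m/s^2]
rho = np.array([1025.0, 1026.0, 1027.5, 1028.0])    # hypothetical density samples [kg/m^3]

rho0 = rho.mean()        # background density chosen to minimize fluctuations
rho_prime = rho - rho0   # fluctuating part; averages to zero by construction

b = -g * rho_prime / rho0   # buoyancy: negative where the fluid is denser than rho0

mu = 1.1e-3              # dynamic viscosity [Pa s], a typical seawater value
nu = mu / rho0           # kinematic viscosity

# The approximation requires |rho'| << rho0; here the ratio is ~0.2%.
assert np.abs(rho_prime).max() / rho0 < 0.01
```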
1This restriction is not necessary; it just simplifies the discussion.
18.02: G.2 Alternative derivation
Another way to derive Equation 18.1.7 is by rearranging Equation 18.1.6 as
$\rho_{0} \frac{D \vec{u}}{D t}=\rho^{\prime}\left(\vec{g}-\frac{D \vec{u}}{D t}\right)-\vec{\nabla} p^{*}+\mu \nabla^{2} \vec{u}+\mu \vec{\nabla}(\vec{\nabla} \cdot \vec{u}). \nonumber$
Now consider the first two terms in parentheses. If we assume that all accelerations are small compared with gravity, then the second term, $D\vec{u}/Dt$, can be discarded. We then divide through by $\rho_0$ to obtain the Boussinesq equation Equation 18.1.7 as before. Note that the smallness of accelerations compared with gravity is the same assumption that justifies neglecting the inertial terms in the baroclinic torque (section 7.4).
19: Appendix H- Bernoulli's Equation
Recall the momentum equation for a homogeneous, inviscid fluid, written in gravity-aligned coordinates:
$\frac{D \vec{u}}{D t}=-g \hat{e}^{(z)}-\vec{\nabla} \frac{p}{\rho_{0}}. \nonumber$
Using the vector identity
$[\vec{u} \cdot \vec{\nabla}] \vec{u} \equiv(\vec{\nabla} \times \vec{u}) \times \vec{u}+\frac{1}{2} \vec{\nabla}(\vec{u} \cdot \vec{u}), \nonumber$
we can rewrite this as
$\frac{\partial \vec{u}}{\partial t}+\vec{\omega} \times \vec{u}+\frac{1}{2} \vec{\nabla}(\vec{u} \cdot \vec{u})=-g \hat{e}^{(z)}-\vec{\nabla} \frac{p}{\rho_{0}}, \nonumber$
where $\vec{\omega} = \vec{\nabla} \times\vec{u}$ is the vorticity. Next, note that the vertical unit vector is the gradient of the vertical coordinate: $\hat{e}^{(z)} = \vec{\nabla} z$. We now substitute this and collect all of the terms that can be expressed as gradients:
$\frac{\partial \vec{u}}{\partial t}+\vec{\omega} \times \vec{u}=-\vec{\nabla}\left(\frac{1}{2}(\vec{u} \cdot \vec{u})+g z+\frac{p}{\rho_{0}}\right), \nonumber$
or
$\frac{\partial \vec{u}}{\partial t}+\vec{\omega} \times \vec{u}=-\vec{\nabla} B, \nonumber$
where
$B=\frac{1}{2}(\vec{u} \cdot \vec{u})+g z+\frac{p}{\rho_{0}} \nonumber$
is called the Bernoulli function¹.
Now assume that the flow is in steady state, i.e., $\partial\vec{u}/\partial t = 0$. The momentum balance then rearranges to
$\vec{\nabla} B=\vec{u} \times \vec{\omega}. \nonumber$
This tells us that the gradient of the Bernoulli function is perpendicular to both $\vec{u}$ and $\vec{\omega}$, and therefore that $B$ does not vary in the direction of either of those vectors. In other words, in steady flow of a homogeneous, inviscid fluid,
• B is uniform along a vortex filament, and
• a fluid particle maintains a constant value of $B$ (since $\vec{u}\cdot\vec{\nabla}B = DB/Dt = 0$) as it travels.
The second point, often called Bernoulli’s Law, famously explains how an airplane flies. Because the upper and lower surfaces of the wing are convex, flow past them is forced to speed up, so that the first term in $B$ increases. Variation in the second term is negligible, so the third term must decrease to maintain a constant value of $B$, i.e., the pressure must drop. Wings are designed with the upper surface more convex than the lower, so that the pressure drop is greater. The resulting pressure difference exerts a net upward force (“lift”) on the wing.
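The lift argument can be made concrete with the Bernoulli function. In this hedged sketch the air density and flow speeds are assumed round numbers, not measured values:

```python
rho0 = 1.2                       # assumed air density (kg/m^3)
u_lower, u_upper = 100.0, 110.0  # assumed flow speeds past the two wing surfaces (m/s)

# B = (u.u)/2 + g*z + p/rho0 is constant along the flow. Across a thin wing
# the g*z term barely changes, so the pressure difference is
dp = 0.5 * rho0 * (u_upper**2 - u_lower**2)  # p_lower - p_upper (Pa)
print(dp)  # about 1.3 kPa of lift per square meter of wing
```

A 10% speed excess over the upper surface already yields on the order of a kilopascal of pressure difference, which is why modest wing areas can support heavy aircraft.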
Let $\psi$ be a scalar and $\vec{u}$ be a vector expressed as a linear combination of the cylindrical basis vectors: $\vec{u} = u_r\hat{e}^{(r)} +u_\theta \hat{e}^{(\theta)} +u_z\hat{e}^{(z)}$.
Gradient of a scalar
$\vec{\nabla} \psi=\hat{e}^{(r)} \frac{\partial \psi}{\partial r}+\hat{e}^{(\theta)} \frac{1}{r} \frac{\partial \psi}{\partial \theta}+\hat{e}^{(z)} \frac{\partial \psi}{\partial z}\label{eqn:1}$
Divergence of a vector
$\vec{\nabla} \cdot \vec{u}=\frac{1}{r} \frac{\partial\left(r u_{r}\right)}{\partial r}+\frac{1}{r} \frac{\partial u_{\theta}}{\partial \theta}+\frac{\partial u_{z}}{\partial z}\label{eqn:2}$
Curl of a vector
$\vec{\nabla} \times \vec{u}=\hat{e}^{(r)}\left(\frac{1}{r} \frac{\partial u_{z}}{\partial \theta}-\frac{\partial u_{\theta}}{\partial z}\right)+\hat{e}^{(\theta)}\left(\frac{\partial u_{r}}{\partial z}-\frac{\partial u_{z}}{\partial r}\right)+\hat{e}^{(z)}\left(\frac{1}{r} \frac{\partial\left(r u_{\theta}\right)}{\partial r}-\frac{1}{r} \frac{\partial u_{r}}{\partial \theta}\right)\label{eqn:3}$
Laplacian of a scalar
$\nabla^{2} \psi=\frac{1}{r} \frac{\partial}{\partial r}\left(r \frac{\partial \psi}{\partial r}\right)+\frac{1}{r^{2}} \frac{\partial^{2} \psi}{\partial \theta^{2}}+\frac{\partial^{2} \psi}{\partial z^{2}}\label{eqn:4}$
Laplacian of a vector
$\nabla^{2} \vec{u}=\hat{e}^{(r)}\left(\nabla^{2} u_{r}-\frac{u_{r}}{r^{2}}-\frac{2}{r^{2}} \frac{\partial u_{\theta}}{\partial \theta}\right)+\hat{e}^{(\theta)}\left(\nabla^{2} u_{\theta}+\frac{2}{r^{2}} \frac{\partial u_{r}}{\partial \theta}-\frac{u_{\theta}}{r^{2}}\right)+\hat{e}^{(z)} \nabla^{2} u_{z}\label{eqn:5}$
Material derivative
$\frac{D}{D t}=\frac{\partial}{\partial t}+u_{r} \frac{\partial}{\partial r}+\frac{u_{\theta}}{r} \frac{\partial}{\partial \theta}+u_{z} \frac{\partial}{\partial z}\label{eqn:6}$
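The operator formulas above can be sanity-checked numerically. The sketch below verifies the cylindrical divergence formula with central finite differences for a hypothetical test field (the field and evaluation point are chosen for illustration only):

```python
import math

def div_cyl(ur, utheta, uz, r, th, z, h=1e-5):
    """Cylindrical divergence by central differences:
    (1/r) d(r*ur)/dr + (1/r) d(utheta)/dth + d(uz)/dz."""
    d_rur = ((r + h) * ur(r + h, th, z) - (r - h) * ur(r - h, th, z)) / (2 * h)
    d_uth = (utheta(r, th + h, z) - utheta(r, th - h, z)) / (2 * h)
    d_uz = (uz(r, th, z + h) - uz(r, th, z - h)) / (2 * h)
    return d_rur / r + d_uth / r + d_uz

# hypothetical test field: ur = r*cos(th), utheta = -r*sin(th), uz = z
ur = lambda r, th, z: r * math.cos(th)
ut = lambda r, th, z: -r * math.sin(th)
uz = lambda r, th, z: z

# by hand: (1/r) d(r^2 cos th)/dr = 2 cos th; (1/r) d(-r sin th)/dth = -cos th;
# d(z)/dz = 1, so the divergence is cos(th) + 1
print(div_cyl(ur, ut, uz, 1.3, 0.7, 0.2))  # close to cos(0.7) + 1
```

The same finite-difference pattern can be reused to spot-check the gradient and curl formulas.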
The equations of motion are the same as those listed in section 6.8, except that the Navier-Stokes momentum equation Equation 6.8.2 has added terms on the right-hand side, representing the centrifugal force. They arise mathematically because in curvilinear coordinates, the directions of the basis vectors vary in space. Differentiating the velocity to get acceleration therefore involves differentiating the basis vectors as well.
In cylindrical coordinates, the basis vectors $\hat{e}^{(r)}$ and $\hat{e}^{(\theta)}$ vary in space but $\hat{e}^{(z)}$ does not. We can therefore consider the simpler case of polar coordinates $\{r,\theta\}$. Suppose a fluid particle at $\vec{x}$ has velocity
$\vec{u}=u_{r} \hat{e}^{(r)}+u_{\theta} \hat{e}^{(\theta)}.\label{eqn:7}$
Over a short time interval $dt$, this velocity carries the particle to a new location $\vec{x}+d\vec{x}$. In Figure $2$a, the basis vectors at the initial and final locations are color coded red and blue, respectively. Now how does the velocity change? Differentiating Equation $\ref{eqn:7}$ gives
$\frac{d}{d t} \vec{u}=\frac{d u_{r}}{d t} \hat{e}^{(r)}+u_{r} \frac{d \hat{e}^{(r)}}{d t}+\frac{d u_{\theta}}{d t} \hat{e}^{(\theta)}+u_{\theta} \frac{d \hat{e}^{(\theta)}}{d t}.\label{eqn:8}$
Figure $2$b shows the increments of the basis vectors, $d\hat{e}^{(r)}$ and $d\hat{e}^{(\theta)}$. Because the basis vectors have length 1, the distance between their tips is approximated by the arc length $d\theta$ (accurate as $|d\theta| \rightarrow 0$). The direction of $d\hat{e}^{(r)}$ is parallel to that of $\hat{e}^{(\theta)}$, while the direction of $d\hat{e}^{(\theta)}$ is opposite to that of $\hat{e}^{(r)}$. Therefore:
$d \hat{e}^{(r)}=d \theta \hat{e}^{(\theta)} ; \quad d \hat{e}^{(\theta)}=-d \theta \hat{e}^{(r)} \nonumber$
Dividing by $dt$ and taking the limit $dt \rightarrow 0$, we obtain the time derivatives of the basis vectors. We now substitute these into Equation $\ref{eqn:8}$ and find that
$\frac{d}{d t} \vec{u}=\frac{d u_{r}}{d t} \hat{e}^{(r)}+u_{r} \frac{d \theta}{d t} \hat{e}^{(\theta)}+\frac{d u_{\theta}}{d t} \hat{e}^{(\theta)}-u_{\theta} \frac{d \theta}{d t} \hat{e}^{(r)}.\label{eqn:9}$
Note finally that $d\theta/dt = u_\theta /r$, giving
$\frac{d}{d t} \vec{u}=\frac{d u_{r}}{d t} \hat{e}^{(r)}+\frac{u_{r} u_{\theta}}{r} \hat{e}^{(\theta)}+\frac{d u_{\theta}}{d t} \hat{e}^{(\theta)}-\frac{u_{\theta}^{2}}{r} \hat{e}^{(r)}.\label{eqn:10}$
The second and fourth terms on the right-hand side are the centripetal acceleration. If we write the material derivative as Equation $\ref{eqn:6}$, we must subtract these extra terms from the right-hand side:
$\rho \frac{D \vec{u}}{D t}=\rho \vec{a}_{C}+\rho \vec{g}-\vec{\nabla} p+\mu \nabla^{2} \vec{u}+\mu \vec{\nabla}(\vec{\nabla} \cdot \vec{u}),\label{eqn:11}$
where
$\vec{a}_{C}=\frac{u_{\theta}^{2}}{r} \hat{e}^{(r)}-\frac{u_{r} u_{\theta}}{r} \hat{e}^{(\theta)}.\label{eqn:12}$
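The centrifugal-acceleration components above are easy to evaluate directly. As a hedged check, consider solid-body rotation $u_\theta = \Omega r$, $u_r = 0$, for which the radial component should reduce to the familiar $\Omega^2 r$ (the numbers below are assumed for illustration):

```python
def a_C(u_r, u_theta, r):
    """Centrifugal acceleration components (a_r, a_theta) in polar coordinates:
    a_r = u_theta^2 / r, a_theta = -u_r * u_theta / r."""
    return (u_theta**2 / r, -u_r * u_theta / r)

# solid-body rotation: assumed angular velocity Omega and radius r
Omega, r = 2.0, 0.5
ar, ath = a_C(0.0, Omega * r, r)
print(ar, ath)  # radial part equals Omega**2 * r; azimuthal part vanishes
```

With nonzero $u_r$ (e.g. a spiraling inflow) the azimuthal component $-u_r u_\theta / r$ switches on as well.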
20.02: I.2 Spherical coordinates
Let $\psi$ be a scalar and $\vec{u}$ be a vector expressed as a linear combination of the spherical basis vectors: $\vec{u} = u_r\hat{e}^{(r)} +u_\theta \hat{e}^{(\theta)} +u_\phi \hat{e}^{(\phi)}$.
Gradient of a scalar
$\vec{\nabla} \psi=\hat{e}^{(r)} \frac{\partial \psi}{\partial r}+\hat{e}^{(\theta)} \frac{1}{r} \frac{\partial \psi}{\partial \theta}+\hat{e}^{(\phi)} \frac{1}{r \sin \theta} \frac{\partial \psi}{\partial \phi}\label{eqn:1}$
Divergence of a vector
$\vec{\nabla} \cdot \vec{u}=\frac{1}{r^{2}} \frac{\partial\left(r^{2} u_{r}\right)}{\partial r}+\frac{1}{r \sin \theta} \frac{\partial\left(u_{\theta} \sin \theta\right)}{\partial \theta}+\frac{1}{r \sin \theta} \frac{\partial u_{\phi}}{\partial \phi}\label{eqn:2}$
Curl of a vector
$\vec{\nabla} \times \vec{u}=\hat{e}^{(r)} \frac{1}{r \sin \theta}\left[\frac{\partial\left(u_{\phi} \sin \theta\right)}{\partial \theta}-\frac{\partial u_{\theta}}{\partial \phi}\right]+\hat{e}^{(\theta)} \frac{1}{r}\left[\frac{1}{\sin \theta} \frac{\partial u_{r}}{\partial \phi}-\frac{\partial\left(r u_{\phi}\right)}{\partial r}\right]+\hat{e}^{(\phi)} \frac{1}{r}\left[\frac{\partial\left(r u_{\theta}\right)}{\partial r}-\frac{\partial u_{r}}{\partial \theta}\right]\label{eqn:3}$
Laplacian of a scalar
$\nabla^{2} \psi=\frac{1}{r^{2}} \frac{\partial}{\partial r}\left(r^{2} \frac{\partial \psi}{\partial r}\right)+\frac{1}{r^{2} \sin \theta} \frac{\partial}{\partial \theta}\left(\sin \theta \frac{\partial \psi}{\partial \theta}\right)+\frac{1}{r^{2} \sin ^{2} \theta} \frac{\partial^{2} \psi}{\partial \phi^{2}}\label{eqn:4}$
Laplacian of a vector
\begin{aligned} \nabla^{2} \vec{u} &=\hat{e}^{(r)}\left[\nabla^{2} u_{r}-\frac{2 u_{r}}{r^{2}}-\frac{2}{r^{2} \sin \theta} \frac{\partial\left(u_{\theta} \sin \theta\right)}{\partial \theta}-\frac{2}{r^{2} \sin \theta} \frac{\partial u_{\phi}}{\partial \phi}\right] \\ &+\hat{e}^{(\theta)}\left[\nabla^{2} u_{\theta}+\frac{2}{r^{2}} \frac{\partial u_{r}}{\partial \theta}-\frac{u_{\theta}}{r^{2} \sin ^{2} \theta}-\frac{2}{r^{2}} \frac{\cos \theta}{\sin ^{2} \theta} \frac{\partial u_{\phi}}{\partial \phi}\right] \\ &+\hat{e}^{(\phi)}\left[\nabla^{2} u_{\phi}+\frac{2}{r^{2} \sin ^{2} \theta} \frac{\partial u_{r}}{\partial \phi}+\frac{2}{r^{2}} \frac{\cos \theta}{\sin ^{2} \theta} \frac{\partial u_{\theta}}{\partial \phi}-\frac{u_{\phi}}{r^{2} \sin ^{2} \theta}\right] \end{aligned}\label{eqn:5}
Material derivative
$\frac{D}{D t}=\frac{\partial}{\partial t}+u_{r} \frac{\partial}{\partial r}+\frac{u_{\theta}}{r} \frac{\partial}{\partial \theta}+\frac{u_{\phi}}{r \sin \theta} \frac{\partial}{\partial \phi}\label{eqn:6}$
Centrifugal acceleration
$\vec{a}_{C}=\hat{e}^{(r)} \frac{u_{\theta}^{2}+u_{\phi}^{2}}{r}+\hat{e}^{(\theta)}\left(-\frac{u_{r} u_{\theta}}{r}+\frac{u_{\phi}^{2}}{r \tan \theta}\right)+\hat{e}^{(\phi)}\left(-\frac{u_{r} u_{\phi}}{r}-\frac{u_{\theta} u_{\phi}}{r \tan \theta}\right). \nonumber$
21: Appendix J- The Stokes Drift
In the limit of small amplitude waves, fluid parcels oscillate in place (section 8.2.5), i.e., there is no overall current associated with the wave. But at finite amplitude, waves drive a nonzero mean motion called the Stokes drift. Floating objects such as driftwood are carried ashore by the Stokes drift. The essential reason for the Stokes drift is that the amplitude of the particle ellipses (Figure 8.4) increases toward the surface. As a result, a particle moves slightly faster at the top of its ellipse than at the bottom.
Here we will estimate the speed of the Stokes drift using the small-amplitude theory developed in chapter 8. To begin with, expand the horizontal velocity about the particle's mean position $\{x_0,z_0\}$, where $\{x^\prime ,z^\prime \}$ is the small displacement from that position:
\begin{aligned} u(x, z, t) &=u^{0}+u^{\prime} \\ &=u^{0}+u_{x}^{0} x^{\prime}+u_{z}^{0} z^{\prime}, \end{aligned} \nonumber
where the superscript 0 denotes evaluation at $\{x_0,z_0\}$. Now substitute from Equations 8.2.37, 8.2.25, and 8.2.44:
$u^{\prime}=k \frac{\left(U^{0}\right)^{2}}{\omega} \sin ^{2}\left(k x_{0}-\omega t\right)+U_{z}^{0} \frac{W^{0}}{\omega} \cos ^{2}\left(k x_{0}-\omega t\right). \nonumber$
We now average this over one wave period, $2\pi/\omega$. The averages of the squared sine and cosine functions over a period are both equal to 1/2. Noting also that $U^0_z = kW^0$, we have
\begin{aligned} u^{S}=\frac{\omega}{2 \pi} \int_{0}^{2 \pi / \omega} u^{\prime} d t &=\frac{1}{2} \frac{k}{\omega}\left[\left(U^{0}\right)^{2}+\left(W^{0}\right)^{2}\right] \\ &=\omega k \eta_{0}^{2} \frac{\cosh ^{2} k\left(z_{0}+H\right)+\sinh ^{2} k\left(z_{0}+H\right)}{2 \sinh ^{2} k H}. \end{aligned} \nonumber
The reader may check that $w^\prime$, calculated in similar fashion, averages to zero.
• In the short wave (deep water) limit $kH \rightarrow \infty$ this becomes $u^{S}=c\left(k \eta_{0}\right)^{2} e^{2 k z_{0}}. \nonumber$ The Stokes drift is therefore proportional to the phase speed $c$ and the square of the wave steepness $k\eta_0$, and is a rapidly-decreasing function of depth.
• In the long wave (shallow water) limit $kH \rightarrow 0$, the Stokes drift is independent of depth: $u^{S}=\frac{c}{2} \frac{\eta_{0}^{2}}{H^{2}}. \nonumber$
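The finite-depth expression and its deep-water limit can be compared numerically. The wave parameters below are assumed for illustration, and the frequency is taken from the linear dispersion relation $\omega^2 = gk\tanh kH$:

```python
import math

def stokes_drift(k, H, eta0, z0, g=9.81):
    """Finite-depth Stokes drift at depth z0 (z0 <= 0, surface at z0 = 0)."""
    omega = math.sqrt(g * k * math.tanh(k * H))  # linear dispersion relation
    s = k * (z0 + H)
    return (omega * k * eta0**2 * (math.cosh(s)**2 + math.sinh(s)**2)
            / (2.0 * math.sinh(k * H)**2))

# assumed wave: amplitude 1 m, k = 0.1 1/m, depth 100 m (kH = 10, effectively deep)
k, H, eta0, z0 = 0.1, 100.0, 1.0, -2.0
omega = math.sqrt(9.81 * k * math.tanh(k * H))
c = omega / k
deep = c * (k * eta0)**2 * math.exp(2.0 * k * z0)  # deep-water limit
print(stokes_drift(k, H, eta0, z0), deep)          # nearly identical at kH = 10
```

Already at $kH = 10$ the full expression and the deep-water limit agree to many digits, confirming how quickly the limit is approached.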
1. Introduction
Hydrostatic forces are the resultant force caused by the pressure loading of a liquid acting on submerged surfaces. Calculation of the hydrostatic force and the location of the center of pressure are fundamental subjects in fluid mechanics. The center of pressure is a point on the immersed surface at which the resultant hydrostatic pressure force acts.
2. Practical Application
The location and magnitude of water pressure force acting on water-control structures, such as dams, levees, and gates, are very important to their structural design. Hydrostatic force and its line of action is also required for the design of many parts of hydraulic equipment.
3. Objective
The objectives of this experiment are twofold:
• To determine the hydrostatic force due to water acting on a partially or fully submerged surface;
• To determine, both experimentally and theoretically, the center of pressure.
4. Method
In this experiment, the hydrostatic force and center of pressure acting on a vertical surface will be determined by increasing the water depth in the apparatus water tank and by reaching an equilibrium condition between the moments acting on the balance arm of the test apparatus. The forces which create these moments are the weight applied to the balance arm and the hydrostatic force on the vertical surface.
5. Equipment
Equipment required to carry out this experiment is the following:
• Armfield F1-12 Hydrostatic Pressure Apparatus,
• A jug, and
• Calipers or rulers, for measuring the actual dimensions of the quadrant.
6. Equipment Description
The equipment is comprised of a rectangular transparent water tank, a fabricated quadrant, a balance arm, an adjustable counter-balance weight, and a water-level measuring device (Figure 1.1).
The water tank has a drain valve at one end and three adjustable screwed-in feet on its base for leveling the apparatus. The quadrant is mounted on a balance arm that pivots on knife edges. The knife edges coincide with the center of the arc of the quadrant; therefore, only the hydrostatic force acting on the vertical surface of the quadrant creates a moment about the pivot point. This moment can be counterbalanced by adding weight to the weight hanger, which is located at the left end of the balance arm, at a fixed distance from the pivot. Since the lines of action of the hydrostatic forces applied on the curved surfaces pass through the pivot point, those forces have no effect on the moment. The hydrostatic force and its line of action (center of pressure) can be determined for different water depths, with the quadrant’s vertical face either partially or fully submerged.
A level indicator attached to the side of the tank shows when the balance arm is horizontal. Water is admitted to the top of the tank by a flexible tube and may be drained through a cock in the side of the tank. The water level is indicated on a scale on the side of the quadrant [1].
Figure 1.1: Armfield F1-12 Hydrostatic Pressure Apparatus
7. Theory
In this experiment, when the quadrant is immersed by adding water to the tank, the hydrostatic force applied to the vertical surface of the quadrant can be determined by considering the following [1]:
• The hydrostatic force at any point on the curved surfaces is normal to the surface and resolves through the pivot point because it is located at the origin of the radii. Hydrostatic forces on the upper and lower curved surfaces, therefore, have no net effect – no torque to affect the equilibrium of the assembly because the forces pass through the pivot.
• The forces on the sides of the quadrant are horizontal and cancel each other out (equal and opposite).
• The hydrostatic force on the vertical submerged face is counteracted by the balance weight. The resultant hydrostatic force on the face can, therefore, be calculated from the value of the balance weight and the depth of the water.
• The system is in equilibrium if the moments generated about the pivot point by the hydrostatic force and the added weight ($mg$) are equal, i.e.:
$m g L=F y \quad (1)$
where:
m : mass on the weight hanger,
L : length of the balance arm (Figure 1.2)
F : Hydrostatic force, and
y : distance between the pivot and the center of pressure (Figure 1.2).
Then, calculated hydrostatic force and center of pressure on the vertical face of the quadrant can be compared with the experimental results.
7.1 Hydrostatic Force
The magnitude of the resultant hydrostatic force (F) applied to an immersed surface is given by:
$F=P_{c} A=\rho g y_{c} A \quad (2)$
where:
Pc : pressure at the centroid of the immersed surface,
A : area of the immersed surface,
yc : depth of the centroid of the immersed surface, measured from the water surface,
ρ : density of the fluid, and
g : acceleration due to gravity.
The hydrostatic force acting on the vertical face of the quadrant can be calculated as:
• Partially immersed vertical plane (Figure 1.2a):
$F=\frac{1}{2} \rho g B d^{2} \quad (3a)$
• Fully immersed vertical plane (Figure 1.2b):
$F=\rho g B D\left(d-\frac{D}{2}\right) \quad (3b)$
where:
B : width of the quadrant face,
d : depth of water from the base of the quadrant, and
D : height of the quadrant face.
7.2 Theoretical Determination of Center of Pressure
The center of pressure is calculated as:
$y_{p}=\frac{I_{x}}{A y_{c}} \quad (4)$
where $I_x$ is the 2nd moment of area of the immersed body about an axis in the free surface. By use of the parallel axes theorem:
$I_{x}=I_{c}+A y_{c}^{2} \quad (5)$
where $y_c$ is the depth of the centroid of the immersed surface, and $I_c$ is the 2nd moment of area of the immersed body about its centroidal axis. $I_c$ is calculated as:
• Partially immersed vertical plane:
$I_{c}=\frac{B d^{3}}{12} \quad (6a)$
• Fully immersed vertical plane:
$I_{c}=\frac{B D^{3}}{12} \quad (6b)$
The depth of the center of pressure below the pivot point is given by:
$y=y_{p}+H-d \quad (7)$
in which H is the vertical distance between the pivot and the base of the quadrant.
Substitution of Equations (6a and 6b) into (5), then (5) into (4), and the result into (7) yields the theoretical results, as follows:
• Partially immersed vertical plane (Figure 1.2a):
$y=H-\frac{d}{3} \quad (8a)$
• Fully immersed vertical rectangular plane (Figure 1.2b):
$y=H-\frac{D}{2}+\frac{D^{2}}{12\left(d-\frac{D}{2}\right)} \quad (8b)$
Figure 1.2a: Partially submerged quadrant (c: centroid, p: center of pressure)
Figure 1.2b: Fully submerged quadrant (c: centroid, p: center of pressure)
7.3 Experimental Determination of Center of Pressure
For equilibrium of the experimental apparatus, moments about the pivot are given by Equation (1). Solving for $y$ and substituting the hydrostatic force F from Equations (3a and 3b), we have:
• Partially immersed vertical plane (Figure 1.2a):
$y=\frac{m g L}{F}=\frac{2 m L}{\rho B d^{2}} \quad (9a)$
• Fully immersed vertical rectangular plane (Figure 1.2b):
$y=\frac{m g L}{F}=\frac{m L}{\rho B D\left(d-\frac{D}{2}\right)} \quad (9b)$
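The theoretical relations of this section are easy to automate for the data reduction. The sketch below computes the hydrostatic force and the theoretical depth of the center of pressure below the pivot for both regimes; the quadrant dimensions used in the example are assumed nominal values, not measurements:

```python
RHO, G = 1000.0, 9.81  # water density (kg/m^3) and gravity (m/s^2)

def hydrostatic(B, D, H, d):
    """Theoretical hydrostatic force F (N) and depth y (m) of the center of
    pressure below the pivot, for the standard quadrant geometry.

    B, D: width and height of the vertical quadrant face; H: pivot-to-base
    distance; d: water depth above the base of the quadrant."""
    if d < D:  # partially submerged face
        F = 0.5 * RHO * G * B * d**2
        y = H - d / 3.0
    else:      # fully submerged face
        F = RHO * G * B * D * (d - D / 2.0)
        y = H - D / 2.0 + D**2 / (12.0 * (d - D / 2.0))
    return F, y

# assumed nominal dimensions: B = 75 mm, D = 100 mm, H = 200 mm
F1, y1 = hydrostatic(0.075, 0.100, 0.200, 0.060)  # d = 60 mm, partial
F2, y2 = hydrostatic(0.075, 0.100, 0.200, 0.150)  # d = 150 mm, full
print(F1, y1)
print(F2, y2)
```

Comparing these theoretical values of $y$ against the experimental values obtained from the moment balance gives the discrepancy discussed in the report section.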
8. Experimental Procedure
A YouTube element has been excluded from this version of the text. You can view it online here: https://uta.pressbooks.pub/appliedfluidmechanics/?p=5
Begin the experiment by measuring the dimensions of the quadrant vertical endface (B and D) and the distances (H and L), and then perform the experiment by taking the following steps:
• Wipe the quadrant with a wet rag to reduce surface-tension effects and to prevent air bubbles from forming.
• Place the apparatus on a level surface, and adjust the screwed-in feet until the built-in circular spirit level indicates that the base is horizontal. (The bubble should appear in the center of the spirit level.)
• Position the balance arm on the knife edges and check that the arm swings freely.
• Place the weight hanger on the end of the balance arm and level the arm, using the counter weight, so that the balance arm is horizontal.
• Add 50 grams to the weight hanger.
• Add water to the tank and allow time for the water to settle.
• Close the drain valve at the end of the tank, then slowly add water until the hydrostatic force on the end surface of the quadrant is balanced. This can be judged by aligning the base of the balance arm with the top or bottom of the central marking on the balance rest.
• Record the water height, which is displayed on the side of the quadrant in mm. If the quadrant is partially submerged, record the reading in the partially submerged portion of the Raw Data Table.
• Repeat the steps, adding 50 g weight each time, until the final weight of 500 g is reached. When the quadrant is fully submerged, record the readings in the fully submerged part of the Raw Data Table.
• Repeat the procedure in reverse by progressively removing the weights.
• Release the water valve, remove the weights, and clean up any spilled water.
9. Results and Calculations
Please visit this link to access the Excel workbook for this experiment.
9.1 Result
Record the following dimensions:
• Height of quadrant endface, D (m) =
• Width of the submerged face, B (m) =
• Length of balance arm, L (m)=
• Distance from base of quadrant to pivot, H (m)=
All mass and water depth readings should be recorded in the Raw Data Table:
Raw Data Table
Test No.
Mass, m (kg)
Depth of Immersion, d (m)
Partially submerged 1
2
3
4
5
Fully Submerged 6
7
8
9
10
9.2 Calculations
Calculate the following for the partially and fully submerged quadrants, and record them in the Result Table:
• Hydrostatic force (F)
• Theoretical depth of center of pressure below the pivot (y)
• Experimental depth of center of pressure below the pivot (y)
Result Table
Test No. Mass m(kg) Depth of immersion d(m) Hydrostatic force F(N) Theoretical depth of center of pressure (m) Experimental depth of center of pressure (m)
1
2
3
4
5
6
7
8
9
10
10. Report
Use the template provided to prepare your lab report for this experiment. Your report should include the following:
• Table (s) of raw data
• Table (s) of results
• Plots of the following graphs:
• Hydrostatic force (y-axis) vs depth of immersion (x-axis),
• Theoretical depth of center of pressure (y-axis) vs depth of immersion (x-axis),
• Experimental depth of center of pressure (y-axis) vs depth of immersion (x-axis),
• Theoretical depth of center of pressure (y-axis) vs experimental depth of center of pressure (x-axis). Calculate and present value for this graph, and
• Mass (y-axis) vs depth of immersion (x-axis) on a log-log scale graph.
• Comment on the variations of hydrostatic force with depth of immersion.
• Comment on the relationship between the depth of the center of pressure and the depth of immersion.
• For both hydrostatic force and theoretical depth of center of pressure plotted vs depth of immersion, comment on what happens when the vertical endface of quadrant becomes fully submerged.
• Comment on and explain the discrepancies between the experimental and theoretical results for the center of pressure.
1. Introduction
In waterworks and wastewater systems, pumps are commonly installed at the source to raise the water level and at intermediate points to boost the water pressure. The components and design of a pumping station are vital to its effectiveness. Centrifugal pumps are most often used in water and wastewater systems, making it important to learn how they work and how to design them. Centrifugal pumps have several advantages over other types of pumps, including:
• Simplicity of construction – no valves, no piston rings, etc.;
• High efficiency;
• Ability to operate against a variable head;
• Suitable for being driven from high-speed prime movers such as turbines, electric motors, internal combustion engines etc.; and
• Continuous discharge.
A centrifugal pump consists of a rotating shaft that is connected to an impeller, which is usually comprised of curved blades. The impeller rotates within its casing and sucks the fluid through the eye of the casing (point 1 in Figure 10.1). The fluid’s kinetic energy increases due to the energy added by the impeller and enters the discharge end of the casing that has an expanding area (point 2 in Figure 10.1). The pressure within the fluid increases accordingly.
Figure 10.1: Schematic of a typical centrifugal pump
The performance of a centrifugal pump is presented as characteristic curves in Figure 10.2, and is comprised of the following:
• Pumping head versus discharge,
• Brake horsepower (input power) versus discharge, and
• Efficiency versus discharge.
Figure 10.2: Typical centrifugal pump performance curves at constant impeller rotation speed. The units for H and Q are arbitrary.
The characteristic curves of commercial pumps are provided by manufacturers. Otherwise, a pump should be tested in the laboratory, under various discharge and head conditions, to produce such curves. If a single pump is incapable of delivering the design flow rate and pressure, additional pumps, in series or parallel with the original pump, can be considered. The characteristic curves of pumps in series or parallel should be constructed since this information helps engineers select the types of pumps needed and how they should be configured.
2. Practical Application
Many pumps are in use around the world to handle liquids, gases, or liquid-solid mixtures. There are pumps in cars, swimming pools, boats, water treatment facilities, water wells, etc. Centrifugal pumps are commonly used in water, sewage, petroleum, and petrochemical pumping. It is important to select the pump that will best serve the project’s needs.
3. Objective
The objective of this experiment is to determine the operational characteristics of two centrifugal pumps when they are configured as a single pump, two pumps in series, and two pumps in parallel.
4. Method
Each configuration (single pump, two pumps in series, and two pumps in parallel) will be tested at pump speeds of 60, 70, and 80 rev/sec. For each speed, the bench regulating valve will be set to fully closed, 25%, 50%, 75%, and 100% open. Timed water collections will be performed to determine flow rates for each test, and the head, hydraulic power, and overall efficiency ratings will be obtained.
5. Equipment
The following equipment is required to perform the pumps experiment:
• P6100 hydraulics bench, and
• Stopwatch.
6. Equipment Description
The hydraulics bench is fitted with a single centrifugal pump that is driven by a single-phase A.C. motor and controlled by a speed control unit. An auxiliary pump and the speed control unit are supplied to enhance the output of the bench so that experiments can be conducted with the pumps connected either in series or in parallel. Pressure gauges are installed at the inlet and outlet of the pumps to measure the pressure head before and after each pump. A watt-meter unit is used to measure the pumps’ input electrical power [10].
7. Theory
7.1. General Pump Theory
Consider the pump shown in Figure 10.3. The work done by the pump, per unit mass of fluid, will result in increases in the pressure head, velocity head, and potential head of the fluid between points 1 and 2. Therefore:
• work done by the pump per unit mass $=W / M$
• increase in pressure head per unit mass $=\left(P_{2}-P_{1}\right) / \rho$
• increase in velocity head per unit mass $=\left(v_{2}^{2}-v_{1}^{2}\right) / 2$
• increase in potential head per unit mass $=g\left(z_{2}-z_{1}\right)$
in which:
W: work
M: mass
P: pressure
$\rho$ : density
v: flow velocity
g: acceleration due to gravity
z: elevation
Applying Bernoulli’s equation between points 1 and 2 in Figure 10.3 results in:
$\frac{W}{M}=\frac{P_{2}-P_{1}}{\rho}+\frac{v_{2}^{2}-v_{1}^{2}}{2}+g\left(z_{2}-z_{1}\right)$
Since the differences between the elevations and velocities at points 1 and 2 are negligible, the equation becomes:
$\frac{W}{M}=\frac{P_{2}-P_{1}}{\rho}$
Dividing both sides of this equation by $g$ gives:
$\frac{W}{M g}=\frac{P_{2}-P_{1}}{\rho g}$
The right side of this equation is the manometric pressure head, $H_m$, therefore:
$\frac{W}{M g}=H_{m}$
Figure 10.3: Schematic pump–pipe system
7.2. Power and Efficiency
The hydraulic power ($W_h$) supplied to the fluid by the pump is the product of the pressure increase and the flow rate:
$W_{h}=\left(P_{2}-P_{1}\right) Q$
The pressure increase produced by the pump can be expressed in terms of the manometric head, $H_m$:
$P_{2}-P_{1}=\rho g H_{m}$
Therefore:
$W_{h}=\rho g H_{m} Q$
The overall efficiency ($\eta$) of the pump-motor unit can be determined by dividing the hydraulic power ($W_h$) by the input electrical power ($W_i$), i.e.:
$\eta=\frac{W_{h}}{W_{i}} \times 100 \%$
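The head, power, and efficiency calculations for a test run can be sketched as below. The gauge readings used in the example are assumed, illustrative values, not data from this bench:

```python
RHO, G = 1000.0, 9.81  # water density (kg/m^3) and gravity (m/s^2)

def pump_performance(P1_bar, P2_bar, Q_lps, Wi):
    """Manometric head (m), hydraulic power (W), and overall efficiency (%).

    P1_bar, P2_bar: inlet/outlet gauge pressures in bar; Q_lps: flow in L/s;
    Wi: electrical input power in W."""
    dP = (P2_bar - P1_bar) * 1e5  # bar -> Pa
    Q = Q_lps / 1000.0            # L/s -> m^3/s
    Hm = dP / (RHO * G)           # manometric head
    Wh = dP * Q                   # hydraulic power, = rho*g*Hm*Q
    eta = 100.0 * Wh / Wi         # overall efficiency in percent
    return Hm, Wh, eta

# assumed readings: -0.1 bar suction, 0.9 bar delivery, 1.2 L/s, 240 W input
Hm, Wh, eta = pump_performance(-0.1, 0.9, 1.2, 240.0)
print(Hm, Wh, eta)
```

Repeating this for each valve position and pump speed produces the points for the characteristic curves of Figure 10.2.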
7.3. Single Pump – Pipe System Performance
While pumping fluid, the pump has to overcome the pressure loss that is caused by friction in any valves, pipes, and fittings in the pipe system. This frictional head loss is approximately proportional to the square of the flow rate. The total system head that the pump has to overcome is the sum of the total static head and the frictional head. The total static head is the sum of the static suction lift and the static discharge head, which is equal to the difference between the water levels of the discharge and the source tank (Figure 10.4). A plot of the total head-discharge for a pipe system is called a system curve; it is superimposed onto a pump characteristic curve in Figure 10.5. The operating point for the pump-pipe system combination occurs where the two graphs intercept [10].
Figure 10.4: Pump and pipe system showing static and total heads: lift pump (left), pump with flooded suction (right)
Figure 10.5: Pump-pipe system operating point
7.4. Pumps in Series
Pumps are used in series in a system where substantial head changes take place without any appreciable difference in discharge. When two or more pumps are configured in series, the flow rate through the pumps remains the same; however, each pump contributes to the increase in the head so that the overall head is equal to the sum of the contributions of each pump [10]. For $n$ pumps in series:
$H=H_{1}+H_{2}+\cdots+H_{n}$
The composite characteristic curve of pumps in series can be prepared by adding the ordinates (heads) of all of the pumps for the same values of discharge. The intersection point of the composite head characteristic curve and the system curve provides the operating conditions (performance point) of the pumps (Figure 10.6).
7.5. Pumps in Parallel
Parallel pumps are useful for systems with considerable discharge variations and with no appreciable head change. In parallel, each pump has the same head. However, each pump contributes to the discharge so that the total discharge is equal to the sum of the contributions of each pump [10]. Thus, for $n$ pumps in parallel:
$Q=Q_{1}+Q_{2}+\cdots+Q_{n}$
The composite head characteristic curve is obtained by summing up the discharge of all pumps for the same values of head. A typical pipe system curve and performance point of the pumps are shown in Figure 10.7.
Figure 10.6: Characteristics of two pumps in series
Figure 10.7: Characteristics of two pumps in parallel
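The series and parallel constructions, and the operating point where a composite curve meets the system curve, can be sketched numerically. The quadratic pump and system curves below are illustrative fits with assumed coefficients, not data from this apparatus:

```python
def pump_head(Q, H0=20.0, a=50.0):
    """Single-pump characteristic, modeled as H = H0 - a*Q^2 (illustrative fit)."""
    return H0 - a * Q**2

def series_head(Q):    # same Q through both identical pumps, heads add
    return 2.0 * pump_head(Q)

def parallel_head(Q):  # same head across both pumps, each carries Q/2
    return pump_head(Q / 2.0)

def operating_point(head_fn, H_static=5.0, r=30.0, lo=0.0, hi=2.0):
    """Bisect for the Q where the pump curve meets the system curve
    H_system = H_static + r*Q^2 (assumes one crossing in [lo, hi])."""
    f = lambda Q: head_fn(Q) - (H_static + r * Q**2)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

for name, fn in [("single", pump_head), ("series", series_head),
                 ("parallel", parallel_head)]:
    Q = operating_point(fn)
    print(name, Q, fn(Q))
```

As the text describes, both configurations shift the operating point to a higher discharge than a single pump, with series favoring head and parallel favoring discharge.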
8. Experimental Procedure
A YouTube element has been excluded from this version of the text. You can view it online here: https://uta.pressbooks.pub/appliedfluidmechanics/?p=243
8.1. Experiment 1: Characteristics of a Single Pump
a) Set up the hydraulics bench valves, as shown in Figure 10.8, to perform the single pump test.
b) Start pump 1, and increase the speed until the pump is operating at 60 rev/sec.
c) Turn the bench regulating valve to the fully closed position.
d) Record the pump 1 inlet pressure (P1) and outlet pressure (P2). Record the input power from the watt-meter (Wi). (With the regulating valve fully closed, discharge will be zero.)
e) Repeat steps (c) and (d) by setting the bench regulating valve to 25%, 50%, 75%, and 100% open.
f) For each control valve position, measure the flow rate by either collecting a suitable volume of water (a minimum of 10 liters) in the measuring tank, or by using the rotameter.
g) Increase the speed until the pump is operating at 70 rev/sec and 80 rev/sec, and repeat steps (c) to (f) for each speed.
Figure 10.8: Configuration of hydraulics bench valves for the single pump test.
8.2. Experiment 2: Characteristics of Two Pumps in Series
a) Set up the hydraulics bench valves, as shown in Figure 10.9, to perform the two pumps in series test.
b) Start pumps 1 and 2, and increase the speed until the pumps are operating at 60 rev/sec.
c) Turn the bench regulating valve to the fully closed position.
d) Record the pump 1 and 2 inlet pressure (P1) and outlet pressure (P2). Record the input power for pump 1 from the wattmeter (Wi). (With the regulating valve fully closed, discharge will be zero.)
e) Repeat steps (c) and (d) by setting the bench regulating valve to 25%, 50%, 75%, and 100% open.
f) For each control valve position, measure the flow rate by either collecting a suitable volume of water (a minimum of 10 liters) in the measuring tank, or by using the rotameter.
g) Increase the speed until the pumps are operating at 70 rev/sec and 80 rev/sec, and repeat steps (c) to (f) for each speed.
Note: Record the wattmeter reading for pump 1 and use it for both pumps, since both pumps are assumed to have the same input power.
Figure 10.9: Configuration of hydraulics bench valves for pumps in series test.
8.3. Experiment 3: Characteristics of Two Pumps in Parallel
a) Configure the hydraulic bench, as shown in Figure 10.10, to conduct the test for pumps in parallel.
b) Repeat steps (b) to (g) in Experiment 2.
Figure 10.10: Configuration of hydraulic bench valves for pumps in parallel
9. Results and Calculations
Please visit this link to access the Excel workbook for this manual.
9.1. Result
Record your measurements for Experiments 1 to 3 in the Raw Data Tables.
Raw Data Table
Single Pump: 60 rev/s
Valve Open Position 0% 25% 50% 75% 100%
Volume (L)
Time (s)
Pump 1 Inlet Pressure, P1 (bar)
Pump 1 Outlet Pressure, P2 (bar)
Pump 1 Electrical Input Power (Wi)
Single Pump: 70 rev/s
Valve Open Position 0% 25% 50% 75% 100%
Volume (L)
Time (s)
Pump 1 Inlet Pressure, P1 (bar)
Pump 1 Outlet Pressure, P2 (bar)
Pump 1 Electrical Input Power (Wi)
Single Pump: 80 rev/s
Valve Open Position 0% 25% 50% 75% 100%
Volume (L)
Time (s)
Pump 1 Inlet Pressure, P1 (bar)
Pump 1 Outlet Pressure, P2 (bar)
Pump 1 Electrical Input Power (Wi)
Two Pumps in Series: 60 rev/s
Valve Open Position 0% 25% 50% 75% 100%
Volume (L)
Time (s)
Pump 1 Inlet Pressure, P1 (bar)
Pump 1 Outlet Pressure, P2 (bar)
Pump 1 Electrical Input Power (Wi)
Pump 2 Inlet Pressure, P1 (bar)
Pump 2 Outlet Pressure, P2 (bar)
Pump 2 Electrical Input Power (Wi)
Two Pumps in Series: 70 rev/s
Valve Open Position 0% 25% 50% 75% 100%
Volume (L)
Time (s)
Pump 1 Inlet Pressure, P1 (bar)
Pump 1 Outlet Pressure, P2 (bar)
Pump 1 Electrical Input Power (Wi)
Pump 2 Inlet Pressure, P1 (bar)
Pump 2 Outlet Pressure, P2 (bar)
Pump 2 Electrical Input Power (Wi)
Two Pumps in Series: 80 rev/s
Valve Open Position 0% 25% 50% 75% 100%
Volume (L)
Time (s)
Pump 1 Inlet Pressure, P1 (bar)
Pump 1 Outlet Pressure, P2 (bar)
Pump 1 Electrical Input Power (Wi)
Pump 2 Inlet Pressure, P1 (bar)
Pump 2 Outlet Pressure, P2 (bar)
Pump 2 Electrical Input Power (Wi)
Two Pumps in Parallel: 60 rev/s
Valve Open Position 0% 25% 50% 75% 100%
Volume (L)
Time (s)
Pump 1 Inlet Pressure, P1 (bar)
Pump 1 Outlet Pressure, P2 (bar)
Pump 1 Electrical Input Power (Wi)
Pump 2 Inlet Pressure, P1 (bar)
Pump 2 Outlet Pressure, P2 (bar)
Pump 2 Electrical Input Power (Wi)
Two Pumps in Parallel: 70 rev/s
Valve Open Position 0% 25% 50% 75% 100%
Volume (L)
Time (s)
Pump 1 Inlet Pressure, P1 (bar)
Pump 1 Outlet Pressure, P2 (bar)
Pump 1 Electrical Input Power (Wi)
Pump 2 Inlet Pressure, P1 (bar)
Pump 2 Outlet Pressure, P2 (bar)
Pump 2 Electrical Input Power (Wi)
Two Pumps in Parallel: 80 rev/s
Valve Open Position 0% 25% 50% 75% 100%
Volume (L)
Time (s)
Pump 1 Inlet Pressure, P1 (bar)
Pump 1 Outlet Pressure, P2 (bar)
Pump 1 Electrical Input Power (Wi)
Pump 2 Inlet Pressure, P1 (bar)
Pump 2 Outlet Pressure, P2 (bar)
Pump 2 Electrical Input Power (Wi)
9.2. Calculations
• If the volumetric measuring tank was used, then calculate the flow rate from:

Q = V/t

where V is the collected volume and t is the collection time.
• Correct the pressure rise measurement (outlet pressure) across the pump by adding 0.07 bar to allow for the 0.714 m difference in height between the measurement point for the pump outlet pressure and the actual pump outlet connection.
• Convert the pressure readings from bar to N/m² (1 bar = 10⁵ N/m²), then calculate the manometric head from:

Hm = (P2 − P1)/ρg
• Calculate the hydraulic power (in watts) from Equation 6, where Q is in m³/s, ρ is in kg/m³, g is in m/s², and Hm is in meters.
• Calculate the overall efficiency from Equation 7.
Note:
– Overall head for pumps in series is calculated using Equation 8b.
– Overall head for pumps in parallel is calculated using Equation 9b.
– Overall electrical input power for pumps in series and in parallel combination is equal to (Wi)pump1+(Wi)pump2.
• Summarize your calculations in the Results Tables.
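As a worked illustration of the calculation chain above, the Python sketch below runs one valve position through flow rate, pressure correction, manometric head, hydraulic power, and overall efficiency. All readings are made-up sample values; the formulas used (Q = V/t, Hm = ΔP/ρg, Wh = ρgQHm, η = Wh/Wi) are the standard forms that the manual's numbered equations refer to.

```python
# Minimal sketch of the calculation chain for one valve position.
# All readings below are made-up sample values, not measured data.
RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

volume_l, time_s = 10.0, 30.0          # collected volume (L) and time (s)
p1_bar, p2_bar = -0.1, 1.2             # pump inlet / outlet pressure, bar
w_input = 180.0                        # electrical input power, W

q = (volume_l / 1000.0) / time_s       # flow rate, m^3/s
p2_corr = p2_bar + 0.07                # height correction from the manual
p1 = p1_bar * 1e5                      # bar -> N/m^2
p2 = p2_corr * 1e5
h_m = (p2 - p1) / (RHO * G)            # manometric head, m
w_hyd = RHO * G * q * h_m              # hydraulic power, W
eff = 100.0 * w_hyd / w_input          # overall efficiency, %

print(f"Q = {q:.5f} m^3/s, Hm = {h_m:.2f} m")
print(f"Wh = {w_hyd:.1f} W, efficiency = {eff:.1f} %")
```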
Result Tables
Single Pump: N (rev/s)
Valve Open Position 0% 25% 50% 75% 100%
Flow Rate, Q (L/min)
Flow Rate, Q (m3/s)
Pump 1 Inlet Pressure, P1 (N/m2)
Pump 1 Outlet Corrected Pressure, P2 (N/m2)
Pump 1 Electrical Input Power (Watts)
Pump 1 Head, Hm (m)
Pump 1 Hydraulic Power, Wh (Watts)
Pump 1 Overall Efficiency, η0 (%)
Two Pumps in Series: N (rev/s)
Valve Open Position 0% 25% 50% 75% 100%
Flow Rate, Q (L/min)
Flow Rate, Q (m3/s)
Pump 1 Inlet Pressure, P1 (N/m2)
Pump 1 Outlet Corrected Pressure, P2 (N/m2)
Pump 1 Electrical Input Power (Watts)
Pump 2 Inlet Pressure, P1 (N/m2)
Pump 2 Outlet Corrected Pressure, P2 (N/m2)
Pump 2 Electrical Input Power (Watts)
Pump 1 Head, Hm (m)
Pump 1 Hydraulic Power, Wh (Watts)
Pump 2 Head, Hm (m)
Pump 2 Hydraulic Power, Wh (Watts)
Overall Head, Hm (m)
Overall Hydraulic Power, Wh (Watts)
Overall Electrical Input Power, Wi (Watts)
Both Pumps Overall Efficiency, η0 (%)
Two Pumps in Parallel: N (rev/s)
Valve Open Position 0% 25% 50% 75% 100%
Flow Rate, Q (L/min)
Flow Rate, Q (m3/s)
Pump 1 Inlet Pressure, P1 (N/m2)
Pump 1 Outlet Corrected Pressure, P2 (N/m2)
Pump 1 Electrical Input Power (Watts)
Pump 2 Inlet Pressure, P1 (N/m2)
Pump 2 Outlet Corrected Pressure, P2 (N/m2)
Pump 2 Electrical Input Power (Watts)
Pump 1 Head, Hm (m)
Pump 1 Hydraulic Power, Wh (Watts)
Pump 2 Head, Hm (m)
Pump 2 Hydraulic Power, Wh (Watts)
Overall Head, Hm (m)
Overall Hydraulic Power, Wh (Watts)
Overall Electrical Input Power, Wi (Watts)
Both Pumps Overall Efficiency, η0 (%)
10. Report
Use the template provided to prepare your lab report for this experiment. Your report should include the following:
• Table(s) of raw data
• Table(s) of results
• Graph(s)
• Plot head (in meters) on the y-axis against volumetric flow rate (in liters/min) on the x-axis.
• Plot hydraulic power (in watts) on the y-axis against volumetric flow rate (in liters/min) on the x-axis.
• Plot efficiency (in %) on the y-axis against volumetric flow rate (in liters/min) on the x-axis.
In each of the above graphs, show the results for the single pump, two pumps in series, and two pumps in parallel – a total of three graphs. Do not connect the experimental data points; use best-fit curves to plot the graphs.
• Discuss your observations and any sources of error in preparation of pump characteristics.
Experiment #2: Bernoulli's Theorem Demonstration
1. Introduction
In a fluid with no energy exchange due to viscous dissipation, heat transfer, or shaft work (a pump or some other device), energy is present in three forms: pressure, velocity, and elevation. The relationship among these three forms of energy was first stated by Daniel Bernoulli (1700-1782), based upon the conservation of energy principle. Bernoulli's theorem, which pertains to a flow streamline, is based on three assumptions: steady flow, incompressible fluid, and no losses due to fluid friction. The validity of Bernoulli's equation will be examined in this experiment.
2. Practical Application
Bernoulli's theorem provides a mathematical means of understanding the mechanics of fluids. It has many real-world applications, including understanding the aerodynamics of airplanes; calculating wind loads on buildings; designing water supply and sewer networks; measuring flow using devices such as weirs, Parshall flumes, and venturi meters; and estimating seepage through soil. Although the expression of Bernoulli's theorem is simple, the principle it embodies plays a vital role in technological advancements designed to improve the quality of human life.
3. Objective
The objective of this experiment is to investigate the validity of the Bernoulli equation when it is applied to a steady flow of water through a tapered duct.
4. Method
In this experiment, the validity of Bernoulli’s equation will be verified with the use of a tapered duct (venturi system) connected with manometers to measure the pressure head and total head at known points along the flow.
5. Equipment
The following equipment is required to complete the demonstration of the Bernoulli equation experiment:
• F1-10 hydraulics bench,
• F1-15 Bernoulli’s apparatus test equipment, and
• A stopwatch for timing the flow measurement.
6. Equipment Description
The Bernoulli test apparatus consists of a tapered duct (venturi), a series of manometers tapped into the venturi to measure the pressure head, and a hypodermic probe that can be traversed along the center of the test section to measure the total head. The test section is a circular duct of varying diameter, with a 14° inclined angle on one side and a 21° inclined angle on the other side. A series of side-hole pressure tappings is provided to connect manometers to the test section (Figure 2.1).
Figure 2.1: Armfield F1-15 Bernoulli’s apparatus test equipment
Manometers allow the simultaneous measurement of the pressure heads at all of the six sections along the duct. The dimensions of the test section, the tapping positions, and the test section diameters are shown in Figure 2.2. The test section incorporates two unions, one at either end, to facilitate reversal for convergent or divergent testing. A probe is provided to measure the total pressure head along the test section by positioning it at any section of the duct. This probe may be moved after slackening the gland nut, which should be re-tightened by hand. To prevent damage, the probe should be fully inserted during transport/storage. The pressure tappings are connected to manometers that are mounted on a baseboard. The flow through the test section can be adjusted by the apparatus control valve or the bench control valve [2].
Figure 2.2: Test sections, manometer positions, and diameters of the duct along the test section
7. Theory
Bernoulli's theorem assumes that the flow is frictionless, steady, and incompressible. These assumptions are also based on the laws of conservation of mass and energy; thus, the input mass and energy for a given control volume are equal to the output mass and energy:

mass in = mass out and energy in = energy out

These two laws, together with the definitions of work and pressure, are the basis for Bernoulli's theorem, which can be expressed as follows for any two points located on the same streamline in the flow:

P1/ρg + v1²/2g + z1 = P2/ρg + v2²/2g + z2

where:
P: pressure,
ρ: fluid density,
g: acceleration due to gravity,
v: fluid velocity, and
z: vertical elevation of the fluid.
In this experiment, since the duct is horizontal, the difference in height can be disregarded, i.e., z1=z2
The hydrostatic pressure (P) along the flow is measured by manometers tapped into the duct. The pressure head (h) is thus calculated as:

h = P/ρg

Therefore, Bernoulli's equation for the test section can be written as:

h1 + v1²/2g = h2 + v2²/2g

in which v²/2g is called the velocity head (hd).
The total head (ht) may be measured by the traversing hypodermic probe. This probe is inserted into the duct with its end hole facing the flow, so that the flow becomes locally stagnant at this end; thus:

ht = h + v²/2g

The conservation of energy, or Bernoulli's equation, can then be expressed as:

ht1 = ht2
The flow velocity is measured by collecting a volume of the fluid (V) over a time period (t). The flow rate is calculated as:

Q = V/t

The velocity of flow at any section of the duct with a cross-sectional area A is determined as:

v = Q/A

For an incompressible fluid, conservation of mass through the test section should also be satisfied (Equation 1a), i.e.:

A1v1 = A2v2
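Putting the relations above together, the Python sketch below computes the velocity, velocity head, and calculated total head at each tapping. The flow areas are those listed in the result tables; the flow rate and pressure heads are made-up sample values.

```python
# Sketch: computing the Bernoulli terms at each tapping. The flow
# areas are those in the result tables (h1..h6); the flow rate and
# pressure heads are made-up sample values, not measured data.
G = 9.81
areas = [0.00049, 0.00015, 0.00011, 0.00009, 0.000079, 0.00049]  # m^2
q = 10.0e-3 / 60.0            # e.g. 10 L collected in 60 s -> m^3/s

pressure_heads = [0.25, 0.20, 0.17, 0.14, 0.10, 0.22]  # m, sample data

for i, (a, h) in enumerate(zip(areas, pressure_heads), start=1):
    v = q / a                  # continuity: v = Q/A
    hd = v**2 / (2 * G)        # velocity head
    ht = h + hd                # calculated total head
    print(f"h{i}: v = {v:.3f} m/s, "
          f"velocity head = {hd:.4f} m, total head = {ht:.4f} m")
```

The velocity head peaks at the throat (smallest area, here h5), which is also where the pressure head is lowest.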
8. Experimental Procedure
A YouTube element has been excluded from this version of the text. You can view it online here: https://uta.pressbooks.pub/appliedfluidmechanics/?p=50
• Place the apparatus on the hydraulics bench, and ensure that the outflow tube is positioned above the volumetric tank to facilitate timed volume collections.
• Level the apparatus base by adjusting its feet. (A spirit level is attached to the base for this purpose.) For accurate height measurement from the manometers, the apparatus must be horizontal.
• Install the test section with the 14° tapered section converging in the flow direction. If the test section needs to be reversed, the total head probe must be retracted before releasing the mounting couplings.
• Connect the apparatus inlet to the bench flow supply, close the bench valve and the apparatus flow control valve, and start the pump. Gradually open the bench valve to fill the test section with water.
• The following steps should be taken to purge air from the pressure tapping points and manometers:
• Close both the bench valve and the apparatus flow control valve.
• Remove the cap from the air valve, connect a small tube from the air valve to the volumetric tank, and open the air bleed screw.
• Open the bench valve and allow flow through the manometers to purge all air from them, then tighten the air bleed screw and partly open the bench valve and the apparatus flow control valve.
• Open the air bleed screw slightly to allow air to enter the top of the manometers (you may need to adjust both valves to achieve this), and re-tighten the screw when the manometer levels reach a convenient height. The maximum flow rate will be determined by having the maximum (h1) and minimum (h5) manometer readings both remain on the baseboard scale.
If needed, the manometer levels can be adjusted by using an air pump to pressurize them. This can be accomplished by attaching the hand pump tube to the air bleed valve, opening the screw, and pumping air into the manometers. Close the screw, after pumping, to retain the pressure in the system.
• Take readings of manometers h1 to h6 when the water level in the manometers is steady. The total pressure probe should be retracted from the test section during this reading.
• Measure the total head by traversing the total pressure probe along the test section from h1 to h6.
• Measure the flow rate by a timed volume collection. To do that, close the ball valve and use a stopwatch to measure the time it takes to accumulate a known volume of fluid in the tank, which is read from the sight glass. You should collect fluid for at least one minute to minimize timing errors. You may repeat the flow measurement twice to check for repeatability. Be sure that the total pressure probe is retracted from the test section during this measurement.
• Reduce the flow rate to give the head difference of about 50 mm between manometers 1 and 5 (h1-h5). This is the minimum flow experiment. Measure the pressure head, total head, and flow.
• Repeat the process for one more flow rate, with the (h1-h5) difference approximately halfway between those obtained for the minimum and maximum flows. This is the average flow experiment.
• Reverse the test section (with the 21° tapered section converging in the flow direction) in order to observe the effects of a more rapidly converging section. Ensure that the total pressure probe is fully withdrawn from the test section, but not pulled out of its guide in the downstream coupling. Unscrew the two couplings, remove the test section and reverse it, then re-assemble it by tightening the couplings.
• Perform three sets of flow, and conduct pressure and flow measurements as above.
9. Results and Calculations
Please visit this link to access the Excel workbook for this experiment.
9.1. Results
Enter the test results into the Raw Data Tables.
Raw Data Table
Position 1: Tapering 14° to 21°
Test Section Volume (Litre) Time (sec) Pressure Head (mm) Total Head (mm)
h1
h2
h3
h4
h5
h6
h1
h2
h3
h4
h5
h6
h1
h2
h3
h4
h5
h6
Raw Data Table
Position 2: Tapering 21° to 14°
Test Section Volume (Litre) Time (sec) Pressure Head (mm) Total Head (mm)
h1
h2
h3
h4
h5
h6
h1
h2
h3
h4
h5
h6
h1
h2
h3
h4
h5
h6
9.2 Calculations
For each set of measurements, calculate the flow rate, flow velocity, velocity head, and total head (pressure head + velocity head). Record your calculations in the Result Table.
Result Table
Position 1: Tapering 14° to 21°
Test No. Test Section Distance into duct (m) Flow Area (m²) Flow Rate (m³/s) Velocity (m/s) Pressure Head (m) Velocity Head (m) Calculated Total Head (m) Measured Total Head (m)
1 h1 0 0.00049
h2 0.06028 0.00015
h3 0.06868 0.00011
h4 0.07318 0.00009
h5 0.08108 0.000079
h6 0.14154 0.00049
2 h1 0 0.00049
h2 0.06028 0.00015
h3 0.06868 0.00011
h4 0.07318 0.00009
h5 0.08108 0.000079
h6 0.14154 0.00049
3 h1 0 0.00049
h2 0.06028 0.00015
h3 0.06868 0.00011
h4 0.07318 0.00009
h5 0.08108 0.000079
h6 0.14154 0.00049
Position 2: Tapering 21° to 14°
Test No. Test Section Distance into duct (m) Flow Area (m²) Flow Rate (m³/s) Velocity (m/s) Pressure Head (m) Velocity Head (m) Calculated Total Head (m) Measured Total Head (m)
1 h1 0 0.00049
h2 0.06028 0.00015
h3 0.06868 0.00011
h4 0.07318 0.00009
h5 0.08108 0.000079
h6 0.14154 0.00049
2 h1 0 0.00049
h2 0.06028 0.00015
h3 0.06868 0.00011
h4 0.07318 0.00009
h5 0.08108 0.000079
h6 0.14154 0.00049
3 h1 0 0.00049
h2 0.06028 0.00015
h3 0.06868 0.00011
h4 0.07318 0.00009
h5 0.08108 0.000079
h6 0.14154 0.00049
10. Report
Use the template provided to prepare your lab report for this experiment. Your report should include the following:
• Table(s) of raw data
• Table(s) of results
• For each test, plot the total head (calculated and measured), pressure head, and velocity head (y-axis) vs. distance into duct (x-axis) from manometer 1 to 6, a total of six graphs. Connect the data points to observe the trend in each graph. Note that the flow direction in duct Position 1 is from manometer 1 to 6; in Position 2, it is from manometer 6 to 1.
• Comment on the validity of Bernoulli’s equation when the flow converges and diverges along the duct.
• Comment on the comparison of the calculated and measured total heads in this experiment.
• Discuss your results, referring, in particular, to the following:
• energy loss and how it is shown by the results of this experiment, and
• the components of Bernoulli's equation and how they vary along the length of the test section. Indicate the points of maximum velocity and minimum pressure.
Experiment #3: Energy Loss in Pipe Fittings
1. Introduction
Two types of energy loss predominate in fluid flow through a pipe network: major losses and minor losses. Major losses are associated with frictional energy loss caused by the viscous effects of the medium and the roughness of the pipe wall. Minor losses, on the other hand, are due to pipe fittings, changes in the flow direction, and changes in the flow area. Due to the complexity of the piping system and the number of fittings that are used, the head loss coefficient (K) is empirically derived as a quick means of calculating the minor head losses.
2. Practical Application
The term “minor losses”, used in many textbooks for head loss across fittings, can be misleading since these losses can be a large fraction of the total loss in a pipe system. In fact, in a pipe system with many fittings and valves, the minor losses can be greater than the major (friction) losses. Thus, an accurate K value for all fittings and valves in a pipe system is necessary to predict the actual head loss across the pipe system. K values assist engineers in totaling all of the minor losses by multiplying the sum of the K values by the velocity head to quickly determine the total head loss due to all fittings. Knowing the K value for each fitting enables engineers to use the proper fitting when designing an efficient piping system that can minimize the head loss and maximize the flow rate.
3. Objective
The objective of this experiment is to determine the loss coefficient (K) for a range of pipe fittings, including several bends, a contraction, an enlargement, and a gate valve.
4. Method
The head loss coefficients are determined by measuring the pressure head differences across a number of fittings that are connected in series, over a range of steady flows, and applying the energy equation between the sections before and after each fitting.
5. Equipment
The following equipment is required to perform the energy loss in pipe fittings experiment:
• F1-10 hydraulics bench,
• F1-22 Energy losses in bends apparatus,
• Stopwatch for timing the flow measurement,
• Clamps for pressure tapping connection tubes,
• Spirit level, and
• Thermometer.
6. Equipment Description
The energy loss in fittings apparatus consists of a series of fittings, a flow control valve, twelve manometers, a differential pressure gauge, and an air-bleed valve (Figure 3.1).
The fittings listed below, connected in a series configuration, will be examined for their head loss coefficient (K):
• long bend,
• area enlargement,
• area contraction,
• elbow,
• short bend,
• gate valve, and
• mitre.
Figure 3.1: F1-22 Energy Losses in Pipe Fittings Apparatus
The manometers are tapped into the pipe system (one before and one after each fitting, except for the gate valve) to measure the pressure head difference caused by each fitting. The pressure difference for the valve is directly measured by the differential pressure gauge. The air-bleed valve facilitates purging the system and adjusting the water level in the manometers to a convenient level, by allowing air to enter them. Two clamps, which close off the tappings to the mitre, are introduced while experiments are being performed on the gate valve. The flow rate is controlled by the flow control valve [3].
The internal diameter of the pipe and all fittings, except for the enlargement and contraction, is 0.0183 m. The internal diameter of the pipe at the enlargement’s outlet and the contraction’s inlet is 0.0240 m.
7. Theory
Bernoulli's equation can be used to evaluate the energy loss in a pipe system:

P1/ρg + v1²/2g + z1 = P2/ρg + v2²/2g + z2 + hL

In this equation, P/ρg, v²/2g, and z are the pressure head, velocity head, and potential head, respectively. The total head loss, hL, includes both major and minor losses.
If the diameter through the pipe fitting is kept constant, then v1 = v2. Therefore, if the change in elevation head is neglected, the manometric head difference is the static head difference, which is equal to the minor loss through the fitting:

hL = h1 − h2

in which h1 and h2 are the manometer readings before and after the fitting.
The energy loss that occurs in a pipe fitting can also be expressed as a fraction (K) of the velocity head through the fitting:

hL = K v²/2g
where:
K: loss coefficient, and
v: mean flow velocity into the fitting.
Because of the complexity of the flow in many fittings, K is usually determined by experiment [3]. The head loss coefficient (K) is calculated as the ratio of the manometric head difference between the input and output of the fitting to the velocity head.
Due to the change in the pipe cross-sectional area in the enlargement and contraction fittings, the velocity difference cannot be neglected. Thus:

hL = (h1 − h2) + (v1² − v2²)/2g

Therefore, these types of fittings experience an additional change in static pressure, i.e.:

(v1² − v2²)/2g

This value will be negative for the contraction, since v1 < v2, and positive for the enlargement, because v1 > v2. From Equation (5), note that (h1 − h2) will be negative for the enlargement.
The pressure difference (ΔP) between the sections before and after the gate valve is measured directly using the pressure gauge. This can then be converted to an equivalent head loss by using the conversion ratio:
1 bar= 10.2 m water
The loss coefficient for the gate valve may then be calculated by using Equation (4).
To identify the flow regime through the fitting, the Reynolds number is calculated as:

Re = vD/ν

where v is the cross-sectional mean velocity, D is the pipe diameter, and ν is the fluid kinematic viscosity (Figure 3.2).
Figure 3.2: Kinematic Viscosity of Water (ν) at Atmospheric Pressure
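A short Python sketch of the K and Reynolds-number calculations for one constant-diameter fitting, using the 0.0183 m pipe diameter given in Section 6. The manometer readings, flow rate, and kinematic viscosity below are made-up sample values.

```python
# Sketch: loss coefficient K and Reynolds number for one fitting of
# constant diameter. Readings and viscosity are made-up sample values.
import math

G = 9.81
D = 0.0183                       # pipe internal diameter, m (from Section 6)
NU = 1.0e-6                      # kinematic viscosity of water ~20 C, m^2/s

q = 14.0 / 1000.0 / 60.0         # e.g. 14 L/min -> m^3/s
area = math.pi * D**2 / 4.0
v = q / area                     # mean flow velocity into the fitting
velocity_head = v**2 / (2 * G)

h1, h2 = 0.310, 0.255            # manometer readings, m (sample)
k = (h1 - h2) / velocity_head    # K = (h1 - h2) / (v^2 / 2g)

re = v * D / NU                  # Reynolds number, Re = vD/nu
print(f"v = {v:.3f} m/s, K = {k:.2f}, Re = {re:.0f}")
```

With these sample numbers the Reynolds number is well above the laminar range, consistent with the turbulent flows expected at the bench flow rates.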
8. Experimental Procedure
A YouTube element has been excluded from this version of the text. You can view it online here: https://uta.pressbooks.pub/appliedfluidmechanics/?p=127
It is not possible to measure head due to all of the fittings simultaneously; therefore, it is necessary to run two separate experiments.
Part A:
In this part, head losses caused by fittings, except for the gate valve, will be measured; therefore, this valve should be kept fully open throughout Part A. The following steps should be followed for this part:
• Set up the apparatus on the hydraulics bench and ensure that its base is horizontal.
• Connect the apparatus inlet to the bench flow supply, run the outlet extension tube to the volumetric tank, and secure it in place.
• Open the bench valve, the gate valve, and the flow control valve, and start the pump to fill the pipe system and manometers with water. Ensure that the air-bleed valve is closed.
• To purge air from the pipe system and manometers, connect a bore tubing from the air valve to the volumetric tank, remove the cap from the air valve, and open the air-bleed screw to allow flow through the manometers. Tighten the air-bleed screw when no air bubbles are observed in the manometers.
• Set the flow rate at approximately 17 liters/minute. This can be achieved by several trials of timed volumetric flow measurements. For flow measurement, close the ball valve, and use a stopwatch to measure the time that it takes to accumulate a known volume of fluid in the tank, which is read from the hydraulics bench sight glass. Collect water for at least one minute to minimize errors in the flow measurement.
• Open the air-bleed screw slightly to allow air to enter the top of the manometers; re-tighten the screw when the manometer levels reach a convenient height. All of the manometer levels should be on scale at the maximum flow rate. These levels can be adjusted further by using the air-bleed screw and the hand pump. The air-bleed screw controls the air flow through the air valve, so when using the hand pump, the bleed screw must be open. To retain the hand pump pressure in the system, the screw must be closed after pumping [3].
• Take height readings from all manometers after the levels are steady.
• Repeat this procedure to give a total of at least five sets of measurements over a flow range of 8 – 17 liters per minute.
• Measure the outflow water temperature at the lowest flow rate. This, together with Figure 3.2, is used to determine the Reynolds number.
Part B:
In this experiment, the head loss across the gate valve will be measured by taking the following steps:
• Clamp off the connecting tubes to the mitre bend pressure tappings to prevent air being drawn into the system.
• Open the bench valve and set the flow at the maximum flow in Part A (i.e., 17 liter/min); fully open the gate valve and flow control valve.
• Adjust the gate valve until 0.3 bar of head difference is achieved.
• Determine the volumetric flow rate.
• Repeat the experiment for 0.6 and 0.9 bars of pressure difference.
9. Results and Calculations
Please visit this link to access the Excel workbook for this experiment.
9.1. Results
Record all of the manometer and pressure gauge readings, as well as the volumetric measurements, in the Raw Data Tables.
Raw Data Tables
Part A – Head Loss Across Pipe Fittings
Test No. 1: Volume Collected (liters): Time (s):
Fitting h1 (m) h2 (m)
Enlargement
Contraction
Long Bend
Short Bend
Elbow
Mitre
Test No. 2: Volume Collected (liters): Time (s):
Enlargement
Contraction
Long Bend
Short Bend
Elbow
Mitre
Test No. 3: Volume Collected (liters): Time (s):
Enlargement
Contraction
Long Bend
Short Bend
Elbow
Mitre
Test No. 4: Volume Collected (liters): Time (s):
Enlargement
Contraction
Long Bend
Short Bend
Elbow
Mitre
Test No. 5: Volume Collected (liters): Time (s):
Enlargement
Contraction
Long Bend
Short Bend
Elbow
Mitre
Part B – Head Loss Across Gate Valve
Head Loss (bar) Volume (liters) Time (s)
0.3
0.6
0.9
Water Temperature:
9.2. Calculations
Calculate the values of the discharge, flow velocity, velocity head, and Reynolds number for each experiment, as well as the K values for each fitting and the gate valve. Record your calculations in the following sample Result Tables.
Result Table
Part A – Head Loss Across Pipe Fittings
Test No: Flow Rate Q (m3/s): Velocity v (m/s):
Fitting h1 (m) h2 (m) Δh = h1 − h2 (m) Corrected Δh (m) v2/2g (m) K Reynolds Number
Enlargement
Contraction
Long Bend
Short Bend
Elbow
Mitre
Part B – Head Loss Across Gate Valve
Head Loss (bar) Head Loss (m) Volume (m3) Time (s) Flow Rate Q (m3/s) Velocity (m/s) v2/2g (m) K Reynolds Number
0.3
0.6
0.9
10. Report
Use the template provided to prepare your lab report for this experiment. Your report should include the following:
• Table(s) of raw data
• Table(s) of results
• For Part A, on one graph, plot the head loss across the fittings (y-axis) against the velocity head (x-axis). On the second graph, plot the K values for the fittings (y-axis) against the flow rate Q (x-axis).
• For Part B, on one graph, plot the valve head losses (y-axis) against the velocity head (x-axis). On the second graph, plot the K values for the valve (y-axis) against the flow rate Q (x-axis).
• Comment on any relationships noticed in the graphs for Parts A and B. What is the dependence of head losses across pipe fittings upon the velocity head?
• Is it justifiable to treat the loss coefficient as constant for a given fitting?
• In Part B, how does the loss coefficient for a gate valve vary with the extent of opening the valve?
• Examine the Reynolds number obtained. Are the flows laminar or turbulent?
Experiment: Energy Loss in Pipes
1. Introduction
The total energy loss in a pipe system is the sum of the major and minor losses. Major losses are associated with frictional energy loss that is caused by the viscous effects of the fluid and roughness of the pipe wall. Major losses create a pressure drop along the pipe since the pressure must work to overcome the frictional resistance. The Darcy-Weisbach equation is the most widely accepted formula for determining the energy loss in pipe flow. In this equation, the friction factor (f ), a dimensionless quantity, is used to describe the friction loss in a pipe. In laminar flows, f is only a function of the Reynolds number and is independent of the surface roughness of the pipe. In fully turbulent flows, f depends on both the Reynolds number and relative roughness of the pipe wall. In engineering problems, f is determined by using the Moody diagram.
2. Practical Application
In engineering applications, it is important to increase pipe productivity, i.e. maximizing the flow rate capacity and minimizing head loss per unit length. According to the Darcy-Weisbach equation, for a given flow rate, the head loss decreases with the inverse fifth power of the pipe diameter. Doubling the diameter of a pipe results in the head loss decreasing by a factor of 32 (≈ 97% reduction), while the amount of material required per unit length of the pipe and its installation cost nearly doubles. This means that energy consumption, to overcome the frictional resistance in a pipe conveying a certain flow rate, can be significantly reduced at a relatively small capital cost.
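The quoted factor of 32 follows directly from the Darcy-Weisbach scaling, under which, for a fixed flow rate and friction factor, head loss varies as 1/D⁵. A quick Python check:

```python
# Quick check of the inverse-fifth-power scaling quoted above: for a
# fixed flow rate (and friction factor), head loss scales as 1/D^5,
# so doubling the diameter cuts the loss by a factor of 2^5 = 32.
def head_loss_ratio(d_ratio):
    """Ratio h_new / h_old when the pipe diameter is scaled by d_ratio."""
    return d_ratio ** -5

r = head_loss_ratio(2.0)
print(f"doubling D: head loss falls to {r:.5f} of its value "
      f"({(1 - r) * 100:.1f} % reduction)")
```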
3. Objective
The objective of this experiment is to investigate head loss due to friction in a pipe, and to determine the associated friction factor under a range of flow rates and flow regimes, i.e., laminar, transitional, and turbulent.
4. Method
The friction factor is determined by measuring the pressure head difference between two fixed points in a straight pipe with a circular cross section for steady flows.
5. Equipment
The following equipment is required to perform the energy loss in pipes experiment:
• F1-10 hydraulics bench,
• F1-18 pipe friction apparatus,
• Stopwatch for timing the flow measurement,
• Measuring cylinder for measuring very low flow rates,
• Spirit level, and
• Thermometer.
6. Equipment Description
The pipe friction apparatus consists of a test pipe (mounted vertically on the rig), a constant head tank, a flow control valve, an air-bleed valve, and two sets of manometers to measure the head losses in the pipe (Figure 4.1). A set of two water-over-mercury manometers is used to measure large pressure differentials, and two water manometers are used to measure small pressure differentials. When not in use, the manometers may be isolated, using Hoffman clamps.
Since mercury is considered a hazardous substance, it cannot be used in undergraduate fluid mechanics labs. Therefore, for this experiment, the water-over-mercury manometers are replaced with a differential pressure gauge to directly measure large pressure differentials.
This experiment is performed under two flow conditions: high flow rates and low flow rates. For high flow rate experiments, the inlet pipe is connected directly to the bench water supply. For low flow rate experiments, the inlet to the constant head tank is connected to the bench supply, and the outlet at the base of the head tank is connected to the top of the test pipe [4].
The apparatus’ flow control valve is used to regulate flow through the test pipe. This valve should face the volumetric tank, and a short length of flexible tube should be attached to it, to prevent splashing.
The air-bleed valve facilitates purging the system and adjusting the water level in the water manometers to a convenient level, by allowing air to enter them.
7. Theory
The energy loss in a pipe can be determined by applying the energy equation to a section of a straight pipe with a uniform cross section:

$\frac{P_{in}}{\gamma}+\frac{v_{in}^{2}}{2g}+z_{in}=\frac{P_{out}}{\gamma}+\frac{v_{out}^{2}}{2g}+z_{out}+h_{L} \quad (1)$

If the pipe is horizontal ($z_{in}=z_{out}$):

$h_{L}=\frac{P_{in}-P_{out}}{\gamma}+\frac{v_{in}^{2}-v_{out}^{2}}{2g}$

Since $v_{in}=v_{out}$ for a pipe of uniform cross section:

$h_{L}=\frac{P_{in}-P_{out}}{\gamma} \quad (2)$

where $\gamma$ is the specific weight of the fluid. The pressure difference ($P_{in}-P_{out}$) between two points in the pipe is due to the frictional resistance, and the head loss $h_{L}$ is directly proportional to the pressure difference.
The head loss due to friction can be calculated from the Darcy-Weisbach equation:

$h_{L}=f\,\frac{L}{D}\,\frac{v^{2}}{2g} \quad (3)$

where:

$h_{L}$: head loss due to flow resistance
f: Darcy-Weisbach coefficient (friction factor)
L: pipe length
D: pipe diameter
v: average velocity
g: gravitational acceleration.
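As a numerical sketch of the Darcy-Weisbach equation, using the test-pipe dimensions given later in this section; the friction factor and velocity here are illustrative assumptions, not measured data:

```python
def darcy_weisbach(f, L, D, v, g=9.81):
    """Head loss h_L = f*(L/D)*v^2/(2g), in metres of fluid."""
    return f * (L / D) * v ** 2 / (2 * g)

# Illustrative values: L and D from the test pipe in Section 7; f and v assumed.
hL = darcy_weisbach(f=0.04, L=0.50, D=0.003, v=1.0)
print(round(hL, 3))  # ~0.34 m of water
```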
For laminar flow, the Darcy-Weisbach coefficient (or friction factor f) is only a function of the Reynolds number (Re) and is independent of the surface roughness of the pipe, i.e.:

$f=\frac{64}{Re} \quad (4)$
For turbulent flow, f is a function of both the Reynolds number and the pipe roughness height, $\varepsilon$. Other factors, such as roughness spacing and shape, may also affect the value of f; however, these effects are not well understood and may be negligible in many cases. Therefore, f must be determined experimentally. The Moody diagram relates f to the pipe wall relative roughness ($\varepsilon/D$) and the Reynolds number (Figure 4.2).
Instead of using the Moody diagram, f can be determined by utilizing empirical formulas. These formulas are used in engineering applications when computer programs or spreadsheet calculation methods are employed. For turbulent flow in a smooth pipe, a well-known curve fit to the Moody diagram (the Blasius formula) is given by:

$f=\frac{0.316}{Re^{0.25}} \quad (5)$
The Reynolds number is given by:

$Re=\frac{\rho v D}{\mu}=\frac{v D}{\nu} \quad (6)$

where v is the average velocity, D is the pipe diameter, and $\mu$ and $\nu$ are the dynamic and kinematic viscosities of the fluid, respectively (Figure 4.3).
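A sketch of how the Reynolds number and friction factor might be evaluated together. The kinematic viscosity of water near 20 °C and the laminar/turbulent cutoff at Re = 2000 are assumed values; the turbulent branch uses the Blasius smooth-pipe fit:

```python
def reynolds(v, D, nu):
    """Reynolds number for pipe flow (v in m/s, D in m, nu in m^2/s)."""
    return v * D / nu

def friction_factor(Re):
    """Darcy friction factor: 64/Re for laminar flow, Blasius smooth-pipe
    fit for turbulent flow (assumed cutoff at Re = 2000; transitional
    behaviour is not modelled)."""
    if Re < 2000:
        return 64.0 / Re
    return 0.316 / Re ** 0.25

nu_water = 1.0e-6   # kinematic viscosity of water near 20 C, m^2/s (assumed)
Re = reynolds(v=1.5, D=0.003, nu=nu_water)   # test-pipe diameter from Section 7
print(Re, friction_factor(Re))               # Re = 4500 -> turbulent branch
```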
In this experiment, $h_L$ is measured directly by the water manometers and the differential pressure gauge that are connected by pressure tappings to the test pipe. The average velocity, v, is calculated from the volumetric flow rate (Q) as:

$v=\frac{Q}{A}=\frac{4Q}{\pi D^{2}} \quad (7)$
The following dimensions from the test pipe may be used in the appropriate calculations [4]:
Length of test pipe = 0.50 m,
Diameter of test pipe = 0.003 m.
8. Experimental Procedure
The experiment will be performed in two parts: high flow rates and low flow rates. Set up the equipment as follows:
• Mount the test rig on the hydraulics bench, and adjust the feet with a spirit level to ensure that the baseplate is horizontal and the manometers are vertical.
• Attach Hoffman clamps to the water manometers and pressure gauge connecting tubes, and close them off.
High Flow Rate Experiment
The high flow rate will be supplied to the test section by connecting the equipment inlet pipe to the hydraulics bench, with the pump turned off. The following steps should be followed.
• Close the bench valve, open the apparatus flow control valve fully, and start the pump. Open the bench valve progressively, and run the flow until all air is purged.
• Remove the clamps from the differential pressure gauge connection tubes, and purge any air from the air-bleed valve located on the side of the pressure gauge.
• Close off the air-bleed valve once no air bubbles are observed in the connection tubes.
• Close the apparatus flow control valve and take a zero-flow reading from the pressure gauge.
• With the flow control valve fully open, measure the head loss shown by the pressure gauge.
• Determine the flow rate by timed collection.
• Adjust the flow control valve in a step-wise fashion to observe the pressure differences at 0.05 bar increments. Obtain data for ten flow rates. For each step, determine the flow rate by timed collection.
• Close the flow control valve, and turn off the pump.
The pressure difference measured by the differential pressure gauge can be converted to an equivalent head loss (hL) by using the conversion ratio:
1 bar = 10.2 m water
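The gauge-to-head conversion above can be wrapped in a one-line helper; `bar_to_m_water` is a hypothetical name for illustration:

```python
def bar_to_m_water(dp_bar):
    """Convert a differential pressure reading in bar to head of water in m,
    using the manual's conversion 1 bar = 10.2 m water."""
    return dp_bar * 10.2

print(bar_to_m_water(0.05))  # one 0.05-bar increment corresponds to 0.51 m of head
```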
Low Flow Rate Experiment
The low flow rate will be supplied to the test section by connecting the hydraulics bench outlet pipe to the head tank with the pump turned off. Take the following steps.
• Attach a clamp to each of the differential pressure gauge connectors and close them off.
• Disconnect the test pipe’s supply tube and hold it high to keep it filled with water.
• Connect the bench supply tube to the head tank inflow, run the pump, and open the bench valve to allow flow. When outflow occurs from the head tank snap connector, attach the test section supply tube to it, ensuring that no air is entrapped.
• When outflow occurs from the head tank overflow, fully open the control valve.
• Remove the clamps from the water manometers’ tubes and close the control valve.
• Connect a length of small bore tubing from the air valve to the volumetric tank, open the air bleed screw, and allow flow through the manometers to purge all of the air from them. Then tighten the air bleed screw.
• Fully open the control valve and slowly open the air bleed valve, allowing air to enter until the manometer levels reach a convenient height (in the middle of the manometers), then close the air vent. If required, further control of the levels can be achieved by using a hand pump to raise the manometer air pressure.
• With the flow control valve fully open, measure the head loss shown by the manometers.
• Determine the flow rate by timed collection.
• Obtain data for at least eight flow rates, the lowest to give hL= 30 mm.
• Measure the water temperature, using a thermometer.
9. Results and Calculations
Please use this link to access the Excel workbook for this experiment.
9.1. Results
Record all of the manometer and pressure gauge readings, water temperature, and volumetric measurements, in the Raw Data Tables.
Raw Data Tables: High Flow Rate Experiment
Test No. Head Loss (bar) Volume (Liters) Time (s)
1
2
3
4
5
6
7
8
9
10
Raw Data Tables: Low Flow Rate Experiment
Test No. h1 (m) h2 (m) Head loss hL (m) Volume (liters) Time (s)
1
2
3
4
5
6
7
8
Water Temperature:
9.2. Calculations
Calculate the discharge, the average flow velocity, the experimental friction factor f (using Equation 3), and the Reynolds number for each test. Also, calculate the theoretical friction factor, f, using Equation 4 for laminar flow and Equation 5 for turbulent flow for a range of Reynolds numbers. Record your calculations in the following sample Result Tables.
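The per-test reduction described above might look like the following sketch. The experimental friction factor comes from the Darcy-Weisbach equation solved for f; the pipe dimensions are those given in Section 7, while the viscosity and the sample measurement are assumed placeholders, not data:

```python
import math

L, D, g = 0.50, 0.003, 9.81   # test-pipe dimensions from Section 7
nu = 1.0e-6                   # kinematic viscosity of water, m^2/s (assumed)

def reduce_test(volume_L, time_s, hL_m):
    """Discharge, velocity, experimental friction factor, and Reynolds number
    for one timed-collection measurement."""
    Q = (volume_L / 1000.0) / time_s        # discharge, m^3/s
    v = Q / (math.pi * D ** 2 / 4.0)        # average velocity, m/s
    f = 2 * g * D * hL_m / (L * v ** 2)     # Darcy-Weisbach solved for f
    Re = v * D / nu
    return Q, v, f, Re

# Placeholder measurement: 0.2 L collected in 60 s with 0.5 m of head loss
Q, v, f, Re = reduce_test(volume_L=0.2, time_s=60.0, hL_m=0.5)
print(Q, v, f, Re)
```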
Result Table- Experimental Values
Test No. Head loss hL (m) Volume (liters) Time (s) Discharge (m3/s) Velocity (m/s) Friction Factor, f Reynolds Number
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
Result Table- Theoretical Values
No. Flow Regime Reynolds Number Friction Factor, f
1 Laminar (Equation 4) 100
2 200
3 400
4 800
5 1600
6 2000
7 Turbulent (Equation 5) 4000
8 6000
9 8000
10 10000
11 12000
12 16000
13 20000
10. Report
Use the template provided to prepare your lab report for this experiment. Your report should include the following:
• Table(s) of raw data
• Table(s) of results
• Graph(s)
• On one graph, plot the experimental and theoretical values of the friction factor, f (y-axis) against the Reynolds number, Re (x-axis) on a log-log scale. The experimental results should be divided into three groups (laminar, transitional, and turbulent) and plotted separately. The theoretical values should be divided into two groups (laminar and turbulent) and also plotted separately.
• On one graph, plot hL (y-axis) vs. average flow velocity, v (x-axis) on a log-log scale.
• Discuss the following:
• Identify laminar and turbulent flow regimes in your experiment. What is the critical Reynolds number in this experiment (i.e., the transitional Reynolds number from laminar flow to turbulent flow)?
• Assuming a relationship of the form $f=K\,Re^{n}$, calculate the K and n values from the graph of experimental data you have plotted, and compare them with the accepted values shown in the Theory section (Equations 4 and 5). What is the cumulative effect of the experimental errors on the values of K and n?
• What is the dependence of head loss upon velocity (or flow rate) in the laminar and turbulent regions of flow?
• What is the significance of changes in temperature to the head loss?
• Compare your results for f with the Moody diagram (Figure 4.2). Note that the pipe utilized in this experiment is a smooth pipe. Indicate any reason for lack of agreement.
• What natural processes would affect pipe roughness?
1. Introduction
Moving fluid, in natural or artificial systems, may exert forces on objects in contact with it. To analyze fluid motion, a finite region of the fluid (control volume) is usually selected, and the gross effects of the flow, such as its force or torque on an object, are determined by calculating the net mass rate that flows into and out of the control volume. These forces can be determined, as in solid mechanics, by the use of Newton's second law, or by the momentum equation. The force exerted by a jet of fluid on a flat or curved surface can be resolved by applying the momentum equation. The study of these forces is essential to the study of fluid mechanics and hydraulic machinery.
2. Practical Application
Engineers and designers use the momentum equation to accurately calculate the force that moving fluid may exert on a solid body. For example, in hydropower plants, turbines are utilized to generate electricity. Turbines rotate due to force exerted by one or more water jets that are directed tangentially onto the turbine’s vanes or buckets. The impact of the water on the vanes generates a torque on the wheel, causing it to rotate and to generate electricity.
3. Objective
The objective of this experiment is to investigate the reaction forces produced by the change in momentum of a fluid flow when a jet of water strikes a flat plate or a curved surface, and to compare the results from this experiment with the computed forces by applying the momentum equation.
4. Method
The momentum force is determined by measuring the forces produced by a jet of water impinging on solid flat and curved surfaces, which deflect the jet at different angles.
5. Equipment
The following equipment is required to perform the impact of the jet experiment:
• F1-10 hydraulics bench,
• F1-16 impact of a jet apparatus with three flow deflectors with deflection angles of 90, 120, and 180 degrees, and
• Stopwatch for timing the flow measurement.
6. Equipment Description
The jet apparatus consists of a clear acrylic cylinder, a nozzle, and a flow deflector (Figure 5.1). Water enters vertically from the top of the cylinder through a nozzle, strikes a target mounted on a stem, and leaves through the outlet holes in the base of the cylinder. An air vent at the top of the cylinder maintains atmospheric pressure inside the cylinder. A weight pan is mounted at the top of the stem to allow the force of the striking water to be counterbalanced by applied masses [5].
Figure 5.1: F1-16 Impact of Jet Apparatus
7. Theory
The velocity of the water (v) leaving the nozzle with the cross-sectional area (A) can be calculated by:

$v=\frac{Q}{A} \quad (1)$

in which Q is the flow rate.
Applying the energy equation between the nozzle exit point and the surface of the deflector shows that the magnitude of the flow velocity does not change as the water flows around the deflector; only the direction of the flow changes.
Applying the momentum equation to a control volume encompassing the deflected flow results in:

$F_{y}=\rho Q v(1+\cos \beta) \quad (2)$

where:

$F_{y}$: force exerted by the deflector on the fluid
$\rho$: fluid density
$\beta$: $180^{\circ}-\theta$, where $\theta$ is the flow deflection angle (Figure 5.2).
Figure 5.2: Examples of flow deflection angles for flat and hemispherical deflectors
From equilibrium of forces in a vertical direction, $F_y$ is balanced by the applied weight on the weight pan, W (W = mg, where m is the applied mass), i.e., $F_y = W$. Therefore:

$\rho Q v(1+\cos \beta)=mg \quad (3)$

Since Q = vA, this equation can be written as:

$W=\rho A(1+\cos \beta)\, v^{2} \quad (4)$
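Since W is proportional to v², the theoretical slope of W against v² is $\rho A(1+\cos\beta)$. A sketch evaluating it for the three deflectors; the water density is an assumed value, and the nozzle area is the one quoted in Section 9.2:

```python
import math

rho = 1000.0    # density of water, kg/m^3 (assumed)
A = 5.0265e-5   # nozzle cross-sectional area from Section 9.2, m^2

def theoretical_slope(deflection_deg):
    """Theoretical slope S = rho*A*(1 + cos(beta)), with beta = 180 - theta."""
    beta = math.radians(180.0 - deflection_deg)
    return rho * A * (1.0 + math.cos(beta))

for angle in (90, 120, 180):
    print(angle, theoretical_slope(angle))
# 90 deg -> rho*A, 120 deg -> 1.5*rho*A, 180 deg -> 2*rho*A
```

The flat plate (90°) gives ρA, and the hemispherical cup (180°) gives twice that, matching the familiar result that a hemispherical deflector doubles the jet force.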
8. Experimental Procedure
Perform the experiment by taking the following steps:
• Remove the top plate (by releasing the knurled nuts) and the transparent cylinder from the equipment, and check and record the exit diameter of the nozzle.
• Replace the cylinder, and screw the 90-degree deflector onto the end of the shaft.
• Connect the inlet tube to the quick-release connector on the bench.
• Replace the top plate on the transparent cylinder, but do not tighten the three knurled nuts.
• Using the spirit level attached to the top plate, level the cylinder by adjusting the feet.
• Replace the three knurled nuts, then tighten in sequence until the built-in circular spirit level indicates that the top plate is horizontal. Do not overtighten the knurled nuts, as this will damage the top plate. The nuts should only be tightened enough to level the plate.
• Ensure that the vertical shaft is free to move and is supported by the spring beneath the weight pan.
• With no weights on the weight pan, adjust the height of the level gauge until it aligns with the datum line on the weight pan. Check that the position is correct by gently oscillating the pan.
• Place a mass of 50 grams on the weight pan, and turn on the pump.
• Open the bench valve slowly, and allow water to impinge upon the target until the datum line on the weight pan is level with the gauge. Leave the flow constant. Observe and note the flow behavior during the test.
• Measure the flow rate, using the volumetric tank. This is achieved by closing the ball valve and measuring the time that it takes to accumulate a known volume of fluid in the tank, as measured from the sight glass. You should collect water for at least one minute to minimize timing errors.
• Repeat this procedure by adding an additional 50 grams incrementally, until a maximum mass of 500 grams has been applied.
• Repeat the entire test for each of the other two flow deflectors.
9. Results and Calculations
Please use this link to access the Excel workbook for this experiment.
9.1. Results
Use the following tables to record your measurements.
Raw Data Table
Test No. Deflection Angles (degree)
90 120 180
Volume
(Liter)
Time
(s)
Applied Mass
(kg)
Volume
(Liter)
Time
(s)
Applied Mass
(kg)
Volume
(Liter)
Time
(s)
Applied Mass
(kg)
1
2
3
4
5
6
7
8
9
10
9.2. Calculations
The nozzle has the following dimensions:

• Diameter of the nozzle: d = 0.008 m
• Cross-sectional area of the nozzle: A = 5.0265×10⁻⁵ m²

These values may be checked as part of the experimental procedure and replaced with your measurements.
For each set of measurements, calculate the applied weight (W), flow rate (Q), velocity squared (v²), force ($F_y$), and the theoretical and experimental slope (S) of the relationship between W and v². The theoretical slope is determined from Equation 4, as follows:

$S=\frac{W}{v^{2}}=\rho A(1+\cos \beta) \quad (5)$
The experimental value of S is obtained from a graph of W plotted against v².
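Since the fitted line is forced through the origin (W = 0 when v² = 0), the experimental slope follows from a zero-intercept least-squares fit. A sketch with placeholder data, not measurements:

```python
def slope_through_origin(x_vals, y_vals):
    """Least-squares slope of y = S*x with the intercept fixed at zero:
    S = sum(x*y) / sum(x*x)."""
    sxy = sum(x * y for x, y in zip(x_vals, y_vals))
    sxx = sum(x * x for x in x_vals)
    return sxy / sxx

# Placeholder data: v^2 values (m^2/s^2) and corresponding applied weights W (N)
v2 = [2.0, 4.0, 6.0, 8.0]
W = [0.10, 0.21, 0.30, 0.41]
print(slope_through_origin(v2, W))  # experimental slope S, in N/(m^2/s^2)
```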
Result Table
Nozzle Diameter (m)= Flow Area (m2) = Deflector Angle (degree)=
Test No. Applied Weight (N) Flow Rate (m3/s) Velocity (m/s) Velocity2 (m/s)2 Force (N) Theoretical Slope Experimental Slope
1
2
3
4
5
6
7
8
9
10
10. Report
Use the template provided to prepare your lab report for this experiment. Your report should include the following:
• Table(s) of raw data
• Table(s) of results
• Graph(s)
• Plot a graph of velocity squared, v2, (x-axis) against applied weight, W, (y-axis). Prepare one graph, presenting the results for all three deflectors, and use a linear trend line, setting the intercepts to zero, to show this relationship. Find the slopes of these lines. Record the slopes in the Results Table, as the experimental slope.
• Compare the slopes of this graph with the slopes calculated from the theoretical relationship from Equation 5.
• Plot the measured force from the weights (W) versus the force of the water on the deflector (Fy) that is calculated by using the momentum equation, i.e., Equation 2.
• Discuss your results, focusing on the following:
• Does this experiment provide a feasible means of verifying the conservation of momentum equation? Try to be quantitative in your comparison between the experimental and calculated results.
• Would the results have been different if the deflectors were closer to the nozzle? Explain.
• Comment on the agreement between your theoretical and experimental results, and give reasons for any differences.
• Comment on the significance of any experimental errors.
1. Introduction
An orifice is an opening, of any size or shape, in a pipe or at the bottom or side wall of a container (water tank, reservoir, etc.), through which fluid is discharged. If the geometric properties of the orifice and the inherent properties of the fluid are known, the orifice can be used to measure flow rates. Flow measurement by an orifice is based on the application of Bernoulli's equation, which states that a relationship exists between the pressure of the fluid and its velocity. The flow velocity and discharge calculated from Bernoulli's equation should be corrected to include the effects of energy loss and viscosity. Therefore, for accurate results, the coefficient of velocity (Cv) and the coefficient of discharge (Cd) should be calculated for an orifice. This experiment is conducted to calibrate the coefficients of the given orifices in the lab.
2. Practical Application
Orifices have many applications in engineering practice besides the metering of fluid flow in pipes and reservoirs. Flow entering a culvert or storm drain inlet may act as orifice flow; the bottom outlet of a dam is another example. The coefficients of velocity and discharge are necessary to accurately predict flow rates from orifices.
3. Objective
The objective of this lab experiment is to determine the coefficients of velocity and discharge of two small orifices in the lab and compare them with values in textbooks and other reliable sources.
4. Method
The coefficients of velocity and discharge are determined by measuring the trajectory of a jet issuing fluid from an orifice in the side of a reservoir under steady flow conditions, i.e., a constant reservoir head.
5. Equipment
The following equipment is required to perform the orifice and free jet flow experiment:
• F1-10 hydraulics bench;
• F1-17 orifice and free jet flow apparatus, with two orifices having diameters of 3 and 6 mm;
• Measuring cylinder for flow measurement; and
• Stopwatch for timing the flow measurement.
6. Equipment Description
The orifice and free jet flow apparatus consists of a cylindrical head tank with an orifice plate set into its side (Figure 6.1). An adjustable overflow pipe is adjacent to the head tank to allow changes in the water level. A flexible hose attached to the overflow pipe returns excess water to the hydraulics bench. A scale attached to the head tank indicates the water level. A baffle at the base of the head tank promotes smooth flow conditions inside the tank, behind the orifice plate. Two orifice plates with 3 and 6 mm diameters are provided and may be interchanged by slackening the two thumb nuts. The trajectory of the jet may be measured, using the vertical needles. For this purpose, a sheet of paper should be attached to the backboard, and the needles should be adjusted to follow the trajectory of the water jet. The needles may be locked, using a screw on the mounting bar. The positions of the tops of the needles can be marked to plot the trajectory. A drain plug in the base of the head tank allows water to be drained from the equipment at the end of the experiment [6].
Figure 6.1: Armfield F1-17 Orifice and Jet Apparatus
7. Theory
The orifice outflow velocity can be calculated by applying Bernoulli's equation (for a steady, incompressible, frictionless flow) to a large reservoir with an opening (orifice) on its side (Figure 6.2):

$v_{i}=\sqrt{2gh} \quad (1)$

where h is the height of fluid above the orifice. This is the ideal velocity, since the effect of fluid viscosity is not considered in deriving Equation 1. The actual flow velocity, however, is smaller than $v_i$ and is calculated as:

$v=C_{v}\sqrt{2gh} \quad (2)$
Cv is the coefficient of velocity, which allows for the effects of viscosity; therefore, Cv <1. The actual outflow velocity calculated by Equation (2) is the velocity at the vena contracta, where the diameter of the jet is the least and the flow velocity is at its maximum (Figure 6.2).
The actual outflow rate may be calculated as:

$Q=v A_{c} \quad (3)$

where $A_c$ is the flow area at the vena contracta. $A_c$ is smaller than the orifice area, $A_o$ (Figure 6.2), and is given by:

$A_{c}=C_{c} A_{o} \quad (4)$

where $C_c$ is the coefficient of contraction; therefore, $C_c$ < 1.

Substituting v and $A_c$ from Equations 2 and 4 into Equation 3 results in:

$Q=C_{v} C_{c} A_{o}\sqrt{2gh} \quad (5)$

The product $C_{v}C_{c}$ is called the coefficient of discharge, $C_d$; thus, Equation 5 can be written as:

$Q=C_{d} A_{o}\sqrt{2gh} \quad (6)$
The coefficient of velocity, Cv, and coefficient of discharge, Cd, are determined experimentally as follows.
Figure 6.2: Orifice and Jet Flow Parameters
7.1. Determination of the Coefficient of Velocity
If the effect of air resistance on the jet leaving the orifice is neglected, the horizontal component of the jet velocity can be assumed to remain constant. Therefore, the horizontal distance traveled by the jet (x) in time (t) is equal to:

$x=vt \quad (7)$

The vertical component of the trajectory of the jet will have a constant acceleration downward due to the force of gravity. Therefore, at any time, t, the y-position of the jet may be calculated as:

$y=\frac{1}{2} g t^{2} \quad (8)$

Rearranging Equation (8) gives:

$t=\sqrt{\frac{2y}{g}} \quad (9)$

Substitution of t and v from Equations 9 and 2 into Equation 7 results in:

$x=C_{v}\sqrt{2gh}\sqrt{\frac{2y}{g}}=2C_{v}\sqrt{yh} \quad (10)$

Equation (10) can be rearranged to find $C_v$:

$C_{v}=\frac{x}{2\sqrt{yh}} \quad (11)$

Therefore, for steady flow conditions (i.e., constant h in the head tank), the value of $C_v$ can be determined from the x, y coordinates of the jet trajectory. A graph of x plotted against $\sqrt{yh}$ will have a slope of $2C_{v}$.
7.2. Determination of the Coefficient of Discharge
If $C_d$ is assumed to be constant, then a graph of Q plotted against $\sqrt{h}$ (Equation 6) will be linear, and the slope of this graph will be:

$slope=C_{d} A_{o}\sqrt{2g} \quad (12)$
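Under the two slope relationships above, the coefficients follow directly from the fitted slopes. A sketch with assumed placeholder slopes (not measured values); the orifice area is that of the 3-mm orifice:

```python
import math

def cv_from_slope(slope_x_vs_sqrt_yh):
    """From x = 2*Cv*sqrt(y*h): Cv = slope / 2."""
    return slope_x_vs_sqrt_yh / 2.0

def cd_from_slope(slope_q_vs_sqrt_h, Ao, g=9.81):
    """From Q = Cd*Ao*sqrt(2*g*h): Cd = slope / (Ao*sqrt(2*g))."""
    return slope_q_vs_sqrt_h / (Ao * math.sqrt(2 * g))

Ao = math.pi * 0.003 ** 2 / 4           # 3-mm orifice area, m^2
print(cv_from_slope(1.94))              # placeholder slope of x vs sqrt(y*h) -> 0.97
print(cd_from_slope(2.0e-5, Ao))        # placeholder slope of Q vs sqrt(h), m^2.5/s
```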
8. Experimental Procedure
This experiment will be performed in two parts. Part A is performed to determine the coefficient of velocity, and Part B is conducted to determine the coefficient of discharge.
Set up the equipment as follows:
• Locate the apparatus over the channel in the top of the bench.
• Using the spirit level attached to the base, level the apparatus by adjusting the feet.
• Connect the flexible inlet tube on the side of the head tank to the bench quick-release fitting.
• Place the free end of the flexible tube from the adjustable overflow on the side of the head tank into the volumetric tank. Make sure that this tube will not interfere with the trajectory of the jet flowing from the orifice.
• Secure each needle in the raised position by tightening the knurled screw.
Part A: Determination of coefficient of velocity from jet trajectory under constant head
• Install the 3-mm orifice in the fitting on the right-hand side of the head tank, using the two securing screws supplied. Ensure that the O-ring seal is fitted between the orifice and the tank.
• Close the bench flow control valve, switch on the pump, and then gradually open the bench flow control valve. When the water level in the head tank reaches the top of the overflow tube, adjust the bench flow control valve to provide a water level of 2 to 3 mm above the overflow pipe level. This will ensure a constant head and produce a steady flow through the orifice.
• If necessary, adjust the frame so that the row of needles is parallel with the jet, but is located 1 or 2 mm behind it. This will avoid disturbing the jet, but will minimize errors due to parallax.
• Attach a sheet of paper to the backboard, between the needles and board, and secure it in place with the clamp provided so that its upper edge is horizontal.
• Position the overflow tube to give a high head (e.g., 320 mm). The jet trajectory is obtained by using the needles mounted on the vertical backboard to follow the profile of the jet.
• Release the securing screw for each needle, and move the needle until its point is just immediately above the jet. Re-tighten the screw.
• Mark the location of the top of each needle on the paper. Note the horizontal distance from the plane of the orifice (taken as x = 0) to the coordinate point marking the position of the first needle. This first coordinate point should be close enough to the orifice to treat it as having the value of y = 0. Thus, y displacements are measured relative to this position.
• The volumetric flowrate through the orifice can be determined by intercepting the jet, using the measuring cylinder and a stopwatch. The measured flow rates will be used in Part B.
• Repeat this test for lower reservoir heads (e.g., 280 mm and 240 mm)
Repeat the above procedure for the second orifice with diameter of 6 mm.
Part B: Determination of coefficient of discharge under constant head
• Position the overflow tube to have a head of 300 mm in the tank. (You may have to adjust the level of the overflow tube to achieve this.)
• Measure the flow rate by timed collection, using the measuring cylinder provided.
• Repeat this procedure for a head of 260 mm.
The procedure should also be repeated for the second orifice.
9. Results and Calculations
Please visit this link to access the Excel workbook for this experiment.
9.1. Results
Use the following tables to record your measurements.
Raw Data Table: Part A
Needle
No.
Orifice
Diameter
(m)
x
(m)
Head (m) y(m)
Trial 1 Trial 2 Trial 3 Trial 1 Trial 2 Trial 3
1 0.003 0.014
2 0.064
3 0.114
4 0.164
5 0.214
6 0.264
7 0.314
8 0.364
Needle
No.
Orifice
Diameter
(m)
x
(m)
Head (m) y(m)
Trial 1 Trial 2 Trial 3 Trial 1 Trial 2 Trial 3
1 0.006 0.014
2 0.064
3 0.114
4 0.164
5 0.214
6 0.264
7 0.314
8 0.364
Raw Data Table: Part B
Test
No.
Orifice
Diameter
(m)
Head
(m)
Volume
(L)
Time
(s)
1 0.003
2
3
4
5
6 0.006
7
8
9
10
9.2. Calculations
Calculate the values of (y.h)1/2 for Part A, and the discharge (Q) and h0.5 for Part B. Record your calculations in the following Result Tables.
The following dimensions of the equipment are used in the appropriate calculations. If necessary, these values may be checked as part of the experimental procedure and replaced with your measurements [6].
– Diameter of the small orifice: 0.003 m
– Diameter of the large orifice: 0.006 m
– Pitch of needles: 0.05 m
Result Table- Part A
Needle
No.
Orifice
Diameter
(m)
x
(m)
Head (m) y(m)
(y.h)1/2(m)
Trial 1 Trial 2 Trial 3 Trial 1 Trial 2 Trial 3 Trial 1 Trial 2 Trial 3
1 0.003 0.014
2 0.064
3 0.114
4 0.164
5 0.214
6 0.264
7 0.314
8 0.364
Needle
No.
Orifice
Diameter
(m)
x
(m)
Head (m) y(m)
(y.h)1/2(m)
Trial 1 Trial 2 Trial 3 Trial 1 Trial 2 Trial 3 Trial 1 Trial 2 Trial 3
1 0.006 0.014
2 0.064
3 0.114
4 0.164
5 0.214
6 0.264
7 0.314
8 0.364
Result Table- Part B
Test
No.
Orifice
Diameter
(m)
Head
(m)
Volume
(L)
Time
(s)
Volume (m3) Q
(m3/sec)
h0.5
(m0.5)
1 0.003
2
3
4
5
6 0.006
7
8
9
10
10. Report
Use the template provided to prepare your lab report for this experiment. Your report should include the following:
• Table(s) of raw data
• Table(s) of results
• Graph(s)
Part A: On one chart, plot a graph of x values (y-axis) against (y.h)1/2 values (x-axis) for each test. Calculate the slope of these graphs, using the equation of the best fit for your experimental data and by setting the intercept to zero. Using Equation 11, calculate the coefficient of velocity for each orifice as:

$C_{v}=\frac{slope}{2}$
Part B: Plot Q values (y-axis) against h0.5 values (x-axis). Determine the slope of this graph, using the equation of the best fit for your experimental data and by setting the intercept to zero. Based on Equation 12, calculate the coefficient of discharge for each orifice from the following relationship:

$C_{d}=\frac{slope}{A_{o}\sqrt{2g}}$
• Find the recommended values for Cv and Cd of the orifices utilized in this experiment from reliable sources (e.g., textbooks). Comment on the agreement between the textbook values and experimental results, and give reasons for any differences.
• Comment on the significance of any experimental errors.
1. Introduction
In nature and in laboratory experiments, flow may occur under two very different regimes: laminar and turbulent. In laminar flows, fluid particles move in layers, sliding over each other, with little energy exchange between layers. Laminar flow occurs in fluids with high viscosity, moving at low velocity. Turbulent flow, on the other hand, is characterized by random movements and intermixing of fluid particles, with a great exchange of energy throughout the fluid. This type of flow occurs in fluids with low viscosity and high velocity. The dimensionless Reynolds number is used to classify the state of flow. The Reynolds Number Demonstration is a classic experiment, based on visualizing flow behavior by slowly and steadily injecting dye into a pipe. This experiment was first performed by Osborne Reynolds in the late nineteenth century.
2. Practical Application
The Reynolds number has many practical applications, as it provides engineers with immediate information about the state of flow throughout pipes, streams, and soils, helping them apply the proper relationships to solve the problem at hand. It is also very useful for dimensional analysis and similitude. As an example, if forces acting on a ship need to be studied in the laboratory for design purposes, the Reynolds number of the flow acting on the model in the lab and on the prototype in the field should be the same.
3. Objective
The objective of this lab experiment is to illustrate laminar, transitional, and fully turbulent flows in a pipe, and to determine under which conditions each flow regime occurs.
4. Method
The visualization of flow behavior will be performed by slowly and steadily injecting dye into a pipe. The state of the flow (laminar, transitional, and turbulent) will be visually determined and compared with the results from the calculation of the Reynolds number.
5. Equipment
The following equipment is required to perform the Reynolds number experiment:
• F1-10 hydraulics bench,
• The F1-20 Reynolds demonstration apparatus,
• Cylinder for measuring flow,
• Stopwatch for timing the flow measurement, and
• Thermometer.
6. Equipment Description
The equipment includes a vertical head tank that provides a constant head of water through a bellmouth entry to the flow visualization glass pipe. Stilling media (marbles) are placed inside the tank to tranquilize the flow of water entering the pipe. The discharge through this pipe is regulated by a control valve and can be measured using a measuring cylinder [7]. The flow velocity, therefore, can be determined and used to calculate the Reynolds number. A dye reservoir is mounted on top of the head tank, from which a blue dye can be injected into the water to enable observation of flow conditions (Figure 7.1).
Figure 7.1: Armfield F1-20 Reynolds apparatus [7]
7. Theory
Flow behavior in natural or artificial systems depends on which forces (inertia, viscous, gravity, surface tension, etc.) predominate. In slow-moving laminar flows, viscous forces are dominant, and the fluid behaves as if the layers are sliding over each other. In turbulent flows, the flow behavior is chaotic and changes dramatically, since the inertial forces are more significant than the viscous forces.
In this experiment, the dye injected into a laminar flow will form a clear well-defined line. It will mix with the water only minimally, due to molecular diffusion. When the flow in the pipe is turbulent, the dye will rapidly mix with the water, due to the substantial lateral movement and energy exchange in the flow. There is also a transitional stage between laminar and turbulent flows, in which the dye stream will wander about and show intermittent bursts of mixing, followed by a more laminar behavior.
The Reynolds number (Re) provides a useful way of characterizing the flow. It is defined as:

Re = v·d/ν

where ν is the kinematic viscosity of the water (Figure 7.2), v is the mean flow velocity, and d is the diameter of the pipe.
The Reynolds number is a dimensionless parameter that is the ratio of the inertial (destabilizing) force to the viscous (stabilizing) force. As Re increases, the inertial force becomes relatively larger, and the flow destabilizes and becomes fully turbulent.
The Reynolds experiment determines the critical Reynolds numbers for pipe flow at which laminar flow (Re<2000) becomes transitional (2000<Re<4000) and transitional flow becomes turbulent (Re>4000). The advantage of using a critical Reynolds number, instead of a critical velocity, is that the results of the experiments are applicable to all Newtonian fluid flows in pipes with a circular cross-section.
Figure 7.2: Kinematic Viscosity of Water at Atmospheric Pressure.
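As a worked sketch of these definitions, the script below computes the discharge, mean velocity, and Reynolds number from a timed volume collection and classifies the regime with the critical values quoted above. The pipe diameter is the one given later in this manual; the viscosity value is an assumed reading for water at about 20 °C, and the function names are illustrative:

```python
import math

def reynolds_number(volume_L, time_s, d=0.010, nu=1.004e-6):
    """Re from a timed volume collection in the test pipe.

    volume_L : collected volume (litres)
    time_s   : collection time (s)
    d        : pipe diameter (m); 0.010 m for this apparatus
    nu       : kinematic viscosity (m^2/s); 1.004e-6 assumed for ~20 C water
    """
    Q = (volume_L / 1000.0) / time_s   # discharge, m^3/s
    A = math.pi * d**2 / 4.0           # cross-sectional area, m^2
    v = Q / A                          # mean flow velocity, m/s
    return v * d / nu

def classify(Re):
    """Regime classification using the critical Reynolds numbers above."""
    if Re < 2000:
        return "laminar"
    elif Re <= 4000:
        return "transitional"
    return "turbulent"
```

Collecting 0.3 L in 30 s, for instance, gives Re ≈ 1268, i.e. laminar flow.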
8. Experimental Procedure
A YouTube element has been excluded from this version of the text. You can view it online here: https://uta.pressbooks.pub/appliedfluidmechanics/?p=237
Set up the equipment as follows:
• Position the Reynolds apparatus on a fixed, vibration-free surface (not on the hydraulics bench), and ensure that the base is horizontal and the test section is vertical.
• Connect the bench outflow to the head tank inlet pipe.
• Place the head tank overflow tube in the volumetric tank of the hydraulics bench.
• Attach a small tube to the apparatus flow control valve, and clamp it to a fixed position in a sink in the lab, allowing enough space below the end of the tube to insert a measuring cylinder. The outflow should not be returned to the volumetric tank since it contains dye and will taint the flow visualisation.
Note that any movement of the outflow tube during a test will cause changes in the flow rate, since it is driven by the height difference between the head tank surface and the outflow point.
• Start the pump, slightly open the apparatus flow control valve and the bench valve, and allow the head tank to fill with water. Make sure that the flow visualisation pipe is properly filled. Once the water level in the head tank reaches the overflow tube, adjust the bench control valve to produce a low overflow rate.
• Ensuring that the dye control valve is closed, add the blue dye to the dye reservoir until it is about 2/3 full.
• Attach the needle, hold the dye assembly over a lab sink, and open the valve to ensure that there is a free flow of dye.
• Close the dye control valve, then mount the dye injector on the head tank and lower the injector until the tip of the needle is slightly above the bellmouth and is centered on its axis.
• Adjust the bench valve and flow control valve to return the overflow rate to a small amount, and allow the apparatus to stand for at least five minutes.
• Adjust the flow control valve to reach a slow trickle outflow, then adjust the dye control valve until a slow flow with clear dye indication is achieved.
• Measure the flow volumetric rate by timed water collection.
• Observe the flow patterns, take pictures, or make hand sketches as needed to classify the flow regime.
• Increase the flow rate by opening the flow control valve. Repeat the experiment to visualize transitional flow and then, at higher flow rates, turbulent flow, as characterized by continuous and very rapid mixing of the dye. Try to observe each flow regime two or three times, for a total of eight readings.
• As the flow rate increases, adjust the bench valve to keep the water level constant in the head tank.
Note that at intermediate flows, it is possible to have a laminar characteristic in the upper part of the test section, which develops into transitional flow lower down. This upper section behavior is described as an “inlet length flow,” which means that the boundary layer has not yet extended across the pipe radius.
• Measure water temperature.
• Return the remaining dye to the storage container. Rinse the dye reservoir thoroughly to ensure that no dye is left in the valve, injector, or needle.
9. Results and Calculations
Please visit this link to access the Excel workbook for this experiment.
The following dimensions of the equipment are used in the appropriate calculations. If required, measure them to make sure that they are accurate [7].
• Diameter of test pipe: d = 0.010 m
• Cross-sectional area of test pipe: A = 7.854×10⁻⁵ m²
9.1. Results
Use the following table to record your measurements and observations.
Raw Data Table

Observed Flow Regime | Volume (L) | Time (sec) | Temperature (°C)
9.2. Calculations
Calculate discharge, flow velocity, and Reynolds number ( Re). Classify the flow based on the Re of each experiment. Record your calculations in the following table.
Result Table

Observed Flow Regime | Discharge Q (m³/s) | Velocity v (m/s) | Kinematic Viscosity ν (m²/s) | Reynolds Number | Flow Regime Classified Using Reynolds Number
10. Report
Use the template provided to prepare your lab report for this experiment. Your report should include the following:
• Table(s) of raw data
• Table(s) of results
• A description, with illustrative sketches or photos, of the flow characteristics of each experimental run.
• Discuss your results, focusing on the following:
• How is the flow pattern of each of the three states of flow (laminar, transitional, and turbulent) different?
• Does the observed flow condition occur within the expected Reynolds number range for that condition?
• Discuss your observations and any sources of error in the calculation of the Reynolds number.
• Compare the experimental results with any theoretical studies you have undertaken.
|
textbooks/eng/Civil_Engineering/Book%3A_Applied_Fluid_Mechanics_Lab_Manual_(Ahmari_and_Kabir)/01.7%3A_Experiment_%25237%3A_Osborne_Reynolds%27_Demonstration.txt
|
1. Introduction
Vortices can occur naturally or be produced in a laboratory. There are two types of vortices: free vortices and forced vortices. A free vortex is formed, for example, when water flows out of a vessel through a central hole in the base. No external force is required to rotate the fluid, and the degree of rotation is dependent upon the initial disturbance. Whirlpools in rivers and tornadoes are examples of natural free vortices. A forced vortex, on the other hand, is caused by external forces on the fluid. It can be created by rotating a vessel containing fluid or by paddling in fluid. Rotational flow created by impellers of a pump is an example of a forced vortex in turbomachinery.
2. Practical Application
Studying natural phenomena such as hurricanes, tornadoes, and whirlpools (free vortices) requires a full understanding of vortex behavior. It is also critical for engineers and designers to be able to characterize forced vortices generated in machinery, such as centrifugal pumps or turbines. Vortices often have adverse effects, as has been seen during hurricanes, tornadoes, or in scour holes created downstream of a dam outlet; however, understanding vortex behavior has enabled engineers to design turbomachinery and hydraulic structures that take advantage of these phenomena. For example, hydrodynamic separators have been developed, based on vortex behavior (swirling flow), to separate solid materials from liquids. This type of separator is used in water treatment plants.
3. Objective
The objective of this lab experiment is to study and compare the water surface profiles of free and forced vortices.
4. Method
This experiment is performed by measuring the water surface profiles of a number of free and forced vortices, and observing the differences. We will study the profiles of free vortices that are produced when water flows from orifices of different diameters that are installed at the base of a tank. Varying the size of the orifice creates changes in the flow rate, thereby changing the rotational speed and size of the vortex profile. Forced vortices are created due to external forces, so we will increase the rotational speed throughout the experiment to study the theoretical and experimental relationships between the vortex surface profile and angular velocity.
5. Equipment
The following equipment is required to perform the free and forced experiment:
• P6100 hydraulics bench, and
• P6238: Free and forced vortices apparatus.
6. Equipment Description
The free and forced vortices apparatus consists of a transparent cylindrical vessel, 250 mm in diameter and 180 mm deep, with two pairs of diametrically opposed inlet tubes of 9.0 mm and 12.5 mm diameter. The 12.5 mm diameter inlet tubes are angled at 15° to the diameter in order to create a swirling motion of the water entering the vessel during the free vortex experiment (Figure 8.1a). An outlet is centrally positioned in the base of the vessel, and a set of push-in orifices of 8, 16, and 24 mm diameter (Figure 8.1b) is supplied to reduce the outlet diameter to a suitable value and produce free vortices of different sizes. The vortex surface profile is determined by a measuring caliper (Figure 8.1c) mounted on a bridge, which measures the diameter of the vortex at various elevations. This provides the coordinate points that are required for plotting the free vortex profile [8].
The forced vortex is created by positioning a bushed plug in the central hole of the vessel and introducing the flow through 9 mm inlet tubes that are angled at 60° to the diameter. The water inflow from these tubes impinges on a two-blade paddle. The water exits the vessel via the 12.5 mm angled inlet tubes that are used as entry tubes for the free vortex experiment. The two-bladed paddle rotates on a vertical shaft supported by the bushed plug. A bridge piece mounted on top of the vessel houses a series of needles (Figure 8.1d) to determine the coordinates of the forced vortex profile [8].
A 3-way valve allows water to be diverted through the 12.5 mm inlet tubes for the free vortex experiment, and 9 mm inlet tubes for the forced vortex experiment.
Figure 8.1: a) P6238 CUSSONS free and forced vortex apparatus, b) push-in orifices, c) free vortex measuring caliper, d) forced vortex measuring probes
7. Theory
Two types of vortices are distinguished in the dynamics of the motion: forced and free vortices. The forced vortex is caused by external forces on the fluid, such as the impeller of a pump, and the free vortex naturally occurs in the flow and can be observed in a drain or in the atmosphere of a tornado.
7.1. Free Vortex
A free vortex is formed when water flows out of a vessel through a central hole in the base (Figure 8.2). The degree of rotation depends on the initial disturbance. In a free cylindrical vortex, the velocity varies inversely with the distance from the axis of rotation (Figure 8.3):

v = k/r (1)

The equation governing the surface profile is derived from Bernoulli's theorem:

v²/(2g) + z = C (2)

Substituting Equation (1) into (2) gives a new expression:

k²/(2gr²) + z = C (3)

or:

z = C − k²/(2gr²) (4)

which is the equation of a hyperbolic curve (Figure 8.4).

This curve is asymptotic to the axis of rotation and to the horizontal plane through z = C.
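The free-vortex surface profile z = C − k²/(2gr²) can be evaluated numerically, for example when comparing a measured profile with theory. The sketch below assumes an illustrative vortex strength k (from v = k/r) and datum constant C; it is not part of the apparatus documentation:

```python
G = 9.81  # gravitational acceleration, m/s^2

def free_vortex_height(k, r, C=0.0):
    """Surface height z = C - k^2 / (2 g r^2) of a free vortex with v = k/r.

    r must be nonzero; the surface falls away hyperbolically toward the axis
    and approaches z = C far from it.
    """
    return C - k**2 / (2.0 * G * r**2)
```

The depression deepens rapidly as r decreases, which produces the hyperbolic shape of Figure 8.4 (left).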
7.2. Forced Vortex
When water is forced to rotate at a constant angular velocity ω (Figure 8.2), the velocity at radius r is:

v = ωr (5)

The velocity head (or kinetic energy per unit weight) can be calculated as:

hc = v²/(2g) (6)

Substituting Equation (5) into (6) results in:

hc = ω²r²/(2g) (7)

If the horizontal plane passing through the lowest point of the vortex is selected as datum, the total energy is equal to:

H = ho + hc (8)

where ho is the pressure head at the datum. Substituting hc from Equation (7) into (8) gives:

H = ho + ω²r²/(2g) (9)

At r = 0, H = 0; therefore, ho = 0, and:

H = ω²r²/(2g) (10)
This is the equation of the water surface profile, which is a parabola (Figure 8.4).
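For the data reduction in this experiment, the angular velocity from a revolution count (ω = 2πN/T) and the theoretical parabolic profile H = ω²r²/(2g) of Equation 10 can be sketched as follows (function names are illustrative):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def angular_velocity(revolutions, time_s):
    """omega in rad/s from a count of paddle revolutions over time_s seconds."""
    return 2.0 * math.pi * revolutions / time_s

def forced_vortex_height(omega, r):
    """Theoretical surface height above the lowest point (Equation 10):
    H = omega^2 r^2 / (2 g), with r in metres."""
    return omega**2 * r**2 / (2.0 * G)
```

Ten revolutions in 20 s give ω = π rad/s, and at the vessel edge (r = 0.125 m) the predicted rise is about 7.9 mm.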
Figure 8.2: Free vortex (left) and forced vortex (right) produced in the lab
Figure 8.3: Velocity profile of free vortex (left) and forced vortex (right)
Figure 8.4: Surface profile of a free vortex (left) and a forced vortex (right)
8. Experimental Procedure
A YouTube element has been excluded from this version of the text. You can view it online here: uta.pressbooks.pub/appliedfluidmechanics/?p=239
This experiment will be performed in two parts: free vortex and forced vortex.
8.1. Free Vortex
• Position the apparatus on the hydraulics bench so that the central outlet in the base of the vessel is located over the weir trough.
• Adjust the feet to ensure that the apparatus is level.
• Push the 24 mm diameter orifice into the central outlet located in the base of the apparatus.
• Connect the inlet pipe of the apparatus (situated on the 3-way valve) to the hydraulics bench outlet, using the flexible pipe provided.
• Close the apparatus outlet globe valve, and position the 3-way valve so that water flows into the vessel via the 15-degree inlet ports.
• Close the bench outlet valve, and turn on the pump.
• Gradually open the bench valve, and allow the vessel to fill with water until water begins to overflow through the cutouts.
• After the vessel is slightly overflowing, slowly open the outlet valve so that the water level maintains a stable height. Note that you can also adjust the bench valve to maintain a constant water level.
• After a constant water level has been achieved, measure the water surface profile, by adjusting the measuring caliper to a desired radius, and then lower it into the vortex until the needles evenly touch the walls of the vortex. At this point, record the height indicated by the caliper and repeat the procedure for the remaining radii (Figure 8.5).
• After completing your measurements, close the bench valve, turn off the pump, drain the apparatus, and repeat the process for the remaining two orifices.
Note: The vortex profile tends to wander, so the vortex diameter- measuring gauge arm should be positioned at 90° to the main arm. This allows a meaningful vortex diameter measurement to be made.
8.2. Forced Vortex
• Position the bushed plug into the outlet of the vessel and mount the two-blade paddle wheel on the shaft, ensuring that the tapered edges of the blades angle upward.
• Adjust the 3-way valve so that water flows into the vessel via the 60-degree inlet ports. Turn on the pump, open the bench control valve, and allow the vessel to fill with water until water just begins to overflow through the cutouts. Note that the inlet flow may need to be adjusted in order to achieve a low-profiled, calm vortex. Water will now flow through these ports and impinge on the paddle wheel before flowing out of the apparatus via the two 15-degree ports.
• After the vessel is filled with water, adjust the outlet valve so that the water level remains stable.
Note: If the water level fluctuates, raise the free end of the outlet tube above the grade line of the water in the vessel, and then lower it again into the bench tank. Doing this will ensure that water discharges at the same rate that it flows in, thereby helping to maintain the water level.
• After the water level is stable, measure the vortex surface profile. This is done by mounting the measuring bridge to the vessel, and then lowering the needles until they are touching the profile of the vortex. Lock them in place, then remove the bridge, and measure the height of each needle. It is recommended that this be done with a graph or engineering paper.
• Record the time that it takes for the paddles to make 10 revolutions in the vessel. The angular velocity of the flow is then ω = 2πN/T in rad/s, where N is the number of revolutions and T is the measured time in seconds.
• Increase the inflow rate to achieve higher angular velocity, and repeat the process so that you have four distinct vortex profiles. Note that as you increase the inflow, you will need to adjust the outlet flow to maintain the water level. As you increase the flow rate, change the count of the revolutions to 20, 40, and 50.
Figure 8.5: Example of water surface profile in free vortex experiment
9. Results and Calculations
Please visit this link to access the Excel workbook for this experiment.
9.1. Results
Use the following tables to record your measurements.
Raw Data Table: Free Vortex

D (mm) | H (mm), 24-mm Orifice | H (mm), 16-mm Orifice | H (mm), 8-mm Orifice
80 | | |
70 | | |
60 | | |
50 | | |
45 | | |
40 | | |
35 | | |
30 | | |
Raw Data Table: Forced Vortex
No. of Revolutions (N) | T (sec) | Measured height H (mm) at distance from center r = 125 (edge), 110, 90, 70, 50, 30, and 0 mm
10 | |
20 | |
40 | |
50 | |
9.2. Calculations
a) Free Vortex
Record the coordinate points (D and H) for the three vortex profiles, using Figure 8.5 and the Raw Data Table – Free Vortex.
Result Table: Free Vortex
24-mm orifice 16-mm orifice 8-mm orifice
D (mm) H (mm) D (mm) H (mm) D (mm) H (mm)
80 80 80
70 70 70
60 60 60
50 50 50
45 45 45
40 40 40
35 35 35
30 30 30
b ) Forced Vortex
For all series of experiments with N=10, 20, 40, and 50,
• Calculate angular velocity,
• Calculate the theoretical water surface profile, using Equation 10.
Result Table – Forced Vortex
Distance from the center, r (mm)
N=10 N = 20 N = 40 N = 50
ω (rad/s) H (mm) cal. H (mm) meas. ω (rad/s) H (mm) cal. H (mm) meas. ω (rad/s) H (mm) cal. H (mm) meas. ω (rad/s) H (mm) cal. H (mm) meas.
0
30
50
70
90
110
125
cal.= calculated; meas.= measured.
10. Report
Use the template provided to prepare your lab report for this experiment. Your report should include the following:
• Table(s) of raw data
• Table(s) of results
• Graph(s)
• Plot all three measured surface profiles of the free vortices on one chart ( r as x-axis and H as y-axis). Note that your plots should look like Figure 8.4 (left): as the depth increases, the radius of the vortex decreases.
• Plot measured and calculated water surface profiles for forced vortices on one chart ( r as x-axis and H as y-axis). Note that you need to prepare one chart for each N value (a total of 4 graphs), and the surface profiles should look like Figure 8.4 (right).
• Compare and discuss the calculated and measured surface profiles of forced vortices.
• Discuss briefly the practical applications of free and forced vortices.
• Comment on the possible sources of error (e.g., variance from ideal vortex motion).
|
textbooks/eng/Civil_Engineering/Book%3A_Applied_Fluid_Mechanics_Lab_Manual_(Ahmari_and_Kabir)/01.8%3A_Experiment_%25238%3A_Free_and_Forced_Vortices.txt
|
1. Introduction
A weir is a barrier across the width of a river or stream that alters the characteristics of the flow and usually results in a change in the height of the water level. Several types of weirs are designed for application in natural channels and laboratory flumes. Weirs can be broad-crested, short-crested, or sharp-crested. Sharp-crested weirs, commonly referred to as notches, are manufactured from sharp-edged thin plates. The relationship between the flow rate and water depth above the weir can be derived by applying Bernoulli's equation and by making some assumptions with regard to head loss and pressure distribution of the flow passing over the weir. A coefficient of discharge needs to be determined experimentally for each weir to account for errors in estimating the flow rate that are due to these assumptions.
2. Practical Application
Weirs are commonly used to measure or regulate flow in rivers, streams, irrigation canals, etc. Installing a weir in an open channel system causes critical depth to form over the weir. Since there is a unique relationship between the critical depth and discharge, a weir can be designed as a flow-measuring device. Weirs are also built to raise the water level in a channel to divert the flow to irrigation systems that are located at higher elevations.
3. Objective
The objectives of this experiment are to:
a) determine the characteristics of flow over a rectangular and a triangular weir, and
b) determine the value of the discharge coefficient for both notches.
4. Method
The coefficients of discharge are determined by measuring the height of the water surface above the notch base and the corresponding flow rate. The general features of the flow can be determined by direct observation.
5. Equipment
The following equipment is required to perform the flow over weirs experiment:
• F1-10 hydraulics bench;
• F1-13 rectangular and triangular weirs;
• Vernier height gauge; and
• Stopwatch.
6. Equipment Description
The flow over the weir apparatus includes the following elements that are used in conjunction with the flow channel in the molded bench top of the hydraulics bench (Figure 9.1).
• A combination of a stilling baffle and the inlet nozzle to promote smooth flow conditions in the channel.
• A vernier hook and point gauge, mounted on an instrument carrier, to allow measurement of the depth of flow above the base of the notch.
• The weir notches that are mounted in a carrier at the outlet end of the flow channel [9].
Figure 9.1: Hydraulics bench and weir apparatus
7. Theory
The depth of water above the base of a weir is related to the flow rate through it; therefore, the weir can be used as a flow measuring device. The relationships of flow over weirs can be obtained by applying the energy equation from a point well upstream of the weir to a point just above the weir crest. This approach requires a number of assumptions, and it yields the following results:
• for a triangular weir (Figure 9.2a):

Q = (8/15) Cd √(2g) tan(θ/2) H^(5/2) (1)

• for a rectangular weir (Figure 9.2b):

Q = (2/3) Cd √(2g) b H^(3/2) (2)

where:
Q : flow rate;
H : height above the weir base;
b : width of rectangular weir (R-notch);
θ : angle of triangular weir (V-notch);
Cd : discharge coefficient to account for the effects of simplifying assumptions in the theory, which has to be determined by experiment [9].
Figure 9.2: (a) Triangular weir, (b) Rectangular weir

Rearranging Equations (1) and (2) gives the experimental discharge coefficient:

• for a V-notch:

Cd = Q / [(8/15) √(2g) tan(θ/2) H^(5/2)] (3)

• for an R-notch:

Cd = Q / [(2/3) √(2g) b H^(3/2)] (4)
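The experimental coefficient can be computed directly from each (Q, H) reading with the standard sharp-crested weir expressions, Cd = Q/((2/3)√(2g) b H^(3/2)) for the rectangular notch and Cd = Q/((8/15)√(2g) tan(θ/2) H^(5/2)) for the triangular notch. A minimal sketch, with the notch width and angle defaulting to the values used in this experiment (function names are illustrative):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def cd_rectangular(Q, H, b=0.03):
    """Experimental Cd for the R-notch (b = 0.03 m in this experiment).
    Q in m^3/s, H in m."""
    return Q / ((2.0 / 3.0) * math.sqrt(2.0 * G) * b * H**1.5)

def cd_triangular(Q, H, theta_deg=90.0):
    """Experimental Cd for the V-notch (theta = 90 degrees in this experiment).
    Q in m^3/s, H in m."""
    half = math.radians(theta_deg) / 2.0
    return Q / ((8.0 / 15.0) * math.sqrt(2.0 * G) * math.tan(half) * H**2.5)
```

A quick consistency check is the round trip: a discharge generated from an assumed Cd should return that same Cd.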
8. Experimental Procedure
A YouTube element has been excluded from this version of the text. You can view it online here: uta.pressbooks.pub/appliedfluidmechanics/?p=241
This experiment will be performed by taking the following steps:
• Ensure that the hydraulics bench is positioned so that its surface is horizontal. This is necessary because the flow over the notch is driven by gravity.
• Mount the rectangular notch plate onto the flow channel, and position the stilling baffle as shown in Figure 9.3.
• Turn on the pump, and slightly adjust the flow control to fill the channel upstream of the weir with water.
• Turn off the pump when the water starts to flow over the weir.
• Wait a few minutes to allow the water to settle.
• Level the point gauge with the water level in the channel. Record the reading as ho.
Note: To measure the datum height of the base of the notch (ho), position the instrument carrier as shown in Figure 9.3. Then carefully lower the gauge until the point is just above the notch base, and lock the coarse adjustment screw. Then, using the fine adjustment, adjust the gauge until the point just touches the water surface and take a reading, being careful not to damage the notch.
• Adjust the point gauge to read 10 mm greater than the datum.
• Record the reading as h.
• Turn on the pump, and slightly adjust the flow until the water level coincides with the point gauge. Check that the level has stabilized before taking readings.
• Measure the flow rate using the volumetric tank.
• Observe the shape of the nappe and take pictures of it.
Note: The surface of the water will fall as it approaches the weir. This is particularly noticeable at high flow rates with high heads. To obtain an accurate measurement of the undisturbed water level above the crest of the weir, it is necessary to place the measuring gauge at a distance of at least three times the head above the weir.
• Increase the flow by opening the bench regulating valve to set the heads above the datum level in 10 mm increments until the regulating valve is fully open. Take care not to allow spillage to occur over the plate top that is adjacent to the notch. At each condition, measure the flow rate and observe the shape of the nappe.
Note: To obtain a sufficiently accurate result, collect around 25 liters of water each time, or collect the water for at least 120 seconds.
• Close the regulating valve, stop the pump, and then replace the weir with the V-notch.
• Repeat the experiment with the V-notch weir plate, but with 5 mm increments in water surface elevation.
• Collect seven head and discharge readings for each weir.
Figure 9.3: Position of the notch and Vernier height gauge to set the datum.
9. Results and Calculations
Please visit this link to access the Excel workbook for this experiment.
9.1. Result
Use the following tables to record your measurements. Record any observations of the shape and the type of nappe, paying particular attention to whether the nappe was clinging or sprung clear, and of the end contraction and general change in shape. (See Figure 9.4 to classify the nappe).
Figure 9.4: Types of nappe: a) Springing clear nappe, b) Depressed Nappe, and c) Clinging Nappe
Raw Data Table: R-notch
Test No. Datum Height ho (m) Water Surface Elev. h(m) Volume Collected (L) Time for Collection (s)
1
2
3
4
5
6
7
Raw Data Table: V-notch
Test No. Datum Height ho (m) Water Surface Elev. h(m) Volume Collected (L) Time for Collection (s)
1
2
3
4
5
6
7
9.2. Calculations
The following dimensions from the equipment can be used in the appropriate calculations:
– width of rectangular notch (b) = 0.03 m
– angle of V-notch (θ) = 90°
• Calculate discharge (Q) and head (H) for each experiment, and record them in the Result Tables. For calculation purposes, the depth of the water above the weir is the difference between each water level reading and the datum reading, i.e., H = h − ho.
• Calculate H^(5/2) and H^(3/2) for the triangular and rectangular notches, respectively.
• For each measurement, calculate the experimental value of Cd for the triangular and rectangular notches, using Equations 3 and 4, respectively.
• Record your calculations in the Results Tables.
Result Table: R-notch
No. H (m) Volume Collected (m3) Flow Rate (m3/s) H 3/2 Experimental Cd Theoretical Cd %Error
1
2
3
4
5
6
7
Result Table: V-notch
No. H (m) Volume Collected (m3) Flow Rate (m3/s) H 5/2 Experimental Cd Theoretical Cd %Error
1
2
3
4
5
6
7
10. Report
Use the template provided to prepare your lab report for this experiment. Your report should include the following:
• Table(s) of raw data
• Table(s) of results
• Graph(s)
• Schematic drawings or photos of the nappes observed during each experiment, with an indication of their type.
• Plot a graph of Q (y-axis) against H^(3/2) (x-axis) for the rectangular weir, and Q against H^(5/2) for the triangular weir. Use a linear function to plot the best fit, and express the relationship between Q and H^n in the form Q = s·H^n, in which the exponent n is 1.5 for the rectangular weir and 2.5 for the triangular weir, and s is the slope of the fitted line. Calculate the coefficients of discharge Cd (theoretical method) using Equations 5 and 6. Record Cd values calculated from the theoretical method in the Result Tables.
• for a rectangular notch:

Cd = s / [(2/3) √(2g) b] (5)

• for a triangular notch:

Cd = s / [(8/15) √(2g) tan(θ/2)] (6)
• Compare the experimental results to the theory by calculating the percentage of error.
• What are the limitations of the theory?
• Why would you expect wider variation of Cd values at lower flow rates?
• Compare the results for Cd of the weirs utilized in this experiment with those you may find in a reliable source (e.g., textbooks). Include in your report a copy of the tables or graphs you have used for textbook values of Cd.
• Discuss your observations and any sources of error in the calculation of Cd.
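The slope-based "theoretical" Cd described in the report instructions can be sketched as a zero-intercept least-squares fit of Q against Hⁿ, whose slope s is then divided by the geometric factor of the corresponding weir equation, (2/3)√(2g)·b for the rectangular notch and (8/15)√(2g)·tan(θ/2) for the triangular notch. The helper names below are illustrative:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def slope_through_origin(x, y):
    """Least-squares slope of y = s*x with zero intercept."""
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

def cd_rect_theoretical(H_list, Q_list, b=0.03):
    """Cd for the R-notch from the slope of Q vs H^(3/2)."""
    s = slope_through_origin([h**1.5 for h in H_list], Q_list)
    return s / ((2.0 / 3.0) * math.sqrt(2.0 * G) * b)

def cd_tri_theoretical(H_list, Q_list, theta_deg=90.0):
    """Cd for the V-notch from the slope of Q vs H^(5/2)."""
    t = math.tan(math.radians(theta_deg) / 2.0)
    s = slope_through_origin([h**2.5 for h in H_list], Q_list)
    return s / ((8.0 / 15.0) * math.sqrt(2.0 * G) * t)
```

Fitting the slope over all seven readings, rather than averaging per-reading Cd values, weights the high-head measurements more heavily, which is usually where the relative measurement error is smallest.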
|
textbooks/eng/Civil_Engineering/Book%3A_Applied_Fluid_Mechanics_Lab_Manual_(Ahmari_and_Kabir)/01.9%3A_Experiment_%25239%3A_Flow_Over_Weirs.txt
|
01: Digitization
This chapter presents the key challenges AECO is facing with the digitization of information and outlines the content of this book with respect to these challenges.
Information explosion
One of the key characteristics of our era is the explosive increase in information production and registration. It has been estimated that human societies had accumulated roughly 12 exabytes until the digital era. Then, annual information growth rates of 30% raised the total to 180 exabytes by 2006 and to over 1,800 exabytes by 2011. In the most recent period, the total more than doubled every two years, towards a projected 44 zettabytes by 2020 and 180 zettabytes by 2025.[1]
Such astounding calculations are updated regularly, with even more dramatic projections, so future totals may become even higher. The main reason for this is that the population of information users and producers keeps increasing and is currently expanding to cover devices generating and sharing data on the Internet of Things. But even if we reach a plateau at some point, as with Moore's "law" concerning the growth of computing capacity,[2] we already have an enormous problem on our hands.
The situation is further complicated by changing attitudes concerning information. Not so long ago, most people were afraid of information overload.[3] Nowadays with the general excitement about big data we have moved to the opposite view. From being a worry, the plethora of information we produce and consume has become an opportunity. Attitudes may change further, moreover in unpredictable ways, as suggested by reactions to the Facebook – Cambridge Analytica data breach in 2018.
Regardless of such attitudes, two things will not change. The first is that we have to manage existing information efficiently, effectively, securely and safely. The second is that the means of information production, dissemination and management will remain primarily digital.
Information explosion in AECO
The explosive growth of digital information relates to AECO in various ways. On one end of the spectrum, we have new information sources that produce big data, such as smartphones and sensors. These tell us a lot about users and conditions in the built environment, and so promise a huge potential for the analysis and improvement of building performance, while requiring substantial investment in technologies and organization. On the other end of the spectrum, there are established information and communication technologies that have already become commonplace and ubiquitous, also in AECO. Email, for instance, appears to dominate communication and information exchange[4] by offering a digital equivalent to analogue practices like letter writing. This replication of analogue practices dominates digital information processing in AECO: digital technologies and information standards are still geared towards the production of conventional documents like floor plans and sections.
In between these two extremes, we encounter domain-specific technologies that aim to structure AECO processes and knowledge. Currently paramount among these is BIM, an integrated approach that is usually justified with respect to performance.[5] Performance improvement through BIM involves intensive and extensive collaboration, which adds to both the importance and the burden of information. The wide adoption of BIM means rapid expansion to cover more aspects and larger projects, which accentuates interoperability, capacity and coordination problems. In a recent survey, 70% of AECO professionals claim that project information deluge actually impedes effective collaboration, while 42% feel unable to integrate new digital tools in their organizations.[6] This surely hampers the deployment of solutions to their information needs: AECO appears to share many of the problems of the digital information explosion, yet to profit relatively little from the information-processing opportunities of the digital era.
Digitization in AECO: origins and outcomes
AECO has always been an intensive producer and consumer of information. In fact, most of its disciplines primarily produce information on buildings rather than buildings themselves, e.g. drawings and related documents that specify what should be constructed and how. Drawings in particular have been a major commodity in AECO; they are ubiquitous in all forms of specification and communication, and quite effective in supporting all kinds of AECO tasks.
The history of digitization in AECO starts quite early, already in the 1960s, but with disparate ambitions. Some researchers were interested in automating design (even to the extent of replacing human designers with computers), while others were keen to computerize drawing. The two coexisted in the area of CAAD, with design automation being generally treated as the real goal. With the popularization of computers in the 1990s, however, it was computerized drawing that became popular in AECO practice.
As with other software, the primary use of computerized drawing systems has been the production of analogue documents: conventional drawings like floor plans and sections on paper. For many years, the advantages of computerized drawing were presented in terms of efficiency improvement over drawing by hand on paper: faster production of drawings, easier modification and compact storage. Even after the popularization of the Internet, the emphasis on conventional documents remained; the only difference was that, rather than producing and exchanging paper-based documents, one would produce and exchange digital files like PDFs.
A main consequence of this has been that AECO remained firmly entrenched in conventional, document-based processes. While other analogue documents like telephone directories were being replaced by online information systems and apps, and people adapted to having their day planners and address lists on mobile phones, AECO stubbornly stuck to analogue practices and documents, prolonging their life into the digital era.
BIM: radical intentions
Drawing from product modelling, BIM emerged as a radical improvement of computerized drawing that should provide a closer relation to design. The difference with earlier design automation attempts was that it did not offer prescriptive means for generating a design but descriptive support to design processes: collaboration between AECO disciplines, integration of aspects and smooth transition between phases. By doing so, it shifted attention from drawings to the information they contained.
The wide acceptance of BIM is unprecedented in AECO computerization. Earlier attempts at computerization were often met with reluctance, not least because of the cost of the hardware, software and training required to use them. The reception of BIM, by contrast, was much more positive, even though it was more demanding than its predecessors in terms of cost. Arguably more than its attention to information or collaboration, it was its apparent simplicity (a Lego-like assembly of a building) that made it appealing, especially to non-technical stakeholders. The arcane conventions and practices of analogue drawing no longer seemed necessary or relevant.
Nevertheless, BIM remained rooted in such conventions. It may have moved from the graphic to the symbolic but it did so through interfaces laden with graphic conventions. For example, entering a wall in BIM may be done in a floor plan projection as follows: the user selects the wall type and then draws a line to indicate its axis. As soon as the axis is drawn, the wall symbol appears fully detailed according to the wall type that has been chosen: lines, hatches and other graphic elements indicating the wall materials and layers. The axis is normally not among the visible graphic elements. Such attachment to convention makes it rather hard for users to understand that they are actually entering a symbol in the model rather than somehow generating a drawing.
More on such matters follows later in the book. For the moment, it suffices to note that BIM may indicate a step forward in the digitization of AECO information but it remains a hybrid environment that may confuse or obscure fundamental information issues. As such, it deserves particular attention and, being the best option for AECO for the moment, it is used as the main information environment discussed in this book. Future technologies are expected to follow the symbolic character of BIM, so any solutions developed on the basis of BIM will probably remain applicable.
Representation
Ideas about information and how it works can be vague or even confusing if one fails to realize that most of it is not unstructured or haphazard but organized in meaningful representations. These representations allow us to understand and utilize information effectively and economically. Consequently, they are critical for both information and digitization. As intensive but generally intuitive users of representations, we have to become aware of their structure and characteristics in order to understand how we process and disseminate information. We also have to appreciate that existing representations are not necessarily appropriate for the computer era. Computers have different capacities from humans, so familiar representations that we have been using successfully for centuries may have to be adapted or even abandoned.
This is evident in changes that have already occurred but are not always apparent, even to avid computer users. Anticipating the following chapters on representation, let us consider just one example of the effects of computerization: humans mostly use decimal numbers, arguably because we have ten fingers to help us with calculations, while computers use binary numbers because they are built out of components with two possible states (on and off). Humans are capable of using binary numbers but they require significantly more effort than decimal ones. As a result, while computers use binary numbers, user interfaces translate them into decimal ones. Despite the added burden of having to employ and connect two different representations, this solution works well for the symbiosis of computers and humans.
In dealing with information, one must therefore be aware of all representations involved, their connections and utility. This is a prerequisite to effective and reliable computerization, e.g. concerning the role and operation of interfaces. The same applies to the treatment of digital information: knowing the characteristics of a representation allows one to ascertain which data are well-formed and meaningful in the particular context.
Information management
Managing information is not just a task for managers and computer specialists. It involves everyone who disseminates, receives or stores information. Very few people are concerned with information management just for the sake of it; most approach information and its management in the framework of their own activities, for which information is an essential commodity. This makes management of information not an alien, externally imposed obligation but a key aspect of everyone’s information processing, a fundamental element in communication and collaboration, and a joint responsibility for all those involved. Given the amounts of information currently produced and exchanged, its careful management is a necessity for anyone who relies on information for their functioning or livelihood.
For these reasons, in this book we view management issues from two complementary perspectives: that of design management, as representative of all management, coordination and collaboration activities in AECO, and that of generic information management, not restricted to AECO, as a source of generally applicable principles and guidelines. As we shall see, the one depends on the other for providing a suitable solution to information management problems. As with all aspects of this book, emphasis is not on technical solutions but on the conceptual and operational structure of information management: the definition of clear approaches and transparent criteria for guiding people to a better performance and selecting or evaluating means that support them towards this goal.
The reasons for doing so are already rather pressing. Despite the broad acknowledgement of the information deluge in AECO, the development of effective IM approaches appears to be lagging behind. Information may hold a central position in AECO computerization, as the “I” in BIM testifies, yet IM in AECO is generally poorly specified as an abstract, background obligation in management — as something that additional computer systems should solve or as a reason to create additional management roles, such as project information managers, BIM and CAD managers and coordinators, so as to cover the increased technical complexity (not just quantity) of digital information. Such new computer systems and technical specializations nevertheless add to the complexity of IM by their mere presence, especially if they operate without clear goals.
A primary cause for confusion and uncertainty is the lack of a clear definition of information. Despite wide acknowledgement of its importance in all AECO products and processes, to the extent that perceptions of information in DM vary from a key means of communication and decision support to the main goal of design management, there is considerable fuzziness concerning what constitutes information in AECO. Many adopt a conventional view and equate information to drawings and other documents, even in the framework of BIM. As a result, IM is reduced to document management and to the use of document management systems, which often exist parallel to BIM, increasing redundancy and lowering overall efficiency.
Considering a document as information goes beyond using the carrier as a metaphor for the content, in the same way that we say “the Town Hall” to indicate the local authority accommodated in the building. It also reflects a strong adherence of AECO to conventional practices that have managed to survive into the digital era and may be uncritically replicated in digital information processing. For IM, this means that coordination of information production, exchange and utilization is in danger of being reduced to merely ensuring the presence of the right files, while most content-related matters, including quality assessment, are deferred to the human information users. It is therefore not surprising that both industry and academia complain that AECO has yet to define clear goals for information management and governance, even within BIM. Lots of data are captured but they are not always organized in ways that support comprehensive utilization.
IM literature is not particularly helpful in this respect. Arguably in keeping with its broad scope, IM is rather inclusive concerning what is to be managed and covers documents, applications, services, schemes and metadata. To make such disparate material coherent and usable, IM literature proposes processing it in ways that establish correlations between data or with specific contexts, classify and categorize or condense data. This may apply to conventional practices in AECO but is incompatible with new directions towards integration of information, as represented by BIM.
Finally, it should be stressed that IM is not a matter of brute force (by computers or humans) but of information organization. One can store all documents, files and models and hope for the best but stored information is not necessarily accessible and usable. As we know from search engines on the Internet, they can be very clever in retrieving what there is but this does not mean that they return the answers we need. If one asks for the specific causes of a fault in a building, it is not enough to receive all documents on the building from all archives to browse and interpret. Being able to identify the precise documents that refer to the particular part or aspect of the building depends on how the archives and the documents have been organized and maintained. To do that, one can rely on labour-intensive interpretation, indexing and cross-referencing of each part of each document – or one can try to understand the fundamental structure of these documents and build intelligent representations and management strategies based on them.
Key Takeaways
• Computerization has added substantial possibilities to our information-processing capacities and also promoted the accumulation of huge amounts of information, which keep on increasing
• Computerization in AECO is still in a transitional stage, bounded by conventions from the analogue era and confused by its dual origins: automation of design and digitization of drawing
• Information is often organized in representations; understanding how representations are structured and operate is a prerequisite to both computerization of information and its management
• Information management is becoming critical for the utilization of digital information; instead of relying on brute-force solutions, one should consider the fundamental principles on which it should be based
Exercises
1. Calculate how much data you produce per week, categorized in:
1. Personal emails
2. Social media (including instant messaging)
3. Digital photographs, video and audio for personal use
4. Study-related emails
5. Study-related photographs, video and audio
6. Study-related alphanumeric documents (texts, spreadsheets etc.)
7. Study-related drawings and diagrams (CAD, BIM, renderings etc.)
8. Other (please specify)
2. Specify how much of the above data is stored or shared on the Internet and how much remains only on personal storage devices (hard drives, SSD, memory cards etc.)
3. Calculate how much data a design project may produce and explain your calculations analytically, keeping in mind that there may be several design alternatives and versions. Use the following categories:
1. CAD or BIM files
2. PDFs and images produced from CAD & BIM or other software
3. Alphanumeric files (texts, spreadsheets, databases etc.)
4. Other (please specify)
4. Calculate how much of the above data is produced by different stakeholders, explaining your calculations analytically:
1. Architects
2. Structural engineers
3. MEP engineers
4. Clients
5. Managers
1. Calculations and projections of information accumulated by human societies can be found in: Lyman, P. & Varian, H.P., 2003. "How much information." http://groups.ischool.berkeley.edu/archive/how-much-info/; Gantz, J. & Reinsel, D., 2011. "Extracting value from chaos." www.emc.com/collateral/analyst-reports/idc-extracting-value-from-chaos-ar.pdf; Turner, V., Reinsel, D., Gantz, J.F. & Minton, S., 2014. "The digital universe of opportunities." https://www.emc.com/leadership/digital-universe/2014iview/digital-universe-of-opportunities-vernon-turner.htm
2. Simonite, T., 2016. "Moore’s Law is dead. Now what?" Technology Review https://www.technologyreview.com/s/601441/moores-law-is-dead-now-what/
3. The notion of information overload was popularized in: Toffler, A., 1970. Future shock. New York: Random House.
4. The dominance of email in AECO communication is reported in several sources, including a 2015 survey: www.newforma.com/news-resources/press-releases/70-aec-firms-say-information-explosion-impacted-collaboration/
5. Performance and in particular the avoidance of failures and related costs are among the primary reasons for adopting BIM, as argued in: Eastman, C., Teicholz, P.M., Sacks, R., & Lee, G., 2018. BIM handbook (3rd ed.). Hoboken NJ: Wiley.
6. Research conducted in the UK in 2015: www.newforma.com/news-resources/press-releases/70-aec-firms-say-information-explosion-impacted-collaboration/
02: Building Representation
This chapter introduces representations, in particular symbolic ones: how they are structured and how they describe things , including spatial ones . It explains that spatial symbolic representations are frequently graphs and presents some of the advantages of using such mathematical foundations. The chapter concludes with the paradigmatic and syntagmatic dimensions of representations, and their relevance for interpretation and management.
Symbolic representations
Many of the misunderstandings concerning information stem from our lack of understanding of representations and how these convey information. Representations are so central to our thinking that even if the sender of some information has failed to structure it in a representation, the receiver does so automatically. A representation can be succinctly defined as a system for describing a particular class of entities. The result of applying a representation to an entity is therefore a description. Representations of the symbolic kind, which proliferate in human societies, consist of two main components:
• A usually finite set of symbols
• Some rules for linking these symbols to the entities they describe
The decimal numeral system is such a symbolic representation. Its symbols are the familiar Hindu-Arabic numerals:
$S_D = \{0, 1, 2, 3, 4, 5, 6, 7, 8, 9\}$
The rules by which these symbols are linked to the quantities they describe can be summarized as follows:
$n_n \cdot 10^n + n_{n-1} \cdot 10^{n-1} + \ldots + n_1 \cdot 10^1 + n_0 \cdot 10^0$
These rules underlie positional notation, i.e. the description of a quantity as:
$n_n n_{n-1} \ldots n_1 n_0$
For example, the description of seventeen becomes:
$1 \cdot 10^1 + 7 \cdot 10^0 \Rightarrow 17$
The binary numeral system is essentially similar. Its symbol set consists of only two numerals and its rules employ two as a base instead of ten:
$S_B = \{0, 1\}$
$n_n \cdot 2^n + n_{n-1} \cdot 2^{n-1} + \ldots + n_1 \cdot 2^1 + n_0 \cdot 2^0$
This means that seventeen becomes:
$1 \cdot 2^4 + 0 \cdot 2^3 + 0 \cdot 2^2 + 0 \cdot 2^1 + 1 \cdot 2^0 \Rightarrow 10001$
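The positional rules above can be sketched as a short conversion routine in Python (the function name and structure are illustrative, not part of the text):

```python
def to_positional(n, base):
    """Digits of a non-negative integer n in the given base,
    most significant first (positional notation)."""
    digits = []
    while True:
        digits.append(n % base)  # next coefficient n_i
        n //= base
        if n == 0:
            break
    return digits[::-1]

print(to_positional(17, 10))  # [1, 7]          -> "17"
print(to_positional(17, 2))   # [1, 0, 0, 0, 1] -> "10001"
```

The same routine covers both the decimal and the binary case, underlining that the two representations differ only in their symbol set and base.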
There are often alternative representations for the same class of entities. Quantities, for example, can be represented by (from left to right) Roman, decimal and binary numerals, as well as one of many tally mark systems:
XVII = 17 = 10001 = IIIII IIIII IIIII II
A representation makes explicit only certain aspects of the described entities. The above numerical representations concern quantity: they tell us, for example, that there are seventeen persons in a room. The length, weight, age or other features of these persons are not described. For these, one needs different representations.
Each representation has its advantages. Decimal numerals, for example, are considered appropriate for humans because we have ten fingers that can be used as an aid to calculations. Being built out of components with two states (on and off), computers are better suited to binary numerals. However, when it comes to counting ongoing results like people boarding a ship, tally marks are better suited to the task. Some representations may not be particularly good at anything: it has been suggested that despite their brilliance at geometry, ancient Greeks and Romans failed to develop other branches of mathematics to a similar level because they lacked helpful numeral representations.
Symbols and Things
The correspondence between symbols in a representation and the entities they denote may be less than perfect. This applies even to the Latin alphabet, one of the most successful symbolic representations and a cornerstone of computerization. Using the compact set of symbols in an alphabet instead of syllabaries or logographies (i.e. graphemes that correspond to syllables or words) is an economical way of describing sounds (phonemes) in a language. This turns a computerized text into a string of ASCII characters that combine to form all possible words and sentences. Imagine how different text processing in the computer would be if its symbols were not alphabetic characters but pixels or lines like the strokes we make to form the characters in handwriting.
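As a minimal illustration of this point (a hypothetical Python fragment, not from the text), a computerized text is stored as a sequence of character codes rather than strokes or pixels:

```python
# A computerized text is a string of character codes, not strokes or pixels.
text = "cat"
codes = [ord(c) for c in text]          # one ASCII code per character
print(codes)                            # [99, 97, 116]
print("".join(chr(n) for n in codes))   # back to "cat"
```

Every operation on the text (searching, sorting, spell-checking) works on these compact codes, which is what makes the alphabetic representation so economical for computers.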
At the same time, the correspondence between Latin alphabet graphemes and the phonemes in the languages that employ them is not straightforward. In English, for example, the letter A may denote different phonemes:
• ɑ: (as in ‘car’)
• æ (as in ‘cat’)
• ɒ (as in ‘call’)
• ə (as in ‘alive’)
• ɔ: (as in ‘talk’)
The digraph TH can be either:
• θ (as in ‘think’) or
• ð (as in ‘this’)
Conversely, the phoneme eɪ can be written either as:
• AY (as in ‘say’)
• EI (as in ‘eight’)
The lesson we learn from these examples is that abstraction and context are important in representation. Abstraction allows for less strict yet still clear relations between symbols and things, as with the letter A which represents only vowels. A one-to-many correspondence like that is trickier than a simple one-to-one relation but is usually clarified thanks to the context, in our case proximal alphabetic symbols: ‘car’ and ‘cat’ are very similar strings but most English learners soon learn that they are pronounced differently and associate the right phoneme with the word rather than the letter. Similarly, in the floor plan of a building one soon learns to distinguish between two closely spaced lines denoting a wall and two very similar lines representing a step (Figure 1).
Spatial symbolic representations
Symbolic representations are also used for spatial entities. Familiar examples are metro and similar public transport maps. A common characteristic of many such maps is that they started life as lines drawn on a city map to indicate the route of each metro line and the position of the stations (Figure 2). As the size and complexity of the transport networks increased, the metro lines and stations were liberated from the city maps and became separate, diagrammatic maps: spatial symbolic representations, comprising symbols for stations and connections between stations (Figure 3). The symbols are similar for each line but may be differentiated e.g. by means of shape or colour, so that one can distinguish between lines. The symbol set for a metro network comprising two lines (the red O line and the blue Plus line) would therefore consist of the station symbol for the red line, the station symbol for the blue line, the connection symbol for the red line and the connection symbol for the blue line:
$S_M$ = {o, +, |o, |+}
The rules that connect these symbols to real-world entities can be summarized as follows:
• Each station on a metro line (regardless of the complexity of the building that accommodates it) is represented by a station symbol of that line
• Each part of the rail network that connects two stations of the same line is represented by a line symbol of that line
These common-sense, practical principles underlie many intuitive attempts at spatial representation and, as discussed later on, even a branch of mathematics that provides quite useful and powerful means for formalizing and analysing symbolic spatial representations.
Our familiarity with metro maps is to a large degree due to their legibility and usability, which make them excellent illustrations of the strengths of a good representation. As descriptions of a city transport system, they allow for easy and clear planning of travels, facilitate recognition of interchanges and connections, and generally provide a clear overview and support easy understanding. To manage all that, metro maps tend to be abstract and diagrammatic (as in Figure 2), in particular by simplifying the geometry of the metro lines (usually turning them into straight lines) and normalizing distances between stations (often on the basis of a grid). As a consequence, metro diagrams are inappropriate for measuring geometric distances between stations. Still, as travelling times on a metro often depend mostly on the number of stations to be traversed, metro maps are quite useful for estimating the time a trip may take. However, for finding the precise location of a station, city maps are far more useful.
A comparison of metro maps to numerals leads to the suggestion that the increase in dimensionality necessitates explicit representation of relations between symbols. In the one-dimensional numerals, relations are implicit yet unambiguous: positional notation establishes a strict order that makes evident which numeral stands for hundreds in a decimal number and how it relates to the numerals denoting thousands and tens. Similarly, in another kind of one-dimensional representation, spaces and punctuation marks are used in alphabetic texts to indicate the clustering of letters into words, sentences and paragraphs, and thus facilitate understanding of not only phonemes but also meanings in the text.
In two-dimensional representations like the metro diagrams, proximity between two station symbols does not suffice for inferring the precise relation between them. One needs an explicit indication like a line that connects the two symbols. A metro map missing such a connection (Figure 4) is puzzling and ambiguous: does the missing connection mean that a metro line is still under development or simply that the drawing is incomplete by mistake? Interestingly, such an omission in a metro diagram is quite striking and does not normally go unnoticed, triggering questions and interpretations, which will be discussed in the chapter on information theory (in relation to anti-data).
Similarly puzzling is a metro map where stations of different lines are close to each other, even touching (Figure 5): does this indicate that the stations are housed in the same building, so that one can change from one line to the other, or that the stations are close by but separate, in which case one has to exit the metro and enter it again (which may involve having to buy a new ticket)? In a metro map where stations are clearly connected or coincide (Figure 3), there is no such ambiguity concerning interchange possibilities.
Graphs
Diagrams like these metro maps are graphs: mathematical structures that describe pairwise relations between things. Graph theory in mathematics began in 1736 with Euler’s study of paths that crossed the bridges of Königsberg only once and has since gone from strength to strength. A key element of their success is that graphs are fairly simple but strictly structured diagrams consisting of vertices (or nodes) and edges (or lines) that link pairs of vertices. Vertices usually denote things and edges relations. In Figure 3, each metro station is a vertex and each connection between two stations an edge.
Graphs have a wide range of applications, from computer networks and molecular structures to the organization of a company or a family tree. The tools supplied by graph theory help analyse and quantify many aspects of such networks. For example, the degree of a vertex (the number of edges connected to it) is a good indication of complexity: in a metro map it indicates the number of lines that connect there. The degree can therefore be used to identify interchanges, as well as a basic measure of how busy each interchange might be. Another measure is the closeness of a vertex: its mean distance to all other vertices in the graph (distance being the number of edges in the shortest path between two vertices). Closeness is a good indication of a vertex’s centrality in a graph.
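Both measures are easy to compute from an adjacency-list description of a graph. The sketch below uses a hypothetical five-station metro (one line A–B–C–D with a branch C–E); the data and function names are invented for illustration:

```python
from collections import deque

# Hypothetical metro: line A-B-C-D with a branch C-E (undirected adjacency lists).
metro = {
    "A": ["B"],
    "B": ["A", "C"],
    "C": ["B", "D", "E"],
    "D": ["C"],
    "E": ["C"],
}

def degree(g, v):
    """Number of edges connected to vertex v."""
    return len(g[v])

def distances(g, src):
    """Edge-count distance from src to every reachable vertex (breadth-first search)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in g[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def closeness(g, v):
    """Mean distance from v to all other vertices."""
    d = distances(g, v)
    return sum(d[w] for w in g if w != v) / (len(g) - 1)

print(degree(metro, "C"))     # 3 -- a candidate interchange or busy station
print(closeness(metro, "C"))  # 1.25 -- the most central station in this network
```

Station C, where the branch attaches, has both the highest degree and the lowest mean distance to the other stations, matching the intuition that it is the busiest and most central point of this small network.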
The degree sequence of a graph is obtained by listing the degrees of its vertices. In the map of a metro line this sequence is a good expression not only of opportunities for crossing over to other lines but also an indication of how busy the line may become as passengers make use of such opportunities. One can measure complexity in the whole graph in other ways, too, e.g. through eccentricity: the greatest distance between a vertex and any other vertex in the graph. The eccentricity of a metro station relates to its remoteness or poor connectivity. The diameter of the graph is the greatest eccentricity of any vertex in it and its radius the smallest eccentricity of any vertex. Vertices with an eccentricity equal to the radius form the center of the graph, while those with an eccentricity equal to the diameter form the periphery. In a metro system, therefore, it is interesting to know how many stations form the center and should consequently be easily and quickly accessible, and how many are in the periphery.
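Eccentricity, diameter, radius, center and periphery all derive from the same breadth-first distances. A sketch for a hypothetical five-station metro (line A–B–C–D with a branch C–E; the data are invented for illustration):

```python
from collections import deque

# Hypothetical metro: line A-B-C-D with a branch C-E (undirected adjacency lists).
metro = {
    "A": ["B"],
    "B": ["A", "C"],
    "C": ["B", "D", "E"],
    "D": ["C"],
    "E": ["C"],
}

def eccentricity(g, v):
    """Greatest distance between v and any other vertex (breadth-first search)."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in g[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return max(dist.values())

ecc = {v: eccentricity(metro, v) for v in metro}
diameter = max(ecc.values())  # greatest eccentricity in the graph
radius = min(ecc.values())    # smallest eccentricity in the graph
center = sorted(v for v, e in ecc.items() if e == radius)
periphery = sorted(v for v, e in ecc.items() if e == diameter)

print(ecc)                 # {'A': 3, 'B': 2, 'C': 2, 'D': 3, 'E': 3}
print(center, periphery)   # ['B', 'C'] ['A', 'D', 'E']
```

The two central stations (B and C) are the ones every trip tends to pass through, while the three peripheral stations are the remotest ends of the line and its branch.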
Finally, in order to be able to travel on the metro, the graph has to be connected: each vertex should connect to every other vertex by some sequence of edges and vertices (the graph in Figure 5 is therefore not connected). In fact, this sequence should be a path: no vertex should occur twice. Any edge that divides a graph into two parts (as in Figure 4) is called a bridge. In our metro example, all edges are bridges, making the metro particularly sensitive: any problem between two stations can render it unusable, as passengers cannot move along alternative routes.
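Both properties can be checked directly on an adjacency-list graph: connectivity by breadth-first search, and bridges (naively, good enough for small graphs) by removing each edge in turn and re-testing connectivity. A sketch with a hypothetical four-station single line, names invented for illustration:

```python
from collections import deque

# Hypothetical single metro line A-B-C-D (undirected adjacency lists).
metro = {
    "A": ["B"],
    "B": ["A", "C"],
    "C": ["B", "D"],
    "D": ["C"],
}

def is_connected(g):
    """True if every vertex is reachable from an arbitrary start vertex."""
    start = next(iter(g))
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for w in g[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(g)

def bridges(g):
    """Naive bridge detection: an edge is a bridge if removing it
    disconnects the graph (quadratic, but fine for small networks)."""
    found = []
    for u, w in sorted({tuple(sorted((a, b))) for a in g for b in g[a]}):
        trimmed = {v: [x for x in nbrs if {v, x} != {u, w}]
                   for v, nbrs in g.items()}
        if not is_connected(trimmed):
            found.append((u, w))
    return found

print(is_connected(metro))  # True
print(bridges(metro))       # [('A', 'B'), ('B', 'C'), ('C', 'D')] -- every edge
```

On a single line without loops every edge is a bridge, which is precisely why such a metro is fragile: a single blocked segment splits the network in two.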
What the above examples illustrate is that a well-structured representation can rely on mathematical tools that help formalize its structure and analyses. This is important for two reasons: firstly, formalization makes explicit what one may recognize intuitively in a representation; secondly, it allows for automation, especially of analyses. Allowing computers to perform painstaking and exhaustive analyses complements, liberates and supports the creative capacities of humans.
Graphs and buildings
Graph-like representations are also used for buildings: architects, for example, use bubble and relationship diagrams to express schematically the spatial structure of a design (Figure 3). In such diagrams nodes usually denote spaces where some specific activities take place (e.g. “Expositions” or “Library”), while edges or overlaps indicate proximity or direct access.
On the basis of graph theory, more formal versions of such diagrams have been developed, such as access graphs. Here nodes represent spaces and edges represent openings like doors, which afford direct connection between spaces. Access graphs are particularly useful for analysing circulation in a building.[1]
The access graph demonstrates the significance of explicit structure: pictorially it may have few advantages over relationship diagrams, as both make explicit the entities in a representation and their relations. However, imposing the stricter principles of a mathematical structure reduces vagueness and provides access to useful mathematical tools. In a relationship diagram one may use both edges and overlaps to indicate relations, and shapes, colours and sizes to indicate properties of the nodes. In a graph, one must use only nodes and edges, and label them with the necessary attributes. This improves consistency and clarity in representation, similarly to the standardization of spelling in a language. It also facilitates application of mathematical measures which give clear indications of design performance. For example, the eccentricity of the node representing the space from where one may exit a building is a useful measure of how long it may take for people to leave the building, which is critical for e.g. fire egress. Similarly, the significance of a space for pedestrian circulation is indicated by its degree in the access graph, while spaces that form bridges are opportune locations for circulation control. For all these reasons, graphs are a representational basis to which we will return in several parts of this book.
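As an illustration of the egress measure, the following sketch computes the eccentricity of the exit node in a small, hypothetical access graph (all room names are invented for the example):

```python
from collections import deque

def eccentricity(access, start):
    """Greatest shortest-path distance (in doors) from start to any other space."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in access[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return max(dist.values())

# Hypothetical access graph: a hall connects the exit to two offices;
# a meeting room is reachable only through office 1.
access = {
    'exit':    ['hall'],
    'hall':    ['exit', 'office1', 'office2'],
    'office1': ['hall', 'meeting'],
    'office2': ['hall'],
    'meeting': ['office1'],
}
print(eccentricity(access, 'exit'))  # 3 — the meeting room is three doors from the exit
```

The higher the eccentricity of the exit node, the longer the worst-case escape route in the design.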
Paradigmatic and syntagmatic dimensions
In a symbolic representation we can analyse descriptions along two dimensions: the paradigmatic and the syntagmatic.[2] The paradigmatic dimension concerns the symbols in the representation, e.g. letters in a text. The syntagmatic dimension refers to the sequence by which these symbols are entered in the description. The meaning of the description relies primarily on the paradigmatic dimension: the symbols and their arrangement in the description. Syntagmatic aspects may influence the form of these symbols and their arrangement but above all reveal much about the cognitive, social and mechanical processes behind the representation and its application. For instance, in a culture where left-to-right writing is dominant, one would expect people to write numerals from left to right, too. However, the Dutch language uses a unit-before-ten structure for number words between 21 and 99 (as opposed to the ten-and-unit structure in English), e.g. “vijfentwintig” (five-and-twenty). Consequently, when writing by hand, e.g. noting down a telephone number dictated by someone else, one often sees Dutch people first enter the unit numeral, leaving space before it for the ten, and then backtrack to that space to enter the ten numeral. With a computer keyboard such backtracking is not possible, so the writer normally pauses after hearing the unit numeral, waits for the ten numeral and then enters them in the reverse order. Matching the oral representation to the written one may involve such syntagmatic peculiarities, which are moreover constrained by the implementation means of the representation (writing by hand or typing).
In drawing by hand, one may use a variety of guidelines, including perspective, grid and frame lines, which prescribe directions, relations and boundaries. These lines are normally entered first in the drawing, either during the initial setup or when the need for guidance emerges. The graphic elements of the building representation are entered afterwards, often in direct reference to the guidelines: if a graphic element has to terminate on a guideline, one may draw it from the guideline or, if one starts from the opposite direction, slow down while approaching the guideline, so as to ensure clear termination. Similar constraining influences may also derive from already existing graphic elements in the drawing: consciously or unconsciously one might keep new graphic elements parallel, similarly sized or proportioned as previously entered ones, terminate them against existing lines etc. Such mechanical and proportional dependence on existing graphic elements has led to the development of a wide range of object-snap options and alignment facilities in computerized drawing.
Any analysis of the paradigmatic dimension in a description aims at identifying symbols, e.g. relating each stroke in a handwritten text to a letter. To do that, one has to account for every stroke with respect to not only all symbols available in the representation but also various alternatives and variations, such as different styles of handwriting. Analyses of the syntagmatic dimension have to take into account not only the paradigmatic dimension (especially symbols and implementation mechanisms) but also cognitive, social and mechanical aspects that may have played a role in the temporal process of making a description, such as the tendency to draw from an existing graphic element to ensure clear termination. Similarly, in most BIM editors, one enters openings like doors or windows only after the walls that host them have been entered in the model, while rooms are defined only after the bounding walls have been completed.
Since it relates to the organization of a design project and the relations between members of a design team, the syntagmatic dimension is of particular relevance to the management of information processes. Thankfully, there are sufficient tools for registering changes in a digital representation, since adding a time stamp to the creation, modification and eventual deletion of a symbol in a computer program is easy and computationally inexpensive. Making sense of what these changes mean requires thorough analysis of the sequences registered and clear distinctions between possible reasons for doing things in a particular order.
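Registering such changes is indeed computationally trivial; a minimal sketch of a time-stamped change log (the symbol identifiers are illustrative, not any particular program's schema):

```python
from datetime import datetime, timezone

log = []

def record(action, symbol_id):
    """Time-stamp every creation, modification and deletion of a symbol."""
    log.append({
        'time': datetime.now(timezone.utc),
        'action': action,
        'symbol': symbol_id,
    })

record('create', 'wall-01')
record('modify', 'wall-01')
record('create', 'window-01')
record('delete', 'window-01')

# The syntagmatic record: the order in which symbols entered the description
print([(entry['action'], entry['symbol']) for entry in log])
```

Interpreting such a log — distinguishing, say, a correction from a change of design intent — is the hard part the text refers to; the registration itself costs next to nothing.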
The significance of the syntagmatic dimension increases with the dimensionality of the representation: in a one-dimensional representation like a text, the sequence by which letters are entered is quite predictable, including peculiarities like the way Dutch words for numbers between 21 and 99 are structured. In representations with two or more dimensions, one may enter symbols in a variety of ways, starting from what is important or opportune and moving iteratively through the description until it is complete (although completeness may be difficult to ascertain syntagmatically, making it unclear when the process should terminate). This clearly indicates the significance of the syntagmatic dimension for the management of 3D and 4D representations of buildings.
Key Takeaways
• Symbolic representations usually employ finite sets of symbols and rules to relate these symbols to specific classes of entities and produce descriptions of these entities
• Familiar spatial symbolic representations like metro diagrams are graphs: mathematical structures that describe pairwise relations between things, using nodes for the things and edges for the relations
• Graphs are a useful representational basis for buildings because they make symbols and relations between symbols explicit and manageable
• Symbolic descriptions have a paradigmatic and a syntagmatic dimension, relating respectively to the symbols they contain and the sequence by which the symbols have been entered in the description
• Interpretation of a description relies primarily on the paradigmatic dimension, while management strongly relates to the syntagmatic dimension
Exercises
1. Draw graphs for the above post-and-beam structure:
1. One using vertices for the posts and beams and edges for their connections
2. One using vertices for the junctions and edges for the posts and beams
2. Calculate the following for the above graphs:
1. The degree and eccentricity of each vertex
2. The diameter and radius of each graph
3. Draw an access graph for the following floor plan:
4. In the access graph:
1. Calculate the degree and eccentricity of each vertex
2. Calculate the diameter and radius of the graph
3. Indicate the vertices belonging to the center and the periphery
4. Identify any bridges in the access graph
1. Graph-based applications in the representation of buildings are discussed extensively in: Steadman, P., 1983. Architectural morphology: an introduction to the geometry of building plans. London: Pion.
2. The discussion on the paradigmatic and syntagmatic dimensions in visual representations draws from: Van Sommers, P., 1984. Drawing and cognition: descriptive and experimental studies of graphic production processes. Cambridge: Cambridge University Press.
To understand many of the problems surrounding building information, we first need to examine the analogue representations that still dominate AECO. The chapter presents some of the key characteristics that have made these representations so successful, although they do not necessarily agree with digital environments. Effective computerization relies on replacing the human abilities that enable analogue representations with capacities for information processing by machines.
Pictorial representations and geometry
Familiar building representations tend to be drawings on paper, such as orthographic projections like floor plans and sections, and projective ones, including isometrics and axonometrics: two-dimensional depictions of three-dimensional scenes, through which one tries to describe the spatial arrangement, construction or appearance of a building. What these drawings have in common is:
• They are pictorial representations (not symbolic)
• They rely heavily on geometry
Even though drawings were used in building design already in antiquity, it was in the Renaissance that applied geometry revolutionized the way Europeans represented and conceptualized space, in many cases raising the importance of the graphic image over the written text. Geometry was not merely a handy foundation for descriptive purposes, i.e. formalizing pictorial representations of buildings, but also a means of ordering space, i.e. organizing people’s experiences and thoughts to reveal some inherent order (including that of the cosmos). Consequently, building drawings evolved from schematic to precise and detailed representations that matched the perception of actual buildings, as well as most levels of decision making and communication about building design and construction.
Such empowerment gave geometry a central position in building design, with many architects and engineers becoming absorbed in geometric explorations closely linked to some presumed essence or ambition of their profession. With geometry forming both an overlay and underlay to reality, a complex relation developed between building design and geometry, involving not only the shape of the building but also the shape of its drawings. In turn, this caused building drawings to become semantically and syntactically dense pictorial representations, where any pictorial element, however small, can be significant for interpretation. By the same token, in comparison to more diagrammatic representations, the interpretation of building drawings involves a larger number of pictorial elements, properties and aspects, such as colour, thickness, intensity and contrast. As representations, building drawings were therefore considered a mixed and transitional case.[1]
The computerization of such complex, highly conventional analogue representations was initially superficial, aiming at faithful reproduction of their appearance. To many, the primary function of digital building representations, including not only CAD but also BIM, is the production of conventional analogue drawings either on paper (prints) or as identical computer files (e.g. a PDF of a floor plan). This makes computerization merely an efficiency improvement, especially concerning ease of drawing modification, compactness of storage and speed of dissemination. This is a testimony to the power and success of analogue building drawings but at the same time a major limitation to a fuller utilization of the information-processing capacities of computers. Analogue drawings work well in conjunction with human abilities for visual recognition, allowing us to develop efficient and effective means of specification and communication: most people recognize the same number of spaces in a floor plan on paper; scanning the floor plan transforms it into a computer file but computers generally only recognize it as an array of pixels. Recognizing the rooms and counting them by computer requires explicit representation of spaces.
Visual perception and recognition
Building drawings are surprisingly parsimonious: they manage to achieve quite a lot with a limited repertory of graphic primitives. With just a few kinds of lines, they produce floor plans, sections, perspectives etc., as well as depict a wide variety of shapes and materials in all these projections. To a large degree this is due to the ingenious ways they trigger the human visual system and allow us to see things. For example, we tend to associate similar elements if they are proximal. Therefore, closely parallel lines become depictions of walls but if the distance between the lines increases (beyond what might be plausible for a thick wall), they become just parallel lines. Seeing two lines as a wall does not necessarily mean they have to be strictly parallel or straight (Figure 1).
It is similarly easy to identify columns in a floor plan. Even more significantly, the arrangement (repetition, collinearity, proximity etc.) and similarity of columns allow us to recognize colonnades: groups of objects with a specific character (Figure 2). The colonnade may be recognizable even if the columns are not identical and their arrangement not completely regular (Figure 3). However, if the arrangement is truly irregular, proximity or similarity do not suffice for the recognition of a colonnade (Figure 4).
Probably the most unnoticed and yet striking part of reading a drawing concerns the recognition of spaces: in a floor plan, one enters graphic elements that develop into depictions of building elements and components, like walls, doors and windows. Spaces are what is left over on paper, essentially background coming through the drawing. Yet most people with a basic understanding of building drawings are capable of recognizing the spaces in a floor plan (inferring them from the bounding building elements) with precision, accuracy and reliability (Figure 5).
Abstraction and incompleteness
Pictorial representations are characterized by a high potential for abstraction, which is evident in the different scales of building drawings: a wall at a scale like 1:20 is depicted by a large number of lines indicating various layers and materials; at 1:100 the wall may be reduced to just two parallel lines; at 1:500 it may even become a single, relatively thick line. Similarly, a door in a floor plan at 1:20 is quite detailed (Figure 6), at 1:100 it is abstracted into a depiction that primarily indicates the door type (Figure 7) and at 1:500 it becomes just a hole in a wall (Figure 8). At all three scales both the wall and the door are clearly recognizable, albeit at different scales of specificity and detail. Such abstraction is largely visual: it mimics the perception of a drawing (or, for that matter, any object) from various distances. It also corresponds to the design priorities in different stages: in early, conceptual design, one tends to focus on general issues, zooming out of the drawing to study larger parts, while deferring details to later stages. Therefore, the precise type, function and construction of a door may be relatively insignificant, making abstraction at the scale of 1:500 suitable. However, that abstraction level is inappropriate for the final technical design, when one has to specify not just the function and construction of a door but also its interfacing with the wall. To do so, one has to zoom in and use a scale like 1:20 to view and settle all details.
In addition to visual abstraction, one may also reduce common or pertinent configurations, however complex, into a single, named entity, e.g. an Ionic or Corinthian column or a colonnade (Figure 2) or “third floor” and “north wing”. Such mnemonic or conceptual abstraction is constrained by visual recognition, as outlined above, but also relies on cultural convention: it is clearly not insignificant that we have a term for a colonnade. As such, mnemonic abstraction plays a more important role in symbolic representations than purely visual abstraction.
Pictorial representations are also relatively immune to incompleteness: a hastily drawn line on paper, with bits missing, is still perceived as a line (Figure 9). A house partially occluded by an obstacle is similarly perceived as a single, complete and coherent entity (Figure 10).
Dealing with incomplete descriptions is generally possible because not all parts are critical for understanding their meaning, even if they are not redundant. In English, for example, keeping only the consonants in a text may suffice for recognizing most words:
TH QCK BRWN FX JMPS VR TH LZY DG
(THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG)
This practice, currently known as disemvoweling, is widely applied in digital short messages. In the past, it was used to similar effect by telegraph operators, note takers and others who wanted to economize on message length and the time and effort required for writing or transmitting a message. Identifying the missing vowels is often a matter of context: ‘DG’ in a farmyard setting probably means ‘DOG’ but in an archaeological one it may stand for ‘DIG’. If a word contains many vowels, it may be hard even then: ‘JMPS’ is almost certainly ‘JUMPS’ in most contexts, but ‘DT’ as a shorthand for ‘IDIOT’ may be far from effective in any context.
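Producing such a message is trivial to automate; recovering the original, as the text notes, is the context-dependent part. A one-function sketch:

```python
import re

def disemvowel(text):
    """Drop all vowels, as telegraph operators and SMS writers did."""
    return re.sub(r'[AEIOUaeiou]', '', text)

print(disemvowel("THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG"))
# TH QCK BRWN FX JMPS VR TH LZY DG
```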
Likewise in images, some parts are more critical than others for recognition. A basic example is dashed lines: even with half of the line missing, the human visual system invariably recognizes the complete lines and the shapes they form (Figure 11).
Interestingly, a shape drawn with dashed lines is recognized more easily if the line junctions are present. This relates to a general tendency of the human visual system to rely on points of maximum curvature in the outline of shapes.[2] Corners, in particular, are quite important in this respect: the presence of corners makes it possible to perceive illusory figures (Figure 12). The form of a corner gives perceivers quite specific expectations concerning the position and form of other corners connected to it, regardless of rectilinear or curvilinear geometry (Figure 13). The presence of compatible corners in the image leads to perception of an illusory figure occluding other forms. Perception of the illusory figure weakens if occlusion occurs at non-critical parts of the figure, such as the middle of its sides (Figure 14).
The importance of corners underlay one of the early successes in artificial intelligence: using a typology of edge junctions (Figure 15) and expectations about the connectivity of these types and the orientation of surfaces that met there, researchers were able to use constraint propagation to recognize the composition of scenes with trihedral geometric forms: faces, volumes and their relative positions (Figure 16).[3]
The above examples illustrate how analogue representations can be parsimonious and simultaneously effective, but only if complemented with quite advanced and expensive recognition capacities. Empowering computers with such capacities is an emerging prospect but, for the moment at least, symbolic representations that contain explicit information are clearly preferable.
Implementation mechanisms
Another problem with analogue building representations is the overemphasis on geometry and the resulting dominance of implementation mechanisms over symbols. As symbols have to be implemented in various environments, one has to use means appropriate to each environment. A letter of the alphabet can be handwritten on paper with ink or graphite particles, depending on the writing implement (although one might claim that the strokes that comprise the letter are the real implementation mechanisms with respect to both the paradigmatic and the syntagmatic dimensions). In the computer, the same letter is implemented as an ASCII character in text-processing, spreadsheet and similar programs. In a drawing program, it may comprise pixels or vectors corresponding to the strokes (depending on the type of the program). In all cases, the symbol (the letter) is the same; what changes is the mechanisms used for its implementation.
With geometric primitives forming the graphic implementation mechanisms in pictorial building representations (underlay) and the ordering influence of geometry on building design (overlay), it has been easy to sidetrack attention to the geometric implementation mechanism of building representations, not only in the analogue but also in the digital versions. This geometric fixation meant lack of progress in CAD and also many misunderstandings in BIM.
To understand the true significance of geometric implementation mechanisms for the symbols in a building representation, consider the differences between alternative depictions of the same door in a floor plan (Figure 17). Despite differences between the graphic elements and their arrangement, they all carry the same information and are therefore equivalent and interchangeable. Many people reading the floor plan are unlikely to even notice such differences in notation, even in the same drawing, if the doors are not placed close to each other.
Using different door depictions for the same door type in the same drawing makes little sense. Differences in notation normally indicate different types of doors (Figure 18): they trigger comparisons that allow us to identify that there are different door types in the design and facilitate recognition of the precise differences between these types, so as to be able to judge the utility of each door in the design.
In conclusion, one of the key advantages of symbolic representations is the preeminence of symbols and the attenuation of confusion between symbols and implementation mechanisms relative to pictorial representations. In computerized texts, letters are not formed by handwritten strokes that produce the required appearance; the appearance of letters is added to the letter symbols through properties like their font and size. Analogue building representations are similar to handwritten texts in that they may put too much emphasis on graphic elements because it is only through the interpretation of these that one can know e.g. the materials and layers that comprise a wall. In a symbolic representation, the materials and composition of the wall are explicit properties of an explicit symbol, which can also be described alphanumerically. This removes ambiguity and makes visual displays one of the possible views of building information.
Key Takeaways
• Analogue building representations are mostly pictorial and rely heavily on geometry
• Visual perception and recognition are essential for the success of pictorial representations
• The reliance of analogue building representations on geometry leads to overemphasis on implementation mechanisms like graphic elements, even in digital environments
Exercises
1. Identify the building elements and components in Figure 6 and list the properties described graphically and geometrically in the drawing
2. List and explain the differences between the above and what appears in Figure 7 and Figure 8
1. There are many treatises on building drawings, their history, significance and relation to geometry. The summary presented here draws in particular from: Cosgrove, D., 2003. Ptolemy and Vitruvius: spatial representation in the sixteenth-century texts and commentaries. A. Picon & A. Ponte (eds) Architecture and the sciences: exchanging metaphors. Princeton NJ: Princeton University Press; Evans, R., 1995. The Projective Cast: Architecture and Its Three Geometries. Cambridge MA: MIT Press; Goodman, N., 1976. Languages of art; an approach to a theory of symbols (2nd ed.). Indianapolis IN: Hackett.
2. The significance of points of maximum curvature, corners and other critical parts of an image is described among others in: Attneave, F., 1959. Applications of information theory to psychology; a summary of basic concepts, methods, and results. New York: Holt; Kanizsa, G., 1979. Organization in vision: essays on Gestalt perception. New York: Praeger.
3. The algorithmically and conceptually elegant recognition of scenes with trihedral objects was finalized in: Waltz, D., 1975. Understanding line drawings of scenes with shadows. P.H. Winston (ed) The psychology of computer vision. New York: McGraw-Hill.
This chapter offers an overview of symbolic building representations in BIM, including their key differences from analogue representations and how these were implemented in CAD. It explains how a model is built out of symbols that may have an uneasy correspondence with real-world objects and how abstraction can be achieved using these symbols.
Symbols and relations in BIM
BIM[1] is the first generation of truly symbolic digital building representations. CAD also used discrete symbols but these referred to implementation mechanisms: the geometric primitives that comprised a symbol in analogue representations. In BIM the symbols explicitly describe discrete building elements or spaces – not their drawings. BIM symbols usually appear as “libraries” of elements: predefined symbols of various types. The types can be specific, such as windows of a particular model by a certain manufacturer or abstract, e.g. single-hung sash windows or even just windows. The hierarchical relations between types enable specificity and abstraction in the representation, e.g. deferring the choice of a precise window type or of a window manufacturer to a later design stage, without missing information that is essential for the current stage: all relevant properties of the window, like its size, position and general type, are present in the generic window symbol at a suitable abstraction level.
Entering an instance of any kind in a model normally follows this procedure:
• The user selects the symbol type from a library menu or palette
• The user positions and dimensions the instance in a geometric view like a floor plan, usually interactively by:
• Clicking on an insertion point for the location of the instance, e.g. on the part of a wall where a window should be
• Clicking on other points to indicate the window width and height relative to the insertion point (this only if the window does not have a fixed size)
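The placement procedure above can be caricatured in code. This is a deliberately simplified sketch, not the API of any actual BIM program; all names and structures are hypothetical:

```python
def place_instance(model, symbol_type, insertion_point, size=None):
    """Place an instance of a library symbol: the type is chosen first,
    then an insertion point; dimensions are asked for only when the type
    does not fix them itself."""
    instance = {
        'type': symbol_type['name'],
        'at': insertion_point,                      # e.g. a point on a hosting wall
        'size': size or symbol_type['fixed_size'],  # fall back on the type's size
    }
    model.append(instance)
    return instance

model = []
# A generic window type without fixed dimensions: the user must supply them
generic_window = {'name': 'generic window', 'fixed_size': None}
place_instance(model, generic_window, insertion_point=(3.0, 0.0), size=(0.9, 1.2))
# A manufacturer-specific type with fixed dimensions: no size needed
catalogue_window = {'name': 'Acme W-100', 'fixed_size': (1.0, 1.4)}
place_instance(model, catalogue_window, insertion_point=(6.0, 0.0))
print(len(model), model[1]['size'])  # 2 (1.0, 1.4)
```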
Modifications of the instance are performed in three complementary ways:
• Changes of essential properties such as the materials of a component amount to change of type. This is done by selecting a different symbol type from the library menu or palette and linking it to the instance.
• Changes in the geometry of an instance involve either repositioning the reference points or numerically changing the relevant values in any of the ways allowed by the program interface: in dialogue boxes that pop up by right-clicking on the instance, in properties palettes, through dimension lines or schedules.
• Changes in additional properties that do not conflict with the type, e.g. the occupancy of a space or the stage where a wall should be demolished, are entered through similar facilities in the interface, like a properties palette. Some of these properties are built in the symbols, while others can be defined by the user.
BIM symbols make all properties, geometric or alphanumeric, explicit: the materials of a building element are not inferred from its graphic appearance but are clearly stated among its properties, indicated either specifically or abstractly, e.g. “oak” or “wood”. Most properties in an instance are inherited from the type – not just materials but also any fixed dimensions: each wall type typically has a fixed cross section. Changing type properties like materials means crossing over to a different type, not changes in the instance properties. This ensures consistency in the representation by keeping all similar windows truly similar in all critical respects. This is essential for many tasks, such as cost estimation or procurement.
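The type–instance inheritance described here can be sketched as follows. These are hypothetical classes, not any particular BIM schema: essential properties live on the immutable type, instances inherit them, and changing a material means switching the instance to a different type, which keeps all remaining instances of the old type consistent:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class WindowType:
    """A library symbol: shared, essential properties live on the type."""
    name: str
    material: str
    width: float   # fixed dimensions, in metres
    height: float

@dataclass
class WindowInstance:
    """A placed symbol: position and free properties live on the instance."""
    wtype: WindowType
    host_wall: str
    offset: float                               # insertion point along the wall
    extra: dict = field(default_factory=dict)   # user-defined properties

    @property
    def material(self):
        return self.wtype.material  # inherited from the type, not stored here

oak_sash = WindowType('single-hung sash', 'oak', 0.9, 1.2)
w1 = WindowInstance(oak_sash, 'wall-07', 2.5)
w2 = WindowInstance(oak_sash, 'wall-07', 5.0)

# Changing material means crossing over to a different type:
pine_sash = WindowType('single-hung sash', 'pine', 0.9, 1.2)
w1.wtype = pine_sash
print(w1.material, w2.material)  # pine oak
```

Because the type is frozen, an essential property cannot drift on a single instance; cost estimation or procurement can rely on all instances of a type being truly alike.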
Many of the relations between symbols are also present in BIM, even if they are not always directly accessible. Openings like doors and windows, for example, are hosted by a wall. Therefore, normally they can only be entered after the hosting wall has been placed in the representation and in strict connection to it: trying to move a window out of a wall is not allowed. Connected walls may also have a specific relation, e.g. co-termination: if one is moved, the others follow suit, staying connected in the same manner. Similarly, spaces know their bounding elements (which also precede them in the representation) and if any of these is modified, they automatically adapt themselves. Through such relations, some of the possibilities offered by graphs become available in BIM, albeit often in indirect ways. A door schedule (Figure 1) reveals that, in addition to its hosting wall, a door knows which two spaces it connects (or separates when closed).
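The relations in such a door schedule can be read directly as the edges of an access graph. A sketch with invented door records, each knowing its hosting wall and the two spaces it connects:

```python
# Hypothetical door records, as one might extract from a BIM door schedule
doors = [
    {'id': 'D01', 'host': 'wall-03', 'connects': ('hall', 'office1')},
    {'id': 'D02', 'host': 'wall-05', 'connects': ('hall', 'office2')},
    {'id': 'D03', 'host': 'wall-09', 'connects': ('office1', 'meeting')},
]

# Build the access graph: spaces become nodes, doors become edges
access = {}
for door in doors:
    a, b = door['connects']
    access.setdefault(a, []).append(b)
    access.setdefault(b, []).append(a)

print(sorted(access['hall']))  # ['office1', 'office2']
print(access['meeting'])       # ['office1']
```

The graph measures discussed earlier (degree, eccentricity, bridges) then apply directly to the model's spaces.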
The explicit symbolic representation of both the ‘solids’ out of which a building is constructed (building elements like walls, floors, doors and windows) and the ‘voids’ of the building (the spaces bounded by the building elements) is important. In analogue representations, the spaces are normally implicit, i.e. inferred by the reader. Having them explicit in BIM means that we can manipulate them directly and, quite significantly from the perspective of this book, attach to them information that cannot be linked to building elements: similarly to specifying that a window is made of oak wood, one can specify that a space is intended for a particular use, e.g. “office”, and even for specific activities like “small group meeting” or “CEO’s meeting room”. Such characterizations relate to various performance specifications, such as acoustics or daylighting, which can also be attached to the space and be used to guide and evaluate the design. Making spaces explicit in the representation therefore allows for full integration of building information in BIM and, through that, higher specificity and certainty. Spaces, after all, are the main reason and purpose of buildings, and most aspects are judged by how well spaces accommodate user activities.
BIM symbols and things
BIM has many advantages but, in common with other symbolic representations, also several ambiguities. Arguably the most important of these concerns the correspondence between symbols and real-world things. Building representations in BIM are truly symbolic, comprising discrete symbols. Unfortunately, the structure of building elements often introduces fuzziness in the definition of these symbols, similarly to the one-to-many correspondence between graphemes and phonemes we have seen in alphabets. In general, there are two categories of ‘solids’ in buildings. The first is building elements that are adequately represented by discrete symbols: doors and windows, for example, are normally complete assemblies that are accommodated in a hole in a wall. Walls, on the other hand, are typical representatives of the second category: conceptual entities that are difficult to handle in three respects. Firstly, walls tend to consist of multiple layers of brickwork, insulation, plaster, paint and other materials. Some of these layers continue into other elements: the inner brick layer of an external wall may become the main layer of internal walls, forming a large, complex and continuous structure that is locally incorporated in various walls (Figure 2).
Secondly, BIM retains some of the geometric bias of earlier building representations, especially in the definition of elements like walls that have a fixed cross section but variable length or shape. When users have to enter the axis of a wall to describe this length or shape, they inevitably draw a geometric shape. BIM usually defines symbols on the basis of the most fundamental primitives in this shape. Even if one uses e.g. a rectangle to describe the axis, the result is four interconnected yet distinct walls, each corresponding to a side of the rectangle. Similarly, a wall with a complex shape, but conceptually and practically unmistakably a single structure, is analysed into several walls, each corresponding to a line segment of its shape (Figure 3).
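The decomposition described above can be made concrete with a minimal sketch (plain Python with a hypothetical `Wall` class; this is not any actual BIM API): although the user draws a single rectangular axis, the editor defines one wall symbol per line segment.

```python
from dataclasses import dataclass

@dataclass
class Wall:
    """Hypothetical wall symbol: an axis from a start to an end point."""
    start: tuple
    end: tuple

def walls_from_rectangle(x0, y0, x1, y1):
    """Decompose a rectangular axis into four wall symbols, one per
    side, as a BIM editor typically would."""
    corners = [(x0, y0), (x1, y0), (x1, y1), (x0, y1)]
    return [Wall(corners[i], corners[(i + 1) % 4]) for i in range(4)]

walls = walls_from_rectangle(0, 0, 10, 5)
print(len(walls))  # 4 distinct walls, although the user drew one rectangle
```

The same logic explains why a single wall with a complex axis is analysed into as many walls as its shape has segments.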
Thirdly, our own perception of elements like walls may get in the way. Standing on one side of a wall, we see only the portion of the wall that bounds the room we are in. Standing on the other side, we perceive not only a different face but possibly also a different part of the wall (Figure 4). As a result, when thinking from the perspective of either space, we refer to parts of the same entity as if they were different walls.
The inevitable conclusion is that some symbols in BIM may still require further processing when considered with respect to particular goals. One may have to analyse a symbol into parts that then have to be combined with parts of other symbols, e.g. for scheduling the construction of the brickwork in Figure 2. Other symbols have to be grouped together, like the internal wall in Figure 3. Such manipulations should not reduce the integrity of the symbols; it makes little sense to represent each layer of a wall separately. At the same time, one has to be both consistent and pragmatic in the geometric definition of building elements. In most cases, acceptance of the BIM preference for the simplest possible geometry is the least painful option: the vertical internal wall in Figure 4 should be represented as a single entity and not split into two parts in order to simplify the adjacency of walls to spaces. Looking at it in any way beyond the three perspectives indicated in the figure and the spaces that frame them, it cannot be anything else than a single building element.
Paradigmatic and syntagmatic dimensions in BIM
Even with the issues discussed above, the symbolic character of BIM has obvious advantages for the paradigmatic dimension: each symbol is explicit and integral in a building representation. The same holds for the syntagmatic dimension, in three different respects. The first concerns the practical side of developing a building representation in BIM: in common with most computerized programs, BIM editors can record the sequence of user actions and so make the history of a representation accessible and transparent. This allows users to undo actions and backtrack to earlier states.
More significantly, the sequence of user actions is often organized in prescriptive procedures because their order is not trivial. As we have seen, one has first to select the type of a new symbol in a model and then indicate the geometry of the instance. Such procedures ensure consistency in the symbols and their structure, as well as register many relations between symbols, for example the anchoring of a window or a wash basin to a hosting wall.
The third advantage of BIM for the syntagmatic dimension relates to 4D modelling: the addition of a time property to symbols, for example the moment the symbolized element should be constructed. This supports the scheduling of construction, demolition and other real-world activities from within the building representation, and reduces inconsistencies or other errors that emerge from poor communication between building representations and scheduling activities or software.
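The idea of 4D modelling can be sketched as follows (hypothetical `Element` symbols with an assumed `construction_day` property; not the data model of any particular BIM editor): a construction schedule is derived directly from the time properties of the symbols in the representation.

```python
from dataclasses import dataclass

@dataclass
class Element:
    """Hypothetical 4D symbol: a building element with a time property."""
    name: str
    construction_day: int

model = [
    Element("roof", 30),
    Element("foundation", 1),
    Element("wall A", 10),
]

# A construction schedule derived from within the building representation,
# rather than maintained separately in scheduling software:
schedule = sorted(model, key=lambda e: e.construction_day)
print([e.name for e in schedule])  # ['foundation', 'wall A', 'roof']
```

Because the schedule is computed from the model, it cannot drift out of sync with the represented elements, which is the consistency benefit the text describes.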
Abstraction and grouping in BIM
BIM symbols cover a wide range of abstraction levels, from generic symbols like “internal wall” without any further specifications to highly detailed symbols, representing e.g. a very specific wall type, including precise descriptions of materials from particular manufacturers. Usually a building representation in BIM starts with abstract symbols, which become progressively more specific. It is also possible to backtrack to a higher abstraction level rather than sidestep to a different type on the same level, e.g. when some conflict resolution leads to a dead end and one needs to reconsider their options. This typologic abstraction is one of the strong points of BIM but also something one has to treat with care because a model may contain symbols at various abstraction levels. Managing the connections between them, e.g. deciding on the interfacing between a highly specific window and an abstract wall, requires attention to detail. On the positive side, one can use such connections to guide decision making, e.g. restrict the choice of exact wall type to those that fit the expectations from the window.
Symbolic representations also have considerable capacities for bottom-up mnemonic abstraction on the basis of explicit relations between symbols, ranging from similarity (e.g. all vowels in a text) to proximity (all letters in a word). As is typical of digital symbolic representations, BIM allows for multiple groupings of symbols to produce mnemonic structures of all kinds, e.g. selecting all instances of the same door type in a design, identifying all spaces with a particular use on the second floor or determining which parts of a design belong to the north wing. For the latter, some additional input from the user may be required, such as drawing a shape that represents the outline of the north wing or labelling every symbol with an additional wing property. No user input is required for relations built into the behavioural constraints of a symbol, e.g. the hosting of openings in walls.
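The three groupings mentioned above can be illustrated with a small sketch (a hypothetical `Symbol` class with invented property names like `wing`; real BIM editors expose equivalent filters through their own interfaces and APIs):

```python
from dataclasses import dataclass

@dataclass
class Symbol:
    """Hypothetical BIM symbol with a few standard properties."""
    category: str
    type_name: str
    floor: int
    use: str = ""
    wing: str = ""  # user-defined property for wing membership

model = [
    Symbol("door", "D1", 1),
    Symbol("door", "D1", 2),
    Symbol("space", "S1", 2, use="office", wing="north"),
    Symbol("space", "S2", 2, use="meeting", wing="north"),
    Symbol("space", "S1", 1, use="office", wing="south"),
]

# Similarity grouping: all instances of door type D1
d1_doors = [s for s in model if s.category == "door" and s.type_name == "D1"]

# Property-based grouping: all office spaces on the second floor
offices_2 = [s for s in model if s.use == "office" and s.floor == 2]

# User-defined grouping: everything labelled as part of the north wing
north = [s for s in model if s.wing == "north"]

print(len(d1_doors), len(offices_2), len(north))  # 2 1 2
```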
Through the combination of standard symbol features (like their properties) and arbitrary, user-defined criteria (like the outline of a wing), one can process the representation at any relevant abstraction level and from multiple perspectives, always in direct reference to specific symbols. For example, it is possible to consider a specific beam in the context of its local function and connections to other elements but simultaneously with respect to the whole load-bearing structure of its floor and wing or of the whole building. Any decision taken locally, specifically for this beam, relates transparently to either the instance or the type and may therefore lead not only to changes in the particular beam but also to reconsideration of the beam types comprising the structure, e.g. a change of type for all similar beams. Conversely, any decision concerning the general type of the structure can be directly and automatically propagated to all of its members and their arrangement.
The automatic propagation of decisions relates to parametric modelling: the connection of symbol properties so that any modification to one symbol causes all others to adapt accordingly. In addition to what is built into the relations between types and instances or the behaviours like hosting, one can explicitly link instance properties, e.g. make several walls remain parallel to each other or perpendicular to another wall. One can also specify that the length of several walls is the same or a multiple of an explicitly defined parameter. Changing the value of the parameter leads to automatic modification of the length of all related walls. Parametric design holds significant promise. People have envisaged building representations in which it suffices to change a few values to produce a completely new design (or variation). However, establishing and maintaining the constraint propagation networks necessary for doing so in a reliable manner remains a major challenge. For the moment, parametric modelling is a clever way of grouping symbols with explicit reference to the relation underlying the grouping, e.g. parallelism of walls. Still, even in such simple cases, the effects of parametric relations in combination with built-in behaviours can lead to unpredictable and unwanted results.
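The length example in the paragraph above can be sketched in a few lines (a deliberately minimal constraint model; real parametric engines maintain far larger propagation networks): several wall lengths are defined as multiples of one shared parameter, so changing the parameter once modifies all related walls.

```python
class ParametricWalls:
    """Minimal parametric sketch: wall lengths tied to one shared parameter."""
    def __init__(self, unit):
        self.unit = unit
        # each wall's length is a fixed multiple of the shared parameter
        self.multipliers = {"wall A": 1, "wall B": 2, "wall C": 3}

    def lengths(self):
        return {name: m * self.unit for name, m in self.multipliers.items()}

model = ParametricWalls(unit=3.0)
print(model.lengths())            # {'wall A': 3.0, 'wall B': 6.0, 'wall C': 9.0}

model.unit = 4.0                  # change the parameter once...
print(model.lengths()["wall C"])  # ...and all related walls adapt: 12.0
```

Note that the grouping is explicit here: the relation underlying it (a shared length parameter) is visible in the code, which is exactly what the text means by grouping "with explicit reference to the relation underlying the grouping".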
In views which replicate conventional drawings, BIM software often also incorporates visual abstraction that mimics that of scales in analogue representations. By selecting e.g. “1:20” and “fine” one can make the visual display of a floor plan more detailed than with “1:200” and “coarse”. Such settings are useful only for visual inspections; they alter only the appearance of symbols, not their type or structure.
The LoD in BIM is also related to abstraction. LoD specifications attempt to standardize the specificity of information in a model or preferably in a symbol, as a model may contain elements at various LoD. Many LoD standards have been proposed but strict adherence to them is a throwback to analogue standards regarding drawing scale. Such adherence fails to appreciate that information in a model has a reason and a purpose: some people have taken decisions concerning some part or aspect of a design. The specificity of these decisions and of the resulting representations is not accidental or conventional. Rather, it reflects what is needed for that part or aspect at the particular stage of a project. The LoD of the model that accommodates this information can only be variable, as not all parts or aspects receive the same attention at the same stages.
Specificity should therefore be driven by the need for information rather than by convention. If information in a representation is at a higher specificity level, one should not discard it but simply abstract in a meaningful way by focusing on relevant properties, relations or symbols. A useful analogy is with how human vision works: in your peripheral vision, you perceive vague forms and movement, e.g. something approaching you rapidly. If you turn your eyes and pay attention to these forms, you can see their details and recognize e.g. a friend rushing to meet you. As soon as you turn to these forms, other parts of what you perceive become vague and schematic. In other words, the world is as detailed as it is; your visual system is what makes some of its parts more abstract or specific, depending on your needs. By the same token, the specificity of a building representation should be as high as the available information allows. Our need for information determines the abstraction level at which we consider the representation, as well as actions by which we can increase the specificity of some of its parts.
Implementation mechanisms in BIM
Despite its symbolic structure, BIM uses the same implementation mechanisms as CAD: the same geometric primitives that reproduce the graphic appearance of analogue representations. The key difference is that these primitives are just part of pictorial views, in which they express certain symbol properties. The type of a door, for example, is explicitly named, so that we do not have to infer its swing from the arc used to represent it in a floor plan; the width of a wall is a numerical property of its symbol, so that we do not have to measure the distance between the two lines indicating the outer faces of the wall. Instead, this distance is determined by the width property of the symbol.
As we have seen, however, implementation mechanisms still influence the structure of a building representation in other respects: a wall is still partly determined by drawing its axis and so by the geometric shape one draws. On the whole, therefore, one should consider BIM as largely immune to undue influences from implementation mechanisms, while remaining aware of persistent geometric biases both in building representations in BIM and in the mindset of BIM users.
Key Takeaways
• BIM is a truly symbolic building representation that employs discrete symbols to describe building elements and spaces. Symbols in BIM integrate all properties of the symbolized entities, which determine their pictorial appearance.
• This makes BIM symbols largely independent of graphic implementation mechanisms and immune to most geometric biases.
• The correspondence between BIM symbols and some building elements is problematic in certain respects due to the structure of these elements, persisting geometric biases and human perception and cognition.
• The symbolic structure of BIM representations has advantages for the paradigmatic dimension (it makes symbols explicit) and the syntagmatic dimension (through prescriptive procedures for user input, as well as parametric modelling).
• Abstraction in BIM is both typological (as symbols are at various abstraction levels) and mnemonic (based on similarity of properties and relations, like proximity and hosting, between symbols). Mnemonic abstraction amounts to grouping of symbols and relates to parametric modelling.
Exercises
1. In a BIM editor of your choice (e.g. Revit), make an inventory of all wall types (Families in Revit) in the supplied library. Classify these types in terms of abstraction, clearly specifying your criteria.
2. In a BIM editor of your choice, make a simple design of a space with four walls and two floors around it. Identify properties of the building elements and space symbols that connect them (e.g. dimensions) and overlapping properties (e.g. space properties that refer to finishings of the building elements). Make schedules that illustrate your findings.
3. Expand your design with another space and a door that connects them. Make a schedule that illustrates some relations between the spaces.
4. In the same design, describe step by step how a change in the size of one room is propagated to other symbols in the model.
1. A comprehensive general introduction to BIM, which may be necessary, depending on the reader’s experience with it, is: Eastman, C., Teicholz, P.M., Sacks, R., & Lee, G., 2018. BIM handbook (3rd ed.). Hoboken NJ: Wiley.
• 3.1: Data and information
Previous chapters have explained how information is organized in representations. A question that remains to be answered is what exactly constitutes information, i.e. what one should consider as information and data in these representations. This chapter introduces relevant theories and explains how they apply to building information and representations.
• 3.2: Information Management
This chapter introduces the general goals of IM and connects them to information sources on buildings in order to determine the fundamental principles of IM with BIM.
• 3.3: Process and Information
This chapter describes how process and information diagrams can describe tasks in a process and the information actions relating to these tasks in a comprehensive and coherent manner.
03: Information - Theory and Management
Previous chapters have explained how information is organized in representations. A question that remains to be answered is what exactly constitutes information, i.e. what one should consider as information and data in these representations. This chapter introduces relevant theories and explains how they apply to building information and representations.
Theories and definitions
There is nothing more practical than a good theory: it supplies the definitions people need to agree on what to do, how and why; it explains the world, providing new perspectives from which to see and understand it; it establishes targets for researchers keen to improve or refute the theory and so advance science and knowledge. In our case, there is a clear need for good, transparent and operational definitions. Terms like ‘information’ and ‘data’ are used too loosely, interchangeably and variably to remove ambiguities in information processing and management. Computerization adds to this vagueness, especially with subjects like buildings: as we have seen in previous chapters, there may be a big gap between the analogue representations still used in most AECO processes and the capacities of computers.
A theory that resolves these problems cannot draw from the AECO domains only. It needs a firm foundation in general theories of information, especially those that take the capacities and peculiarities of digital means and environments into account. Thankfully, there are enough candidates for this.
Syntactic, semantic and pragmatic theories
When one thinks of information theory in a computing context, Shannon’s MTC springs to mind.[1] The MTC is indeed foundational and preeminent among formal theories of information. It addresses what has been visualized as the innermost circle in information theory (Figure 1):[2] the syntactic core of information, dealing with the structure and basic, essential aspects of information, including matters of probability, transmission flows and capacities of communication facilities – the subjects of the technical side of information theory.
The outermost circle in the same visualization is occupied by pragmatics: real-life usage of meaningful information. Information management theories (which will be discussed in a later chapter) populate this circle, providing a general operational framework for supporting and controlling information quality and flow. To apply this framework, one requires pragmatic constraints and priorities from application areas: a notary and a facility manager have different interests with regard to the same building information.
Between the syntactic and the pragmatic lies the intermediate circle of semantics, which deals with how meaning is added to the syntactical components of information before they are utilized in real life. As syntactic approaches are of limited help with the content of information and its interpretation, establishing a basis for IM requires that we turn to semantic theories of information.
Arguably the most appealing of these is that of Luciano Floridi, who is credited with establishing the subject of philosophy of information. The theory's value goes beyond Floridi's standing as a modern authority on the subject. The central role of semantics in his work is an essential contribution to the development of much-needed theoretical principles in a world inundated with rapidly changing digital technologies. In our case, these principles promise a clear and coherent basis for understanding AECO information and establishing parsimonious structures that link different kinds of information and data. These structures simplify IM in a meaningful and relevant manner: they allow us to shift attention from how one should manage information (the technical and operational sides) to which information and why.
Before moving on to explaining this theory and applying it to building information, it should be noted that management, computing and related disciplines abound with rather too easy, relational definitions of data, information, knowledge, strategy etc., e.g. that data interpreted become information, information understood turns into knowledge and so forth. Such definitions tend to underestimate the complexity of various cognitive processes and are therefore not to be trusted. In this book, we focus on data, information and their relation. The rest concerns utilization of information and benefits that may be derived for individuals, enterprises, disciplines or societies – matters that require extensive analyses well beyond the scope of the present book. Information certainly contributes to achieving these benefits and in many cases it may even be a prerequisite but seldom suffices by itself. Rather than making unfounded claims about knowledge and performance, we focus on more modest goals concerning IM: understanding building information, its quality and flows, and organizing them in ways that may help AECO take informed decisions, in the hope that informed also means better.
A semantic theory for building information
Data and information instances
A fundamental definition in Floridi’s theory[3] concerns the relation between data and information: an instance of information consists of one or more data which are well-formed and meaningful. Data are defined as lacks of uniformity in what we perceive at a given moment or between different states of a percept or between two symbols in a percept. For example, if a coffee stain appears on a floor plan drawing on paper (Figure 3), this is certainly a lack of uniformity with the earlier, pristine state of the drawing but it is neither well-formed nor meaningful within the context of architectural representations. It tells us nothing about the representation or the represented design, only that someone has been rather careless with the drawing (the physical carrier of the representation).
On the other hand, if the lack of uniformity between the two states is a new straight line segment across a room in a floor plan (Figure 4), this is both well-formed (as a line in a line drawing) and meaningful (indicating a change in the design, possibly that the room has now a split-level floor).
Data and information types
The typology of data is a key component in Floridi’s approach. Data can be:
• Primary, like the name and birth date of a person in a database, or the light emitted by an indicator lamp to show that a radio receiver is on.
• Anti-data,[4] i.e. the absence of primary data, like the failure of an indicator lamp to emit light or silence after the radio has been turned on. Anti-data are informative: they tell us that e.g. the radio or the indicator lamp is defective.
• Derivative: data produced by other, typically primary data, which can therefore serve as indirect indications of the primary ones, such as a series of transactions with a particular credit card as an indication of the trail of its owner.
• Operational: data about the operations of the whole system, like a lamp that indicates whether other indicator lamps are malfunctioning.
• Metadata: indications about the nature of the information system, like the geographic coordinates that tell where a digital picture has been taken.
These types also apply to information instances, depending on the type of data they contain: an information instance containing metadata is meta-information.
In the context of analogue building representations like floor plans (Figure 5), lines denoting building elements are primary data. They describe the shape of these elements, their position and the materials they comprise.
In addition to such geometric primary data, an analogue floor plan may contain alphanumeric primary data, such as labels indicating the function of a room or dimension lines (Figure 6). A basic principle in hand drawing is that such explicitly specified dimensions take precedence over measurements in the drawing because amending these dimensions is easier than having to redraw the building elements.
Anti-data are rather tricky to identify in building representations because of the abstraction and ellipsis that characterize them. Quite often it is hard to know if something is missing in a representation. One should therefore consider absence as anti-data chiefly when absence runs contrary to expectation and is therefore directly informative: a door missing from the perimeter of a room indicates either a design mistake or that the room is inaccessible (e.g. a shaft). Similarly, a missing room label indicates either that the room has no specific function or that the drafter has forgotten to include it in the floor plan (Figure 7).
Derivative data in building representations generally refer to the abundance of measurements, tables and other data produced from primary data in the representation, such as floor area labels in a floor plan (Figure 8). One can recognize derivative data from the fact that they can be omitted from the representation without reducing its completeness or specificity: derivative data like the area of a room can be easily reproduced when necessary from primary data (the room dimensions). An important point is that one should always keep in mind the conventions of analogue representations, like the precedence of dimension lines over measurement in the drawing, which turns the former into primary data.
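The defining property of derivative data, that they can be omitted and reproduced at will from primary data, can be shown with a trivial sketch (hypothetical room record; the property names are invented for illustration):

```python
# Primary data: the room's dimensions, as entered in the representation
room = {"name": "office 2.01", "width": 4.0, "depth": 5.5}

def area(room):
    """Derivative data: reproducible from primary data whenever needed."""
    return room["width"] * room["depth"]

# An area label can be omitted from the representation without loss of
# completeness or specificity, because it can always be recomputed:
print(area(room))  # 22.0
```

The exception noted above still applies: if a convention gives explicit dimension lines precedence over measurements in the drawing, those dimensions are primary data and cannot be treated as reproducible.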
Operational data reveal the structure of the building representation and explain how data should be interpreted. Examples include graphic scale bars and north arrows, which indicate respectively the true size of units measured in the representation and the true orientation of shapes in the design (Figure 9).
Finally, metadata describe the nature of the representation, such as the projection type and the design project or building, e.g. labels like ‘floor plan’ (Figure 10).
BIM, information and data
Data types in BIM
As we have seen in previous chapters, computerization does not just reproduce analogue building representations. Digital representations may mimic their analogue counterparts in appearance but can be quite different in structure, something that becomes apparent when we examine the data types they contain. Looking at a BIM editor on a computer screen, one cannot help observing a striking shift in primary and derivative data (Figure 11 & Figure 12): most graphic elements in views like floor plans are derived from properties of symbols. In contrast to analogue drawings, in BIM, dimension lines and values are derivative: pure annotations, like the floor area calculations in a space. This may be understandable given the ease with which one can modify a digital representation but even the lines denoting the various materials of a building element are derivative, determined by the type of the symbol: if the type of a wall changes, then all these graphic elements change accordingly. In analogue representations the opposite applies: we infer the wall type from the graphic elements that describe it in terms of layers of materials and other components.
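The type-driven derivation of graphics can be sketched as follows (a hypothetical wall-type catalogue and `WallInstance` class, not the internals of any actual BIM editor): the lines drawn in a plan view are computed from the layers defined in the wall type, so changing the type changes the graphics.

```python
from dataclasses import dataclass

# Hypothetical wall-type catalogue: the type determines the layers,
# and hence the lines drawn in a floor plan view.
WALL_TYPES = {
    "generic":  ["brick"],
    "external": ["brick", "insulation", "brick"],
}

@dataclass
class WallInstance:
    type_name: str
    length: float  # primary, instance-level geometry

    def plan_graphics(self):
        """Derivative data: graphic elements produced per material layer."""
        return [f"lines for {layer}" for layer in WALL_TYPES[self.type_name]]

w = WallInstance("generic", 6.0)
print(len(w.plan_graphics()))   # 1 layer drawn

w.type_name = "external"        # change the type of the symbol...
print(len(w.plan_graphics()))   # ...and the graphics follow: 3 layers
```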
The main exception is the geometry of symbols. As described in the previous chapter, when one enters e.g. a wall in BIM, the usual procedure is to first choose the type of the wall and then draw its axis in a geometric view like a floor plan. Similarly, modifications to the location or shape of the wall are made by changing the same axis, while other properties, like layer composition and material properties of each layer, can only be changed in the definition of the wall type. One can also change the axis by typing new coordinates in some window but in most BIM editors the usual procedure is interactive modification of the drawn axis with a pointer device like a mouse. Consequently, primary data may appear dispersed over a number of views and windows, including ones that chiefly contain derivative data.
One should not be confused by the possibilities offered by computer programs, especially for the modification of entities in a model. The interfaces of these programs are rich with facilities to change shapes and values. It seems as if programmers have taken the trouble to allow users to utilize practically everything for this purpose. For example, one may be able to change the length of a wall by typing a new value for its dimension line, i.e. via derivative data. Such redundancy of entry points is highly prized for user interaction but may be confusing in terms of IM, as it tends to obscure the type of data and the location where each type can be found. To reduce confusion and hence the risk of mistakes and misunderstandings, one should consider the character of each view or window and how necessary it is for defining an entity in a model. A schedule, for example, is chiefly meant for displaying derivative data, such as area or volume calculations, but may also contain primary data for reasons of overview, transparency or legibility. Most schedules are not necessary for entering entities in a model, in contrast to a window containing the properties of a symbol, from where one chooses the type of the entity to be entered. In managing the primary data of a symbol one should therefore focus on the property window and its contents.
Computer interfaces also introduce more operational data, through which users can interact with the software. Part of this interaction concerns how other data are processed, including in terms of appearance, as with the scale and resolution settings in drawing views mentioned in the previous chapter (Figure 13).
The presence of multiple windows on the screen also increases the number of visible metadata, such as window headers that describe the view in each window (Figure 14).
Anti-data remain difficult to distinguish from data missing due to abstraction or deferment. The lack of values for e.g. cost or fire rating for some building elements may merely indicate that their calculation has yet to take place, despite the availability of the necessary primary data. After all, both are calculated on the basis of materials present in the elements: if these materials are known, cost and fire ratings are easy to derive. One should remember the inherent duality of anti-data: they do not only indicate missing primary data; their presence is significant and meaningful by itself. For example, not knowing the materials and finishes of a window frame, although the window symbol is quite detailed, signifies that the interfacing of the window to a wall is a non-trivial problem that remains to be solved. Interfacing typically produces anti-data, especially when sub-models meet in BIM, e.g. when the MEP and architectural sub-models are integrated, and the fastenings of pipes and cables to walls are present in neither. Anti-data generally necessitate action: no value (or “none”) for the demolition phase of an entity suggests that the entity has to be preserved during all demolition phases: not ignored but actively preserved with purposeful measures, which should be made explicit (Figure 15).
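Because anti-data necessitate action, it is useful to surface them explicitly rather than let them hide among symbol properties. A minimal sketch (hypothetical element records, with `None` marking an absent value; property names invented for illustration):

```python
# Hypothetical symbols: None marks an absent (anti-data) value.
elements = [
    {"id": "wall-01",   "material": "brick", "fire_rating": 60},
    {"id": "window-07", "material": None,    "fire_rating": None},
]

# Each absence is flagged as an open issue that necessitates action,
# rather than being silently ignored:
open_issues = [
    (e["id"], prop)
    for e in elements
    for prop in ("material", "fire_rating")
    if e[prop] is None
]
print(open_issues)  # [('window-07', 'material'), ('window-07', 'fire_rating')]
```

Whether each flagged absence is a true anti-datum or merely a deferred decision still requires human judgement, as the paragraph above explains.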
Information instances in BIM
Identifying information instances in BIM starts with recognizing the data. As described in the previous section, data are to be found in the symbols: their properties and relations. In the various views and windows of BIM software, one can easily find the properties of each symbol, either of the instance (Figure 16, Figure 18) or of the type (Figure 17).
What one sees in such a view or window is a mix of different data types, with derivative data like a volume calculation or thermal resistance next to primary data, such as the length and thickness of a wall. Moreover, no view or window contains a comprehensive collection of properties. As a result, when a property changes in one view, the change is reflected in several other parts of the interface that accommodate the same property or data derived from it.
Any lack of uniformity in these properties, including the addition of new symbols and their properties to a model, qualifies as data. One can restrict the identification of data to each view separately but it makes more sense for IM to include all clones of the same property, in any view. On the other hand, any derivative data that are automatically produced or modified as a result of the primary data count as different data instances. So, any change in the shape of a space counts as a single data instance, regardless of the view in which the user applies the change or of the number of views in which the change appears. The ensuing change in the space area value counts as a second instance of data; the change in the space volume as a third.
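The counting rule in the paragraph above can be made explicit with a sketch (a hypothetical `Space` symbol; the bookkeeping of data instances is simplified for illustration): one primary change to the shape produces two further, distinct derivative data instances.

```python
class Space:
    """Hypothetical space symbol: one primary change yields separate
    derivative data instances (area, volume)."""
    def __init__(self, width, depth, height):
        self.width, self.depth, self.height = width, depth, height

    def change_width(self, new_width):
        self.width = new_width            # data instance 1: primary (shape)
        area = self.width * self.depth    # data instance 2: derivative
        volume = area * self.height       # data instance 3: derivative
        return [("shape", new_width), ("area", area), ("volume", volume)]

s = Space(4.0, 5.0, 3.0)
instances = s.change_width(6.0)
print(len(instances))  # 3: one primary and two derivative data instances
```

However many views display the new width, area or volume, the count stays at three: clones of the same property across views are one and the same data instance.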
Relations between symbols are even more dispersed and often tacit. They can be found hidden in symbol behaviours (e.g. in that windows, doors or wash basins tend to stick to walls or in that walls tend to retain their co-termination), in explicit parametric rules and constraints, as well as in properties (e.g. construction time labels) that determine incidental grouping. Discerning lacks of uniformity in relations is therefore often hard, especially since most derive variably from changes in the symbols. For example, modifying the length of a wall may inadvertently cause its co-termination with another wall to be removed or, if the co-termination is retained, to change the angle between the walls.
Many relations can be made explicit and controllable through appropriate views like schedules. As we have seen, window and door schedules make explicit relations between openings and spaces. This extends to relations between properties of windows or doors and of the adjacent spaces, e.g. connecting the fire rating of a door to whether a space on either side is part of a main fire egress route, or the acoustic isolation offered by the door to the noise or privacy level of activities accommodated in either adjacent space.
Information instances can be categorized by the type of their data: primary, derivative, operational etc. This is important for IM, as it allows one, firstly, to prioritize in terms of significance and, secondly, to link information to actors and stakeholders. Primary information obviously carries a higher priority than derivative. Moreover, primary information (e.g. the shape of spaces) is produced or maintained by specific actors (e.g. designers), preferably with no interference by others who work with derivative information (e.g. fire engineers). So, information instances concerning space shape are fed forward from the designers to the fire engineers, whose observations or recommendations are fed back to the designers, who then initiate possible further actions and produce new data. Understanding these flows and the information types they convey, and transparently linking instances to each other and to actors or stakeholders, is essential for IM.
Another categorization of information instances concerns scope. This leads to two fundamental categories:
1. Instances comprising one or more properties or relations of a single symbol: the data are produced when one enters the symbol in the representation or when the symbol is modified, either interactively by a user or automatically, e.g. on the basis of a built-in behaviour, parametrization etc. Instances of this category are basic and homogeneous: they refer to a single entity of a particular kind, e.g. a door. The entity can be:
1. Generic in type, like an abstract internal door
2. Contextually specific, such as a door for a particular wall in the design, i.e. partially defined by relations in the representation
3. Specific in type, e.g. a specific model of a particular manufacturer, fixed in all its properties
2. Instances comprising one or more properties or relations of multiple symbols, added or modified together, e.g. following a change of type for a number of internal walls, or a resizing or repositioning of the building elements bounding a particular space. Consequently, instances of this category can be:
1. Homogeneous, comprising symbols of the same type, e.g. all office spaces in a building
2. Heterogeneous, comprising symbols of various types, usually related to each other in direct, contextual ways, e.g. the spaces and doors of a particular wing that make up a fire egress route
These two categories can account for all data and abstraction levels in a representation, from sub-symbols (like the modification of the geometry of a door handle in the definition of a door type) to changes in the height of a floor level that affects the location of all building elements and spaces on that floor, the size and composition of some (e.g. stairs) and potentially also relations to entities on adjacent floors.
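As a minimal sketch, the scope of an instance can be derived mechanically from the symbols it touches. The `(id, type)` tuples below are hypothetical stand-ins for model symbols, and the category names are taken from the two categories above.

```python
def classify_scope(symbols):
    """Classify an information instance by scope: a single symbol, or
    multiple symbols that are homogeneous (same type) or heterogeneous.
    Each symbol is a hypothetical (id, type) tuple."""
    if len(symbols) == 1:
        return "single-symbol"
    types = {sym_type for _, sym_type in symbols}
    return "multi-homogeneous" if len(types) == 1 else "multi-heterogeneous"

print(classify_scope([("d01", "door")]))                     # single-symbol
print(classify_scope([("s01", "space"), ("s02", "space")]))  # multi-homogeneous
print(classify_scope([("s01", "space"), ("d01", "door")]))   # multi-heterogeneous
```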
Key Takeaways
• An instance of information consists of one or more data which are well-formed and meaningful.
• Data are lacks of uniformity in what we perceive at a given moment or between different states of a percept or between two symbols in a percept.
• Data can be primary, anti-data, derivative, operational or metadata.
• There are significant differences between analogue and digital building representations concerning data types, with symbols like dimension lines being primary in the one and derivative in the other.
• In BIM lacks of uniformity can be identified in the properties and relations of symbols.
• Information instances can be categorized by the semantic type of their data and by their scope in the representation.
Exercises
1. Identify the semantic data types in the infobox of a Wikipedia biographic lemma (the summary panel on the top right), e.g. https://en.Wikipedia.org/wiki/Aldo_van_Eyck (Figure 19),[5] and in the basic page information of the same lemma (e.g. https://en.Wikipedia.org/w/index.php...ck&action=info)
This chapter introduces the general goals of IM and connects them to information sources on buildings in order to determine the fundamental principles of IM with BIM.
The need for information management
With the information explosion we have been experiencing, it is hardly surprising that IM seems to have become a self-evident technical necessity. Handling the astounding amounts of information produced and disseminated every day requires more robust and efficient approaches than ever. Nevertheless, IM is considered mostly as a means to an end, usually performance in a project or enterprise: with effective IM, one can improve the chances of higher performance. Consequently, IM usually forms a key component of overall management.
This is widely acknowledged in building design management. Even before the digital era, the evident dependence of AECO on information coming from various sources and regarding various but interconnected aspects of a building had led to agreement that information and the way it is handled can be critical for communication and decision making. DM often focuses on information completeness, relevance, clarity, accuracy, quality, value, timeliness etc., so as to enable greater productivity, improve risk management, reduce errors and generally raise efficiency and reliability. The dependence on information is such that some even go so far as to suggest that DM is really fundamentally about IM: managing information flows so that stakeholders receive the right information at the right time.[1]
In practical terms, however, there was little clarity concerning what should be managed and how. DM sources often simply affirm that information is important and should be treated with care. What makes information usable, valuable, relevant etc. is assumed to be known tacitly. Fundamentally, information is correctly defined as data in usable form. Predictably, however, it is also equated to the thousands of drawings and other documents produced during the lifecycle of a building. If the right document is present, then it is assumed that stakeholders also possess the right information and are directly capable of judging the veracity, completeness, coherence etc. of the information they receive or need. However, equating information with documents not only places a heavy burden on users, it also prolongs attachment to analogue practices in the digital era.
It is arguably typical of AECO and DM that, in the face of operational and especially technical complexity, they invest heavily in human resources. This goes beyond the interpretation of documents in order to extract information; it also includes the invention of new roles that assume a mix of new and old tasks and responsibilities. So, in addition to project and process managers, one encounters not only information managers but also BIM managers, CAD managers, BIM coordinators and CAD coordinators, working together in complex, overlapping hierarchies. These new roles are usually justified by the need for support concerning new technologies, which may still be unfamiliar to the usual participants in an AECO project. At the same time, however, they increase complexity and reduce transparency by adding more intermediaries in the already multi-layered structure of AECO. They moreover increase the distance between AECO stakeholders and new technologies, frequently limiting learning opportunities for the stakeholders.
New roles, either temporary or permanent, may be inevitable with technological innovation. In the early days of motorcars, for example, chauffeurs were more widely employed to drive them than today, while webmasters were made necessary by the invention and popularity of the World Wide Web and will remain so for the foreseeable future, despite growing web literacy among general users. However, such new roles should be part of a sound and thorough plan of approach rather than an easy alternative to a good approach. The plan should determine what is needed and why, taking into account the increasing familiarity and even proficiency of many users with various technologies, to a degree that they require little day-to-day support. In our case, one may expect that AECO professionals will eventually become quite capable not only of using BIM directly but also of coordinating their BIM activities, with little need for technical intermediaries. After all, that was the case with analogue drawings in the past. To achieve this, AECO needs practical familiarization with the new technologies but above all clear comprehension of what these technologies do with information. Based on that, one can develop a sound IM approach that takes into account both domain needs and the capacities of digital technologies, determine changes in the tasks, responsibilities and procedures of existing AECO roles, and develop profiles for any additional roles.
Information sources
Inclusiveness
IM[2] has a broad scope and, as a result, is quite inclusive. It pays no attention to issues of representation and accepts as information sources all kinds of documents, applications, services and schemes. This is due to three reasons. Firstly, IM covers many application areas and must therefore be adaptable to practices encountered in any of them. Secondly, in many areas there is a mix of analogue and digital information, as well as various channels, for example financial client transactions with a shop using cash and debit or credit cards, either physically or via a web shop. IM provides means for bringing such disparate material together into more coherent forms, ensuring that no outdated or inappropriate information is used and preventing information from going missing, becoming inaccessible or being deleted in error. These means include correlation with context (e.g. time series displays relative to other data), classification and condensation (aggregation, totalling, filtering and summarization). Thirdly, IM has a tenuous relation to computerization, often relying on it but also appearing wary of putting too much emphasis on technology to the detriment of information and organization.
The inclusiveness of IM with respect to information sources means that it may end up not only tolerating the redundancy of analogue and digital versions of the same information but also supporting outdated practices and conventions, even prolonging their life through superficial digitization. It may also reduce IM to mere document management, i.e. making sure that the necessary documents are retained and kept available. This seems like an easy way out of most domain problems. As the content and the expanse of the Internet suggest, there may be enough computer power and capacity to store and retrieve any document produced in a project or enterprise – in our case, throughout the whole lifecycle of a building (although one should question whether this also applies to all buildings in the world). On the other hand, however, both the information explosion in the digital era and big data approaches suggest the opposite: we already need more intelligent solutions than brute force. At this moment, we may think we still have control over the huge amounts of information in production and circulation but the IoT could change that soon, as smart things start communicating with each other with great intensity. For AECO this can be quite critical, since buildings are among the prime candidates for accommodating a wide range of sensors and actuators, e.g. for controlling energy consumption.
Structured, semi-structured and unstructured information
It is important for IM that BIM marks a transition not only to symbolic representation but also to holistic, structured information solutions for AECO. With regard to structure, there are three main categories:
• Unstructured data are the subject of big data approaches: sensor measurements, social media messages and other data without a uniform, standardized format. Finding relevant information in unstructured data is quite demanding because queries have to take into account a wide range of locations where meaningful data may reside and a wide variety of storage forms (including natural language and images).
• Semi-structured data are a favourite of IM: information sources with a loosely defined structure and flexible use. Analogue drawings are a typical example: one knows what is expected in e.g. a section but there are several alternative notations and few if any prohibitions concerning what may be depicted and how. IM thrives on semi-structured sources, adding metadata, extracting and condensing, so as to summarize relevant information into a structured overview.
• Structured data are found in sources where one knows precisely what is expected and where. Databases are prime examples of structured information sources. In a relational database, one knows that each table describes a particular class of entities, that each record in a table describes a single entity and that each field describes a particular property of these entities in the same, predefined way. Finding the right data in a structured source is therefore straightforward and less challenging for IM.
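The point about structured sources can be illustrated with a minimal relational example using Python's built-in `sqlite3`. The `doors` table and its columns are invented for illustration; what matters is that, because the structure is known in advance, the query is trivial to formulate.

```python
import sqlite3

# A minimal structured source: each table describes a class of entities,
# each row a single entity, each column a property in a predefined way.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE doors (id TEXT PRIMARY KEY, width_mm INTEGER, fire_rating TEXT)"
)
con.executemany(
    "INSERT INTO doors VALUES (?, ?, ?)",
    [("d01", 930, "EI30"), ("d02", 1030, "EI60")],
)

# Finding the right data is straightforward: we know exactly where to look.
rows = con.execute("SELECT id FROM doors WHERE fire_rating = 'EI60'").fetchall()
print(rows)  # [('d02',)]
```

Contrast this with an unstructured source, where the same fact might be buried in a scanned specification sheet or a free-text note.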
In contrast to analogue drawings, BIM is clearly structured, along the lines of a database. Each symbol belongs to a particular type and has specific properties. This structure is one of the driving forces behind BIM, in particular with respect to its capacity to integrate and process building information coherently. Given the effort put into developing structured models in BIM, it makes little sense to abandon the advantages they promise. More specifically, a parsimonious approach to IM with BIM should:
• Avoid having other primary information sources next to BIM: any building information should be integrated in BIM and all related data linked to it. Currently, there is general agreement that the price of a component, e.g. a washbasin, should be a property of the corresponding symbol. However, the same should apply to packaging information for this component, including the dimensions of the box in which the washbasin is brought to the building site, as this is useful for logistic purposes. Trying to retrieve this information from the manufacturer’s catalogue is significantly less efficient than integrating the relevant data among the symbol properties. The same applies to a photograph of some part of the building during construction or use: this too should be connected to BIM as a link between the digital file of the photograph and relevant symbols in the model (Figure 1) or even mapped as a decal on the symbols (Figure 2).
• Desist from promoting BIM output to the level of a primary source: any view of a model, from a floor plan to a cost calculation, can be exported as a separate document (PDF, spreadsheet etc.). This may have its uses but one should not treat such exports as sources separate from the model. Any query about the building, including the history of such output, should start from the model. Using IM to ensure consistency between exports and the model is meaningless. This applies even to legally significant documents like contracts because these too can be expressed as views of the model (i.e. textual frames around data exported from the model).
From the above, a wider information environment emerges around the model, populated largely by files linked to the model, preferably to specific symbols. IM can assist with the organization of this environment, even allowing queries to be answered on the basis of such satellite documents, but the successful deployment of BIM depends on transparent links between these queries and documents and the model itself: any query should ultimately lead to primary data and their history in the model.
It is perhaps ironic that while the world is focusing on big, unstructured data, AECO should insist on structured data. One explanation is latency: AECO has been late with the development of structured information solutions because it continued to use analogue, semi-structured practices in digital facsimiles. As a consequence, AECO has yet to find the limits of structured data, although this may happen soon, when the IoT becomes better integrated in building design and management.
The emphasis on the structured nature of BIM also flies in the face of IM and its inclusiveness. In this respect, one should keep in mind that IM is a means, not an end, and that its adaptability has historical causes. It is not compulsory to retain redundant information sources next to BIM, simply because IM can handle redundancy and complexity. If the structured content of BIM suffices, then IM for AECO simply becomes easier and parsimonious.
Information management goals
Information flow
The first of the two main goals of IM is to regulate information flows. This is usually achieved by specifying precise processing steps and stages, which ensure that information is produced and disseminated on time and to the right people, until it is finally archived (or disposed of). In terms of the semantic information theory proposed in this book, this involves identifying and tracking information instances throughout a process, covering both the production and modification of data. In IM there is an emphasis on the sources and stores of information: the containers and carriers from where information is drawn, rests or is archived. BIM combines all these into a single information environment, shifting attention to the symbols, their properties and relations, where the data of information instances are found.
Managing information flow involves:
• What: the information required for or returned by each specific task in a process
• Who: the actors or stakeholders who produce or receive the information in a task
• How: the processing of information instances
• When: the timing of information instances
What is about information instances and symbols, as discussed in the previous section. However, despite the integration potential of BIM, which makes most information internal, some data may reside outside of models, e.g. weather data required for a thermal simulation. Connectivity to external sources is also part of IM.
For both internal and external information, it is critical to distinguish between authorship and custodianship: the actors who produce some information are not necessarily the same stakeholders who safeguard this information in a project, let alone during the lifecycle of a building. A typical example is briefing information: this is usually compiled in the initiative stage by a specialist on the basis of client and user input, as well as professional knowledge. In the development stage, custodianship may pass on to a project manager who utilizes it to evaluate the design, possibly adapting the brief on the basis of insights from the design. Then in the use stage, it becomes a background to facility and property management, before it develops into a direct or indirect source for a new brief, e.g. for the refurbishment of the building. Making clear in all stages who is the custodian of this information is of paramount importance in an integrated environment like BIM, where overlaps and grey areas are easy to develop.
How information flows are regulated relates to the syntagmatic dimension of a model: the sequence of actions through which symbols, their properties and relations are processed. The information instances produced by these actions generally correspond to the sequence of tasks in the process but are also subject to extrinsic constraints, often from the software: the presence of bounding walls is necessary for defining a space in most BIM editors, although in many design processes one starts with the spatial design rather than with construction. IM needs to take such conflicts into account and differentiate between the two sequences.
A useful device for translating tasks into information actions is the tripartite scheme Input-Processing-Output (IPO), which underlies any form of information processing: for any task, some actors deliver information as input; this input is then processed by some other (or even the same) actors. These return as output some other information, which usually becomes input for the next task. IM has to ensure that the right input is delivered to the right actors and that the right output is collected. By considering each decision task with respect to IPO, one can identify missing information in the input and arrange for its delivery.
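A minimal sketch of the IPO scheme might look as follows. The task names and input keys are hypothetical; the point is that missing input is detected before processing, and that the output of one task can become the input of the next.

```python
def run_task(name, inputs, process):
    """Run one IPO step: check the input, process it, return the output.
    `inputs` maps information names to values; None marks missing input."""
    missing = [key for key, value in inputs.items() if value is None]
    if missing:
        # Missing input is identified before processing, so its delivery
        # can be arranged rather than discovered downstream.
        return {"task": name, "status": "blocked", "missing": missing}
    return {"task": name, "status": "done", "output": process(inputs)}

# Output of one task (space dimensions) becomes input for an area schedule:
result = run_task(
    "area schedule",
    {"space dimensions": [(5.0, 4.0), (3.0, 6.0)]},
    lambda inp: [l * w for l, w in inp["space dimensions"]],
)
print(result["output"])    # [20.0, 18.0]

# A task with missing input is blocked, and IM knows exactly what to deliver:
blocked = run_task("fire check", {"space shapes": None}, lambda inp: None)
print(blocked["missing"])  # ['space shapes']
```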
The syntagmatic dimension obviously also relates to when: the moments when information instances become available. These moments usually form a coherent time schedule. The time schedule captures the process of actions and transactions, linking each to specific information instances. Here again one should differentiate between the sequence of tasks, which tends to be adequately covered by a project schedule, and the sequence of information actions, which may require additional refinement.
Information flow in BIM
We are used to viewing the early part of a design process as something almost magical: someone somehow puts a few lines on a scrap of paper and suddenly we have a basis for imagining what the building will look like. The same applies to BIM: one starts entering symbols in a model and the design is there for all to see and process. Building information flows seem to emerge out of nothing but this is far from true. The designers who make the first sketches or decide on the first elements in a model operate on the basis of general knowledge of their disciplines, specific knowledge of the kind of building they are designing and specific project information, including location characteristics and briefs. In other words, building representations are the product of cognitive processes that combine both tacit and overt information.
It is also widely assumed that the amount of information in a design process grows from very little in early design to substantial amounts by the end, when a building is fully specified. This actually refers to the specificity of conventional building representations, e.g. the drawing scales used in different design stages. In fact, even before the first sketch is made, there usually is considerable information available on the building. Some of it predates the project, e.g. planning regulations and building codes that determine much of the form of a building and key features of its elements, such as the pitch of the roof and the dimensions of stairs. Other information belongs to the project, e.g. the brief that states accommodation requirements for the activities to be housed in the building, the budget that constrains cost or site-related principles like the continuation of vistas or circulation networks in the neighbourhood through the building site. Early building representations may conform to such specifications but most information remains in other documents or in the mind of the designers. For example, in many cases, one starts drawing or modelling a design with a site plan onto which building elements and spaces are placed but the site plan rarely includes planning regulations.
In managing building information, one should ensure that this information becomes explicit and is connected to subsequent tasks. In BIM, this amounts to augmenting the basic model setup (site plan, floor height and grids) with constraints from planning regulations (e.g. in the form of the permissible building envelope), use information from the brief and constraints on the kind of building elements that are admissible in the model (e.g. with respect to the fire rating of the building). Integration of such information amounts to feedforward: measurement and control of the information system before disturbances occur. Feedforward is generally more efficient and effective than feedback, e.g. checking if all building elements meet the fire safety requirements after they have been entered in the model.
It has also been suggested that early design decisions have a bigger impact on the outcome of a design process than later decisions. Having to decide on the basis of little overt information makes such decisions difficult and precarious. This conventional wisdom concerning early decisions may be misleading. Admittedly, early design decisions tend to concern basic features and aspects, from overall form to load-bearing structure, which determine much of the building and so have a disproportionate influence on cost and performance. However, such decisions are not exclusive to early design: the type of load-bearing structure can change late in the process, e.g. in relation to cost considerations or the need for larger spans. Such a late change can be more expensive because it also necessitates careful control of all interfacing between load-bearing and other elements in the design. From an IM perspective, what matters is to make all relevant information explicit in BIM, so as to know which data serve as input for a task (processing) and register the output of the task. Explicitness of information allows one to map decision making in a process and to understand the significance of any decision, regardless of process stage.
Information quality
The second main goal of IM is to safeguard or improve information quality.[3] Quality matters to IM in two respects. Firstly, concerning information utility: knowing that the information produced and disseminated in a process meets the requirements of its users. Secondly, concerning information value: information with a higher quality needs to be preserved and propagated with higher priority. IM measures quality pragmatically, in terms of relevance, i.e. fitness for purpose: how well the information supports the tasks of its users. In addition to pragmatic information quality, IM is also keen on inherent information quality: how well the information reflects the real-world entities it represents.
In both senses, information quality is determined within each application environment. IM offers a tactical, operational and technical framework but does not provide answers to domain questions. These answers have to be supplied by the application environment in order for IM to know which information to preserve, disseminate or prioritize. It should be noted that IM is not passive with regard to information quality. It can also improve it both at meta-levels (e.g. by systematically applying tags) and with respect to content (e.g. through condensation).
Information quality concerns the paradigmatic dimension: the symbols of a representation and their relations. As this dimension tends to be quite structured in symbolic representations, one can go beyond the pragmatic level of IM and utilize the underlying semantic level to understand better how information quality is determined.
The first advantage of utilizing the semantic level lies in the definition of acceptable data as being well-formed and meaningful. This determines the fundamental quality of data: their acceptability within each representation. A coffee stain cannot be part of a building representation but neither can a line segment be part of a model in BIM: it has to be a symbol that has the appearance of a line segment (i.e. uses the line segment as implementation mechanism), e.g. a room separation line in Revit, the most abstract of bounding elements. By the same token, a colour is not acceptable as a description of the material of a wall and a floor cannot be host to a door (except for a trapdoor). In conclusion, any data that do not fit the specifications of a symbol, a property or a relation cannot be well-formed or meaningful in BIM. Therefore, they have low quality, which requires attention. If quality cannot be improved, these data should be ignored as noise.
Data that pass the fundamental semantic test must then be evaluated concerning relevance for the particular building or project and its tasks. To judge relevance, one needs additional criteria, e.g. concerning specificity: it is unlikely that a model comprising generic building elements is satisfactory for a task like the acoustic analysis of a classroom because the property values of generic elements tend to be too vague regarding factors that influence acoustic performance.
The semantic level also helps to determine information value beyond utility: prioritizing which information should be preserved and propagated relates to semantic type. As derivative data can be produced from primary data when needed, they do not have to be prioritized – in many cases, they do not have to be preserved at all. Operational data and metadata tend to change little and infrequently in BIM, so these too have a lower priority relative to primary data. Finally, anti-data have a high priority, both because they necessitate interpretation and action, and because such action often aims at producing missing primary data.
Parsimonious IM concerning information value in a symbolic representation like BIM can be summarized as follows:
• Preservation and completion of primary data
• Establishing transparent and efficient procedures for producing derivative data when needed
• Identification and interpretation of anti-data, including specification of relevant actions
• Preservation of stable operational and metadata
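These four points imply a priority ordering over semantic types that can be sketched directly. The numeric ranks below are an assumption for illustration, not a standard: anti-data first (they demand interpretation and action), then primary data, then stable operational data and metadata, with derivative data last because they can be reproduced on demand.

```python
# Hypothetical priority ranks following the four points above (lower = higher priority).
PRIORITY = {
    "anti-data": 0,    # require interpretation and action
    "primary": 1,      # must be preserved and completed
    "operational": 2,  # stable, change infrequently
    "metadata": 2,     # stable, change infrequently
    "derivative": 3,   # reproducible on demand, need not be preserved
}

def triage(instances):
    """Sort (name, semantic_type) pairs so the most significant come first."""
    return sorted(instances, key=lambda inst: PRIORITY[inst[1]])

items = [
    ("space area", "derivative"),
    ("space shape", "primary"),
    ("missing fire rating", "anti-data"),
    ("author tag", "metadata"),
]
print([name for name, _ in triage(items)])
# ['missing fire rating', 'space shape', 'author tag', 'space area']
```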
The priority of primary data apparently conflicts with IM improvement of information quality through condensation, i.e. operations that return pragmatically superior derivative data and metadata. Such operations belong to the second point above: if the primary data serve as input for certain procedures, then these procedures have to be established as a dynamic view or similar output in BIM. If users need to know the floor areas of spaces, one should not just give them the space dimensions and let them work out the calculations but supply instead transparent calculations, ordered and clustered in a meaningful way. This does not mean that the results of these calculations should be preserved next to the space dimensions from which they derive.
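A transparent calculation of this kind is simply a function over primary data, produced on demand rather than stored next to its sources. The space records below are hypothetical; in BIM the equivalent would be a dynamic schedule view.

```python
def area_schedule(spaces):
    """Derive floor areas transparently from primary space dimensions.
    `spaces` maps hypothetical space names to (length, width) in metres."""
    return {name: round(length * width, 2) for name, (length, width) in spaces.items()}

# Only the primary dimensions are preserved; the schedule is recomputed
# whenever needed, so it can never go stale.
primary = {"office 1": (5.0, 4.0), "office 2": (3.6, 4.5)}
print(area_schedule(primary))  # {'office 1': 20.0, 'office 2': 16.2}
```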
Moving from the semantic to the pragmatic level, veracity is a key criterion of quality: fitness for purpose obviously requires that the information is true. In addition to user feedback, veracity can be established on the basis of additional data, e.g. laser scanning to verify that a model represents faithfully, accurately and precisely the geometry of a particular building.
Before relevance or veracity, however, one should evaluate the structural characteristics of primary information: a model that is not complete, coherent and consistent is a poor basis for any use. Completeness in a building representation means that all parts and aspects are present, i.e. that there are no missing symbols for building elements or spaces in a model. BIM software uses deficiency detection to identify missing symbols. Missing aspects refer to symbol properties or relations: the definition of symbols should include all that is necessary to describe their structure, composition, behaviour and performance.
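A property-level completeness check of this kind can be sketched in a few lines. The symbol is represented as a plain dictionary and the required property names are assumptions; real BIM software would apply such checks against its own symbol definitions.

```python
def missing_properties(symbol, required):
    """Report required properties that are absent or empty in a symbol,
    here modelled as a plain dict of property names to values."""
    return [prop for prop in required if symbol.get(prop) in (None, "")]

door = {"id": "d01", "width_mm": 930, "fire_rating": None}
print(missing_properties(door, ["width_mm", "fire_rating", "acoustic_class"]))
# ['fire_rating', 'acoustic_class']
```

Each reported gap is an anti-datum: it signals that primary data still need to be produced.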
Completeness is about the presence of all puzzle pieces; coherence is about how well these pieces fit together to produce a seamless overall picture. In a building representation this primarily concerns the interfacing of elements, including possible conflicts in space or time. Clash detection in BIM aims at identifying such conflicts, particularly in space. Relations between symbols are of obvious significance for coherence, so these should be made explicit and manageable. In BIM, there are examples of this in the way some symbols attach themselves to others, e.g. co-terminating walls to each other, spaces to their bounding walls and floors, windows and doors to hosting walls. Parameterization can extend such relations further into a network that automatically ensures coherence.
Finally, consistency is about all parts and aspects being represented in the same or compatible ways. In a symbolic representation, this refers to the properties and relations of symbols. If these are described in the same units and present in all relevant symbol types, then consistency is also guaranteed in information use. Colour, for example, should be a property of the outer layer of all building elements. In all cases, the colour should be derived from the materials of this layer. This means that any paint applied to an element should be explicit as material with properties that include colour. Moreover, any colour data attached to this material layer should follow a standard like the RAL or Pantone colour matching systems. Allowing users to enter any textual description of colour does not promote consistency.
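The colour example lends itself to a small consistency check: flag any element whose outer-layer colour is free text rather than a standard code. The RAL subset and the element data below are illustrative assumptions, not an actual RAL database or BIM schema.

```python
# Hedged sketch: validating that colour data follows a standard code
# list rather than free text. The RAL subset is a tiny invented sample.

RAL_SUBSET = {"RAL 9010", "RAL 7016", "RAL 3020"}

def inconsistent_colours(elements):
    """Return ids of elements whose outer-layer colour is not a
    recognized standard code."""
    return [e["id"] for e in elements
            if e.get("outer_colour") not in RAL_SUBSET]

elements = [
    {"id": "wall-1", "outer_colour": "RAL 9010"},
    {"id": "wall-2", "outer_colour": "whitish"},   # free text: flagged
    {"id": "door-1", "outer_colour": "RAL 7016"},
]
print(inconsistent_colours(elements))  # ['wall-2']
```

Restricting input to a closed list of codes, as in the sketch, is what makes colour usable in information processing, e.g. for filtering or scheduling elements by finish.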
It is important to evaluate completeness, coherence and consistency only after clarifying the semantic types in a representation. This allows one to concentrate on the data that really matter, in particular primary and anti-data, and the procedures that produce derivative data. This allows higher focus in IM and reduces the amount of data to be processed.
Key Takeaways
• IM is more than a technical necessity: it is also a means of improving performance in a project or enterprise and therefore a key component of overall management.
• IM is inclusive and accepts all kinds of information, from structured, semi-structured and unstructured sources. As a structured information system, BIM simplifies IM.
• IM has two main goals: regulate information flow and safeguard or improve information quality.
• Custodianship of information is critical for information control.
• Information flow relates to the syntagmatic dimension of a representation and draws from the sequence of tasks in a process, as well as from extrinsic constraints.
• In managing information flow one needs to make explicit what, who, how and when.
• The I‑P‑O scheme helps translate tasks into information actions.
• Even before a design takes shape, there are substantial amounts of information that should be made explicit in a model as feedforward.
• Information quality concerns the paradigmatic dimension and can therefore build on the semantic typology of data.
• In addition to semantic and pragmatic criteria, information quality also depends on completeness, coherence and consistency.
Exercises
1. Use the I‑P‑O scheme to explain how one decides on the width of an internal door in a design. Cluster the input by origin (general, specific, project) and describe the relations between input items.
2. Use the I‑P‑O scheme to explain what, who, how and when in deciding the layout of an office landscape, particularly:
1. Which workstation types are to be included, including dimensions and other requirements.
2. How instances of these types are to be arranged to achieve maximum capacity.
3. In a BIM editor of your choice make the permissible building envelope for a building in a location of your choice. Describe the process in terms of input, information instances produced and resulting constraints for various kinds of symbols in the model.
4. Evaluate the completeness, coherence and consistency of the permissible building envelope model you have made.
5. Analyse how one should constrain types of building elements in relation to performance expectations from the use type of building: compare a hotel bedroom to a hospital ward on the basis of a building code of your choice. Explain which symbol properties are involved and how.
1. The views on DM derive primarily from: Richards, M., 2010. Building Information Management - a standard framework and guide to BS 1192. London: BSI; Eynon, J., 2013. The design manager's handbook. Southern Gate, Chichester, West Sussex, UK: CIOB, John Wiley & Sons; Emmitt, S., 2014. Design management for architects (2nd ed.). Hoboken NJ: Wiley.
2. The presentation of IM is based on: Bytheway, A., 2014. Investing in information. New York: Springer; Detlor, B., 2010. Information management. International Journal of Information Management, 30(2), 103-108. doi:10.1016/j.ijinfomgt.2009.12.001; Flett, A., 2011. Information management possible? Why is information management so difficult? Business Information Review, 28(2), 92-100. doi:10.1177/0266382111411066; Rosenfeld, L., Morville, P., & Arango, J., 2015. Information architecture: for the web and beyond (4th ed.). Sebastopol CA: O'Reilly Media.
3. IM definitions of information quality derive from: Wang, R.Y., & Strong, D.M., 1996. Beyond accuracy: what data quality means to data consumers. Journal of Management Information Systems, 12(4), 5-33. doi:10.1080/07421222.1996.11518099; English, L.P., 1999. Improving data warehouse and business information quality: methods for reducing costs and increasing profits. New York: Wiley.
This chapter shows how process and information diagrams can describe the tasks in a process and the information actions relating to these tasks in a comprehensive and coherent manner.
Flow charts
As we have seen in the previous chapter, there is some correspondence between the sequence of tasks in a process and the sequence of information actions: process management and IM overlap. The main difference is that IM goes beyond the actions and transactions in a task, in order to identify, structure and connect information in a way that supports and anticipates the needs of the process. Therefore, the first step towards effective IM in any process is understanding the process itself: what people actually do and how their actions, decisions, interactions and transactions relate to the production, dissemination and utilization of information. Starting IM by analysing the process also has advantages for the deployment of IM measures: most people and organizations are more process-oriented than information-oriented and may have difficulty identifying and organizing information actions without a clear operational context. Using a process model as background makes clearer why and how one should manage information.
A process can be described as a sequence of tasks towards a specific outcome. Representing processes diagrammatically is particularly useful in our case because of the abstraction and consistency afforded by diagrams. Of the many kinds of diagrams available for this purpose, basic flow charts suffice in practically all cases. These diagrams are directed graphs, in which objects are represented by nodes of various kinds corresponding to different kinds of objects, while relations are described by arcs (Figure 1). The direction of the arcs indicates the direction of flow in the process. Bidirectional arcs should be avoided because they usually obscure separate tasks, e.g. evaluation followed by feedback. Explicit representation of such tasks is essential for both the process and IM.
To make an unambiguous and useful flow chart of a process, one should adhere to another basic rule: that each object should appear only once as a node in the diagram. This allows us to make feedback explicit and to measure the degree of a node, its closeness and all other graph-theoretic measures that can be used in analysing the process.
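The graph-theoretic measures mentioned above (and used in the exercises of the next section) can be computed with plain breadth-first search on a directed graph. The example process below (design, estimate, evaluate, with feedback and acceptance) is invented for illustration, and eccentricity is taken over reachable nodes only, since a process digraph is rarely strongly connected.

```python
# Minimal sketch of node degree and eccentricity on a process digraph.
# The arcs are an invented example; not the diagram of any figure.
from collections import deque

arcs = [("design", "estimate"), ("estimate", "evaluate"),
        ("evaluate", "design"), ("evaluate", "accept")]

nodes = sorted({n for a in arcs for n in a})
succ = {n: [] for n in nodes}
for u, v in arcs:
    succ[u].append(v)

def distances(start):
    """Shortest arc-count distance from start to every reachable node."""
    dist, queue = {start: 0}, deque([start])
    while queue:
        u = queue.popleft()
        for v in succ[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Degree: number of arcs incident to a node (in + out).
degree = {n: sum(1 for u, v in arcs if n in (u, v)) for n in nodes}
# Eccentricity: greatest distance to any reachable node.
ecc = {n: max(distances(n).values()) for n in nodes}

print(degree)  # 'evaluate' has the highest degree in this example
print(ecc)     # diameter = max over ecc; radius = min over nodes
               # that reach the whole graph
```

High-degree nodes like "evaluate" here are candidates for critical parts of a process: many tasks depend on them.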
Process diagram
Let us consider a simple example of a process in building design: the estimation of construction cost in early design, on the basis of gross floor area. The process involves three actors: the client, the architect and the cost specialist. These are responsible for the budget, the design, the cost estimation and the evaluation of the estimate, which leads to either feedback to the design (usually to lower the cost) or acceptance of the design as it is (Figure 2).
Information diagram
The process diagram is clear about who does what but the actual information they produce and consume remains obscure. This generic depiction may be useful for process management but is too abstract for IM. Using the process diagram as a foundation, one can develop an information diagram that makes information and its flow explicit (Figure 3). Actor nodes may be abstracted from an information diagram in order to focus on the analysis of process-related and data-related nodes into information instances. Ultimately, however, in IM one should always identify who does what in unambiguous terms.
In this diagram, too, each object should appear only once, as a single node. So, if the floor plan has to be redrawn because the design is deemed too expensive, the diagram should contain feedback loops to the floor plan of the same (albeit modified) design, so as to make the process cycles explicit. If the cost evaluation leads to a radically new design, requiring a new node in the diagram, then this should be made clear by means of unambiguous node labelling (e.g. Design 1 and Design 2). Such new versions of the same nodes should be used cautiously and sparingly, only when absolutely necessary, e.g. when a process involves design alternatives.
The information diagram should reveal what actually takes place in terms of input, processing and output in the process. For example, a cost specialist contributes to the cost estimation by supplying a list of unit prices, i.e. cost of gross floor area per m2 for different categories of building use. One m2 of storage area costs significantly less than one m2 of office space, which in turn costs less than one m2 of an operating theatre in a hospital. This also means that one has to extract matching data from the design. It is not enough to calculate the total gross floor area of a hospital design; one has to know the use of every space, so as to be able to calculate the subtotals for each category. The subtotals are then multiplied by the unit prices to arrive at a correct estimate and ascertain which category may be too big or too costly.
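The processing step just described, subtotals of gross floor area per use category multiplied by unit prices, can be sketched as follows. All spaces, areas and prices are invented numbers for illustration.

```python
# Sketch of early cost estimation from gross floor areas per use
# category. The data are invented for illustration.

spaces = [
    {"name": "office 1", "use": "office",  "area_m2": 120.0},
    {"name": "store 1",  "use": "storage", "area_m2": 60.0},
    {"name": "OR 1",     "use": "theatre", "area_m2": 45.0},
    {"name": "office 2", "use": "office",  "area_m2": 80.0},
]
unit_prices = {"storage": 900.0, "office": 1800.0, "theatre": 7500.0}  # EUR/m2

def cost_estimate(spaces, unit_prices):
    """Return (subtotals of area per use category, total estimated cost)."""
    subtotals = {}
    for s in spaces:
        subtotals[s["use"]] = subtotals.get(s["use"], 0.0) + s["area_m2"]
    total = sum(area * unit_prices[use] for use, area in subtotals.items())
    return subtotals, total

subtotals, total = cost_estimate(spaces, unit_prices)
print(subtotals)  # {'office': 200.0, 'storage': 60.0, 'theatre': 45.0}
print(total)      # 200*1800 + 60*900 + 45*7500 = 751500.0
```

Keeping the subtotals explicit, rather than returning only the total, is what makes the estimate transparent: one can immediately see which category dominates the cost.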
The above illustrates an important difference between process and information diagrams: the former can be abstract about what each task entails but the latter has to be specific regarding information sources (e.g. which drawings are used), the information instances these sources accommodate and the actions through which these instances are processed. The higher specificity of the information diagram leads to a finer grain in the analysis of the process into nodes and arcs that allow one to trace the flow of information instances. In general, it may be assumed that the flow is the same in both diagrams but the finer grain of the information diagram may lead to new insights and local elaborations or changes.
I‑P‑O and primary versus derivative
Transforming a process diagram into an information diagram involves the I‑P‑O scheme. Examining each node in the process diagram with respect to this scheme reveals which information is used as input and produced as output. The Design node, for example, is expected to contribute to a cost estimation involving gross floor areas. This means that the design cannot exist solely in the architect’s mind; we need some external representation as input, on the basis of which we can measure floor areas, moreover by use category. The obvious candidate is a floor plan and, more precisely, one where all spaces are indicated and labelled by their use. This floor plan rather than some abstract notion of a design is the appropriate input for the processing we require (calculation of gross floor areas). In the same manner, one can establish that these areas are the local output of a task: what has to be passed on to the next processing step (cost estimation) as input.
Equally important for the development of a complete and specific information diagram is the semantic type of information used as input: if it is derivative, one has to trace it back to the primary data from which it is produced. Floor areas are derivative, so one needs to identify the primary data from which they derive, as well as the representations that accommodate these primary data. Consequently, one should not just require a table of all spaces, their areas and use type from the design but also specify that for making this table one needs floor plans that describe the spaces and their uses. These floor plans should be present as sources in the information diagram.
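The relation between primary and derivative data in this example can be made concrete: floor areas (derivative) are computed on demand from space boundaries (primary). The sketch below uses the shoelace formula for the area of a simple polygon; the space names and coordinates are invented.

```python
# Deriving floor areas on demand from primary data: space boundaries
# given as polygons of (x, y) vertices in metres (invented example).

def polygon_area(vertices):
    """Area of a simple polygon via the shoelace formula."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

spaces = {
    "office 1": [(0, 0), (5, 0), (5, 4), (0, 4)],        # 5 m x 4 m
    "corridor": [(0, 0), (10, 0), (10, 1.5), (0, 1.5)],  # 10 m x 1.5 m
}
areas = {name: polygon_area(v) for name, v in spaces.items()}
print(areas)  # {'office 1': 20.0, 'corridor': 15.0}
```

Because the areas are recomputed from the boundaries whenever needed, only the primary data (the boundaries) must be preserved and managed; this is the parsimonious approach advocated in the text.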
Information diagrams for BIM
As explained in previous chapters, implementation mechanisms may affect the structure and use of a representation in non-trivial ways. Consequently, one should take implementation environments into account in an information diagram, including adapting the diagram following any change in implementation environment: what applies to analogue information processes may be significantly different to what should take place in a computer.
The information diagram we have considered so far for cost estimation could be called generic, although it primarily reflects analogue practices and media. Adapting it to BIM means first of all that the model (the central information system) should be explicitly present as a source. This information system contains the symbols and relations in which primary data are found. Derivative data like floor area calculations are produced from the model in views like schedules. These schedules are typically predefined in various formats, including room schedules that list spaces and their properties, including floor area calculations (Figure 4). They can be used to verify that the model contains all the primary data needed for the cost estimation.
Unit prices can be added to room schedules, thus integrating cost estimation in BIM in a straightforward and transparent manner (Figure 5).
Integrating cost estimation in BIM also means that feedback to the model should be similarly direct and straightforward, e.g. through annotations to spaces, especially large or expensive ones that should be prioritized when improving the design to match the constraints of the budget (Figure 6). Note that feedback in this example is abstract with regard to the particular symbols or relations that are affected. One may choose to be quite specific about this, so as to guide information actions with precision and certainty. Conversely, if one chooses abstract actions, then these involve interpretation by certain actors. These actors should therefore be explicit in the diagram as authors or custodians of specific information.
An integrated information environment like BIM also makes automation of various information controls possible, e.g. concerning the presence of essential primary data. These too can be included in an information diagram for BIM (Figure 7).
Using information diagrams in information management
An information diagram that captures both the needs of a process and the capacities of BIM can make IM clear and unambiguous to both managers and actors in the process. Information flow can be explicitly depicted in the diagram, especially concerning what, who and when. Managers can use the information diagram to guide and control the process at any moment, while actors have a clear picture of the scope and significance of their actions. Addressing how questions depends on the fineness of the grain in the description of information instances: the finer it is, the more specific answers one can draw from the diagram. As such specificity affects interpretation, one should be careful about the balance between the two: many actors in a building project are knowledgeable professionals who may not take kindly to IM approaches that overconstrain them.
On the other hand, IM has to be strict about matters of authorship and custodianship because not everybody is yet accustomed to the possibilities and responsibilities of digital information processing. By linking actors to information with accordingly labelled arcs in the information diagram, one can indicate responsibilities and actions throughout a process. Note that roles can be variable: an actor who authors some information in one task may become custodian of other information in another task.
Concerning information quality, the information diagram forms a usable background for pragmatic value: applying the I‑P‑O scheme at any node is a critical part of measuring pragmatic value, i.e. establishing what users need to process and must produce in a task. Similarly, the information diagram is essential for the evaluation of completeness, coherence and consistency: it reveals the moments when one should return to the representation and analyse it. Such moments occur after critical information instances or after a multitude of information instances, i.e. when the model changes substantially.
The information diagram is also a necessity for our parsimonious approach to information value. This approach focuses on primary data and their propagation; both can be traced with accuracy in the diagram, including explicit, manageable connections to derivative data, enabling managers and users to know what should be preserved or prioritized. Finally, in the same manner one can identify anti-data, on the basis of expectations (e.g. knowing when information from different disciplines comes together in a process) and interpretation (e.g. that a space without a door is a shaft). This leads to directed action (e.g. requiring that two disciplines work together to solve interfacing problems), which should be present in an information diagram of appropriate specificity.
Key Takeaways
• Flow charts are directed graphs that can be used to describe a sequence of tasks (process diagram) or a sequence of information actions (information diagram).
• In both process and information diagrams, each object should be represented by only one node and each arc should be unidirectional.
• The I‑P‑O scheme helps translate a process diagram into an information diagram.
• Tracking the primary data needed for a process makes the information diagram complete and specific.
• Information diagrams should take into account the implementation environment of BIM: the symbols and relations that contain the primary data and the views that present derivative data, as well as the possibilities for quality control.
• Information diagrams make flows explicit and manageable; they also support analyses of information quality by identifying significant actions and moments.
Exercises
1. Measure the degree and eccentricity of nodes, and the eccentricity, diameter and radius of the graph in the process diagram of Figure 2. Do these measurements suggest critical parts in the process?
2. Measure the same in the information diagram of Figure 3. Do you observe any differences with Figure 2, also in terms of critical parts?
3. Add symbols, properties and relations to the information diagram of Figure 7. Does the increased specificity make IM easier or more reliable?
4. Add actors to the information diagram of Figure 7. How does the result compare to the diagram of the previous exercise?
The following list of key concepts from the previous chapters is a reminder or checklist of what can be used in solving information problems, e.g. in the following exercises.
• Symbolic representation
• Symbols, properties and relations
• Symbols and things
• Symbols versus implementation mechanisms
• Graphs: objects and relations represented respectively by vertices (nodes) and edges
• Directed graphs (digraphs): graphs consisting of nodes and arcs (edges with a direction)
• Abstraction: visual versus mnemonic
• Solids and voids in building representations
• Paradigmatic and syntagmatic dimensions
• Data and information instances
• Semantic data types: primary, anti-data, derivative, operational, metadata
• Information instances by scope: single symbol versus multiple symbols
• Structured, semi-structured and unstructured information sources
• Information flow: what, who, how, when
• Completeness, coherence and consistency
• Information authorship versus information custodianship
• Process diagram: sequence of tasks in a digraph representation
• Information diagram: information instances and flows
• I‑P‑O: transition from process to information management
4.02: Exercise I- maintenance
Organize the process of repainting all walls of a large lecture hall at a university. The walls are in good condition, so a single coat of paint suffices. The process can therefore be reduced to the following tasks:
• Make a model of the lecture hall in BIM using direct measurements and photographs
• Classify wall surfaces and their parts with respect to:
• Labour (e.g. parts narrower than 30 cm are more time-consuming to paint)
• Equipment (e.g. parts higher than 220 cm require scaffolding)
• Accessibility (e.g. parts behind radiators or other fixed obstacles are hard to reach and therefore also time consuming)
• Measure the wall surfaces
• Make cost estimates
• Make a time schedule in 4D BIM
Deliverables
1. Process and information diagrams, accompanied by short explanatory comments
2. Basic model of the lecture hall in a BIM editor
3. Schedules for classification, measurement, estimates and scheduling in BIM
Roles
If the exercise is a group assignment, consider roles for the following aspects:
• Process management
• Information management
• BIM modelling (two or more people)
• Analyses in BIM (using schedules – two or more people)
4.03: Exercise II- change management
Organize how changes to a design in the development and realization stages can be registered and processed in BIM. These changes may refer to:
• Change to a property of a symbol (e.g. lengthening of a wall)
• Change of the type of a symbol (e.g. change of family for a door)
• Change in a relation between symbols (e.g. relocation of a door in a wall)
• Change in a time property of a symbol (e.g. as a result of a scheduling change)
Organize the process of change management in both stages as a series of tasks that reflect the above types of changes and take into account possible causes of change, such as:
• Changes in the brief (e.g. new activities added)
• Changes in the budget (e.g. increase of façade cost necessitating reduction of cost elsewhere)
• Changes in an aspect of the design (e.g. change in the heating solution or the fire rating of internal doors and ensuing interfacing issues – not just clash detection)
• Changes in the construction schedules (e.g. due to delays in the delivery of components or to bad weather)
• Errors in construction (e.g. wrong dimensioning or specifications of an element)
Deliverables
1. Process and information diagrams, accompanied by short explanatory comments
2. Basic model in a BIM editor demonstrating the way changes can be implemented
3. Short overview of findings (two A4 sheets)
Roles
If the exercise is a group assignment, consider roles for the following aspects:
• Process management
• Information management
• BIM modelling
• Case analyses (for finding realistic examples)
4.04: Exercise III- circular energy transition
The planned energy transition in the Netherlands means that most buildings have to undergo an expensive renovation to meet new standards. To reduce costs, one can adopt a circular approach to both components or materials released from existing buildings and the new components and subsystems that will be added to the buildings. Organize the following tasks for a typical Dutch single-family house:
• Document the existing situation in a model appropriate for renovation, i.e. including realization phases, distinction between existing and planned, what should remain and what should be removed
• Identify in the model components and materials that should be extracted (e.g. radiators: the house will have underfloor heating), explaining how identification takes place (preferably automatically) in the model
• Estimate the expected circularity form for these components and materials (recycle, remanufacture, repurpose, re-use etc.), explaining which factors play a role (weathering, wear, interfacing with other elements etc.) and how these factors can be detected in the model
• Identify which elements should be upgraded and specify what this entails in the model (paying attention to phasing and element type changes)
• Specify how new elements (for the renovation) should be added to the model to support the above in the remaining lifecycle of the house
• Make a time schedule for the renovation in 4D BIM
Deliverables
1. Process and information diagrams, accompanied by short explanatory comments
2. Incomplete model in a BIM editor containing demonstrations of your solutions
3. Schedules for circularity analyses in BIM
4. Short overview and table of contents (two A4 sheets)
Roles
If the exercise is a group assignment, consider roles for the following aspects:
• Process management
• Information management
• BIM modelling
• Analyses in BIM (using schedules – probably more than one group member)
• Legal and technical aspects of the energy transition
• Building documentation (emphasis on how to deal with incompleteness and uncertainty)
• Subsystem integration
• Circularity in design (technical aspects)
Fluid mechanics deals with the study of all fluids under static and dynamic situations. Fluid mechanics is a branch of continuum mechanics which deals with the relationship between forces, motions, and statical conditions in a continuous material. This study area deals with many and diversified problems such as surface tension, fluid statics, flow in enclosed bodies, or flow around bodies (solid or otherwise), flow stability, etc. In fact, almost any action a person performs involves some kind of fluid mechanics problem. Furthermore, the boundary between solid mechanics and fluid mechanics is a kind of gray shade rather than a sharp distinction (see Figure 1.1 for the complex relationships between the different branches, only part of which should be drawn at the same time). For example, glass appears to be a solid material, but a closer look reveals that glass is a liquid with a very large viscosity. A commonly cited illustration of this "liquidity" is the change in glass thickness in tall windows of old European churches: after hundreds of years, the bottom part of the glass is thicker than the top part. Materials like sand (some call it quicksand) and grains should be treated as liquids; it is known that these materials have the ability to drown people. Even a material such as aluminum just below the mushy zone behaves as a liquid, similarly to butter. Furthermore, material particles that "behave" as a solid mixed with a liquid create a mixture. After it has been established that the boundaries of fluid mechanics are not sharp, most of the discussion in this book is limited to simple and (mostly) Newtonian (sometimes power-law) fluids, which will be defined later.
Fig. 1.1 Diagram to explain part of relationships of fluid mechanics branches.
The study of fluid mechanics involves many fields that have no clear boundaries between them. Researchers distinguish between orderly flow and chaotic flow as laminar flow and turbulent flow. Fluid mechanics also distinguishes between single-phase flow and multiphase flow (flow made of more than one phase or distinguishable material). The last boundary (like all boundaries in fluid mechanics) is not sharp, because a fluid can go through a phase change (condensation or evaporation) during the flow and switch from a single-phase flow to a multiphase flow. Moreover, a flow with two phases (or materials) can sometimes be treated as a single phase (for example, air with dust particles).
After it was made clear that the boundaries of fluid mechanics are not sharp, the study must make arbitrary boundaries between fields. Dimensional analysis can then be used to explain why, in certain cases, one distinct area/principle is more relevant than another and some effects can be neglected, or when a general model is needed because more parameters affect the situation. It is this author's personal experience that knowing in which area a situation lies is one of the main problems. For example, engineers in a software company analyzed the flow of a completely still liquid assuming a complex turbulent flow model. Such absurd analyses are common among engineers who do not know which model can be applied. Thus, one of the main goals of this book is to explain what model should be applied. Before dealing with the boundaries, the simplified special cases must be explained.
There are two main approaches to presenting introductory fluid mechanics material. The first approach introduces fluid kinematics and then the basic governing equations, followed by stability, turbulence, and the boundary layer. The second approach deals with Integral Analysis, followed by Differential Analysis, and continues with Empirical Analysis. These two approaches pose a dilemma to anyone who writes an introductory book on fluid mechanics; both have justifications and positive points. Reviewing many books on fluid mechanics made it clear that there is no clear winner. This book attempts a hybrid approach in which kinematics is presented first (aside from the standard initial four chapters), followed by Integral Analysis and continued with Differential Analysis. The treatment of ideal (frictionless) flow is expanded compared to the usual treatment. This book is unique in providing a chapter on multiphase flow. Naturally, chapters on open channel flow (as a subclass of multiphase flow) and compressible flow (with the latest developments) are provided.
The need for some understanding of fluid mechanics started with the need to obtain a water supply. For example, people realized that wells have to be dug and crude pumping devices need to be constructed. Later, a larger population created a need to handle waste (sewage), and some basic understanding developed. At some point, people realized that water can be used to move things and provide power. When cities grew to a larger size, aqueducts were constructed. These aqueducts reached their greatest size and grandeur in those of the city of Rome and in China.
Yet, almost all knowledge of the ancients can be summarized as the application of instincts, with the exception of Archimedes (250 B.C.) on the principles of buoyancy. For example, larger tunnels were built for a larger water supply, etc. There were no calculations, even with the great need for water supply and transportation. The first progress in fluid mechanics was made by Leonardo Da Vinci (1452-1519), who built the first chambered canal lock near Milan. He also made several attempts to study flight (of birds) and developed some concepts on the origin of the forces. After his initial work, the knowledge of fluid mechanics (hydraulics) gained speed through the contributions of Galileo, Torricelli, Euler, Newton, the Bernoulli family, and D'Alembert. At that stage, theory and experiments showed some discrepancy. This fact was acknowledged by D'Alembert, who stated that "The theory of fluids must necessarily be based upon experiment." For example, the concept of an ideal liquid, which leads to motion with no resistance, conflicts with reality.
This discrepancy between theory and practice is called the "D'Alembert paradox" and serves to demonstrate the limitations of theory alone in solving fluid problems. As in thermodynamics, two different schools of thought were created: the first believed that the solution would come from the theoretical aspect alone, and the second believed that the solution lay in the purely practical (experimental) aspect of fluid mechanics. On the theoretical side, considerable contributions were made by Euler, Lagrange, Helmholtz, Kirchhoff, Rayleigh, Rankine, and Kelvin. On the "experimental" side, mainly in the pipes and open channels area, were Brahms, Bossut, Chezy, Dubuat, Fabre, Coulomb, Dupuit, d'Aubisson, Hagen, and Poiseuille. In the middle of the nineteenth century, first Navier at the molecular level and later Stokes from a continuum point of view succeeded in creating governing equations for real fluid motion, thus creating a match between the two schools of thought: experimental and theoretical. But, as in thermodynamics, people cannot relinquish control. As a result, we have today "strange" names: Hydrodynamics, Hydraulics, Gas Dynamics, and Aeronautics.
The Navier-Stokes equations, which describe the flow (and even the Euler equations), were considered unsolvable during the mid-nineteenth century because of their high complexity. This problem led to two consequences. Theoreticians tried to simplify the equations and arrive at approximate solutions representing specific cases. Examples of such work are Hermann von Helmholtz's concept of vortexes (1858), Lanchester's concept of circulatory flow (1894), and the Kutta–Joukowski circulation theory of lift (1906). At the same time, the experimentalists proposed many correlations for fluid mechanics problems, for example, resistance correlations by Darcy, Weisbach, Fanning, Ganguillet, and Manning. The obvious happened: without theoretical guidance, the empirical formulas were generated by fitting curves to experimental data (sometimes merely presenting the results in tabular form), resulting in formulas in which the relationship between the physics and the properties made very little sense.
At the end of the nineteenth century, the demand for rigorous scientific knowledge that could be applied to various liquids, as opposed to a separate formula for every fluid, was created by the expansion of many industries. This demand, coupled with several novel concepts, such as the theoretical and experimental research of Reynolds, the development of dimensional analysis by Rayleigh, and Froude's idea of the use of models, changed the science of fluid mechanics. Perhaps the most radical concept affecting fluid mechanics is Prandtl's idea of the boundary layer, which is a combination of modeling and dimensional analysis and leads to modern fluid mechanics. Therefore, many call Prandtl the father of modern fluid mechanics. This concept provides a mathematical basis for many approximations. Thus, Prandtl and his students Blasius, von Kármán, and Meyer, and several other individuals such as Nikuradse, Rose, Taylor, Buckingham, Stanton, and many others, transformed fluid mechanics into today's modern science.
While the understanding of the fundamentals did not change much after World War Two, the way things are calculated did. The introduction of computers during the 1960s, and later of much more powerful personal computers, has changed the field. There are many open source programs that can analyze many fluid mechanics situations. Today many problems can be analyzed with numerical tools that provide reasonable results. In many cases these programs can capture all the appropriate parameters and adequately describe the physics. However, there are many other cases in which numerical analysis cannot provide any meaningful result or even a trend. For example, no weather prediction program can produce results of good engineering quality (predicting where snow will fall to within 50 kilometers; building a car to such accuracy would be a disaster). In the best scenario, these programs are only as good as the input provided. Thus, assuming turbulent flow for a still flow simply produces erroneous results (see, for example, EKK, Inc).
Some differentiate fluid from solid by the reaction to shear stress: the fluid continuously and permanently deforms under shear stress, while the solid exhibits a finite deformation which does not change with time. It is also said that a fluid cannot return to its original state after the deformation. This differentiation leads to three groups of materials: solids, liquids, and all the materials between them. This test creates a new material group that shows dual behavior: under certain limits it behaves like a solid, and under others it behaves like a fluid (see Figure 1.1). The study of this kind of material is called rheology, and it will (almost) not be discussed in this book. It is evident from this discussion that when a fluid is at rest, no shear stress is applied.
Fluids are mainly divided into two categories: liquids and gases. The main difference between the liquid and gas states is that a gas will occupy the whole volume while a liquid has an almost fixed volume. For most practical purposes this difference can be considered sharp, even though in reality it is not. Above the critical point, the differences between a gas phase and a liquid phase are practically minor. But below the critical point, changing the water pressure by 1000% changes the volume by less than 1 percent. For example, a change in the volume by more than 5% would require a pressure change of tens of thousands of percent. So, if the change of pressure is significantly less than that, the pressure will not affect the volume. In the gaseous phase, any change in pressure directly affects the volume. The gas fills the volume and the liquid cannot. Gas has no free interface/surface (since it fills the entire volume).
There are several quantities that have to be addressed in this discussion. The first is force, which was reviewed in physics. The unit used to measure it is $[N]$. It must be remembered that force is a vector, i.e., it has a direction. The second quantity discussed here is the area. This quantity was discussed in physics class, but here it has an additional meaning: the direction of the area. The direction of an area is perpendicular to the area. The area is measured in $[m^2]$. The area of a three-dimensional object has no single direction. Thus, these kinds of areas should be addressed infinitesimally and locally.
The traditional quantity of force per area acquires a new meaning here. It is the result of dividing a vector by a vector, and it is referred to as a tensor. In this book, the emphasis is on the physics, so at this stage the tensor will be broken into its components. The discussion of the mathematical meaning is deferred (to a later version). For the discussion here, the force per unit area has three components: one along the direction (normal) of the area and two tangent to the area. The component along the area direction is called pressure (a great way to confuse, isn't it?). The other two components are referred to as the shear stresses. The units used for these stress components are $[N/m^2]$.
Fig. 1.2 Density as a function of the sample size.
The density is a property which requires the fluid to be continuous. The density can change, and it is a function of time and space (location), but it must be a continuous property. This doesn't mean that a sharp and abrupt change in the density cannot occur. It refers to the fact that density is independent of the sampling size. Figure 1.2 shows the density as a function of the sample size. Beyond a certain sample size, the density remains constant. Thus, the density is defined as $\rho = \lim_{\Delta V\to \epsilon} \frac{\Delta m}{\Delta V}$
It must be noted that $\epsilon$ is chosen so that the continuum assumption is not broken, that is, the sample is not reduced to the size where atomic or molecular statistical fluctuations become significant (see Figure 1.2 for the point where the green lines converge to a constant density). When this assumption is broken, the principles of statistical mechanics must be utilized.
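The plateau in Figure 1.2 can be illustrated numerically. The sketch below (Python; the function name, noise model, and numeric values are invented for illustration only) averages a "molecular" mass measurement over a growing number of sub-volumes: a small sample fluctuates, while a large sample settles onto the bulk density, mimicking the flat region of the figure.

```python
import random

def sampled_density(n_samples, bulk_density=1000.0, seed=0):
    """Average the density over n_samples sub-volume measurements.

    Each measurement is the bulk density plus Gaussian noise standing in
    for molecular-scale fluctuations (an invented noise model); as the
    sample grows, the average settles onto the flat region of Fig. 1.2.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += bulk_density + rng.gauss(0.0, 50.0)
    return total / n_samples

rho_small = sampled_density(5)        # noisy: too few "molecules" sampled
rho_large = sampled_density(100_000)  # settles near the bulk value
```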
1.3: Kinds of Fluids
Fig. 1.3 Schematics to describe the shear stress in fluid mechanics.
The shear stress is part of the pressure tensor. However, here, and in many parts of the book, it will be treated as a separate issue. In solid mechanics, the shear stress is considered as the ratio of the force acting on an area to that area, where the force lies in the plane of the area (perpendicular to the area's normal direction). Unlike a solid, a fluid cannot be pulled directly, only through a solid surface. Consider a liquid that undergoes a shear stress between two plates separated by a short distance, as shown in Figure 1.3. The upper plate velocity generally will be
$U = f ( A, F, h) \label{intro:eq:uPlate}$
Where $A$ is the area, $F$ denotes the force, and $h$ is the distance between the plates. From the study of solid mechanics, it was shown that when the force per area increases, the velocity of the plate increases as well. Experiments show that increasing the height increases the velocity, up to a certain range. Consider moving the plate with zero lubricant ($h \sim 0$), which results in a large force, or with a large amount of lubricant, which results in a smaller force. In this discussion, the aim is to develop a differential equation, thus the small-distance analysis is applicable. For cases where the dependency is linear, the following can be written
$U \propto \dfrac{h\,F}{A} \label{intro:eq:Upropto}$
Equation \eqref{intro:eq:Upropto} can be rearranged to be
\begin{align}
\dfrac{U}{h} \propto \dfrac{F}{A}
\label{intro:eq:UproptoA}
\end{align}
Shear stress was defined as
\begin{align}
\tau_{xy} = \dfrac {F}{A}
\label{intro:eq:tauDef}
\end{align}
The index $x$ represents the direction of the shear stress while the $y$ represents the direction of the area (i.e., the direction perpendicular to the area). From equations \eqref{intro:eq:UproptoA} and \eqref{intro:eq:tauDef} it follows that the ratio of the velocity to the height is proportional to the shear stress. Hence, applying a coefficient yields the equality
\begin{align}
\tau_{xy} = \mu \,\dfrac{U}{h}
\label{intro:eq:shearS}
\end{align}
Where $\mu$ is called the absolute viscosity or dynamic viscosity, which will be discussed at great length later in this chapter.
Fig. 1.4 The deformation of fluid due to shear stress as progression of time.
In steady state, the distance the upper plate moves after a small amount of time, $\delta t$, is
\begin{align}
dll = U \, \delta t
\label{intro:eq:delta_t}
\end{align}
From Figure 1.4 it can be noticed that for a small angle, $\delta\beta \cong \sin\beta$, the regular approximation provides
\begin{align}
dll = U \, \delta t =
\overbrace{h \, \delta\beta}^{geometry}
\label{intro:eq:dUdx}
\end{align}
From equation \eqref{intro:eq:dUdx} it follows that
\begin{align}
U = h\, \dfrac{\delta\beta}{\delta t}
\label{intro:eq:dbdt}
\end{align}
Combining equation \eqref{intro:eq:dbdt} with equation \eqref{intro:eq:shearS} yields
\begin{align}
\tau_{xy} = \mu \, \dfrac{\delta \beta}{\delta t}
\label{intro:eq:tau_dbdt}
\end{align}
If the velocity profile is linear between the plates (it will be shown later that this is consistent with the derivation of the velocity), then it can be written for a small angle that
\begin{align}
\dfrac{\delta \beta}{\delta t} =
\dfrac{dU}{dy}
\label{intro:eq:velocityTime}
\end{align}
Materials which obey equation \eqref{intro:eq:tau_dbdt} are referred to as Newtonian fluids. For this kind of substance
\begin{align}
\tau_{xy} = \mu\,\dfrac{dU}{dy}
\label{intro:eq:tau_xy}
\end{align}
Newtonian fluids are fluids for which this ratio is constant. Many fluids fall into this category, such as air, water, etc. This approximation is appropriate for many other fluids, but only within certain ranges.
Equation \eqref{intro:eq:tau_xy} can be interpreted as momentum in the $x$ direction being transferred into the $y$ direction. Thus, the viscosity is the resistance to the flow (flux) or the movement. The property of viscosity, which is exhibited by all fluids, is due to the existence of cohesion and interaction between fluid molecules. These cohesive forces and interactions hamper the flux in the $y$–direction. Some refer to shear stress as the viscous flux of $x$–momentum in the $y$–direction. The units of shear stress are the same as those of a flux per unit time, as follows
$\dfrac{F}{A} \,\left[ \dfrac {kg\, m}{sec^2}\; \dfrac{1}{m^2} \right] = \dfrac{\dot{m}\,U} {A} \left[ \dfrac{kg}{sec}\; \dfrac{m}{sec} \,\, \dfrac{1}{m^2} \right]$
Thus, the notation of $\tau_{xy}$ is easier to understand and visualize. In fact, this interpretation is more suitable to explain the molecular mechanism of the viscosity. The units of absolute viscosity are [$N\,sec/m^2$].
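The parallel-plate relation above is simple to evaluate numerically. A minimal Python sketch follows; the function name and the numeric values are illustrative assumptions, not taken from the text.

```python
def shear_stress(mu, U, h):
    """Newtonian shear stress between parallel plates: tau = mu * U / h.

    mu : absolute (dynamic) viscosity [N sec/m^2]
    U  : upper plate velocity [m/sec]
    h  : distance between the plates [m]
    Returns tau in [N/m^2].
    """
    return mu * U / h

# Illustrative numbers: a water-like viscosity (~1e-3 N sec/m^2),
# a 1 [m/sec] plate over a 1 [mm] film.
tau = shear_stress(mu=1.0e-3, U=1.0, h=1.0e-3)  # -> 1.0 [N/m^2]
```

Halving the gap $h$ doubles the stress, which is the linear dependency the derivation assumed.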
Example 1.1
A space of 1 [cm] width between two large plane surfaces is filled with glycerin. Calculate the force that is required to drag a very thin plate of 1 [$m^2$] at a speed of 0.5 [m/sec]. It can be assumed that the plate remains equidistant from the two surfaces and that steady state is achieved instantly.
Solution 1.1
Assuming Newtonian flow, the following can be written (see equation \eqref{intro:eq:shearS}):
$F = \dfrac{A\,\mu U}{h} \sim \dfrac{1\times 1.069 \times 0.5}{0.01} = 53.45 [N]$
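The arithmetic of the solution can be checked directly; the sketch below simply reproduces the numbers used above (glycerin viscosity of 1.069 [N sec/m²], as in the solution), with an invented helper name.

```python
def drag_force(mu, U, A, h):
    """Force to drag a plate across a Newtonian film: F = A * mu * U / h."""
    return A * mu * U / h

# Values from the solution above:
F = drag_force(mu=1.069, U=0.5, A=1.0, h=0.01)
print(round(F, 2))  # 53.45 [N]
```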
Example 1.2
Castor oil at $25^{\circ}C$ fills the space between two concentric cylinders of 0.2 [m] and 0.1 [m] diameter, with a height of 0.1 [m]. Calculate the torque required to rotate the inner cylinder at 12 rpm while the outer cylinder remains stationary. Assume steady state conditions.
Solution 1.2
The velocity is
$U = r\,\dot{\theta} = 2\,\pi\,r_i \,\mbox{rps}= 2\times \pi\times 0.1 \times \overbrace{12 / 60}^{rps} = 0.4\,\pi\,r_i$
Where $rps$ denotes revolutions per second. In the same way as in Example 1.1, the moment can be calculated as the force times the distance:
$M = F\,ll = \dfrac{\overbrace{ll}^{r_i}\, \overbrace{A}^{2\,\pi\,r_i\,h} \,\mu U}{r_o-r_i}$
In this case ${r_o-r_i} = h$ thus,
$M = \dfrac{2\,\pi^2\, \overbrace{{0.1}^3}^{r_i} \, \cancel{h} \, \overbrace{0.986}^\mu \, 0.4 } {\cancel{h} } \sim 0.0078 [N\,m]$
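The torque computation can be reproduced numerically with the values used in the solution ($r_i$ taken as 0.1 [m] and $\mu = 0.986$, as above). The function name is invented, and, as in the solution, the film gap is taken equal to the height $h$ so that it cancels.

```python
import math

def inner_cylinder_torque(mu, r_i, height, rps, gap):
    """Torque on the rotating inner cylinder with a Newtonian film in the gap.

    U = 2*pi*r_i*rps (surface speed), A = 2*pi*r_i*height (lateral area),
    F = A*mu*U/gap, and the moment is M = F*r_i.
    """
    U = 2.0 * math.pi * r_i * rps
    A = 2.0 * math.pi * r_i * height
    F = A * mu * U / gap
    return F * r_i

M = inner_cylinder_torque(mu=0.986, r_i=0.1, height=0.1, rps=12 / 60, gap=0.1)
print(round(M, 4))  # ~0.0078 [N m]
```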
1.5: Viscosity
Figure 1.5. The different families of power-law fluids.
Viscosity varies widely with temperature. However, temperature variation has an opposite effect on the viscosities of liquids and gases. The difference is due to their fundamentally different mechanisms for creating viscosity. In gases, molecules are sparse and cohesion is negligible, while in liquids the molecules are more compact and cohesion is more dominant. Thus, in gases, the exchange of momentum between layers is brought about by molecular movement normal to the general direction of flow, and it resists the flow. This molecular activity is known to increase with temperature; thus, the viscosity of gases increases with temperature. This reasoning results from considerations of the kinetic theory, which indicates that gas viscosities vary directly with the square root of temperature. In liquids, the momentum exchange due to molecular movement is small compared to the cohesive forces between the molecules. Thus, the viscosity is primarily dependent on the magnitude of these cohesive forces. Since these forces decrease rapidly with increasing temperature, liquid viscosities decrease as temperature increases.
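The kinetic-theory trend mentioned above (gas viscosity growing roughly as the square root of absolute temperature) can be sketched as follows. The function name and the reference value for air are assumptions for illustration; real gases deviate from this simple scaling.

```python
import math

def gas_viscosity_sqrtT(T, mu_ref, T_ref):
    """Kinetic-theory trend for gases: mu ~ mu_ref * sqrt(T / T_ref).

    Temperatures in kelvin. Real gases deviate from this simple scaling
    (e.g. Sutherland-type corrections), but the square-root trend is
    the point the text makes.
    """
    return mu_ref * math.sqrt(T / T_ref)

# Rough, illustrative reference for air: mu ~ 1.8e-5 [N sec/m^2] near 293 [K].
mu_400 = gas_viscosity_sqrtT(400.0, mu_ref=1.8e-5, T_ref=293.0)
```

Evaluating at a higher temperature gives a larger viscosity, the opposite of the liquid behavior described in the same paragraph.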
Figure 1.6. Nitrogen(left) and Argon(right) viscosity as a function of the temperature and pressure after Lemmon and Jacobsen.
Figure 1.6 demonstrates that viscosity increases slightly with pressure, but this variation is negligible for most engineering problems. Well above the critical point, the viscosities of both phases are only a function of the temperature. On the liquid side below the critical point, the pressure has a minor effect on the viscosity. It must be stressed that viscosity inside the two-phase dome is meaningless; there is no such thing as the viscosity of a 30% liquid mixture. It simply depends on the structure of the flow, as will be discussed in the chapter on multi-phase flow. The lines in the above diagrams only show constant-pressure lines. Oils have the greatest increase of viscosity with pressure, which is a good thing for many engineering purposes.
Figure 1.7. The shear stress as a function of the shear rate.
In the Newtonian relation $\tau = \mu\,dU/dy$ above, the relationship between the shear stress and the velocity gradient was assumed to be linear. Not all materials obey this relationship. There is a large class of materials which shows a non-linear relationship between shear stress and velocity gradient. This class of materials can be approximated by a single polynomial term, that is, $a = bx^n$. From the physical point of view, the coefficient depends on the velocity gradient. This relationship is referred to as a power relationship and it can be written as $\tau = \overbrace{K \left(\frac{dU}{dx}\right)^{n-1}}^{viscosity} \left(\frac{dU}{dx}\right)$ The new coefficients $(n, K)$ in this equation are constant. When $n = 1$ the equation represents a Newtonian fluid and $K$ becomes the familiar $\mu$. The viscosity coefficient is always positive. When $n$ is above one, the liquid is dilatant. When $n$ is below one, the fluid is pseudoplastic. Liquids which satisfy this power relationship are referred to as purely viscous fluids. Many fluids satisfy the above equation. Fluids whose apparent viscosity increases with the duration of shear are referred to as rheopectic, and those whose apparent viscosity decreases with time under shear are called thixotropic (see Figure 1.5).
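The power-law model can be evaluated directly. The sketch below (invented helper names; $K$, $n$, and shear-rate values are illustrative) shows that the apparent viscosity $K\,|dU/dx|^{n-1}$ falls with shear rate for a pseudoplastic fluid ($n < 1$) and rises for a dilatant one ($n > 1$).

```python
def power_law_stress(K, n, dUdx):
    """Power-law model: tau = K * |dU/dx|**(n-1) * (dU/dx).

    n = 1 recovers the Newtonian fluid, with K playing the role of mu.
    """
    return K * abs(dUdx) ** (n - 1) * dUdx

def apparent_viscosity(K, n, dUdx):
    """The effective 'viscosity' term K * |dU/dx|**(n-1) at a given shear rate."""
    return K * abs(dUdx) ** (n - 1)

# n = 1: Newtonian, stress linear in the shear rate.
assert power_law_stress(K=2.0, n=1.0, dUdx=3.0) == 6.0
# Pseudoplastic (n < 1): apparent viscosity drops as the shear rate grows.
assert apparent_viscosity(1.0, 0.5, 100.0) < apparent_viscosity(1.0, 0.5, 1.0)
# Dilatant (n > 1): apparent viscosity grows with the shear rate.
assert apparent_viscosity(1.0, 1.5, 100.0) > apparent_viscosity(1.0, 1.5, 1.0)
```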
Materials which behave up to a certain shear stress as a solid and above it as a liquid are referred to as Bingham liquids. In the simple case, the ``liquid side'' is like a Newtonian fluid for large shear stress. The general relationship for simple Bingham flow is $\tau_{xy} = \mu \dfrac{dU_x}{dy} \pm \tau_{0} \quad \mbox{if } \lvert\tau_{yx}\rvert > \tau_{0}$ $\dfrac{dU_{x}}{dy} = 0 \quad \mbox{if } \lvert\tau_{yx}\rvert < \tau_{0}$ There are materials for which the simple Bingham model does not provide an adequate explanation and a more sophisticated model is required; the Newtonian part of the model has to be replaced by a power-law liquid. For example, according to Ferraris et al., concrete behaves as shown in Figure 1.7. However, for most practical purposes, this kind of figure isn't used in regular engineering practice.
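The simple Bingham relations can be sketched in the "inverse" direction, i.e., the shear rate produced by an applied stress. The function name and the yield-stress and viscosity values are illustrative assumptions.

```python
def bingham_shear_rate(tau, tau0, mu):
    """Shear rate dU/dy for the simple Bingham model.

    Below the yield stress tau0 the material behaves as a solid (no
    deformation rate); above it, the excess stress drives Newtonian-like
    flow with plastic viscosity mu.
    """
    if abs(tau) <= tau0:
        return 0.0
    sign = 1.0 if tau > 0.0 else -1.0
    return sign * (abs(tau) - tau0) / mu

# Illustrative values only: tau0 = 5 [N/m^2], mu = 2 [N sec/m^2].
assert bingham_shear_rate(3.0, tau0=5.0, mu=2.0) == 0.0  # below yield: no flow
assert bingham_shear_rate(9.0, tau0=5.0, mu=2.0) == 2.0  # (9 - 5) / 2
```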
1.5.2: Non-Newtonian Fluids
Figure 1.8. Air viscosity as a function of the temperature.
The kinematic viscosity is another way to look at the viscosity. The reason for this new definition is that some experimental data are given in this form, and such results are also better explained using the new definition. The kinematic viscosity embraces both the viscosity and the density of a fluid, and is defined as $\nu = \frac{\mu}{\rho}$ This equation shows that the dimensions of $\nu$ are square meters per second, $[m^2 / sec]$, which involve only kinematic quantities (length and time), with no mass unit; this fact explains the name ``kinematic'' viscosity. For a gas, the density decreases with the temperature while the absolute viscosity increases with it, so both effects cause the kinematic viscosity to increase with temperature.
Fig. 1.9. Water viscosity as a function of temperature.
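The definition $\nu = \mu/\rho$ is a one-liner in code. The air values below are rough, illustrative assumptions, used only to show the trend described above: for a gas, $\mu$ rises and $\rho$ falls with temperature, so $\nu$ rises on both counts.

```python
def kinematic_viscosity(mu, rho):
    """Kinematic viscosity nu = mu / rho, in [m^2/sec]."""
    return mu / rho

# Rough, illustrative air values at atmospheric pressure:
nu_20C = kinematic_viscosity(mu=1.8e-5, rho=1.20)   # ~1.5e-5 [m^2/sec]
nu_100C = kinematic_viscosity(mu=2.2e-5, rho=0.95)  # ~2.3e-5 [m^2/sec]
assert nu_100C > nu_20C  # both effects push nu up with temperature
```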
1.5.3: Kinematic Viscosity